News Posts matching #HBM3E


Micron HBM Designed into Leading AMD AI Platform

Micron Technology, Inc. today announced the integration of its HBM3E 36 GB 12-high offering into the upcoming AMD Instinct MI350 Series solutions. This collaboration highlights the critical role of power efficiency and performance in training large AI models, delivering high-throughput inference and handling complex HPC workloads such as data processing and computational modeling. Furthermore, it represents another significant milestone in HBM industry leadership for Micron, showcasing its robust execution and the value of its strong customer relationships.

Micron's HBM3E 36 GB 12-high solution brings industry-leading memory technology to AMD Instinct MI350 Series GPU platforms, providing outstanding bandwidth and lower power consumption. The AMD Instinct MI350 Series GPU platforms, built on AMD's advanced CDNA 4 architecture, integrate 288 GB of high-bandwidth HBM3E memory capacity, delivering up to 8 TB/s bandwidth for exceptional throughput. This immense memory capacity allows Instinct MI350 Series GPUs to efficiently support AI models with up to 520 billion parameters on a single GPU. In a full platform configuration, Instinct MI350 Series GPUs offer up to 2.3 TB of HBM3E memory and achieve peak theoretical performance of up to 161 PFLOPS at FP4 precision, with leadership energy efficiency and scalability for high-density AI workloads. This tightly integrated architecture, combined with Micron's power-efficient HBM3E, enables exceptional throughput for large language model training, inference, and scientific simulation tasks, empowering data centers to scale seamlessly while maximizing compute performance per watt. This joint effort between Micron and AMD has enabled faster time to market for AI solutions.
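
The headline figures above are easy to sanity-check. The short Python sketch below (illustrative only; the capacity and parameter counts come from the article, while 4 bits per FP4 parameter is a standard assumption) verifies that a 520-billion-parameter model's FP4 weights fit within a single GPU's 288 GB, and that eight GPUs aggregate to roughly 2.3 TB.

```python
# Back-of-the-envelope check of the MI350 platform figures quoted above.
# Capacity and parameter counts are from the article; 4 bits (0.5 bytes)
# per FP4 parameter is a standard assumption, not an article figure.

GPU_HBM_GB = 288           # HBM3E capacity per MI350-series GPU
GPUS_PER_PLATFORM = 8      # full platform configuration
BYTES_PER_FP4_PARAM = 0.5  # 4 bits per parameter

params = 520e9  # 520 billion parameters
weights_gb = params * BYTES_PER_FP4_PARAM / 1e9
print(f"FP4 weights for 520B parameters: {weights_gb:.0f} GB "
      f"(fits in {GPU_HBM_GB} GB of HBM3E)")

platform_tb = GPU_HBM_GB * GPUS_PER_PLATFORM / 1e3
print(f"Eight-GPU platform capacity: {platform_tb:.3f} TB")  # ~2.3 TB, as quoted
```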

Compal Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025

As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics (Compal; Stock Ticker: 2324.TW), a global leader in IT and computing solutions, unveiled its latest high-performance server platform, the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe. It features the AMD Instinct MI355X GPU architecture and offers both single-phase and two-phase liquid cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, drawing significant attention across the industry.

With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A emerges as a robust solution, combining high-density GPU integration and flexible liquid cooling options, positioning itself as an ideal platform for next-generation AI training and inference workloads.

AMD Instinct MI355X Draws up to 1,400 Watts in OAM Form Factor

Tomorrow evening, AMD will host its "Advancing AI" livestream to introduce the Instinct MI350 series, a new line of GPU accelerators designed for large-scale AI training and inference. First shown in prototype form at ISC 2025 in Hamburg just a day ago, each MI350 card features 288 GB of HBM3E memory, delivering up to 8 TB/s of sustained bandwidth. Customers can choose between the single-card MI350X and the higher-clocked MI355X, or opt for a full eight-GPU platform that aggregates over 2.3 TB of memory. Both chips are built on the CDNA 4 architecture, which now supports four different precision formats: FP16, FP8, FP6, and FP4. The addition of FP6 and FP4 is designed to boost throughput in modern AI workloads, where next-generation models with tens of trillions of parameters are expected to be trained in FP6 and FP4.

In half-precision tests, the MI350X achieves 4.6 PetaFLOPS on its own and 36.8 PetaFLOPS in an eight-GPU platform, while the MI355X surpasses those numbers, reaching 5.03 PetaFLOPS and just over 40 PetaFLOPS. AMD is also aiming to improve energy efficiency by a factor of thirty compared with its previous generation. The MI350X card runs within a 1,000 Watt power envelope and relies on air cooling, whereas the MI355X steps up to 1,400 Watts and is intended for direct-liquid cooling setups. That 400 Watt increase puts it on par with NVIDIA's upcoming GB300 "Grace Blackwell Ultra" superchip, which is also a 1,400 W design. With memory capacity, raw compute, and power efficiency all pushed to new heights, the question remains whether real-world benchmarks will match these ambitious specifications. AMD now lacks only platform scaling beyond eight GPUs, which the Instinct MI400 series will address.
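
The platform numbers are straight multiples of the single-card figures, which also makes a rough compute-per-watt comparison possible. A minimal sketch, using only the PFLOPS and power figures quoted above:

```python
# Eight-GPU aggregates and rough half-precision compute-per-watt,
# derived only from the single-card figures quoted above.

cards = {
    # name: (half-precision PFLOPS per card, board power in watts)
    "MI350X": (4.60, 1000),
    "MI355X": (5.03, 1400),
}

for name, (pflops, watts) in cards.items():
    platform_pflops = pflops * 8
    gflops_per_watt = pflops * 1e6 / watts
    print(f"{name}: {platform_pflops:.1f} PFLOPS per 8-GPU platform, "
          f"~{gflops_per_watt:.0f} GFLOPS/W per card")
```

Notably, the derived peak figures suggest the liquid-cooled MI355X trades some compute-per-watt for higher absolute throughput per card.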

Micron Ships HBM4 Samples: 12-Hi 36 GB Modules with 2 TB/s Bandwidth

Micron has reached a significant milestone with its HBM4 architecture, which stacks 12 DRAM dies (12-Hi) to provide 36 GB of capacity per package. According to company representatives, initial engineering samples are scheduled to ship to key partners in the coming weeks, paving the way for full production in early 2026. The HBM4 design relies on Micron's established 1β ("one-beta") DRAM process node, in production since 2022, while the company prepares to introduce the EUV-enabled 1γ ("one-gamma") node for DDR5 later this year. By doubling the interface width from 1,024 to 2,048 bits per stack, each HBM4 package can achieve a sustained memory bandwidth of 2 TB/s, along with a claimed 20% improvement in power efficiency over the existing HBM3E standard.
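
The bandwidth figure follows directly from the interface width: peak per-stack bandwidth is bus width times per-pin data rate divided by eight. A minimal sketch, assuming representative pin speeds (roughly 9.6 Gb/s for HBM3E and 8 Gb/s for HBM4; these rates are assumptions, not Micron's published figures):

```python
# Peak per-stack bandwidth = bus width (bits) x pin data rate (Gb/s) / 8.
# The pin rates below are representative assumptions, not article figures.

def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

print(f"HBM3E (1,024-bit, ~9.6 Gb/s/pin): {stack_bandwidth_gb_s(1024, 9.6):.0f} GB/s")
print(f"HBM4  (2,048-bit, ~8.0 Gb/s/pin): {stack_bandwidth_gb_s(2048, 8.0):.0f} GB/s")
# Doubling the interface width lets HBM4 reach ~2 TB/s even at a lower pin rate.
```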

NVIDIA and AMD are expected to be early adopters of Micron's HBM4. NVIDIA plans to integrate these memory modules into its upcoming Vera Rubin AI accelerators in the second half of 2026. AMD is anticipated to incorporate HBM4 into its next-generation Instinct MI400 series, with further information to be revealed at the company's Advancing AI 2025 conference. The increased capacity and bandwidth of HBM4 will address growing demands in generative AI, high-performance computing, and other data-intensive applications. Larger stack heights and expanded interface widths enable more efficient data movement, a critical factor in multi-chip configurations and memory-coherent interconnects. As Micron begins mass production of HBM4, the major obstacles to overcome will be thermal performance and real-world benchmarks, which will determine how effectively this new memory standard can support the most demanding AI workloads.

SK hynix Presents Groundbreaking AI & Server Memory Solutions at DTW 2025

SK hynix presented its leading memory solutions optimized for AI servers and AI PCs at Dell Technologies World (DTW) 2025 in Las Vegas from May 19-22. Hosted by Dell Technologies, DTW is an annual conference that introduces future technology trends. In line with DTW 2025's theme of "Accelerate from Ideas to Innovation," a wide range of products and technologies aimed at driving AI innovation was showcased at the event.

Based on its close partnership with Dell, SK hynix has participated in the event every year to reinforce its leadership in AI. This year, the company organized its booth into six sections: HBM, CMM (CXL Memory Module)-DDR5, server DRAM, PC DRAM, eSSDs, and cSSDs. Featuring products with strong competitiveness across all areas of DRAM and NAND flash for the AI server, storage and PC markets, the booth garnered strong attention from visitors.

ASUS Announces ESC A8A-E12U Support for AMD Instinct MI350 Series GPUs

ASUS today announced that its flagship high-density AI server, ESC A8A-E12U, now supports the latest AMD Instinct MI350 series GPUs. This enhancement empowers enterprises, research institutions, and cloud providers to accelerate their AI and HPC workloads with next-generation performance and efficiency—while preserving compatibility with existing infrastructure.

Built on the 4th Gen AMD CDNA architecture, AMD Instinct MI350 series GPUs deliver powerful new capabilities, including 288 GB of HBM3E memory and up to 8 TB/s of bandwidth—enabling faster, more energy-efficient execution of large AI models and complex simulations. With expanded support for low-precision compute formats such as FP4 and FP6, the Instinct MI350 series significantly accelerates generative AI, inference, and machine-learning workloads. Importantly, Instinct MI350 series GPUs maintain drop-in compatibility with existing AMD Instinct MI300 series-based systems, such as those running Instinct MI325X—offering customers a cost-effective and seamless upgrade path. These innovations reduce server resource requirements and simplify scaling and workload management, making Instinct MI350 series GPUs an ideal choice for efficient, large-scale AI deployments.

China Rumored to Acquire 12-High HBM3E Bonders Through Korean Companies

China is pushing forward with its High Bandwidth Memory (HBM) development as part of its plan to become self-sufficient in the semiconductor industry. JCET Group, China's top semiconductor packaging firm, has bought advanced thermo-compression (TC) bonders of the kind usually used for 12-high stacks of HBM3E chips, according to Money Today Korea (MTN). This state-of-the-art equipment comes from Korean companies, where export rules are less strict, letting China jump from its current HBM2 technology to more advanced memory solutions. Even if China does not start producing HBM3E chips soon, the equipment is useful for boosting manufacturing yields of lower-spec HBM products.

China's desire to make HBM chips at home is a reaction to U.S. export rules and tariffs meant to hold back its chip-making capabilities. These steps haven't slowed progress; instead, they have made China more determined to stand on its own in chip-making. The AI chip design market hit $18.4 billion last year and is set to grow 28% each year until 2032. The plan aims to supply Chinese-made HBM chips to big tech firms like Huawei, Tencent, and DeepSeek, helping China get around U.S. export limits while moving its chip industry up the value chain. Choi Jae-hyeok, Professor of Electrical and Information Engineering at Seoul National University, says: "In China's case, it is government-led... In the case of DDR, they are making up to DDR4 and DDR5. China has always wanted to move from low-value-added products to high-value-added products. The next direction is HBM..."

Samsung Reportedly Courting HBM4 Supply Interest From Big Players

The vast majority of High Bandwidth Memory (HBM) news stories so far in 2025 have involved or alluded to new-generation SK hynix and Micron products. As mentioned in Samsung Electronics' recently published Q1 financial papers, company engineers are still working on "upcoming enhanced HBM3E products." Late last month, its neighbor and main rival publicly showcased a groundbreaking HBM4 memory solution, indicating a market-leading development position. Samsung has officially roadmapped a "sixth-generation" HBM4 technology, but its immediate focus seems to be a targeted sales expansion of incoming "enhanced HBM3E 12H" products. Previously, the firm's Memory Business lost HBM3 ground within AI GPU/accelerator market segments to key competitors.

Industry insiders believe that company leadership will attempt to regain lost market share after 2025. As reported by South Korean news outlets, Kim Jae-joon (VP of Samsung's memory department) stated during a recent earnings call with analysts that his team is "already collaborating with multiple customers on custom versions based on both HBM4 and the enhanced HBM4E." The initiation of commercial shipments is anticipated at some point in 2026, hinging on mass production starting by the second half of this year. Kim told listeners that development is "running on schedule." A Hankyung article alleges that Samsung HBM4 evaluation samples have been sent out to "NVIDIA, Broadcom, and Google." Wccftech posits a positive early outlook: "Samsung will use its own 4 nm process from the foundry division and utilize the 10 nm 6th-generation 1c DRAM, which is known as one of the highest-end in the market. On paper, (their) HBM4 solution will be on par with competing models (from SK hynix), but we will have to wait and see."

Samsung Electronics Announces First Quarter 2025 Results

Samsung Electronics today reported financial results for the first quarter ended March 31, 2025. The Company posted KRW 79.14 trillion in consolidated revenue, an all-time quarterly high, on the back of strong sales of flagship Galaxy S25 smartphones and high-value-added products. Operating profit increased to KRW 6.7 trillion despite headwinds for the DS Division, which experienced a decrease in quarterly revenue.

The Company allocated its highest-ever annual R&D expenditure for 2024, and in the first quarter of this year it increased R&D spending by 16% year-over-year to KRW 9 trillion. Despite growing macroeconomic uncertainties due to recent global trade tensions and slowing global economic growth, which make it difficult to predict future performance, the Company will continue to make various efforts to secure growth. Additionally, assuming that these uncertainties diminish, it expects its performance to improve in the second half of the year.

TSMC Outlines Roadmap for Wafer-Scale Packaging and Bigger AI Packages

At this year's Technology Symposium, TSMC unveiled an ambitious multi-year roadmap for its packaging technologies. TSMC's strategy splits into two main categories: Advanced Packaging and System-on-Wafer. Back in 2016, CoWoS-S debuted with four HBM stacks paired to N16 compute dies on a 1.5× reticle-limited interposer, an impressive feat at the time. Fast forward to 2025, and CoWoS-S now routinely supports eight HBM chips alongside N5 and N4 compute tiles within a 3.3× reticle budget. Its successor, CoWoS-R, increases interconnect bandwidth and brings N3-node compatibility without changing that reticle constraint. Looking toward 2027, TSMC will launch CoWoS-L. First up are large N3-node chiplets, followed by N2-node tiles, multiple I/O dies, and up to a dozen HBM3E or HBM4 stacks, all housed within a 5.5× reticle ceiling. It's hard to believe that eight HBM stacks once sounded ambitious; now they're just the starting point for next-gen AI accelerators such as AMD's Instinct MI450X and NVIDIA's Vera Rubin.
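
To put those reticle multiples in perspective, the sketch below converts them to approximate interposer areas, assuming the standard ~26 × 33 mm (~858 mm²) lithography reticle; that reticle size is our assumption, not a figure from TSMC's presentation.

```python
# Approximate interposer areas implied by TSMC's reticle multiples.
# Assumes a standard ~26 mm x 33 mm (~858 mm^2) reticle (an assumption,
# not a figure from the symposium).

RETICLE_MM2 = 26 * 33  # ~858 mm^2

milestones = [
    ("CoWoS-S, 2016", 1.5),
    ("CoWoS-S, 2025", 3.3),
    ("CoWoS-L, 2027", 5.5),
]

for name, multiple in milestones:
    print(f"{name}: {multiple}x reticle ~ {multiple * RETICLE_MM2:,.0f} mm^2")
```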

Integrated Fan-Out, or InFO, adds another dimension with flexible 3D assemblies. The original InFO bridge is already powering AMD's Instinct cards. Later this year, InFO-POP (package-on-package) and InFO-2.5D arrive, promising even denser chip stacking and unlocking new scaling potential on a single package, moving beyond the familiar 2D and 2.5D packaging into the third dimension. On the wafer scale, TSMC's System-on-Wafer lineup, SoW-P and SoW-X, has grown from specialized AI engines into a comprehensive roadmap mirroring logic-node progress. This year marks the first SoIC stacks from N3 to N4, with each tile up to 830 mm² and no hard limit on top-die size. That trajectory points to massive, ultra-dense packages, which is exactly what HPC and AI data centers will demand in the coming years.

SK Hynix Announces 1Q25 Financial Results

SK hynix Inc. (or "the company", www.skhynix.com) announced today that it recorded 17.6391 trillion won in revenues, 7.4405 trillion won in operating profit (with an operating margin of 42%), and 8.1082 trillion won in net profit (with a net margin of 46%) in the first quarter of this year. Both revenues and operating profit are the second-highest on record, following last quarter, when the company achieved its best quarterly results. Operating margin improved by 1 percentage point from the previous quarter to 42%, marking the eighth consecutive quarterly increase.

SK hynix explained that the memory market ramped up faster than expected due to competition to develop AI systems and inventory accumulation demand. The company responded with expanded sales of high-value-added products such as 12-layer HBM3E and DDR5. The company believes the strong financial results despite low seasonality reflect its outstanding competitiveness compared to the past. It plans to focus on strengthening business fundamentals to achieve distinguished financial outcomes, even in times of market correction.

Micron Announces Memory Price Increases for 2025-2026 Amid Supply Constraints

In a letter to customers, Micron has announced upcoming memory price increases extending through 2025 and 2026, citing persistent supply constraints coupled with accelerating demand across its product portfolio. The manufacturer points to significant demand growth in DRAM, NAND flash, and high-bandwidth memory (HBM) segments as key drivers behind the pricing strategy. The memory market is rebounding from a prolonged oversupply cycle that previously depressed revenues industry-wide. Strategic production capacity reductions implemented by major suppliers have contributed to price stabilization and subsequent increases over the past twelve months. This pricing trajectory is expected to continue as data center operators, AI deployments, and consumer electronics manufacturers compete for limited memory allocation.

In communications to channel partners, Micron emphasized AI and HPC requirements as critical factors necessitating the price adjustments. The company has requested detailed forecast submissions from partners to optimize production planning and supply chain stability during the constrained market period. With its pricing announcement, Micron disclosed a $7 billion investment in a Singapore-based HBM assembly facility. The plant will begin operations in 2026 and will focus on HBM3E, HBM4, and HBM4E production—advanced memory technologies essential for next-generation AI accelerators and high-performance computing applications from NVIDIA, AMD, Intel, and other companies. The price increases could have cascading effects across the AI and GPU sector, potentially raising costs for products ranging from consumer gaming systems to enterprise data infrastructure. We are monitoring how these adjustments will impact hardware refresh cycles and technology adoption rates as manufacturers pass incremental costs to end customers.

Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

SK hynix Showcases Industry-Leading Memory Technology at GTC 2025

SK hynix Inc. announced today that it will participate in GTC 2025, a global AI conference taking place March 17-21 in San Jose, California, with a booth titled "Memory, Powering AI and Tomorrow". The company will present HBM and other memory products for AI data centers, along with on-device AI and automotive memory solutions essential for the AI era.

Among the industry-leading AI memory technologies to be displayed at the show are 12-high HBM3E and SOCAMM (Small Outline Compression Attached Memory Module), a new memory standard for AI servers.

Oracle Plans to Use 30,000 AMD Instinct MI355X GPUs for AI Cloud

AMD's Instinct MI355X accelerators for AI workloads are gaining traction, and Oracle just became one of the bigger customers. According to Oracle's latest financial results, the company noted that it had acquired 30,000 AMD Instinct MI355X accelerators. "In Q3, we signed a multi billion dollar contract with AMD to build a cluster of 30,000 of their latest MI355X GPUs," noted Larry Ellison, adding that "And all four of the leading cloud security companies, CrowdStrike, Cyber Reason, Newfold Digital and Palo Alto, they all decided to move to the Oracle Cloud. But perhaps most importantly, Oracle has developed a new product called the AI data platform that enables our huge install base of database customers to use the latest AI models from OpenAI, XAI and Meta to analyze all of the data they have stored in their millions of existing Oracle databases. By using Oracle version 23 AI's vector capabilities, customers can automatically put all of their existing data into the vector format that is understood by AI models. This allows those AI models to learn, understand and analyze every aspect of your company or government agency, instantly unlocking the value in your data while keeping your data private and secure."

AMD's Instinct MI355X accelerator introduces the CDNA 4 architecture on TSMC's N3 process node with a focus on AI workload acceleration. The chiplet-based GPU delivers 2.3 petaflops of FP16 compute and 4.6 petaflops of FP8 compute, marking a 77% performance increase over the MI300X series. The MI355X's key advancement comes through support for reduced-precision FP4 and FP6 numerical formats, enabling up to 9.2 petaflops of FP4 compute. Memory specifications include 288 GB of HBM3E across eight stacks, providing 8 TB/s of total bandwidth. Production timelines place the MI355X's market entry in the second half of 2025, continuing AMD's annual cadence for data center GPU launches. In the meantime, Oracle will likely prepare data center space for these GPUs so it can power them on as soon as AMD ships the accelerators.
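
Note how the quoted peak figures double each time the precision halves, a common pattern for tensor math units. The sketch below reproduces the MI355X numbers from the FP16 baseline and derives per-stack bandwidth (a derived figure, not one quoted in the article):

```python
# MI355X peak throughput doubles each time numeric precision halves.
# The FP16 baseline and memory figures are from the article.

FP16_PFLOPS = 2.3

for bits in (16, 8, 4):
    print(f"FP{bits}: {FP16_PFLOPS * 16 / bits:.1f} PFLOPS")

# 8 TB/s of total bandwidth across eight HBM3E stacks implies
# 1 TB/s per stack (derived, not quoted).
print(f"Per-stack bandwidth: {8 / 8:.0f} TB/s")
```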

SK hynix Announces 4Q24 Financial Results

SK hynix Inc. announced today that it recorded its best-ever yearly performance, with 66.1930 trillion won in revenues, 23.4673 trillion won in operating profit (with an operating margin of 35%), and 19.7969 trillion won in net profit (with a net margin of 30%). Yearly revenues marked an all-time high, exceeding the previous record set in 2022 by over 21 trillion won, and operating profit exceeded the record set in 2018 during the semiconductor super boom.

In particular, fourth quarter revenues rose 12% from the previous quarter to 19.7670 trillion won, operating profit rose 15% to 8.0828 trillion won (with an operating margin of 41%), and net profit came to 8.0065 trillion won (with a net margin of 41%). SK hynix emphasized that, with prolonged strong demand for AI memory, the company achieved all-time high results through world-leading HBM technology and profitability-oriented operation. HBM continued its high growth in the fourth quarter, accounting for over 40% of total DRAM revenue, and eSSD also showed a constant increase in sales. With profitability-oriented operation based on remarkable product competitiveness, the company established a stable financial condition that led to improved outcomes.

SK hynix Showcases AI-Driven Innovations for a Sustainable Tomorrow at CES 2025

SK hynix has returned to Las Vegas for Consumer Electronics Show (CES) 2025, showcasing its latest AI memory innovations reshaping the industry. Held from January 7-10, CES 2025 brings together the brightest minds and groundbreaking technologies from the world's leading tech companies. This year, the event's theme is "Dive In," inviting attendees to immerse themselves in the next wave of technological advancement. SK hynix is emphasizing how it is driving this wave through a display of leading AI memory technologies at the SK Group exhibit. Along with SK Telecom, SKC, and SK Enmove, the company is highlighting how the Group's AI infrastructure brings about true change under the theme "Innovative AI, Sustainable Tomorrow."

Groundbreaking Memory Tech Driving Change in the AI Era
Visitors enter SK Group's exhibit through the Innovation Gate, greeted by a video of dynamic wave-inspired visuals that symbolize the power of AI. The video shows the transformation of binary data into a wave that flows through the exhibition, highlighting how data and AI drive change across industries. Continuing deeper into the exhibit, attendees make their way into the AI Data Center area, the focal point of SK hynix's display. This area features the company's transformative memory products driving progress in the AI era. Among the cutting-edge AI memory technologies on display are SK hynix's HBM, server DRAM, eSSD, CXL, and PIM products.

SK hynix to Unveil Full Stack AI Memory Provider Vision at CES 2025

SK hynix Inc. announced today that it will showcase its innovative AI memory technologies at CES 2025, to be held in Las Vegas from January 7 to 10 (local time). A large number of C-level executives, including CEO Kwak Noh-Jung, CMO (Chief Marketing Officer) Justin Kim and Chief Development Officer (CDO) Ahn Hyun, will attend the event. "We will broadly introduce solutions optimized for on-device AI and next-generation AI memories, as well as representative AI memory products such as HBM and eSSD at this CES," said Justin Kim. "Through this, we will publicize our technological competitiveness to prepare for the future as a Full Stack AI Memory Provider."

SK hynix will also run a joint exhibition booth with SK Telecom, SKC and SK Enmove, under the theme "Innovative AI, Sustainable Tomorrow." The booth will showcase how SK Group's AI infrastructure and services are transforming the world, represented in waves of light. SK hynix, the world's first company to produce 12-layer fifth-generation HBM (HBM3E) and supply it to customers, will showcase samples of its 16-layer HBM3E, whose development was officially announced in November last year. This product uses the advanced MR-MUF process to achieve the industry's highest 16-layer configuration while controlling chip warpage and maximizing heat dissipation performance.

Samsung Hopes PIM Memory Technology Can Replace HBM in Next-Gen AI Applications

The 8th edition of the Samsung AI Forum was held on November 4th and 5th in Seoul, and among all the presentations and keynote speeches, one piece of information caught our attention. As reported by The Chosun Daily, Samsung is (again) turning its attention to Processing-in-Memory (PIM) technology, in what appears to be the company's latest attempt to keep up with its rival SK hynix in this area. In 2021, Samsung introduced the world's first HBM-PIM, with the chips showing impressive performance gains (nearly double) while reducing energy consumption by almost 50% on average. PIM technology adds the processing functions necessary for computational tasks directly to the memory, reducing data transfer between the CPU and memory.
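
To illustrate the data-movement argument, here is a deliberately simplified model (our own illustration, not a description of Samsung's HBM-PIM design): summing a large vector conventionally means streaming every byte across the memory bus to the CPU, whereas a PIM-style reduction computes next to the DRAM banks and returns only the result.

```python
# A deliberately simplified model of why PIM cuts bus traffic (illustrative
# only; not a description of Samsung's HBM-PIM implementation).

VECTOR_BYTES = 1 * 2**30  # a 1 GiB operand resident in memory

# Conventional path: the whole operand crosses the memory bus to the CPU.
conventional_traffic = VECTOR_BYTES

# PIM path: the reduction runs beside the DRAM banks; only an 8-byte
# (64-bit) result crosses the bus.
pim_traffic = 8

print(f"Conventional bus traffic: {conventional_traffic / 2**20:.0f} MiB")
print(f"PIM-style bus traffic:    {pim_traffic} bytes")
```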

Now, the company hopes that PIM memory chips could replace HBM in the future, based on the advantages this next-generation memory technology possesses, mainly for artificial intelligence (AI) applications. "AI is transforming our lives at an unprecedented rate, and the question of how to use AI more responsibly is becoming increasingly important," said Samsung Electronics CEO Han Jong-hee in his opening remarks. "Samsung Electronics is committed to fostering a more efficient and sustainable AI ecosystem." During the event, Samsung also highlighted its partnership with AMD, to which it reportedly supplies its fifth-generation HBM, HBM3E.

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the industry's first 48 GB 16-high HBM3E at the SK AI Summit in Seoul today. During the event, news also emerged about plans to accelerate the company's next-generation memory development. Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to speed up HBM4 delivery by six months, a request SK Group Chairman Chey Tae-won shared at the Summit. The company had earlier said it would deliver HBM4 chips to customers in the second half of 2025.

When ZDNet asked about this accelerated plan, SK hynix President Kwak Noh-Jung gave a careful answer, saying, "We will give it a try." A company spokesperson told Reuters that the new schedule would be quicker than first planned, but didn't share more details. In a video interview shown at the Summit, NVIDIA's Jensen Huang highlighted the strong partnership between the two companies, saying that working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains and stressing that NVIDIA will keep needing SK hynix's HBM technology for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year and will start sampling the 16-layer HBM3E early next year.

SK hynix Introduces World's First 16-High HBM3E at SK AI Summit 2024

SK hynix CEO Kwak Noh-Jung, during his keynote speech titled "A New Journey in Next-Generation AI Memory: Beyond Hardware to Daily Life" at the SK AI Summit in Seoul, announced the development of the industry's first 48 GB 16-high HBM3E, the world's highest layer count, following the company's 12-high product. Kwak also shared the company's vision to become a "Full Stack AI Memory Provider", a provider with a full lineup of AI memory products in both DRAM and NAND spaces, through close collaboration with interested parties.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

SK Hynix Reports Third Quarter 2024 Financial Results

SK hynix Inc. announced today that it recorded 17.5731 trillion won in revenues, 7.03 trillion won in operating profit (with an operating margin of 40%), and 5.7534 trillion won in net profit (with a net margin of 33%) in the third quarter of this year. Quarterly revenues marked an all-time high, exceeding the previous record of 16.4233 trillion won, set in the second quarter of this year, by more than 1 trillion won. Operating profit and net profit also far exceeded the respective records of 6.4724 trillion won and 4.6922 trillion won set in the third quarter of 2018 during the semiconductor super boom.

SK hynix emphasized that demand for AI memory remained strong, centered on data center customers, and the company marked its highest revenue since its foundation by expanding sales of premium products such as HBM and eSSD. In particular, HBM sales showed excellent growth, up more than 70% from the previous quarter and more than 330% from the same period last year.

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

ASRock Rack Unveils New Server Platforms Supporting AMD EPYC 9005 Series Processors and AMD Instinct MI325X Accelerators at AMD Advancing AI 2024

ASRock Rack Inc., a leading innovative server company, announced upgrades to its extensive lineup to support AMD EPYC 9005 Series processors. Among these updates is the introduction of the new 6U8M-TURIN2 GPU server. This advanced platform features AMD Instinct MI325X accelerators, specifically optimized for intensive enterprise AI applications, and will be showcased at AMD Advancing AI 2024.

ASRock Rack Introduces GPU Servers Powered by AMD EPYC 9005 Series Processors
AMD today revealed the 5th Generation AMD EPYC processors, offering a wide range of core counts (up to 192 cores), frequencies (up to 5 GHz), and expansive cache capacities. Select high-frequency processors, such as the AMD EPYC 9575F, are optimized for use as host CPUs in GPU-enabled systems. Additionally, the just-launched AMD Instinct MI325X accelerators feature substantial HBM3E memory and 6 TB/s of memory bandwidth, enabling quick access and efficient handling of large datasets and complex computations.