News Posts matching #RDIMM


SK hynix Presents Groundbreaking AI & Server Memory Solutions at DTW 2025

SK hynix presented its leading memory solutions optimized for AI servers and AI PCs at Dell Technologies World (DTW) 2025, held in Las Vegas from May 19-22. Hosted by Dell Technologies, DTW is an annual conference that introduces future technology trends. In line with DTW 2025's theme of "Accelerate from Ideas to Innovation," a wide range of products and technologies aimed at driving AI innovation was showcased at the event.

Based on its close partnership with Dell, SK hynix has participated in the event every year to reinforce its leadership in AI. This year, the company organized its booth into six sections: HBM, CMM (CXL Memory Module)-DDR5, server DRAM, PC DRAM, eSSDs, and cSSDs. Featuring products with strong competitiveness across all areas of DRAM and NAND flash for the AI server, storage and PC markets, the booth garnered strong attention from visitors.

New Intel Xeon 6 CPUs to Maximize GPU-Accelerated AI Performance

Intel today unveiled three new additions to its Intel Xeon 6 series of central processing units (CPUs), designed specifically to manage the most advanced graphics processing unit (GPU)-powered AI systems. These new processors with Performance-cores (P-cores) include Intel's innovative Priority Core Turbo (PCT) technology and Intel Speed Select Technology - Turbo Frequency (Intel SST-TF), delivering customizable CPU core frequencies to boost GPU performance across demanding AI workloads. The new Xeon 6 processors are available today, with one of the three currently serving as the host CPU for the NVIDIA DGX B300, the company's latest generation of AI-accelerated systems. The NVIDIA DGX B300 integrates the Intel Xeon 6776P processor, which plays a vital role in managing, orchestrating and supporting the AI-accelerated system. With robust memory capacity and bandwidth, the Xeon 6776P supports the growing needs of AI models and datasets.

"These new Xeon SKUs demonstrate the unmatched performance of Intel Xeon 6, making it the ideal CPU for next-gen GPU-accelerated AI systems," said Karin Eibschitz Segal, corporate vice president and interim general manager of the Data Center Group at Intel. "We're thrilled to deepen our collaboration with NVIDIA to deliver one of the industry's highest-performing AI systems, helping accelerate AI adoption across industries."

MiTAC Computing Unveils Full Server Lineup for Data Centers and Enterprises with Intel Xeon 6 at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation, has launched its full suite of next-generation servers for data centers and enterprises at COMPUTEX 2025 (Booth M1110). Powered by Intel Xeon 6 processors, including those with Performance-cores (P-cores), MiTAC's new platforms are purpose-built for AI, HPC, cloud, and enterprise applications.

"For over five decades, MiTAC and Intel have built a close, collaborative relationship that continues to push innovation forward. Our latest server lineup reflects this legacy—combining Intel's cutting-edge processing power with MiTAC Computing's deep expertise in system design to deliver scalable, high-efficiency solutions for modern data centers." - Rick Hwang, President of MiTAC Computing.

Rambus Delivers Industry-Leading Client Chipsets for Next-Generation AI PC Memory Modules

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced the availability of complete client chipsets for next-generation AI PC memory modules, featuring two new Power Management ICs (PMICs) for client computing. PMICs are critical to efficiently powering memory modules that provide breakthrough levels of performance for advanced computing applications. The two new industry-leading Rambus PMICs are the PMIC5200, for LPDDR5 CAMM2 (LPCAMM2) memory modules, and the PMIC5120, which supports DDR5 CSODIMMs and CUDIMMs.

These PMICs, alongside the Client Clock Driver (CKD) and Serial Presence Detect Hub (SPD Hub), comprise a complete chipset offering to enable memory modules for AI PC notebooks, desktops and workstations. Further, with the addition of these new PMICs, Rambus now offers complete memory interface chipsets for all JEDEC standard DDR5 and LPDDR5 memory modules for both servers and clients.

QNAP Unveils TDS-h2489FU R2 24-Bay Dual-CPU U.2 NVMe All-Flash NAS

QNAP Systems, Inc., a leading computing and storage solutions innovator, today announced the launch of the next-generation flagship 24-bay U.2 NVMe all-flash rackmount NAS, the TDS-h2489FU R2. The TDS-h2489FU R2 offers powerful and stable computing performance, robust networking, and scalable storage capability to meet enterprise demands for high workloads, multitasking, and low latency in applications such as virtualization, high-performance computing (HPC), data centers, AI/ML computing, AI Big Data storage, and 3D rendering.

"As enterprises face the rapid growth of data and increasingly demanding workloads, they need more than just performance. They require a stable platform that supports mission-critical operations," said Alex Shih, Product Manager of QNAP, adding "The TDS-h2489FU R2 is purpose-built for high-load, multitasking applications, empowering IT teams to handle virtualization, AI model training, or post-production editing with ease. We aim to help enterprises maximize the benefits of all-flash storage in data centers and high-efficiency collaborative environments, accelerating digital transformation and innovation."

V-Color Unveils DDR5 Overclockable R-DIMM RGB Memory Modules with Speeds Up to 8000 MT/s

V-Color Technology Inc., a leading innovator in high-performance memory solutions, is proud to announce the industry's first DDR5 OC RDIMM RGB modules. Engineered to redefine workstation and professional computing performance, these modules will come in capacities ranging from 16 GB to 64 GB per module, with kits of up to 512 GB in 64 GB x 8 configurations, and in speeds reaching 8000 MT/s. In addition, a new ultra-low-timing CL26 RDIMM now enters the line-up, fully optimized for AMD Ryzen Threadripper and Intel Xeon processor platforms. Protected under Patent No. M667728, v-color's DDR5 OC RDIMM RGB modules feature an advanced design with RGB LEDs, combining vibrant visual aesthetics with outstanding performance, bandwidth, and reliability. Designed specifically for AI development, professional multitasking, data analytics, and demanding creative workloads, these modules deliver unmatched speed, reliability, and efficiency.

Elevating Memory Performance
Crafted with premium original top-tier ICs and On-Die ECC technology, v-color's DDR5 OC RDIMM RGB modules deliver exceptional data integrity and stability for demanding professional workloads. Featuring an advanced memory architecture, these modules push the boundaries of performance, reaching speeds of up to 8000 MT/s while also offering ultra-low-timing configurations such as 6000 MT/s CL28. Designed for AI development, real-time analytics, and complex 3D rendering, they ensure lightning-fast responsiveness in the most intensive computing environments.
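
For readers who want to translate these speed and timing figures into absolute latency, the sketch below applies the standard DDR conversion (CAS clocks multiplied by the clock period, where the DDR5 clock period in nanoseconds is 2000 divided by the data rate in MT/s). The CL38 value paired with the 8000 MT/s kit is an assumption purely for illustration, as v-color does not state that timing here.

```python
# Rough CAS-latency comparison for the DDR5 speed/timing combinations mentioned
# above. Absolute latency (ns) = CAS clocks * clock period, and the DDR5 clock
# period in ns is 2000 / data rate (MT/s) because two transfers occur per clock.

def cas_latency_ns(data_rate_mts: int, cas_clocks: int) -> float:
    return cas_clocks * 2000 / data_rate_mts

# The 6000 MT/s pairings are quoted in the article; the 8000 MT/s CAS value is
# not given, so a typical CL38 is assumed purely for illustration.
configs = [
    ("DDR5-6000 CL28", 6000, 28),
    ("DDR5-6000 CL26", 6000, 26),
    ("DDR5-8000 CL38 (assumed)", 8000, 38),
]

for name, rate, cl in configs:
    print(f"{name}: ~{cas_latency_ns(rate, cl):.2f} ns first-word latency")
```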

SK hynix Showcases HBM4 to Highlight AI Memory Leadership at TSMC 2025 Technology Symposium

SK hynix showcased groundbreaking memory solutions including HBM4 at the TSMC 2025 North America Technology Symposium held in Santa Clara, California on April 23. The TSMC North America Technology Symposium is an annual event in which TSMC shares its latest technologies and products with global partners. This year, SK hynix participated under the slogan "Memory, Powering AI and Tomorrow," highlighting its technological leadership in AI memory through exhibition zones including HBM Solutions and AI/Data Center Solutions.

In the HBM Solutions section, SK hynix presented samples of its 12-layer HBM4 and 16-layer HBM3E products. The 12-layer HBM4 is a next-generation HBM capable of processing over 2 terabytes (TB) of data per second. In March, the company announced that it had become the first in the world to supply HBM4 samples to major customers, and it plans to complete preparations for mass production within the second half of 2025. The B100, NVIDIA's latest Blackwell GPU equipped with the 8-layer HBM3E, was also exhibited in the section along with 3D models of key HBM technologies such as TSV and Advanced MR-MUF, drawing significant attention from visitors.
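
As a rough plausibility check on the "over 2 TB/s per stack" figure, the sketch below assumes the widely reported 2048-bit HBM4 per-stack interface and an illustrative per-pin data rate of 8 Gbps; neither value is an SK hynix specification from this announcement.

```python
# Back-of-the-envelope check on the >2 TB/s per-stack HBM4 figure, assuming
# the widely reported 2048-bit per-stack interface; the per-pin data rate used
# here is an illustrative assumption, not an SK hynix specification.

interface_width_bits = 2048       # assumed HBM4 per-stack bus width
pin_speed_gbps = 8.0              # assumed per-pin data rate in Gbit/s

bandwidth_gbs = interface_width_bits * pin_speed_gbps / 8  # GB/s per stack
print(f"Per-stack bandwidth: {bandwidth_gbs:.0f} GB/s (~{bandwidth_gbs / 1000:.2f} TB/s)")
```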

Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

MiTAC Computing Showcases Cutting-Edge AI and HPC Servers at Supercomputing Asia 2025

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a global leader in server design and manufacturing, will showcase its latest AI and HPC innovations at Supercomputing Asia 2025, taking place from March 11 at Booth #B10. The event highlights MiTAC's commitment to delivering cutting-edge technology with the introduction of the G4520G6 AI server and the TN85-B8261 HPC server—both engineered to meet the growing demands of artificial intelligence, machine learning, and high-performance computing (HPC) applications.

G4520G6 AI Server: Performance, Scalability, and Efficiency Redefined
The G4520G6 AI server redefines computing performance with an advanced architecture tailored for intensive workloads. Key features include:
  • Exceptional Compute Power - Supports dual Intel Xeon 6 Processors with TDP up to 350 W, delivering high-performance multicore processing for AI-driven applications.
  • Enhanced Memory Performance - Equipped with 32 DDR5 DIMM slots (16 per CPU) and 8 memory channels, supporting up to 8,192 GB DDR5 RDIMM/3DS RDIMM at 6400 MT/s for superior memory bandwidth; a quick capacity and bandwidth check follows below.
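
The memory figures in the list above can be sanity-checked with some simple arithmetic, shown in the sketch below. It reads the listed 8 memory channels as a per-CPU figure (consistent with 16 DIMM slots per CPU at two DIMMs per channel) and treats the results as derived estimates rather than MiTAC-published numbers; real-world speed with two DIMMs per channel may be lower than the rated 6400 MT/s.

```python
# Quick sanity check of the memory figures quoted for the G4520G6: the 8,192 GB
# ceiling implies 256 GB 3DS RDIMMs in every slot, and peak theoretical
# bandwidth follows from 8 channels per CPU x 6400 MT/s x 8 bytes per transfer
# (64-bit DDR5 channel). These are simple derivations, not MiTAC-published numbers.

dimm_slots = 32
max_module_gb = 8192 / dimm_slots           # GB per module needed to hit 8,192 GB
print(f"Implied module density: {max_module_gb:.0f} GB per RDIMM")

channels_per_cpu = 8                         # assumed per-CPU reading of "8 memory channels"
data_rate_mts = 6400
bytes_per_transfer = 8                       # 64-bit DDR5 channel, ECC excluded
bw_per_cpu = channels_per_cpu * data_rate_mts * bytes_per_transfer / 1000  # GB/s
print(f"Peak bandwidth per CPU: {bw_per_cpu:.1f} GB/s; per system: {2 * bw_per_cpu:.1f} GB/s")
```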

Avalue Technology Unveils HPM-GNRDE High-Performance Server Motherboard

Avalue Technology introduces the HPM-GNRDE high-performance server motherboard, powered by the latest Intel Xeon 6 Processors (P-Core) 6500P & 6700P.

Designed to deliver quality computing performance, ultra-fast memory bandwidth, and advanced PCIe 5.0 expansion, the HPM-GNRDE is the ideal solution for AI workloads, high-performance computing (HPC), Cloud data centers, and enterprise applications. The HPM-GNRDE will make its debut at embedded world 2025, showcasing Avalue's innovation in high-performance computing.

ASUS Unveils All-New Intel Xeon 6 Server Lineup

ASUS today announced an all-new series of servers powered by the latest Intel Xeon 6 processors, including the Xeon 6900-series, 6500P/6700P-series and 6300-series processors. These powerhouse processors deliver exceptional performance, efficiency and scalability, featuring up to 128 Performance-cores (P-cores) or 288 Efficient-cores (E-cores) per socket, along with native support for PCI Express 5.0 (PCIe 5.0) and DDR5 memory speeds of 6400 MT/s. The latest ASUS server solutions also incorporate an updated BMC module based on the ASPEED 2600 chipset, providing improved manageability, security and compatibility with a wide range of remote management software, and coincide with the unveiling of the latest Intel Xeon 6 processors.

Redefining efficiency and scalability
Intel Xeon 6 processors are engineered to meet the needs of modern data centers, AI-driven workloads and enterprise computing. Offering a choice between P-core and E-core architectures, these processors provide flexibility for businesses to optimize performance and energy efficiency based on specific workloads.

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size to AI-enabled powerhouses. The solutions include next generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions that seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive - from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs with Intel Xeon 6 P-cores and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

Advantech Unveils New AI, Industrial and Network Edge Servers Powered by Intel Xeon 6 Processors

Advantech, a global leader in industrial and embedded computing, today announced the launch of seven new server platforms built on Intel Xeon 6 processors, optimized for industrial, transportation and communications applications. Designed to meet the increasing demands of complex AI storage and networking workloads, these innovative edge servers and network appliances provide superior performance, reliability, and scalability for system integrator, solution provider, and service provider customers.

Intel Xeon 6 - Exceptional Performance for the Widest Range of Workloads
Intel Xeon 6 processors feature advanced performance and efficiency cores, delivering up to 86 cores per CPU, DDR5 memory support with speeds up to 6400 MT/s, and PCIe Gen 5 lanes for high-speed connectivity. Designed to optimize both compute-intensive and scale-out workloads, these processors ensure seamless integration across a wide array of applications.

AMD to Build Next-Gen I/O Dies on Samsung 4nm, Not TSMC N4P

Back in January, we covered a report about AMD designing its next-generation "Zen 6" CCDs on a 3 nm-class node by TSMC, and developing a new line of server and client I/O dies (cIOD and sIOD). The I/O die is a crucial piece of silicon that contains all the uncore components of the processor, including the memory controllers, the PCIe root complex, and Infinity Fabric interconnects to the CCDs and multi-socket connections. Back then it was reported that these new-generation I/O dies were being designed on the 4 nm silicon fabrication process, which was interpreted as being AMD's favorite 4 nm-class node, the TSMC N4P, on which the company builds everything from its current "Strix Point" mobile processors to the "Zen 5" CCDs. It turns out that AMD has other plans, and is exploring a 4 nm-class node by Samsung.

This node is very likely the Samsung 4LPP, also known as the SF4, which has been in mass production since 2022. The table below shows how the SF4 compares with TSMC N4P and Intel 4, striking a balance between the two. We have also added values for the TSMC N5 node from which the N4P is derived, and you can see that the SF4 offers comparable transistor density to the N5, and a significant improvement in transistor density over the TSMC N6, which AMD uses for its current generation of sIOD and cIOD. The new 4 nm node will allow AMD to reduce the TDP of the I/O die and implement a new power management solution. More importantly, the need for a new I/O die is driven by the need for updated memory controllers that support higher DDR5 speeds and are compatible with new kinds of DIMMs, such as CUDIMMs, RDIMMs with RCDs, etc.

Montage Technology Delivers Gen2 MRCD & MDB Samples for DDR5 MRDIMM

Montage Technology today announced that it has successfully sampled its Gen 2 Multiplexed Rank Registering Clock Driver (MRCD) and Multiplexed Rank Data Buffer (MDB) chipset to leading global memory manufacturers. Designed for DDR5 Multiplexed Rank DIMM (MRDIMM), this new chipset supports data rates up to 12800 MT/s, delivering exceptional memory performance for next-generation computing platforms.
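
To put the 12,800 MT/s figure in context, the sketch below estimates peak per-DIMM bandwidth from the standard 64-bit DDR5 data bus (two 32-bit subchannels, ECC excluded); this is a derived estimate, not a Montage specification.

```python
# Illustrative peak-bandwidth estimate for a DDR5 MRDIMM running at the
# 12,800 MT/s rate supported by the Gen 2 MRCD/MDB chipset. A standard DDR5
# DIMM presents a 64-bit data bus, so peak throughput is data rate x 8 bytes.

data_rate_mts = 12_800
bytes_per_transfer = 8

peak_gbs = data_rate_mts * bytes_per_transfer / 1000
print(f"Peak per-DIMM bandwidth at {data_rate_mts} MT/s: {peak_gbs:.1f} GB/s")

# Compare against a conventional 6400 MT/s RDIMM for reference:
print(f"Versus 6400 MT/s RDIMM: {6400 * bytes_per_transfer / 1000:.1f} GB/s (2x)")
```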

The release comes at a crucial time, as AI and big data analytics drive increasing demands for memory bandwidth in data centers. MRDIMM technology has emerged as a key solution to address this challenge, particularly as server processors continue to increase in core count.

Supermicro Begins Volume Shipments of Max-Performance Servers Optimized for AI, HPC, Virtualization, and Edge Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is commencing shipments of max-performance servers featuring Intel Xeon 6900 series processors with P-cores. The new systems feature a range of new and upgraded technologies with new architectures optimized for the most demanding high-performance workloads including large-scale AI, cluster-scale HPC, and environments where a maximum number of GPUs are needed, such as collaborative design and media distribution.

"The systems now shipping in volume promise to unlock new capabilities and levels of performance for our customers around the world, featuring low latency, maximum I/O expansion providing high throughput with 256 performance cores per system, 12 memory channels per CPU with MRDIMM support, and high performance EDSFF storage options," said Charles Liang, president and CEO of Supermicro. "We are able to ship our complete range of servers with these new application-optimized technologies thanks to our Server Building Block Solutions design methodology. With our global capacity to ship solutions at any scale, and in-house developed liquid cooling solutions providing unrivaled cooling efficiency, Supermicro is leading the industry into a new era of maximum performance computing."

Micron at the 2025 CES: Scripting a Strong Comeback to the Client and PC-DIY Segments

Micron at the 2025 International CES showed us products that hint at the company planning a strong comeback to the client and PC-DIY market segments. The company's Crucial brand is already a high-volume player in the client segment, but the company never really approached the enthusiast segment. Products like the company's new T705 Pro and P510 NVMe SSDs, and DDR5 Pro Overclocking memory, seek to change this. We begin our tour with PC memory, and the DDR5 Pro OC CUDIMMs. Crucial has jumped onto the CKD bandwagon, introducing memory modules and kits that come with DDR5-6400 out of the box, but which are geared for manual overclocking to take advantage of the 1β DRAM chips underneath (hence the name).

The company also showed us their first DDR5 CSODIMM suitable for the next generation of notebooks with HX-segment processors. This module comes with a CKD and a DDR5-6400 JEDEC-standard SPD profile out of the box. Lastly, there's the Micron-branded LPCAMM2, which comes in speeds of up to LPDDR5X-8533, and is suitable for the next generation of ultraportables.

ADATA Memory at CES 2025: CUDIMMs, CSODIMMs, and RDIMMs with RCD

ADATA at the 2025 International CES brought several of its latest memory products. The technology dominating memory products this year is CKD, or client clock driver. But there's more, ADATA also introduced memory modules with RCD, or registered clock driver, or a clock driver for RDIMMs. We begin our tour with the XPG Lancer CUDIMM RGB series, the company's flagship PC overclocking memory product. The top-spec module shown here comes with speeds as high as DDR5-9733, a step above even the DDR5-9600 that most other brands brought. The module comes in densities of 16 GB and 24 GB; and speeds of DDR5-8400, DDR5-8800, DDR5-9200, DDR5-9600, besides the top DDR5-9733. When paired with a Core Ultra "Arrow Lake-S" processor in Gear 4 mode, these kits should easily cross 10,000 MT/s using manual overclocking.
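
For those curious what Gear 4 implies at such speeds, the sketch below works through the clock domains, on the common understanding that the gear setting on recent Intel platforms is the ratio of memory-controller clock to DRAM clock; treat it as an illustrative approximation rather than an ADATA or Intel specification.

```python
# Rough look at what "Gear 4" implies for clock domains at a 10,000 MT/s
# overclock. The gear setting is treated as the memory-controller-to-DRAM
# clock ratio (Gear 2 = 1:2, Gear 4 = 1:4); illustrative sketch only.

data_rate_mts = 10_000
dram_clock_mhz = data_rate_mts / 2          # DDR: two transfers per clock
gear = 4
controller_clock_mhz = dram_clock_mhz / gear

print(f"DRAM clock: {dram_clock_mhz:.0f} MHz")
print(f"Memory-controller clock in Gear {gear}: {controller_clock_mhz:.0f} MHz")
```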

Next up, the company showed us its AICore line of DDR5 RDIMMs for workstations and servers. The module packs an RCD, a registered clock driver, which is essentially a CKD for RDIMMs. It is a component that cleans up and amplifies the DDR5 physical-layer signal, letting the machine operate at higher memory frequencies. The AICore series comes in speeds of up to DDR5-8000, and densities of up to 16 GB per module. Other speed variants in the series include DDR5-6400 and DDR5-7200. The recommended platforms for these modules include Intel's Xeon W-3500/W-2500 series "Sapphire Rapids," and AMD Ryzen Threadripper 7000-series "Storm Peak."

Crucial Broadens Memory and Storage Portfolio at CES 2025

Micron Technology, Inc., today announced expansions across its Crucial consumer memory and storage portfolio, including unveiling the high-speed Crucial P510 SSD, and expanding density and form factor options across its existing DRAM portfolio to enable broader choice and flexibility for consumers. The P510 features read and write speeds of up to 11,000/9,550 megabytes per second (MB/s), bringing blazing fast Gen 5 performance to the masses.

"With the exciting memory and storage offerings we're debuting today, Crucial's portfolio is now broader and stronger than ever before," said Dinesh Bahal, corporate vice president and general manager of Micron's Commercial Products Group. "These updates - from making fast Gen 5 SSDs available to the mainstream to launching high-density memory options - illustrate our commitment to driving innovation, performance and value for every consumer from casual gamers to creatives and students to hardcore enthusiasts."

Netlist Wins $118 Million in Second Patent Infringement Trial Against Samsung

Netlist, Inc. today announced that it won a $118 million damages award against Samsung Electronics Co., LTD., Samsung Electronics America, Inc., and Samsung Semiconductor, Inc. (together "Samsung") in the United States District Court for the Eastern District of Texas. The award resulted from a jury trial which involved three Netlist patents: U.S. Patent Nos. 7,619,912, 11,093,417 and 10,268,608. The infringing products were all Samsung DDR4 RDIMMs and DDR4 LRDIMMs. Netlist filed the complaint against Samsung in August 2022.

The federal jury's unanimous verdict confirmed that all three Netlist patents had been infringed by Samsung, that none of the patents were invalid, that Samsung willfully infringed those patents, and that money damages were owed to Netlist for the infringement of all three patents.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding the system's computing capability with an additional accelerator like a GPU is common. However, expanding the system's memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach expands this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion.

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section—called the "memory forest"—houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, making it also a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows the resources to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and video walkthrough, check out ServeTheHome's post.
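
The arithmetic behind the 16 TB figure is straightforward and is worked through in the sketch below, using only the numbers quoted above: four sockets with 16 direct DIMMs each, 64 CXL-attached DIMMs behind Leo controllers handling up to four DIMMs apiece, and 128 GB RDIMMs throughout. It is a derivation from the article's figures, not Lenovo's spec sheet.

```python
# Arithmetic behind the 16 TB figure: four Xeon sockets with 16 directly
# attached DIMMs each, plus 64 CXL-attached DIMMs behind Astera Labs Leo
# controllers (up to four DDR5 DIMMs per controller), all populated with
# 128 GB RDIMMs.

sockets, dimms_per_socket, module_gb = 4, 16, 128
direct_dimms = sockets * dimms_per_socket            # 64 DIMMs
direct_tb = direct_dimms * module_gb / 1024          # 8 TB

cxl_dimms = 64
cxl_tb = cxl_dimms * module_gb / 1024                # another 8 TB
leo_controllers = cxl_dimms // 4                     # 16 controllers at 4 DIMMs each

print(f"Direct-attached: {direct_dimms} DIMMs = {direct_tb:.0f} TB")
print(f"CXL-attached:    {cxl_dimms} DIMMs = {cxl_tb:.0f} TB via {leo_controllers} Leo controllers")
print(f"Total:           {direct_dimms + cxl_dimms} DIMMs = {direct_tb + cxl_tb:.0f} TB")
```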

Innodisk Unveils DDR5 6400 64GB CUDIMM and CSODIMM Memory Modules

Innodisk, a leading global AI solution provider, announces its DDR5 6400 DRAM series, featuring the industry's largest 64 GB single-module capacity. This 6400 series is purpose-built for data-intensive applications in AI, telehealth, and edge computing, where high performance at the edge is crucial. Available in versatile form factors, including CUDIMM, CSODIMM, and RDIMM, the series delivers unmatched speed, stability, and capacity to meet the rigorous demands of modern edge AI and industrial applications.

The DDR5 6400 series delivers a data transfer rate of 6400 MT/s, offering a 14% boost in speed over previous generations and doubling the maximum capacity to 64 GB. These enhancements make it an optimal choice for applications like Large Language Models (LLMs), generative AI, autonomous vehicles, and mixed reality, which require high-speed, reliable data processing in real time.
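
The quoted 14% speed boost lines up with a move from DDR5-5600 to DDR5-6400, as the quick calculation below shows; the 5600 MT/s and 32 GB baselines are assumptions about the previous generation rather than figures Innodisk states explicitly.

```python
# Checking the quoted "14% boost" against an assumed DDR5-5600 baseline and
# the "doubled" capacity against an assumed previous 32 GB per-module maximum;
# both baselines are assumptions for illustration, not Innodisk-stated figures.

new_rate, assumed_previous_rate = 6400, 5600
speedup = (new_rate - assumed_previous_rate) / assumed_previous_rate * 100
print(f"6400 vs 5600 MT/s: +{speedup:.1f}%")          # roughly 14%

print(f"Capacity: {64 / 32:.0f}x the assumed previous 32 GB per-module maximum")
```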

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Data Center Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new rage of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.