News Posts matching #Server


AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

AMD at its 2025 Advancing AI event name-dropped the two next generations of EPYC server processors that will succeed the current EPYC "Turin," powered by the Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely bring a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA extensions, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (CPU complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways to achieve this: increasing memory clock speeds, or giving the processor a 16-channel DDR5 memory interface, up from the current 12 channels. The company could also add support for multiplexed-rank DIMM standards, such as MR-DIMMs and MCR-DIMMs. All said and done, AMD is claiming a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part of each generation.
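As a sanity check on the 1.6 TB/s claim, peak DRAM bandwidth is simply channels × transfer rate × bytes per transfer. The sketch below assumes a 16-channel interface running at an MR-DIMM-class 12,800 MT/s; neither figure is confirmed by AMD.

```python
def peak_bandwidth_gbs(channels: int, mts: int, bus_bytes: int = 8) -> float:
    """Peak DRAM bandwidth in GB/s: channels x mega-transfers/s x bytes per transfer."""
    return channels * mts * bus_bytes / 1000

# Current EPYC "Turin": 12 channels of DDR5-6400
print(peak_bandwidth_gbs(12, 6400))    # 614.4 GB/s
# Hypothetical "Venice" configuration: 16 channels at 12,800 MT/s
print(peak_bandwidth_gbs(16, 12800))   # 1638.4 GB/s, i.e. ~1.6 TB/s
```

Notably, a 12,800 MT/s rate across 16 channels lands exactly on AMD's headline figure, which is why this combination seems plausible.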

Supermicro Delivers Liquid-Cooled and Air-Cooled AI Solutions with AMD Instinct MI350 Series GPUs and Platforms

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that both liquid-cooled and air-cooled GPU solutions will be available with the new AMD Instinct MI350 series GPUs, optimized for unparalleled performance, maximum scalability, and efficiency. The Supermicro H14 generation of GPU-optimized solutions, featuring dual AMD EPYC 9005 CPUs along with the AMD Instinct MI350 series GPUs, is designed for organizations seeking maximum performance at scale while reducing the total cost of ownership of their AI-driven data centers.

"Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our Data Center Building Block Solutions enable us to quickly deploy end-to-end data center solutions to market, bringing the latest technologies for the most demanding applications. The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers."

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations, and technology and industry leaders, to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and scientific discovery throughout the regions.

Pegatron Unveils AI-Optimized Server Innovations at GTC Paris 2025

PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is showcasing its latest AI server solutions at GTC Paris 2025. Built on NVIDIA Blackwell architecture, PEGATRON's cutting-edge systems are tailored for AI training, reasoning, and enterprise-scale deployment.

NVIDIA GB300 NVL72
At the forefront is the RA4802-72N2, built on the NVIDIA GB300 NVL72 rack system, featuring 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Designed for AI factories, it boosts output by up to 50X. PEGATRON's in-house developed Coolant Distribution Unit (CDU) delivers 310 kW of cooling capacity with redundant hot-swappable pumps, ensuring performance and reliability for mission-critical workloads.

Supermicro Unveils Industry's Broadest Enterprise AI Solution Portfolio for NVIDIA Blackwell Architecture

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing an expansion of the industry's broadest portfolio of solutions designed for NVIDIA Blackwell Architecture to the European market. The introduction of more than 30 solutions reinforces Supermicro's industry leadership by providing the most comprehensive and efficient solution stack for NVIDIA HGX B200, GB200 NVL72, and RTX PRO 6000 Blackwell Server Edition deployments, enabling rapid time-to-online for European enterprise AI factories across any environment. Through close collaboration with NVIDIA, Supermicro's solution stack enables the deployment of NVIDIA Enterprise AI Factory validated design and supports the upcoming introduction of NVIDIA Blackwell Ultra solutions later this year, including NVIDIA GB300 NVL72 and HGX B300.

"With our first-to-market advantage and broad portfolio of NVIDIA Blackwell solutions, Supermicro is uniquely positioned to meet the accelerating demand for enterprise AI infrastructure across Europe," said Charles Liang, president and CEO of Supermicro. "Our collaboration with NVIDIA, combined with our global manufacturing capabilities and advanced liquid cooling technologies, enables European organizations to deploy AI factories with significantly improved efficiency and reduced implementation timelines. We're committed to providing the complete solution stack enterprises need to successfully scale their AI initiatives."

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes."

ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenter to personal desktop.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler of AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

Rising Demand and EOL Plans from Suppliers Drive Strong DDR4 Contract Price Hikes in 2Q25 for Server and PC Markets

TrendForce's latest investigations find that DDR4 contract prices for servers and PCs are expected to rise more sharply in the second quarter of 2025 due to two key factors: major DRAM suppliers scaling back DDR4 production and buyers accelerating procurement ahead of U.S. tariff changes. As a result, server DDR4 contract prices are forecast to rise by 18-23% QoQ, while PC DDR4 prices are projected to increase by 13-18%—both surpassing earlier estimates.

TrendForce notes that DDR4 has been in the market for over a decade, and demand is increasingly shifting toward DDR5. Given the significantly higher profit margins for HBM, DDR5, and LPDDR5(X), suppliers have laid out EOL plans for DDR4, with final shipments expected by early 2026. Current EOL notifications largely target server and PC clients, while consumer DRAM (mainly DDR4) remains in production due to continued mainstream demand.

Funcom Details Dune: Awakening's Rentable Private Server System

Greetings, soon-to-be-awakened! Today, just about 72 hours before the floodgates open, we can finally share that rentable private servers will be available from the head start launch on June 5th! We previously communicated that private servers were planned for post-launch, but we're happy to share that progress has been faster than expected. We do, however, want to manage expectations about how private servers work in Dune: Awakening. As you know, this is not your typical survival game.

Why private servers work differently in Dune: Awakening
Dune: Awakening is powered by a unique server and world structure, something we went in-depth on in a recent blog post. In short: each server belongs to a World made up of several servers, all of which share the same social hubs and Deep Desert. This allows us to retain a neighborhood-like feel in the Hagga Basin and provide persistent, freeform building and other server-demanding mechanics you typically see in survival games. We combine this with the large-scale multiplayer mechanics you would expect to find in MMOs, where hundreds of players meet in social hubs and the Deep Desert to engage in social activities, trade, conflict, and more.
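The structure described above can be sketched as a toy data model. The class and field names below are illustrative assumptions, not Funcom's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SharedZones:
    """Zones shared by every server in a World (social hubs, Deep Desert)."""
    social_hubs: list
    deep_desert: str = "Deep Desert"

@dataclass
class World:
    """A World groups several Hagga Basin servers around common shared zones."""
    name: str
    shared: SharedZones
    servers: list = field(default_factory=list)

    def add_server(self, server_name: str) -> None:
        # Each server hosts its own persistent, freeform building.
        self.servers.append(server_name)

world = World("World-1", SharedZones(social_hubs=["Arrakeen", "Harko Village"]))
world.add_server("Hagga-A")
world.add_server("Hagga-B")
# Players on different servers in the same World still meet in shared zones:
print(len(world.servers), world.shared.deep_desert)
```

The key design point is that persistence lives per-server while social spaces are shared per-World, which is what lets survival-style building coexist with MMO-scale hubs.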

Dell Technologies Delivers First Quarter Fiscal 2026 Financial Results

Dell Technologies (NYSE: DELL) announces financial results for its fiscal 2026 first quarter. The company also provides guidance for its fiscal 2026 second quarter and full year.

First-Quarter Summary
  • First-quarter revenue of $23.4 billion, up 5% year over year
  • First-quarter operating income of $1.2 billion, up 21% year over year, and non-GAAP operating income of $1.7 billion, up 10%
  • First-quarter diluted EPS of $1.37, flat year over year, and non-GAAP diluted EPS of $1.55, up 17%

Infineon Announces Collaboration with NVIDIA on Power Delivery Chips for Future Server Racks

Infineon Technologies AG is revolutionizing the power delivery architecture required for future AI data centers. In collaboration with NVIDIA, Infineon is developing the next generation of power systems based on a new architecture with central power generation of 800 V high-voltage direct current (HVDC). The new system architecture significantly improves energy-efficient power distribution across the data center and allows power conversion directly at the AI chip (graphics processing unit, GPU) within the server board. Infineon's expertise in power conversion solutions from grid to core, spanning all relevant semiconductor materials (silicon, silicon carbide, and gallium nitride), is accelerating the roadmap to a full-scale HVDC architecture.

This revolutionary step paves the way for the implementation of advanced power delivery architectures in accelerated computing data centers and will further enhance reliability and efficiency. With AI data centers already scaling beyond 100,000 individual GPUs, the need for more efficient power delivery is becoming increasingly important. AI data centers will require power outputs of one megawatt (MW) and more per IT rack before the end of the decade. Therefore, the HVDC architecture, coupled with high-density multiphase solutions, will set a new standard for the industry, driving the development of high-quality components and power distribution systems.
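A rough illustration of why the higher distribution voltage matters: for a fixed power draw, bus current scales as I = P / V, and resistive losses scale with the square of that current. The 54 V baseline below is our assumption (a common rack busbar voltage today), not a figure from the release.

```python
def bus_current_amps(power_w: float, volts: float) -> float:
    """Current on the rack power bus: I = P / V."""
    return power_w / volts

rack_power_w = 1_000_000  # 1 MW per rack, the article's end-of-decade projection
print(round(bus_current_amps(rack_power_w, 54)))   # ~18519 A on a 54 V bus
print(round(bus_current_amps(rack_power_w, 800)))  # 1250 A on an 800 V HVDC bus
```

Cutting the bus current by roughly 15x is what makes megawatt-class racks practical to cable and cool.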

Dell Technologies Unveils Next Generation Enterprise AI Solutions with NVIDIA

The world's top provider of AI-centric infrastructure, Dell Technologies, announces innovations across the Dell AI Factory with NVIDIA - all designed to help enterprises accelerate AI adoption and achieve faster time to value.

Why it matters
As enterprises make AI central to their strategy and progress from experimentation to implementation, their demand for accessible AI skills and technologies grows exponentially. Dell and NVIDIA continue the rapid pace of innovation with updates to the Dell AI Factory with NVIDIA, including robust AI infrastructure, solutions and services that streamline the path to full-scale implementation.

MiTAC Computing Unveils Full Server Lineup for Data Centers and Enterprises with Intel Xeon 6 at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer, manufacturer, and a subsidiary of MiTAC Holdings Corporation, has launched its full suite of next-generation servers for data centers and enterprises at COMPUTEX 2025 (Booth M1110). Powered by Intel Xeon 6 processors, including those with Performance-cores (P-cores), MiTAC's new platforms are purpose-built for AI, HPC, cloud, and enterprise applications.

"For over five decades, MiTAC and Intel have built a close, collaborative relationship that continues to push innovation forward. Our latest server lineup reflects this legacy—combining Intel's cutting-edge processing power with MiTAC Computing's deep expertise in system design to deliver scalable, high-efficiency solutions for modern data centers." - Rick Hwang, President of MiTAC Computing.

MSI Unveils Next-Level AI Solutions Using NVIDIA MGX and DGX Station at COMPUTEX 2025

MSI, a leading global provider of high-performance server solutions, unveils its latest AI innovations using NVIDIA MGX and NVIDIA DGX Station reference architectures at COMPUTEX 2025, held from May 20-23 at booth J0506. Purpose-built to address the growing demands of AI, HPC, and accelerated computing workloads, MSI's AI solutions feature modular, scalable building blocks designed to deliver next-level AI performance for enterprises and cloud data center environments.

"AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."

ASUS Announces ESC A8A-E12U Support for AMD Instinct MI350 Series GPUs

ASUS today announced that its flagship high-density AI server, ESC A8A-E12U, now supports the latest AMD Instinct MI350 series GPUs. This enhancement empowers enterprises, research institutions, and cloud providers to accelerate their AI and HPC workloads with next-generation performance and efficiency—while preserving compatibility with existing infrastructure.

Built on the 4th Gen AMD CDNA architecture, AMD Instinct MI350 series GPUs deliver powerful new capabilities, including 288 GB of HBM3E memory and up to 8 TB/s of bandwidth—enabling faster, more energy-efficient execution of large AI models and complex simulations. With expanded support for low-precision compute formats such as FP4 and FP6, the Instinct MI350 series significantly accelerates generative AI, inference, and machine-learning workloads. Importantly, Instinct MI350 series GPUs maintain drop-in compatibility with existing AMD Instinct MI300 series-based systems, such as those running Instinct MI325X—offering customers a cost-effective and seamless upgrade path. These innovations reduce server resource requirements and simplify scaling and workload management, making Instinct MI350 series GPUs an ideal choice for efficient, large-scale AI deployments.
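To put the 288 GB capacity and the FP4/FP6 formats in context, here is a rough weights-only sizing sketch. It deliberately ignores KV cache, activations, and framework overhead, so real-world capacities would be lower:

```python
def max_params_billions(mem_gb: float, bytes_per_param: float) -> float:
    """Largest model (in billions of parameters) whose raw weights fit in memory."""
    # GB divided by bytes-per-parameter directly yields billions of parameters.
    return mem_gb / bytes_per_param

hbm_gb = 288  # HBM3E per Instinct MI350-series GPU
print(max_params_billions(hbm_gb, 2.0))  # FP16: 144.0 billion parameters
print(max_params_billions(hbm_gb, 0.5))  # FP4:  576.0 billion parameters
```

This is the practical appeal of low-precision formats: quartering the bytes per weight quadruples the model size a single GPU can hold.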

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer, and a subsidiary of MiTAC Holdings Corporation (TSE: 3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for the NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.

Next-Gen AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces its latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.S drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, each delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand or Ethernet networking, significantly enhancing system performance for AI factories and cloud data center environments.
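The aggregate scale-out bandwidth from those four SuperNICs is straightforward to tally (a simple sketch using only the figures quoted above):

```python
nics = 4
gbps_per_nic = 800                  # NVIDIA ConnectX-8, per the specification above
total_gbps = nics * gbps_per_nic    # aggregate network bandwidth in Gb/s
gigabytes_per_sec = total_gbps / 8  # convert bits to bytes
print(total_gbps, gigabytes_per_sec)  # 3200 400.0
```

That 400 GB/s of aggregate networking per node is what keeps eight GPUs fed in multi-node training jobs.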

Wistron's New U.S. Facilities for NVIDIA Servers to be Operational Next Year

Taiwanese electronics manufacturer Wistron announced on Friday that its new U.S. manufacturing facilities for NVIDIA will be ready next year, confirming it is in discussions with other potential customers as well, Reuters reports. Wistron CEO Jeff Lin made his first public statement since NVIDIA's announcement, saying, "All our progress will follow the customer's lead," and confirmed the timeline aligns with NVIDIA's expectations. The facilities will partly support NVIDIA's ambitious plan to build AI servers worth up to $500 billion in the U.S. over the next four years. NVIDIA revealed in April its strategy to establish supercomputer manufacturing plants in Texas, collaborating with Foxconn in Houston and Wistron in Dallas, with both locations expected to ramp production within 12-15 months. Recently, Wistron's board approved a $500 million investment in its new U.S. subsidiary.

The facilities will focus on producing high-performance computing and AI-related products, though Lin declined to name the other companies they're in talks with. When asked about U.S. restrictions on advanced chip exports to China, Lin noted that demand outside China remains robust. "We expect to grow alongside our customers... As for developments in the Middle East, most of them are essentially our indirect customers," he added. This comes as the UAE and U.S. signed an agreement this week to build the largest AI campus outside America, potentially involving the purchase of 500,000 of NVIDIA's most advanced AI chips yearly starting in 2025. Wistron also mentioned it is considering moving notebook production to Mexico in an attempt to avoid tariffs under the United States-Mexico-Canada trade agreement.

MiTAC Computing Deploys Latest AMD EPYC 4005 Series Processors

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a leading manufacturer in server platform design, introduced its latest offering featuring the AMD EPYC 4005 Series processors. These updated server solutions offer enhanced performance and energy efficiency to meet the growing demands of modern business workloads, including AI, cloud services, and data analytics.

"The new AMD EPYC 4005 Series processors deliver the performance and capabilities our customers need at a price point that makes ownership more attractive and attainable," said Derek Dicker, corporate vice president, Enterprise and HPC Business, AMD. "We're enabling businesses to own their computing infrastructure at an economical price, while providing the performance, security features and efficiency modern workloads demand."

ASRock Rack Announces Support for AMD EPYC 4005 Series Processors

ASRock Rack Inc., the leading innovative server company, today announced support for the newly launched AMD EPYC 4005 Series processors across its extensive lineup of AM5 socket server systems and motherboards. This announcement reinforces ASRock Rack's commitment to delivering cutting-edge performance, broad platform compatibility, and long-term value to customers in data centers, growing businesses, and edge computing environments.

Built on the AMD 'Zen 5' architecture, the AMD EPYC 4005 Series features up to 16 SMT-enabled cores and supports DDR5 memory speeds up to 5600 MT/s, delivering class-leading performance per watt within constrained IT budgets. As AI becomes embedded in everyday business software, AMD EPYC 4005 Series CPUs provide the performance headroom needed for AI-enhanced workloads such as automated customer service and data analytics while maintaining the affordability essential for small businesses. The series expands the proven AMD EPYC portfolio with solutions purpose-built for growing infrastructure demands.

Supermicro Announces New MicroCloud Servers Powered by AMD EPYC 4005 Series Processors

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that a number of Supermicro servers are now shipping with the latest addition to the AMD EPYC 4000 series of CPUs, the AMD EPYC 4005 Series processors. These servers are optimized to deliver a powerful balance of performance density, scalability, and affordability. Supermicro will feature its new MicroCloud multi-node solution, in a 10-node CPU version and a 5-node CPU + GPU version, in a 3U form factor, ideal for organizations seeking to optimize the space, energy, and cost of their IT infrastructure. Supermicro's MicroCloud product family targets dedicated hosting markets where sharing the chassis, power, and cooling is desired while still maintaining physical separation.

"Supermicro continues to deliver first-to-market innovative rack-scale solutions for a wide range of use cases, with the addition of our new Supermicro MicroCloud multi-node solutions featuring the latest AMD EPYC 4005 Series processors, designed for on-premises and cloud service providers who need powerful but cost-effective solutions in a compact 3U form factor," said Mory Lin, Vice President, IoT/Embedded & Edge Computing at Supermicro. "These servers offer up to 2,080 cores in a standard 42U rack, greatly reducing data center rack space and overall TCO for enterprises and small and medium businesses."
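The 2,080-core figure is reproducible with simple rack math, assuming thirteen of a 42U rack's fourteen 3U slots hold 10-node MicroCloud systems with top 16-core EPYC 4005 parts (reserving one slot for networking is our assumption, not stated by Supermicro):

```python
rack_u, chassis_u = 42, 3
slots = rack_u // chassis_u        # 14 x 3U slots in a 42U rack
server_chassis = slots - 1         # assume one 3U slot kept free for switching
nodes_per_chassis = 10             # MicroCloud 10-node CPU version
cores_per_node = 16                # top AMD EPYC 4005 SKU
print(server_chassis * nodes_per_chassis * cores_per_node)  # 2080
```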

IBM Intros LinuxONE Emperor 5 Mainframe with Telum II Processor

IBM has introduced the LinuxONE Emperor 5, its newest Linux computing platform, which runs on the Telum II processor with built-in AI acceleration features. This launch aims to tackle three key issues for tech leaders: better security measures, reduced costs, and smooth AI incorporation into business systems. The heart of the system, the Telum II processor, includes a second-generation on-chip AI accelerator, designed to boost predictive AI capabilities and large language models for real-time transaction handling. The upcoming IBM Spyre Accelerator (set to arrive in late 2025 as a PCIe card) will add generative AI functions. The platform comes with an updated AI Toolkit fine-tuned for the Telum II processor, and also offers early previews of Red Hat OpenShift AI and Virtualization, allowing unified control of both standard virtual machines and containerized workloads.

The platform provides wide-ranging security measures, including confidential computing, strong cryptographic capabilities, and NIST-approved post-quantum algorithms. These safeguard sensitive AI models and data from current risks and expected post-quantum attacks. On the productivity side, companies can consolidate several server workloads onto one high-capacity system, which IBM says can cut ownership expenses by up to 44% compared to x86 options over five years while maintaining an exceptional 99.999999% uptime rate. The LinuxONE Emperor 5 will run Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Canonical Ubuntu Server. Tina Tarquinio, chief product officer at IBM Z and LinuxONE, said: "IBM LinuxONE 5 represents the next evolution of our Linux infrastructure strategy. It is designed to help clients unlock the full potential of Linux and AI while optimizing their datacenters, simplifying their operations, and addressing risk. Whether you're building intelligent applications, deploying regulated workloads, consolidating infrastructure, or preparing for the next wave of transformation, IBM LinuxONE offers an exciting path forward."
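For reference, IBM's 99.999999% ("eight nines") availability claim translates into a strikingly small annual downtime budget:

```python
availability = 0.99999999              # eight nines, per IBM's claim
seconds_per_year = 365.25 * 24 * 3600  # ~31.56 million seconds
downtime_s = (1 - availability) * seconds_per_year
print(round(downtime_s, 2))            # ~0.32 seconds of downtime per year
```

By comparison, the more familiar "five nines" (99.999%) allows a little over five minutes of downtime per year.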

NVIDIA RTX PRO 6000 "Blackwell" Underperforms with Pre‑Release Drivers

Today, we are looking at the latest benchmark results for NVIDIA's upcoming RTX PRO 6000 "Blackwell" workstation-class GPU. Based on the new GB202 GPU, this professional visualization card features an impressive 24,064 CUDA cores distributed across 188 streaming multiprocessors, with boost clocks up to 2,617 MHz. It also introduces 96 GB of GDDR7 memory with full error-correcting code, a capacity made possible by dual-sided 3 GB modules. In Geekbench 6.4.0 OpenCL trials, the PRO 6000 Blackwell registered a total score of 368,219. That result trails the gaming-oriented GeForce RTX 5090, which posted 376,858 points despite having fewer cores (21,760 vs. 24,064) and a lower peak clock (2,410 MHz vs. 2,617 MHz).

A breakdown of subtests reveals that the workstation card falls behind in background blur (263.9 versus 310.7 images per second) and face detection (196.7 versus 241.5 images per second), yet it leads modestly in horizon detection and Gaussian blur. These mixed outcomes are attributed to pre-release drivers, a temporary cap on visible memory (currently limited to 23.8 GB), and conservative power-limit settings. With release drivers, compute workloads (especially OpenCL) should be better able to exploit the card's additional cores and higher peak clock. One significant distinction within the RTX PRO 6000 family concerns power consumption. The Max-Q Workstation Edition is engineered for a 300 W thermal design point, making it suitable for compact chassis and environments where quiet operation is essential. It retains all 24,064 cores and the full 96 GB of memory, but clocks and voltages are adjusted to fit the 300 W budget. By contrast, the standard Workstation and Server models allow a thermal budget of up to 600 W, enabling higher sustained frequencies and heavier compute workloads in full-size desktop towers and rack-mounted systems.
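The driver explanation is consistent with the raw specifications: on paper, theoretical FP32 throughput (using the usual back-of-the-envelope figure of 2 FLOPs per CUDA core per clock) favors the PRO 6000 by roughly 20%, so trailing the RTX 5090 points at software rather than silicon:

```python
def peak_fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """Theoretical FP32 rate in TFLOPS: cores x 2 FLOPs/clock x boost clock."""
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(round(peak_fp32_tflops(24064, 2617), 1))  # RTX PRO 6000: ~126.0 TFLOPS
print(round(peak_fp32_tflops(21760, 2410), 1))  # RTX 5090: ~104.9 TFLOPS
```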

MiTAC Computing Unveils Next-generation OCP Servers and Open Firmware Innovations at the OCP EMEA Summit 2025

MiTAC Computing Technology, a global leader in high-performance and energy-efficient server solutions, is proud to announce its participation at the OCP EMEA Summit 2025, taking place April 29-30 at the Convention Centre Dublin. At Booth No. B13, MiTAC will showcase its latest innovations in server design, sustainable cooling, and open-source firmware development - empowering the future of data center infrastructure.

C2810Z5 & C2820Z5: Advancing Sustainable Thermal Design
MiTAC will debut two new OCP server platforms, the C2810Z5 (air-cooled) and C2820Z5 (liquid-cooled), built to meet the demands of high-performance computing (HPC) and AI workloads. Designed around the latest AMD EPYC 9005 series processors, these multi-node servers are engineered to deliver optimal compute density and power efficiency.

MSI Servers Power the Next-Gen Datacenters at the 2025 OCP EMEA Summit

MSI, a leading global provider of high-performance server solutions, unveiled its latest ORv3-compliant and high-density multi-node server platforms at the 2025 OCP EMEA Summit, held April 29-30 at booth A19. Built on OCP-recognized DC-MHS architecture and supporting the latest AMD EPYC 9005 Series processors, these next-generation platforms are engineered to deliver outstanding compute density, energy efficiency, and scalability—meeting the evolving demands of modern, data-intensive datacenters.

"We are excited to be part of open-source innovation and sustainability through our contributions to the Open Compute Project," said Danny Hsu, General Manager of Enterprise Platform Solutions. "We remain committed to advancing open standards, datacenter-focused design, and modular server architecture. Our ability to rapidly develop products tailored to specific customer requirements is central to enabling next-generation infrastructure, making MSI a trusted partner for scalable, high-performance solutions."

Giga Computing Showcases Next-Gen OCP Solutions at OCP EMEA Regional Summit 2025

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance computing and server solutions, proudly announces its participation in the OCP EMEA Regional Summit 2025, taking place in Dublin, Ireland. As an active contributor to the Open Compute Project (OCP), Giga Computing will showcase its latest data center solutions tailored to meet the demands of hyperscale infrastructure, high-density storage, and AI-centric workloads.

The OCP EMEA Summit serves as a platform where global technical leaders come together to address critical challenges in data center sustainability, energy efficiency, and heat reuse across the region. The summit focuses on how innovations pioneered by hyperscale data center operators can help tackle these issues and drive meaningful change. Additionally, the event spotlights real-world deployments of OCP-recognized equipment in the EMEA region.