News Posts matching #NIC


Realtek to Bring Affordable 10 Gbps Ethernet to the Masses Later This Year

It's been two years since Realtek showed off its 5 Gbps Ethernet chips at Computex, hinting at the time at a future 10 Gbps chip. This year, the company was showing off a wide range of 10 Gbps Ethernet chips at the show, ranging from a standard consumer solution to server chips and native USB variants. The base chip is the RTL8127, which offers the full range of speeds from 10 Mbps to 10 Gbps with sub-2 W power consumption. It is joined by the RTL8127AP, intended for servers, which adds full remote management via DASH 1.2. Both chips sport a PCIe 4.0 x1 host interface, which for better or worse limits compatibility to more modern systems.

Next up is the fibre-only RTL8127ATF, which doesn't support 10/100 Mbps speeds but has a lower power consumption of just over 1 W. This is followed by the RTL8127AT, which is limited to the same speeds as the fibre-only SKU but is a standard copper NIC. What sets these two SKUs apart from the previous two is that they support PCIe Gen 3 x2 or PCIe Gen 4 x1 and have a physical PCIe x2 interface, which limits compatibility with some motherboards as an add-in card. Finally, there's the RTL8159, Realtek's USB 3.2 Gen 2x2 10 Gbps chip, which again covers the full range of speeds from 10 Mbps to 10 Gbps. Realtek had several mockups of customer products on display, but final products might not look exactly like the ones shown.

AMD Prepares Instinct MI450X IF128 Rack‑Scale System with 128 GPUs

According to SemiAnalysis, AMD has planned its first-ever rack-scale GPU cluster for the second half of 2026, when it will debut its first rack-scale accelerator, the Instinct MI450X IF128. Built on what's expected to be a 3 nm-class TSMC process and packaged with CoWoS-L, each MI450X IF128 card will include at least 288 GB of HBM4 memory. That memory will sustain up to 18 TB/s of bandwidth, driving around 50 PetaFLOPS of FP4 compute while drawing between 1.6 and 2.0 kW of power. In our recent article, we outlined that AMD split the Instinct MI400 series into the HPC-first MI430X and the AI-focused MI450X. For the MI450X, the company created both an "IF64" backplane for simpler single-rack installs and the full-blown "IF128" for maximum density. The IF128 version links 128 GPUs over an Ethernet-based Infinity Fabric network and uses UALink instead of PCIe to connect each GPU to three built-in Pensando 800 GbE NICs. That design delivers about 1.8 TB/s of unidirectional bandwidth per GPU and a total of 2,304 TB/s across the rack.

With 128 GPUs, each offering 50 PetaFLOPS of FP4 compute and 288 GB of HBM4 memory, the MI450X IF128 system delivers a combined 6,400 PetaFLOPS and 36.9 TB of high-bandwidth memory; the MI450X IF64 provides about half of that. Since AI deployments demand massive rack density, AMD could out-spec NVIDIA's upcoming "Vera Rubin" VR200 NVL144 system (144 compute dies, 72 GPUs), which tops out at 3,600 PetaFLOPS and 936 TB/s of memory bandwidth, roughly half of what AMD's IF128 approach promises. AMD may hold this system-architecture advantage until the launch of the VR300 "Ultra" NVL576, which has 144 GPUs, each carrying four compute dies, for a total of 576 compute chiplets.
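The rack-level figures above follow directly from the per-GPU numbers. A quick sanity-check sketch (illustrative arithmetic only, using the article's stated per-GPU values):

```python
# Per-GPU figures quoted for the MI450X IF128, per the article.
GPUS = 128
PFLOPS_PER_GPU = 50     # FP4 compute
HBM_GB_PER_GPU = 288    # HBM4 capacity
HBM_TBS_PER_GPU = 18    # HBM4 bandwidth, TB/s

total_pflops = GPUS * PFLOPS_PER_GPU         # 6,400 PetaFLOPS
total_hbm_tb = GPUS * HBM_GB_PER_GPU / 1000  # ~36.9 TB
total_mem_bw = GPUS * HBM_TBS_PER_GPU        # 2,304 TB/s

print(total_pflops, round(total_hbm_tb, 1), total_mem_bw)
```

Note that the 2,304 TB/s rack total matches 128 times the 18 TB/s per-GPU HBM4 bandwidth.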

Astera Labs Ramps Production of PCIe 6 Connectivity Portfolio

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced its purpose-built PCIe 6 connectivity portfolio is ramping production to fast-track deployments of modern AI platforms at scale. Now featuring gearbox connectivity solutions alongside fabric switches, retimers, and active cable modules, Astera Labs' expanding PCIe 6 portfolio provides a comprehensive connectivity platform to deliver unparalleled performance, utilization, and scalability for next-generation AI and general-compute systems. Along with Astera Labs' demonstrated PCIe 6 connectivity over optical media, the portfolio will provide even greater AI rack-scale distance optionality. The transition to PCIe 6 is fueled by the insatiable demand for higher compute, memory, networking, and storage data throughput, ensuring advanced AI accelerators and GPUs operate at peak efficiency.

Thad Omura, Chief Business Officer, said, "Our PCIe 6 solutions have successfully completed qualification with leading AI and cloud server customers, and we are ramping up to volume production in parallel with their next-generation AI platform rollouts. By continuing to expand our industry-leading PCIe connectivity portfolio with additional innovative solutions that include Scorpio Fabric Switches, Aries Retimers, Gearboxes, Smart Cable Modules, and PCIe over optics technology, we are providing our hyperscaler and data center partners with all the necessary tools to accelerate the development and deployment of leading-edge AI platforms."

AMD Pensando Pollara 400 AI NIC Now Available and Shipping to Customers

To effectively train and deploy generative AI, large language models, or agentic AI, it's crucial to build parallel computing infrastructure that not only offers the best performance to meet the demands of AI/ML workloads, but also offers the kind of flexibility that the future of AI demands. A key consideration is the ability to scale out the inter-node GPU-GPU communication network in the data center.

At AMD, we believe in preserving customer choice by providing customers with easily scalable solutions that work across an open ecosystem, reducing total cost of ownership without sacrificing performance. Remaining true to that ethos, last October we announced the upcoming release of the new AMD Pensando Pollara 400 AI NIC. Today, we're excited to share that the industry's first fully programmable AI NIC designed with developing Ultra Ethernet Consortium (UEC) standards and features is available for purchase now. So, how has the Pensando Pollara 400 AI NIC been uniquely designed to accelerate AI workloads at scale?

Blackmagic Design Announces New DeckLink IP 100G NIC

Blackmagic Design today announced DeckLink IP 100G, a new PCIe Gen 4 card that can capture and play back up to 8 channels of HD and Ultra HD video simultaneously into 2110 IP systems. It also includes 2 x 100G Ethernet QSFP ports, for redundancy or for connecting to two separate 100G Ethernet switches, as well as built-in cooling. DeckLink IP 100G also supports GPUDirect RDMA for direct memory transfers between DeckLink and GPUs, reducing PCIe bandwidth usage when processing video on GPUs for a significant reduction in latency. DeckLink IP 100G will be available in July from Blackmagic Design resellers worldwide for US$1,795.

DeckLink IP 100G will be displayed at the Blackmagic Design NAB 2025 booth #SL216. DeckLink IP cards are the easiest way to capture and play back video directly into 2110 IP-based broadcast systems. They share the same DeckLink features, so existing software will just work. DeckLink IP cards support multiple video channels, and each channel can capture and play back at the same time. This means customers can build racks of servers generating broadcast graphics, virtual sets, or GPU-based AI image processing, all directly integrated into 2110 IP broadcast infrastructure. Customers can even use DaVinci Resolve for 2110 IP-based broadcast editing workstations. DeckLink IP features a high-speed PCIe connection, so it works on the latest Mac Pro, Windows, and Linux computers.

Marvell Demonstrates Industry's First End-to-End PCIe Gen 6 Over Optics at OFC 2025

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced in collaboration with TeraHop, a global optical solutions provider for AI driven data centers, the demonstration of the industry's first end-to-end PCIe Gen 6 over optics in the Marvell booth #2129 at OFC 2025. The demonstration will showcase the extension of PCIe reach beyond traditional electrical limits to enable low-latency, standards-based AI scale-up infrastructure.

As AI workloads drive exponential data growth, PCIe connectivity must evolve to support higher bandwidth and longer reach. The Marvell Alaska P PCIe Gen 6 retimer and its PCIe Gen 7 SerDes technology enable low-latency, low bit-error-rate transmission over optical fiber, delivering the scalability, power efficiency, and high performance required for next-generation accelerated infrastructure. With PCIe over optics, system designers will be able to take advantage of longer links between devices that feature the low latency of PCIe technology.

MiTAC Computing Showcases Cutting-Edge AI and HPC Servers at Supercomputing Asia 2025

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a global leader in server design and manufacturing, will showcase its latest AI and HPC innovations at Supercomputing Asia 2025, taking place from March 11 at Booth #B10. The event highlights MiTAC's commitment to delivering cutting-edge technology with the introduction of the G4520G6 AI server and the TN85-B8261 HPC server—both engineered to meet the growing demands of artificial intelligence, machine learning, and high-performance computing (HPC) applications.

G4520G6 AI Server: Performance, Scalability, and Efficiency Redefined
The G4520G6 AI server redefines computing performance with an advanced architecture tailored for intensive workloads. Key features include:
  • Exceptional Compute Power - Supports dual Intel Xeon 6 processors with TDP up to 350 W, delivering high-performance multicore processing for AI-driven applications.
  • Enhanced Memory Performance - Equipped with 32 DDR5 DIMM slots (16 per CPU) across 8 memory channels, supporting up to 8,192 GB of DDR5 RDIMM/3DS RDIMM at 6400 MT/s for superior memory bandwidth.

AEWIN Unveils High Availability Storage Server Powered by Intel Xeon 6 Processors

AEWIN is excited to launch the MIS-5131-2U2, a cutting-edge 2U2N high-availability storage server powered by Intel's latest Xeon 6 processors. The horizontal CPU placement enables optimized thermal dissipation, allowing the CPU to run at a high TDP of 350 W. Each node is equipped with a single Intel Xeon 6700/6500-series processor with P-cores (R1S), offering up to 80 performance cores, 136 PCIe 5.0 lanes, and 8x high-speed DDR5 RDIMMs at speeds of up to 6400 MT/s. Featuring rich I/O and 24x hot-swap dual-port NVMe SSD bays, the MIS-5131-2U2 is a high-performance, reliable storage solution for mission-critical applications.

The dual-node architecture within a single chassis allows for seamless failover through NTB (Non-Transparent Bridge) interconnectivity, communication between the two BMCs, and dual-port NVMe drives. The two nodes of the MIS-5131 are linked via NTB running at PCIe Gen 5 speeds (32 GT/s) to enable high-speed, redundant storage failover. With NTB, dual-port NVMe drives, and the ample PCIe lanes of the Intel Xeon 6 R1S CPU, the system eliminates the need for an additional switch, delivering an optimized HA server solution with the best TCO for continuous operation.
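To put the "PCIe Gen 5 speeds (32 GT/s)" figure in context, here is a rough per-lane throughput calculation for recent PCIe generations. Gen 3 through Gen 5 all use 128b/130b line coding, with the raw rate doubling each generation; this is a simplified estimate that ignores protocol overhead beyond the encoding.

```python
ENCODING = 128 / 130  # 128b/130b line-coding efficiency (Gen 3 and later)

def lane_gbps(gt_per_s: float) -> float:
    """Approximate usable bandwidth of one PCIe lane in GB/s."""
    return gt_per_s * ENCODING / 8  # bits -> bytes

for gen, rate in [("Gen 3", 8), ("Gen 4", 16), ("Gen 5", 32)]:
    print(f"PCIe {gen}: ~{lane_gbps(rate):.2f} GB/s per lane")
```

So each Gen 5 lane carries roughly 3.94 GB/s per direction, which is what makes a switchless NTB link between the two nodes fast enough for storage failover.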

Synopsys Announces Industry's First Ultra Ethernet and UALink IP Solutions

Synopsys, Inc. today announced the industry's first Ultra Ethernet IP and UALink IP solutions, including controllers, PHYs, and verification IP, to meet the demand for standards-based, high-bandwidth, and low-latency HPC and AI accelerator interconnects. As hyperscale data center infrastructures evolve to support the processing of trillions of parameters in large language models, they must scale to hundreds of thousands of accelerators with highly efficient and fast connections. Synopsys Ultra Ethernet and UALink IP will provide a holistic, low-risk solution for high-speed and low-latency communication to scale-up and scale-out AI architectures.

"For more than 25 years, Synopsys has been at the forefront of providing best-in-class IP solutions that enable designers to accelerate the integration of standards-based functionality," said Neeraj Paliwal, senior vice president of IP product management at Synopsys. "With the industry's first Ultra Ethernet and UALink IP, companies can get a head start on developing a new generation of high-performance chips and systems with broad interoperability to scale future AI and HPC infrastructure."

AEWIN Launches NCT401 Quad-Port 10G Network Expansion Module

AEWIN is glad to launch the newest member of its Network Expansion Module family, the NCT401. Built upon the Intel XL710-BM1 LAN controller and Intel X557-AT4 PHY, it provides quad-port 10GbE RJ45 with two pairs of Gen 3 bypass. It uses AEWIN's standard expansion-module form factor with a PCIe x8 connection. The front-access design makes the NCT401 easy to maintain and service, and it is compatible with various existing AEWIN network appliances, providing great flexibility to tailor specific configurations for target workloads effortlessly.

DPDK (Data Plane Development Kit) is supported to improve packet forwarding for better efficiency. Alongside DPDK, Dynamic Device Personalization (DDP) enables a programmable packet-processing pipeline that can be optimized for specific workloads on demand. With these features, performance for network-edge workloads can be enhanced, making the module well suited to communications, cloud, and networking solutions.

Marvell Unveils Industry's First 3nm 1.6 Tbps PAM4 Interconnect Platform to Scale Accelerated Infrastructure

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today introduced Marvell Ara, the industry's first 3 nm 1.6 Tbps PAM4 interconnect platform, featuring 200 Gbps electrical and optical interfaces. Building on the success of the Nova 2 DSP, the industry's first 5 nm 1.6 Tbps PAM4 DSP with 200 Gbps electrical and optical interfaces, Ara leverages the comprehensive Marvell 3 nm platform, with industry-leading 200 Gbps SerDes and integrated optical modulator drivers, to reduce 1.6 Tbps optical module power by over 20%. The energy-efficiency improvement reduces operational costs and enables new AI server and networking architectures that address the need for higher bandwidth and performance for AI workloads within the significant power constraints of the data center.

Ara, the industry's first 3 nm PAM4 optical DSP, builds on six generations of Marvell leadership in PAM4 optical DSP technology. It integrates eight 200 Gbps electrical lanes to the host and eight 200 Gbps optical lanes, enabling 1.6 Tbps in a compact, standardized module form factor. Leveraging 3 nm technology and laser driver integration, Ara reduces module design complexity, power consumption and cost, setting a new benchmark for next-generation AI and cloud infrastructure.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive Lineup for AI and HPC Success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

NVIDIA Ethernet Networking Accelerates World's Largest AI Supercomputer, Built by xAI

NVIDIA today announced that xAI's Colossus supercomputer cluster in Memphis, Tennessee, comprising 100,000 NVIDIA Hopper GPUs, achieved its massive scale by using the NVIDIA Spectrum-X Ethernet networking platform for its Remote Direct Memory Access (RDMA) network. Spectrum-X is designed to deliver superior performance to multi-tenant, hyperscale AI factories using standards-based Ethernet.

Colossus, the world's largest AI supercomputer, is being used to train xAI's Grok family of large language models, with chatbots offered as a feature for X Premium subscribers. xAI is in the process of doubling the size of Colossus to a combined total of 200,000 NVIDIA Hopper GPUs.

Marvell Collaborates with Meta for Custom Ethernet Network Interface Controller Solution

Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced the development of FBNIC, a custom 5 nm network interface controller (NIC) ASIC in collaboration with Meta to meet the company's infrastructure and use case requirements. The FBNIC board design will also be contributed by Marvell to the Open Compute Project (OCP) community. FBNIC combines a customized network controller designed by Marvell and Meta, a co-designed board, and Meta's ASIC, firmware and software. This custom design delivers innovative capabilities, optimizes performance, increases efficiencies, and reduces the average time needed to resolve potential network and server issues.

"The future of large-scale, data center computing will increasingly revolve around optimizing semiconductors and other components for specific applications and cloud infrastructure architectures," said Raghib Hussain, President of Products and Technologies at Marvell. "It's been exciting to partner with Meta on developing their custom FBNIC on our industry-leading 5 nm accelerated infrastructure silicon platform. We look forward to the OCP community leveraging the board design for future innovations."

Realtek is Aiming to Make 5 Gbps Ethernet Switches More Affordable with New Platform

At Computex, Realtek was showing off a new 5 Gbps switch platform which is set to bring much more affordable high-speed Ethernet switches to the consumer market. At the core of the new platform sits Realtek's RTL9303, an eight-port 10 Gbps switch controller. It was released a few years ago as a low-cost 10 Gbps switch IC, but as it still required third-party PHYs, it never really took off. The RTL9303 is built around an 800 MHz MIPS 34Kc CPU and supports up to 1 GB of DDR3 RAM as well as 64 MB of SPI NOR flash for the firmware.

When combined with Realtek's RTL8251B 5 Gbps PHY, the end result is a comparatively low-cost 5 Gbps switch. According to AnandTech, Realtek is expecting a US$25 price per port, which is only about $10 more per port than your typical 2.5 Gbps switch today, even though some go for as little as US$10 per port. Paired with a Realtek RTL8126 PCIe-based 5 Gbps NIC, which retails from around US$30, 5 Gbps Ethernet looks like a very sensible option in terms of price/performance. Admittedly, 2.5 Gbps Ethernet cards can be had for as little as $13, but they started out at a higher price point than what 5 Gbps NICs are already selling for. Meanwhile, 10 Gbps NICs are still stuck at around US$80-90, with switches in most cases costing at least US$45 per port, but often a lot more. 5 Gbps Ethernet also has the advantage of being able to operate over CAT 5e cabling at up to 60 metres and CAT 6 cabling at up to 100 metres, which means there's no need to replace older cabling to benefit from it.
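The price/performance argument above can be made concrete with a back-of-the-envelope cost-per-Gbps comparison. The figures below are taken from the article (switch price per port plus a typical NIC price per tier); they are rough market estimates, not official pricing.

```python
# speed_gbps: (switch $ per port, typical NIC price $) -- article's figures
tiers = {
    2.5: (15, 13),
    5.0: (25, 30),
    10.0: (45, 85),
}

for speed, (port_cost, nic_cost) in sorted(tiers.items()):
    total = port_cost + nic_cost  # cost of one switch port + one NIC
    print(f"{speed:>4} GbE: ${total} per link -> ${total / speed:.1f} per Gbps")
```

On these numbers, a 5 GbE link works out to about $11 per Gbps, roughly on par with 2.5 GbE and cheaper per Gbps than 10 GbE, which is the crux of Realtek's pitch.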

NVIDIA Blackwell Platform Pushes the Boundaries of Scientific Computing

Quantum computing. Drug discovery. Fusion energy. Scientific computing and physics-based simulations are poised to make giant steps across domains that benefit humanity as advances in accelerated computing and AI drive the world's next big breakthroughs. NVIDIA unveiled at GTC in March the NVIDIA Blackwell platform, which promises generative AI on trillion-parameter large language models (LLMs) at up to 25x less cost and energy consumption than the NVIDIA Hopper architecture.

Blackwell has powerful implications for AI workloads, and its technology capabilities can also help deliver discoveries across all types of scientific computing applications, including traditional numerical simulation. By reducing energy costs, accelerated computing and AI drive sustainable computing, and many scientific computing applications already benefit: weather can be simulated at 200x lower cost with 300x less energy, while digital twin simulations see 65x lower cost and 58x less energy consumption versus traditional CPU-based systems.

Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During the Vision 2024 event, Intel announced its latest Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims the Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new AI accelerator is presented as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP, or as an OAM module at 900 W. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite a 300 W lower TDP. The PCIe version works in groups of four per system, while the OAM HL-325L modules can run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just tuned to a lower frequency. Built on TSMC's N5 5 nm node, the AI accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 AI chip comes with 128 GB of HBM2E offering 3.7 TB/s of bandwidth and 24 x 200 Gbps Ethernet NICs, with dual 400 Gbps NICs used for scale-out. All of that is laid out on the 10 tiles that make up the Gaudi 3 accelerator, pictured below. There is 96 MB of SRAM split between the two compute tiles, acting as a low-level cache that bridges data communication between the Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. The Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, offering a cost-effective alternative to NVIDIA accelerators with the additional promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 whitepaper.
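The scale-out figures quoted above are easy to verify with simple arithmetic; the sketch below just multiplies out the article's numbers (cluster topology and per-accelerator NIC count), purely as an illustration.

```python
# Gaudi 3 cluster figures from the article.
NODES = 1024
ACCELS_PER_NODE = 8
NICS_PER_ACCEL = 24
NIC_GBPS = 200

total_cards = NODES * ACCELS_PER_NODE       # 8,192 accelerators
per_accel_gbps = NICS_PER_ACCEL * NIC_GBPS  # 4,800 Gbps = 4.8 Tbps of NIC bandwidth

print(total_cards, per_accel_gbps)
```

That is 4.8 Tbps of aggregate Ethernet NIC bandwidth per accelerator, split between intra-node scale-up and the dual 400 Gbps scale-out links.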

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

Senao Networks Unveils SX904 SmartNIC with Embedded Xeon D to Process Network Stack

Senao Networks, a leading network solution provider, proudly announces the launch of its SX904 SmartNIC, based on the Intel NetSec Accelerator Reference Design. This cutting-edge NIC, harnessing the power of PCIe Gen 4 technology and fueled by the Intel Xeon D processor, sets an unprecedented standard in high-performance network computing. Senao will showcase a system demonstration at the Intel booth during the upcoming MWC in Barcelona. Amid a transformative shift toward the network edge, enterprises are increasingly leaning on scalable edge infrastructure. By catering to the demands of edge workloads for low latency, local data processing, and robust security, the SX904 marks a significant leap forward.

The combination of an Intel Xeon D processor, PCIe Gen 4 technology, dual 25 Gbps SFP28 support, and DDR4 ECC memory enables the SX904 to achieve unparalleled data transfer rates and maximum bandwidth utilization, ideal for modern server architectures. It provides higher performance from the latest Intel Xeon D processor and Intel Ethernet Controller E810, and supports the latest Intel Platform Firmware Resilience, BMC, and TPM 2.0. The SX904 enables the seamless offload of applications optimized for Intel architecture with zero changes, slotting effortlessly into an Intel-based server in a PCIe add-in-card form factor.

Synology Shows Off New Personal Cloud and Surveillance Products at CES 2024

NAS major Synology showcased its latest consumer NAS, personal cloud, and surveillance solutions at the 2024 International CES. Starting things off are the company's latest TC500 and BC500 high-resolution wired cameras, and the AI-enhanced CC400W wireless camera. This may sound nebulous, but upon detecting movement, the cameras automatically zoom in on faces or vehicle license plates, and automatically dial up video resolution and probably bitrate (image quality). The wired cameras come with 5 MP sensors, while the wireless ones have 4 MP sensors (capable of 1440p @ 30 FPS). Each of these comes with an illuminator for dark conditions with a range of up to 30 m. They also come with microSD slots, so that in the event of a network failure, recording continues onto the memory card. You don't need a Synology Surveillance Station device license to use these.

Moving on to the stuff Synology is best known for, NAS, we have the upgraded DiskStation DS1823xs+, an 8-bay business-grade NAS with a combined throughput of 3,100 MB/s reads and 2,600 MB/s writes, over 173,000 IOPS random reads, and over 80,800 IOPS random writes. The main networking interface is a 10 GbE port, with an additional PCIe Gen 3 x4 slot to drop in more NICs. The NAS can pair with DX517 expansion units over USB 3.1 to scale up to 18 drives. The DS423+ is a compact 4-bay NAS powered by a Celeron J4125 quad-core CPU, 2 GB of RAM, and room for two M.2-2280 NVMe SSDs besides the four 3.5-inch HDDs. The maximum rated throughput is still around 226 MB/s over its two 1 GbE networking interfaces. The DS224+ is nearly the same device, but with just two 3.5-inch bays and two 2.5 GbE interfaces.
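A quick conversion shows why a NAS as fast as the DS1823xs+ benefits from a slot for extra NICs: its internal read throughput comfortably exceeds what a single 10 GbE link can carry. The sketch below uses the simple bits-to-bytes line-rate estimate, ignoring Ethernet framing overhead.

```python
def link_mbps(gbe: float) -> float:
    """Theoretical max payload of an Ethernet link in MB/s (overhead ignored)."""
    return gbe * 1000 / 8

nas_read_mbs = 3100                 # DS1823xs+ quoted sequential reads
single_10gbe = link_mbps(10)        # 1,250 MB/s
links_needed = nas_read_mbs / single_10gbe

print(f"~{links_needed:.1f} x 10 GbE links to saturate sequential reads")
```

In other words, even two 10 GbE links would leave headroom on the table, which is presumably the reasoning behind the PCIe Gen 3 x4 expansion slot.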

IBM Unleashes the Potential of Data and AI with its Next-Generation IBM Storage Scale System 6000

Today, IBM introduced the new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet today's data intensive and AI workload demands, and the latest offering in the IBM Storage for Data and AI portfolio.

For the seventh consecutive year and counting, IBM is named a Leader in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage, recognized for its vision and execution. The new IBM Storage Scale System 6000 seeks to build on IBM's leadership position with an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7 million IOPS and up to 256 GB/s of throughput for read-only workloads per system in a 4U (four rack units) footprint.

QNAP Introduces New Dual-port 10GbE Network Cards Supporting SR-IOV for Boosting VMware Applications

QNAP Systems, Inc., a leading computing, networking and storage solution innovator, today launched the new QXG-10G2SF-X710 10GbE network expansion card. Equipped with the advanced Intel Ethernet Controller X710-BM2, this PCIe Gen 3 card (compatible with PCIe Gen 2) can be installed into a QNAP NAS or a Windows/Linux PC, instantly augmenting connectivity with two high-speed 10GbE ports.

Featuring a low-noise fanless design, the QXG-10G2SF-X710 comes with two 10GbE SFP+ (10G/1G) network ports. Users can utilize SMB Multichannel or port trunking to combine bandwidth, providing up to 20 Gbps of data transfer potential, thereby accelerating large file sharing and intensive data transmission. The QXG-10G2SF-X710 also supports SR-IOV, which enhances network resource allocation for VMware virtualization applications, reducing network bandwidth consumption and significantly lowering CPU usage for virtual machine servers (hypervisors).
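The "up to 20 Gbps" claim is simple aggregation of the two ports, which translates to a theoretical 2,500 MB/s of payload. A minimal conversion sketch, using the usual bits-to-bytes estimate with framing overhead ignored:

```python
PORTS = 2
PORT_GBPS = 10

aggregate_gbps = PORTS * PORT_GBPS          # 20 Gbps combined via SMB Multichannel / trunking
aggregate_mbs = aggregate_gbps * 1000 / 8   # 2,500 MB/s theoretical payload

print(aggregate_gbps, aggregate_mbs)
```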

ASUS Showcases Cutting-Edge Cloud Solutions at OCP Global Summit 2023

ASUS, a global infrastructure solution provider, is excited to announce its participation in the 2023 OCP Global Summit, which is taking place from October 17-19, 2023, at the San Jose McEnery Convention Center. The prestigious annual event brings together industry leaders, innovators and decision-makers from around the world to explore and discuss the latest advancements in open infrastructure and cloud technologies, providing a perfect stage for ASUS to unveil its latest cutting-edge products.

The ASUS theme for the OCP Global Summit is Solutions beyond limits—ASUS empowers AI, cloud, telco and more. We will showcase an array of products:

Inventec's C805G6 Data Center Solution Brings Sustainable Efficiency & Advanced Security for Powering AI

Inventec, a global leader in high-powered servers headquartered in Taiwan, is launching its cutting-edge C805G6 server for data centers, based on AMD's newest 4th Gen EPYC platform, a major innovation in computing power that provides double the operating efficiency of previous platforms. These innovations are timely, as the industry worldwide faces opposing challenges: on one hand, a growing need to reduce carbon footprints and power consumption; on the other, the push for ever-higher computing power and performance for AI. In fact, in 2022 MIT found that improving a machine learning model tenfold requires a 10,000-fold increase in computational requirements.

Addressing both pain points, George Lin, VP of Business Unit VI, Inventec Enterprise Business Group (Inventec EBG) notes that, "Our latest C805G6 data center solution represents an innovation both for the present and the future, setting the standard for performance, energy efficiency, and security while delivering top-notch hardware for powering AI workloads."

Giga Computing Releases First Workstation Motherboards to Support DDR5 and PCIe Gen5 Technologies

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced two new workstation motherboards, GIGABYTE MW83-RP0 and MW53-HP0, built to support the Intel Xeon W-3400 or Intel Xeon W-2400 desktop workstation processors. The new CPU platform, developed on the Intel W790 chipset, is the first workstation platform in the market that supports both DDR5 and PCIe 5.0 technology, and this platform excels at demanding applications such as complex 3D CAD, AI development, simulations, 3D rendering, and more.

The new generation of Intel "Sapphire Rapids" Xeon W-3400 and W-2400 series processors adds some significant benefits over the prior-generation "Ice Lake" Xeon W-3300 processors. Like their predecessors, the new Xeon processors support up to 4 TB of 8-channel memory; however, they have moved to DDR5, which brings a big jump in memory bandwidth. Second, they deliver higher CPU performance across most workloads, partly due to higher core counts and higher clock speeds. And as mentioned before, the new Xeon processors support PCIe Gen 5 devices and speeds, for higher throughput between the CPU and devices such as GPUs.