News Posts matching #Cloud


Shadow Launches Neo: The Next Generation Cloud Gaming PC

SHADOW, the global leader in high-performance cloud computing, is proud to announce the launch of Neo, a brand-new cloud gaming PC offering designed to deliver next-level RTX experiences for gamers, creators, and professionals alike. Neo will officially roll out in Europe and North America starting June 16, 2025.

Building on the success of the company's previous offers, Neo replaces its widely adopted "Boost" tier and delivers major performance leaps—up to 150% more in gaming and 200% more in pro software performance. All existing Boost users are being upgraded to Neo at no additional cost, while rates for new users will start at $37.99 per month.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations and technology and industry leaders to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and scientific discovery throughout the regions.

Red Hat & AMD Strengthen Strategic Collaboration - Leading to More Efficient GenAI

Red Hat, the world's leading provider of open source solutions, and AMD today announced a strategic collaboration to propel AI capabilities and optimize virtualized infrastructure. With this deepened alliance, Red Hat and AMD will expand customer choice across the hybrid cloud, from deploying optimized, efficient AI models to more cost-effectively modernizing traditional virtual machines (VMs). As workload demand and diversity continue to rise with the introduction of AI, organizations must have the capacity and resources to meet these escalating requirements. The average datacenter, however, is dedicated primarily to traditional IT systems, leaving little room to support intensive workloads such as AI. To answer this need, Red Hat and AMD are bringing together the power of Red Hat's industry-leading open source solutions with the comprehensive portfolio of AMD high-performance computing architectures.

AMD and Red Hat: Driving to more efficient generative AI
Red Hat and AMD are combining the power of Red Hat AI with the AMD portfolio of x86-based processors and GPU architectures to support optimized, cost-efficient and production-ready environments for AI-enabled workloads. AMD Instinct GPUs are now fully enabled on Red Hat OpenShift AI, empowering customers with the high-performance processing power necessary for AI deployments across the hybrid cloud without extreme resource requirements. In addition, using AMD Instinct MI300X GPUs with Red Hat Enterprise Linux AI, Red Hat and AMD conducted testing on Microsoft Azure ND MI300X v5 to successfully demonstrate AI inferencing for scaling small language models (SLMs) as well as large language models (LLMs) deployed across multiple GPUs on a single VM, reducing the need to deploy across multiple VMs and cutting costs.

NVIDIA & Microsoft Accelerate Agentic AI Innovation - From Cloud to PC

Agentic AI is redefining scientific discovery and unlocking research breakthroughs and innovations across industries. Through deepened collaboration, NVIDIA and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC. At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. This will help research and development departments across various industries accelerate the time to market for new products, as well as speed and expand the end-to-end discovery process for all scientists.

Microsoft Discovery will integrate the NVIDIA ALCHEMI NIM microservice, which optimizes AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation. The platform will also integrate NVIDIA BioNeMo NIM microservices, tapping into pretrained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries. In testing, researchers at Microsoft used Microsoft Discovery to identify a novel coolant prototype with promising properties for immersion cooling in data centers in under 200 hours, rather than the months or years required by traditional methods.

Marvell Custom Cloud Platform Upgraded with NVIDIA NVLink Fusion Tech

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced it is teaming with NVIDIA to offer NVLink Fusion technology to customers employing Marvell custom cloud platform silicon. NVLink Fusion is an innovative new offering from NVIDIA for integrating custom XPU silicon with NVIDIA NVLink connectivity, rack-scale hardware architecture, software and other technology, providing customers with greater flexibility and choice in developing next-generation AI infrastructure.

The Marvell custom platform strategy seeks to deliver breakthrough results through unique semiconductor designs and innovative approaches. By combining expertise in system and semiconductor design, advanced process manufacturing, and a comprehensive portfolio of semiconductor platform solutions and IP—including electrical and optical serializer/deserializers (SerDes), die-to-die interconnects for 2D and 3D devices, advanced packaging, silicon photonics, co-packaged copper, custom high-bandwidth memory (HBM), system-on-chip (SoC) fabrics, optical IO, and compute fabric interfaces such as PCIe Gen 7—Marvell is able to create platforms in collaboration with customers that transform infrastructure performance, efficiency and value.

Vultr Cloud Platform Broadened with AMD EPYC 4005 Series Processors

Vultr, the world's largest privately-held cloud infrastructure company, today announced that it is one of the first cloud providers to offer the new AMD EPYC 4005 Series processors. The AMD EPYC 4005 Series processors will be available on the Vultr platform, enabling enterprise-class features and leading performance for businesses and hosted IT service providers. The AMD EPYC 4005 Series processors extend the broad AMD EPYC processor family, powering a new line of cost-effective systems designed for growing businesses and hosted IT services providers that demand performance, advanced technologies, energy efficiency, and affordability. Servers featuring the high-performance AMD EPYC 4005 Series CPUs with streamlined memory and I/O feature sets are designed to deliver compelling system price-to-performance metrics on key customer workloads. Meanwhile, the combination of up to 16 SMT-capable cores and DDR5 memory in the AMD EPYC 4005 Series processors enables smooth execution of business-critical workloads, while maintaining the thermal and power efficiency characteristics crucial for affordable compute environments.

"Vultr is committed to delivering the most advanced cloud infrastructure with unrivaled price-to-performance," said J.J. Kardwell, CEO of Vultr. "The AMD EPYC 4005 Series provides straightforward deployment, scalability, high clock speed, energy efficiency, and best-in-class performance. Whether you are a business striving to scale reliably or a developer crafting the next groundbreaking innovation, these solutions are designed to deliver exceptional value and meet demanding requirements now and in the future." Vultr's launch of systems featuring the AMD EPYC 4245P and AMD EPYC 4345P processors will expand the company's robust line of Bare Metal solutions. Vultr will also feature the AMD EPYC 4345P as part of its High Frequency Compute (HFC) offerings for organizations requiring the highest clock speeds and access to locally-attached NVMe storage.

NVIDIA & ServiceNow CEOs Jointly Present "Super Genius" Open-source Apriel Nemotron 15B LLM

ServiceNow is accelerating enterprise AI with a new reasoning model built in partnership with NVIDIA—enabling AI agents that respond in real time, handle complex workflows and scale functions like IT, HR and customer service teams worldwide. Unveiled today at ServiceNow's Knowledge 2025—where NVIDIA CEO and founder Jensen Huang joined ServiceNow chairman and CEO Bill McDermott during his keynote address—Apriel Nemotron 15B is compact, cost-efficient and tuned for action. It's designed to drive the next step forward in enterprise large language models (LLMs).

Apriel Nemotron 15B was developed with NVIDIA NeMo, the open NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, and was trained on NVIDIA DGX Cloud running on Amazon Web Services (AWS). The news follows the April release of the NVIDIA Llama Nemotron Ultra model, which harnesses the NVIDIA open dataset that ServiceNow used to build its Apriel Nemotron 15B model. Ultra is among the strongest open-source models at reasoning, including scientific reasoning, coding, advanced math and other agentic AI tasks.

IBM & Oracle Expand Partnership - Aim to Advance Agentic AI and Hybrid Cloud

IBM is working with Oracle to bring the power of watsonx, IBM's flagship portfolio of AI products, to Oracle Cloud Infrastructure (OCI). Leveraging OCI's native AI services, the latest milestone in IBM's technology partnership with Oracle is designed to fuel a new era of multi-agentic, AI-driven productivity and efficiency across the enterprise. Organizations today are deploying AI throughout their operations, looking to take advantage of the extraordinary advancements in generative AI models, tools, and agents. AI agents that can provide a single, easy-to-use interface to complete tasks are emerging as key tools to help simplify the deployment and use of AI across enterprise operations and functions. "AI delivers the most impactful value when it works seamlessly across an entire business," said Greg Pavlik, executive vice president, AI and Data Management Services, Oracle Cloud Infrastructure. "IBM and Oracle have been collaborating to drive customer success for decades, and our expanded partnership will provide customers new ways to help transform their businesses with AI."

Watsonx Orchestrate to support multi-agent workflows
To give customers a consistent way to build and manage agents across multi-agent, multi-system business processes, spanning both Oracle and non-Oracle applications and data sources, IBM is making its watsonx Orchestrate AI agent offerings available on OCI in July. This multi-agent approach using watsonx Orchestrate is designed to work with the expansive AI agent offerings embedded within the Oracle AI Agent Studio for Fusion Applications, as well as OCI Generative AI Agents and OCI's other AI services. It extends the ecosystem around Oracle Fusion Applications to enable further functionality across third-party and custom applications and data sources. The first use cases being addressed are in human resources. The watsonx Orchestrate agents will perform AI inferencing on OCI, which many customers use to host their data, AI, and other applications. IBM agents run in watsonx Orchestrate on Red Hat OpenShift on OCI, including in public, sovereign, government, and Oracle Alloy regions, enabling customers to address specific regulatory and privacy requirements. The agents can also be hosted on-premises or in multicloud environments for true hybrid cloud capabilities.

Astera Labs Ramps Production of PCIe 6 Connectivity Portfolio

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced its purpose-built PCIe 6 connectivity portfolio is ramping production to fast-track deployments of modern AI platforms at scale. Now featuring gearbox connectivity solutions alongside fabric switches, retimers, and active cable modules, Astera Labs' expanding PCIe 6 portfolio provides a comprehensive connectivity platform to deliver unparalleled performance, utilization, and scalability for next-generation AI and general-compute systems. Along with Astera Labs' demonstrated PCIe 6 connectivity over optical media, the portfolio will provide even greater AI rack-scale distance optionality. The transition to PCIe 6 is fueled by the insatiable demand for higher compute, memory, networking, and storage data throughput, ensuring advanced AI accelerators and GPUs operate at peak efficiency.

Thad Omura, Chief Business Officer, said, "Our PCIe 6 solutions have successfully completed qualification with leading AI and cloud server customers, and we are ramping up to volume production in parallel with their next-generation AI platform rollouts. By continuing to expand our industry-leading PCIe connectivity portfolio with additional innovative solutions that include Scorpio Fabric Switches, Aries Retimers, Gearboxes, Smart Cable Modules, and PCIe over optics technology, we are providing our hyperscaler and data center partners with all the necessary tools to accelerate the development and deployment of leading-edge AI platforms."

IBM Cloud is First Service Provider to Deploy Intel Gaudi 3

IBM is the first cloud service provider to make Intel Gaudi 3 AI accelerators available to customers, a move designed to make powerful artificial intelligence capabilities more accessible and to directly address the high cost of specialized AI hardware. For Intel, the rollout on IBM Cloud marks the first major commercial deployment of Gaudi 3, bringing choice to the market. By leveraging Intel Gaudi 3 on IBM Cloud, the two companies aim to help clients cost-effectively test, innovate and deploy GenAI solutions.

According to a recent forecast by research firm Gartner, worldwide generative AI (GenAI) spending is expected to total $644 billion in 2025, an increase of 76.4% from 2024. The research found "GenAI will have a transformative impact across all aspects of IT spending markets, suggesting a future where AI technologies become increasingly integral to business operations and consumer products."
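The growth figure implies a 2024 baseline that the article does not state; a quick sketch of the arithmetic (the derived baseline is an inference from the quoted numbers, not a figure Gartner reports here):

```python
# Gartner figures as quoted above
spend_2025_busd = 644.0   # forecast worldwide GenAI spending in 2025, in $B
yoy_growth = 0.764        # 76.4% increase over 2024

# Implied 2024 baseline (derived, not stated in the article)
implied_2024_busd = spend_2025_busd / (1 + yoy_growth)
print(f"implied 2024 GenAI spend: ${implied_2024_busd:.0f}B")  # ~$365B
```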

LG Brings Xbox Cloud Gaming (Beta) Experience Directly to LG Smart TV Screens

LG Electronics (LG) has announced the arrival of the highly anticipated Xbox app on LG Smart TVs this week, allowing users to stream Xbox games on the big screen at home. LG Smart TV owners in over 25 countries can now play everything from the latest indie hits to the biggest AAA titles directly through the Xbox app on their LG Smart TVs.

Conveniently accessible from the Gaming Portal and the LG Apps, the Xbox app enables LG TV owners to jump straight into gameplay from day one with Xbox Game Pass Ultimate, launching hundreds of titles from Activision, Bethesda, Blizzard, Mojang, Xbox Game Studios and more - with just a compatible controller. The Xbox app is available on LG TVs and select smart monitors running the latest webOS 24 and newer versions, and it will soon be available on StanbyME screens.

Micron Announces Business Unit Reorganization to Capitalize on AI Growth Across All Market Segments

Micron Technology, Inc. (Nasdaq: MU), a leader in innovative memory and storage solutions, today announced a market segment-based reorganization of its business units to capitalize on the transformative growth driven by AI, from data centers to edge devices.

Micron has maintained multiple generations of industry leadership in DRAM and NAND technology and has the strongest competitive positioning in its history. Micron's industry-leading product portfolio, combined with world-class manufacturing execution, enables the development of differentiated solutions for its customers across end markets. As high-performance memory and storage become increasingly vital to driving the growth of AI, this business unit reorganization will allow Micron to stay at the forefront of innovation in each market segment through deeper customer engagement to address the dynamic needs of the industry.

Huawei CloudMatrix 384 System Outperforms NVIDIA GB200 NVL72

Huawei announced its CloudMatrix 384 system super node, which the company touts as its domestic alternative to NVIDIA's GB200 NVL72 system, with more overall system performance but worse per-chip performance and higher power consumption. While NVIDIA's GB200 NVL72 uses 36 Grace CPUs paired with 72 "Blackwell" GB200 GPUs, the Huawei CloudMatrix 384 system employs 384 Huawei Ascend 910C accelerators to beat NVIDIA's GB200 NVL72 system. It takes roughly five times more Ascend 910C accelerators to deliver nearly twice the GB200 NVL72 system performance: an unfavorable ratio on a per-accelerator basis, but an impressive result at the system level. SemiAnalysis argues that Huawei is a generation behind in chip performance but ahead of NVIDIA in scale-up system design and deployment.

When you look at individual chips, NVIDIA's GB200 NVL72 clearly outshines Huawei's Ascend 910C, delivering over three times the BF16 performance (2,500 TeraFLOPS vs. 780 TeraFLOPS), more on-chip memory (192 GB vs. 128 GB), and faster bandwidth (8 TB/s vs. 3.2 TB/s). In other words, NVIDIA has the raw power and efficiency advantage at the chip level. But at the system level, Huawei's CloudMatrix 384 takes the lead. It cranks out 1.7× the overall PetaFLOPS, packs in 3.6× more total HBM capacity, and supports over five times the number of GPUs and the associated bandwidth of NVIDIA's NVL72 cluster. That scalability comes with a trade-off, however, as Huawei's setup draws nearly four times more total power: a single GB200 NVL72 draws 145 kW, while a single Huawei CloudMatrix 384 draws roughly 560 kW. So, NVIDIA is your go-to if you need peak efficiency in a single GPU. If you're building a massive AI supercluster where total throughput and interconnect speed matter most, Huawei's solution makes a lot of sense. Thanks to its all-to-all topology, Huawei has delivered an AI training and inference system worth purchasing. When SMIC, the maker of Huawei's chips, reaches a more advanced manufacturing node, the efficiency of these systems will also improve.
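The system-level ratios quoted above follow directly from the per-chip figures and chip counts; a minimal cross-check in Python (all numbers are the article's reported specs, not independent measurements):

```python
# Reported per-chip specs
gb200 = {"bf16_tflops": 2500, "hbm_gb": 192}      # NVIDIA GB200
ascend_910c = {"bf16_tflops": 780, "hbm_gb": 128}  # Huawei Ascend 910C

# Reported per-system chip counts and power draw
nvl72 = {"chips": 72, "power_kw": 145}    # NVIDIA GB200 NVL72
cm384 = {"chips": 384, "power_kw": 560}   # Huawei CloudMatrix 384

# Per-system BF16 throughput in PFLOPS, derived from chip counts
nvl72_pflops = nvl72["chips"] * gb200["bf16_tflops"] / 1000        # 180 PFLOPS
cm384_pflops = cm384["chips"] * ascend_910c["bf16_tflops"] / 1000  # ~299.5 PFLOPS

print(f"system compute ratio: {cm384_pflops / nvl72_pflops:.2f}x")  # 1.66x, matching the ~1.7x claim
total_hbm_ratio = (cm384["chips"] * ascend_910c["hbm_gb"]) / (nvl72["chips"] * gb200["hbm_gb"])
print(f"total HBM ratio: {total_hbm_ratio:.2f}x")                   # 3.56x, matching the ~3.6x claim
print(f"power ratio: {cm384['power_kw'] / nvl72['power_kw']:.2f}x")  # 3.86x, "nearly four times"
```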

Safe Superintelligence Inc. Uses Google TPUs Instead of Regular GPUs for Next-Generation Models

It seems like Google aims to grab a bit of the market share from NVIDIA and AMD by offering startups large compute deals and allowing them to train their massive AI models on the Google Cloud Platform (GCP). One such case is the OpenAI co-founder Ilya Sutskever's Safe Superintelligence Inc. (SSI) startup. According to a GCP post, SSI is "partnering with Google Cloud to use TPUs to accelerate its research and development efforts toward building a safe, superintelligent AI." Google's latest TPU v7p, codenamed Ironwood, was released yesterday. Delivering 4,614 TeraFLOPS of FP8 compute and carrying 192 GB of HBM memory, these TPUs are interconnected using Google's custom ICI infrastructure and scale to pods of 9,216 chips, where Ironwood delivers 42.5 ExaFLOPS of total computing power.
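The pod-level total quoted above is simply the per-chip figure multiplied out; a one-line check (figures as reported in the article):

```python
# Ironwood figures as reported above
fp8_teraflops_per_chip = 4614   # TFLOPS at FP8 precision, per chip
chips_per_pod = 9216

# Convert total TFLOPS to ExaFLOPS (1 EFLOPS = 1,000,000 TFLOPS)
pod_exaflops = fp8_teraflops_per_chip * chips_per_pod / 1_000_000
print(f"{pod_exaflops:.1f} ExaFLOPS per pod")  # 42.5, matching the quoted total
```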

For AI training, this massive compute will let models move through training runs faster, accelerating research iterations and ultimately model development. For SSI, the end goal is a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

NVIDIA Will Bring Agentic AI Reasoning to Enterprises with Google Cloud

NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.

"By bringing our Gemini models on premises with NVIDIA Blackwell's breakthrough performance and confidential computing capabilities, we're enabling enterprises to unlock the full potential of agentic AI," said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. "This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease." Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models' application programming interface—as well as the data they used for fine-tuning—remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.

5th Gen AMD EPYC Processors Deliver Leadership Performance for Google Cloud C4D and H4D Virtual Machines

Today, AMD announced the new Google Cloud C4D and H4D virtual machines (VMs) are powered by 5th Gen AMD EPYC processors. The latest additions to Google Cloud's general-purpose and HPC-optimized VMs deliver leadership performance, scalability, and efficiency for demanding cloud workloads, from data analytics and web serving to high-performance computing (HPC) and AI.

Google Cloud C4D instances deliver impressive performance, efficiency, and consistency for general-purpose computing workloads and AI inference. Based on Google Cloud's testing, leveraging the advancements of the AMD "Zen 5" architecture allowed C4D to deliver up to 80% higher throughput/vCPU compared to previous generations. H4D instances, optimized for HPC workloads, feature AMD EPYC CPUs with Cloud RDMA for efficient scaling of up to tens of thousands of cores.

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel's Gaudi 3 on IBM Cloud, the two companies are aiming to help clients more cost effectively test, innovate and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

AMD 5th Gen EPYC CPUs Power Oracle Cloud Infrastructure Compute E6 Shapes

Today, AMD announced 5th Gen AMD EPYC processors power the Oracle Cloud Infrastructure (OCI) Compute E6 Standard shapes. 5th Gen AMD EPYC processors, the world's best server CPUs for enterprise, AI and cloud, enable OCI Compute E6 shapes to deliver up to a 2X increase in price-performance compared to the previous E5 instance generation, based on testing by OCI.

The new OCI Compute E6 shapes build on the success of the previous E5 generation to deliver leadership performance and cost efficiency for general-purpose and compute-intensive workloads. These OCI shapes add to the selection of more than a thousand compute instances powered by AMD EPYC processors across all major cloud service providers.

Supermicro Ships Over 20 New Systems that Redefine Single-Socket Performance

Super Micro Computer, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing the availability of new single-socket servers capable of supporting applications that previously required dual-socket servers for a range of data center workloads. By leveraging a single-socket architecture, enterprises and data center operators can reduce initial acquisition costs and ongoing operational costs such as power and cooling, and shrink the physical footprint of server racks compared to previous generations of systems based on older processors.

"We are entering a new era of compute where energy-efficient and thermally optimized single-socket architectures are becoming a viable alternative to traditional dual-processor servers," said Charles Liang, president and CEO of Supermicro. "Our new single-socket servers support 100% more cores per system than previous generations and have been designed to maximize acceleration, networking, and storage flexibility. Supporting up to 500-watt TDP processors, these new systems can be configured to fulfill a wide range of workload requirements."

NVIDIA GeForce NOW Brings More Blizzard Gaming to the Cloud

Bundle up - GeForce NOW is bringing a flurry of Blizzard titles to its ever-expanding library. Prepare to weather epic gameplay in the cloud, tackling genres from real-time strategy (RTS) to multiplayer online battle arena (MOBA) and more. Classic Blizzard titles join GeForce NOW, including Heroes of the Storm, Warcraft Rumble and three titles from the Warcraft: Remastered series. They're all part of 11 games joining the cloud this week, alongside the latest update for miHoYo's hit game Zenless Zone Zero.

Blizzard Heats Things Up
Heroes of the Storm, Blizzard's unique take on the MOBA genre, offers fast-paced team battles across diverse battlegrounds. The game features a roster of iconic Blizzard franchise characters, each with customizable talents and abilities. Heroes of the Storm emphasizes team-based gameplay with shared experiences and objectives, making it more accessible to newcomers while providing depth for experienced players.

Supermicro Intros New Systems Optimized for Edge and Embedded Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is introducing a wide range of new systems that are fully optimized for edge and embedded workloads. Several of these new compact servers, based on the latest Intel Xeon 6 SoC processor family (formerly codenamed Granite Rapids-D), empower businesses to optimize real-time AI inferencing and enable smarter applications across many key industries.

"As the demand for Edge AI solutions grows, businesses need highly reliable, compact systems that can process data at the edge in real-time," said Charles Liang, president and CEO of Supermicro. "At Supermicro, we design and deploy the industry's broadest range of application optimized systems from the data center to the far edge. Our latest generation of edge servers deliver advanced AI capabilities for enhanced efficiency and decision-making close to where the data is generated. With up to 2.5 times core count increase at the edge with improved performance per watt and per core, these new Supermicro compact systems are fully optimized for workloads such as Edge AI, telecom, networking, and CDN."

Qualcomm Targets Bolstering of AI & IoT Capabilities with Edge Impulse Acquisition

At Embedded World Germany, Qualcomm Technologies, Inc. announced it has entered into an agreement to acquire Edge Impulse Inc., which will enhance its offering for developers and expand its leadership in AI capabilities to power AI-enabled products and services across IoT. The closing of the deal is subject to customary closing conditions. The acquisition is anticipated to complement Qualcomm Technologies' strategic approach to IoT transformation, which includes a comprehensive chipset roadmap, unified software architecture, a suite of services, developer resources, ecosystem partners, comprehensive solutions, and IoT blueprints to address diverse industry needs and challenges.

"We are thrilled about the opportunity to significantly enhance our IoT offerings with Edge Impulse's advanced AI-powered end-to-end platform that will complement our strategic approach to IoT transformation," said Nakul Duggal, group general manager, automotive, industrial and embedded IoT, and cloud computing, Qualcomm Technologies, Inc. "We anticipate that this acquisition will strengthen our leadership in AI and developer enablement, enhancing our ability to provide comprehensive technology for critical sectors such as retail, security, energy and utilities, supply chain management, and asset management. IoT opens the door for a myriad of opportunities, and success is about building real-world solutions, enabling developers and enterprises with AI capabilities to extract intelligence from data, and providing them with the tools to build the applications and services that will power the digital transformation of industries."

Amazon GameLift Streams Empowers Developers to Stream Games to Virtually Any Device

Amazon Web Services (AWS), an Amazon.com, Inc. company, today announced Amazon GameLift Streams, a fully managed capability that enables developers to deliver high-fidelity, low-latency game experiences to players using virtually any device with a browser. Game developers no longer need to spend time and resources modifying their games for streaming or building their own streaming infrastructure. Players around the world can begin playing games in seconds instead of waiting minutes for streams or hours for downloads. Amazon GameLift Streams is a new capability of Amazon GameLift, the AWS service that empowers developers to build and deliver the world's most demanding games. The new streaming capability opens opportunities for developers to deliver new experiences to more players, helping them grow engagement and sales of their games.

"With more than 750 million people playing games running on AWS every month, we have a long history of supporting the industry's game development, content creation, player acquisition, personalization, and more," said Chris Lee, general manager and head of Immersive Technology at AWS. "Amazon GameLift Streams can help the game industry transform billions of everyday devices around the world into gaming machines without rebuilding game code or managing your own infrastructure. For game developers, this creates exciting new revenue and monetization opportunities that weren't possible before."

QNAP Releases Cloud NAS Operating System QuTScloud c5.2

QNAP Systems, Inc. today released QuTScloud c5.2, the latest version of its Cloud NAS operating system. This update introduces Security Center, a proactive security application that monitors Cloud NAS file activities and defends against ransomware threats. Additionally, QuTScloud c5.2 provides extensive optimizations, streamlining operations and management for a more seamless user experience.

QuTScloud Cloud NAS revolutionizes enterprise data storage and management. By deploying a QuTScloud image on virtual machines, businesses can flexibly implement Cloud NAS on public cloud platforms or virtualization environments. With a subscription-based pricing model starting at just US $4.99 per month, users can allocate resources efficiently and optimize costs.

Intel Showcases Foundational Network Infrastructure with Xeon 6 at MWC 2025

The telecommunications industry is undergoing a major transformation as AI and 5G technologies reshape networks and connectivity. While operators are eager to modernize infrastructure, challenges remain, such as high capital expenditures, security concerns and integration with legacy systems. At MWC 2025, Intel - alongside more than 50 partners and customers - will showcase groundbreaking solutions that deliver high-capacity, high-efficiency performance with built-in AI integration, eliminating the need for costly additional hardware and delivering optimized total cost of ownership (TCO).

"By leveraging cloud technologies and fostering close collaborations with partners, we are helping operators virtualize both 5G core and radio access networks - proving that the most demanding, mission-critical workloads can run efficiently on general-purpose silicon," said Sachin Katti, senior vice president and general manager of the Network and Edge Group at Intel Corporation. "Through our Xeon 6 processors, we are enabling the future of AI-powered network modernization."