News Posts matching #Research


Doudna Supercomputer Will be Powered by NVIDIA's Next-gen Vera Rubin Platform

Ready for a front-row seat to the next scientific revolution? That's the idea behind Doudna—a groundbreaking supercomputer announced today at Lawrence Berkeley National Laboratory in Berkeley, California. The system represents a major national investment in advancing U.S. high-performance computing (HPC) leadership, ensuring U.S. researchers have access to cutting-edge tools to address global challenges. "It will advance scientific discovery from chemistry to physics to biology and all powered by—unleashing this power—of artificial intelligence," U.S. Energy Secretary Chris Wright said at today's event.

Also known as NERSC-10, Doudna is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. The next-generation system announced today is designed not just for speed but for impact. Powered by Dell Technologies infrastructure with the NVIDIA Vera Rubin architecture, and set to launch in 2026, Doudna is tailored for real-time discovery across the U.S. Department of Energy's most urgent scientific missions. It's poised to catapult American researchers to the forefront of critical scientific breakthroughs, fostering innovation and securing the nation's competitive edge in key technological fields.

Heron QPU-powered IBM Quantum System One Will Bolster UTokyo's Miyabi Supercomputer

The University of Tokyo (UTokyo) and IBM have announced plans to deploy the latest 156-qubit IBM Heron quantum processing unit (QPU), which will be operational in the IBM Quantum System One administered by UTokyo for the members of the Quantum Innovation Initiative (QII) Consortium. The IBM Heron QPU, which features a tunable-coupler architecture, delivers a significantly higher performance than the processor previously installed in 2023.

This is the second upgrade of the IBM Quantum System One deployed under the collaboration between UTokyo and IBM. The system first ran a 27-qubit IBM Falcon QPU, was updated to a 127-qubit IBM Eagle QPU in 2023, and will now transition to the latest-generation IBM Heron later this year. IBM has deployed four Heron-based systems worldwide, and their performance shows significant improvement over the previous Eagle QPU: a 3-4x improvement in two-qubit error rates; an order-of-magnitude improvement in device-wide performance, benchmarked by errors across 100-qubit-long layers; continued improvement in speed, with a 60 percent increase in CLOPS expected; and system uptime of more than 95%. The latest IBM Heron processor has continued to demonstrate value in utility-scale workloads, with multiple published studies leveraging these systems' ability to execute more than 5,000 gate operations.

ETH Zurich Researchers Discover New Security Vulnerability in Intel Processors

Computer scientists at ETH Zurich have discovered a new class of vulnerabilities in Intel processors that allows them to break down the barriers between different users of a processor using carefully crafted instruction sequences. The entire processor memory can be read by employing quick, repeated attacks. Anyone who speculates on likely events ahead of time and prepares accordingly can react more quickly to new developments. What practically every person does every day, consciously or unconsciously, is also used by modern computer processors to speed up the execution of programs: they employ so-called speculative technologies that allow them to execute, in advance, instructions that experience suggests are likely to come next. Anticipating individual computing steps accelerates the overall processing of information.

However, what boosts computer performance in normal operation can also open up a backdoor for hackers, as recent research by computer scientists from the Computer Security Group (COMSEC) at the Department of Information Technology and Electrical Engineering at ETH Zurich shows. The computer scientists have discovered a new class of vulnerabilities that can be exploited to misuse the prediction calculations of the CPU (central processing unit) in order to gain unauthorized access to information from other processor users.
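The prediction mechanism the researchers target can be illustrated, in a greatly simplified form, with a toy model. The sketch below is our own illustration of how a classic 2-bit saturating-counter branch predictor "learns" a branch's likely outcome — it is not the exploit itself, nor Intel's actual predictor design:

```python
class TwoBitPredictor:
    """Toy 2-bit saturating counter, a classic branch-prediction scheme.

    States 0-1 predict "not taken"; states 2-3 predict "taken"."""

    def __init__(self):
        self.state = 2  # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturate at the ends of the 0..3 range
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
# A mostly-taken branch with a single deviation: the predictor absorbs
# one mispredict without flipping its overall prediction.
history = [True] * 8 + [False] + [True] * 4
correct = 0
for actual in history:
    if p.predict() == actual:
        correct += 1
    p.update(actual)
print(correct, len(history))  # 12 13
```

Because the predictor's state is trained by past behavior, an attacker who can steer that training can also steer what the CPU speculatively executes — the core idea behind this family of attacks.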

Update: Intel released a security advisory regarding CVE-2024-45332, accompanied by a public announcement, and provided TechPowerUp with the following statement:
"We appreciate the work done by ETH Zurich on this research and collaboration on coordinated public disclosure. Intel is strengthening its Spectre v2 hardware mitigations and recommends customers contact their system manufacturer for the appropriate update. To date, Intel is not aware of any real-world exploits of transient execution vulnerabilities." (Intel spokesperson)

Samsung Display Unveils Advanced R&D Achievements at Display Week 2025

Samsung Display announced on May 13 that it will take part in Display Week 2025, held from May 13 to 15 at the McEnery Convention Center in San Jose, California. The event, hosted by the Society for Information Display (SID), brings together display companies and experts from around the world to share cutting-edge technologies and R&D achievements.

Samsung Display is set to unveil a range of next-generation display technologies, including the industry's first non-cadmium 400-nit EL-QD and a 5,000 pixels-per-inch (PPI) RGB OLED on Silicon (OLEDoS) at Display Week 2025, reinforcing its leadership in cutting-edge panel technology. The company will also highlight its leadership in OLED innovation through a range of future-oriented technologies, including organic photodiodes (OPD), advanced sensors capable of measuring biometric data such as heart rate and blood pressure directly from light generated by a panel touched by a patient, and a high-resolution microdisplay that delivers 5,000 PPI in a compact 1.4-inch form factor.

Aledia to Showcase FlexiNova at Display Week 2025 — A Customizable MicroLED Platform for Next-Gen Screens

At Display Week 2025, Aledia, the leader in nanowire and 3D silicon-based microLED display technology, will showcase FlexiNova, a new product-ready platform built on its proprietary nanowire microLED technology and designed to scale production of next-gen displays. With its first industrial chip format now defined and available for sampling in 6 V and 9 V variants starting in H2 2025, FlexiNova empowers OEMs and display makers to integrate microLEDs into devices as diverse as smartwatches, automotive dashboards, luxury TVs and ultra-high-resolution monitors while maximizing power efficiency.

To bring microLED to mass production, performance alone isn't enough—industrial viability is key. Until now, microLEDs have mostly stayed in the lab due to high costs and manufacturing challenges for chips smaller than 30 µm. FlexiNova removes these barriers, making it easier for manufacturers to transition to microLED displays, which are brighter and more energy-efficient, and enable longer battery life, than OLED and LCD. What makes FlexiNova stand out is its flexibility: chip size, shape and power usage can be tailored to product needs without sacrificing performance.

PlanetPlay & UN Research Reveals Gamers' Shift to Greener Habits

Data drawn from a new in-game poll called Play2Act showed evidence that gaming has the potential to incentivize greener habits among gamers. 79 percent of those respondents who had played games with green messages or environmental content reported making at least one positive behavioural change after playing these games. Among these players, 47 percent report reducing their environmental impact through energy use or public transport, while 34 percent report making greener consumption choices.

Launched in September 2024, Play2Act is a poll embedded in popular games, designed to explore the role of games in tackling the climate and nature crises. The initiative was developed by PlanetPlay, a not-for-profit platform that contributes to environmental action through games, in collaboration with the United Nations Development Programme (UNDP).

IBM Announces Granite 4.0 Tiny Preview - an Extremely Compact & Compute Efficient AI Model

We're excited to present IBM Granite 4.0 Tiny Preview, a preliminary version of the smallest model in the upcoming Granite 4.0 family of language models, to the open source community. Granite 4.0 Tiny Preview is extremely compact and compute efficient: at FP8 precision, several concurrent sessions performing long context (128K) tasks can be run on consumer grade hardware, including GPUs commonly available for under $350 USD. Though the model is only partially trained—it has only seen 2.5T of a planned 15T or more training tokens—it already offers performance rivaling that of IBM Granite 3.3 2B Instruct despite fewer active parameters and a roughly 72% reduction in memory requirements. We anticipate Granite 4.0 Tiny's performance to be on par with that of Granite 3.3 8B Instruct by the time it has completed training and post-training.

As its name suggests, Granite 4.0 Tiny will be among the smallest offerings in the Granite 4.0 model family. It will be officially released this summer as part of a model lineup that also includes Granite 4.0 Small and Granite 4.0 Medium. Granite 4.0 continues IBM's firm commitment to making efficiency and practicality the cornerstone of its enterprise LLM development. This preliminary version of Granite 4.0 Tiny is now available on Hugging Face—though we do not yet recommend the preview version for enterprise use—under a standard Apache 2.0 license. Our intent is to allow even GPU-poor developers to experiment and tinker with the model on consumer-grade GPUs. The model's novel architecture is pending support in Hugging Face transformers and vLLM, which we anticipate will be completed shortly for both projects. Official support to run this model locally through platform partners including Ollama and LMStudio is expected in time for the full model release later this summer.

Ultra-High-Speed Flash Memory Created by Chinese Researchers

Researchers at Fudan University have made a significant advancement in integrated circuit technology. The team led by Zhou Peng and Liu Chunsen has developed "PoX," a picosecond flash memory device that operates at unprecedented speeds. The team predicted a phenomenon called "super-injection" by creating a quasi-2D Poisson model, which surpasses existing theoretical limits on memory speed. Their device achieves read/write speeds of 400 picoseconds (less than one nanosecond), equaling approximately 2.5 billion operations per second and making it the world's fastest semiconductor charge memory technology. "This is like the device can work one billion times in the blink of an eye, while a U disk (a USB flash drive) can only work 1,000 times. The previous world record for similar technology was two million," said Zhou Peng, a researcher from Fudan University's State Key Laboratory of Integrated Chips and Systems and a leading scientist on the research team.

Traditional flash memory requires electrons to "warm up" and accelerate along a channel before being captured for storage, a process limited by the long acceleration distance and electric-field constraints. The new approach combines a Dirac energy band structure, the ballistic transport properties of two-dimensional materials, and modulation of the Gaussian length of the 2D channel. This allows electrons to reach very high speeds immediately, without any "run-up" period. Once fully developed, the technology could revolutionize computer architecture by merging memory and storage into a single component, obviating the need for hierarchical storage and allowing large AI models to be deployed locally. Within 3-5 years, the researchers plan to scale integration to tens of megabits, after which the technology will be made available for licensing to industry. The research was published as "Subnanosecond flash memory enabled by 2D-enhanced hot-carrier injection" in Nature 641, 90-97 (2025).
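The reported figures are easy to sanity-check. A quick back-of-the-envelope calculation (the 0.4-second blink duration is our assumption for illustration, not a number from the paper):

```python
# Sanity-checking the reported PoX figures.
op_time = 400e-12              # 400 picoseconds per operation, as reported
blink = 0.4                    # assumed duration of an eye blink, in seconds

ops_per_second = 1 / op_time   # matches the article's ~2.5 billion ops/s
ops_per_blink = blink / op_time

print(f"{ops_per_second:.1e}")  # 2.5e+09
print(f"{ops_per_blink:.1e}")   # 1.0e+09 -> "one billion times in the blink of an eye"
```

The 1/400 ps figure reproduces both the ~2.5 billion operations per second claim and, assuming a roughly 0.4 s blink, Zhou Peng's "one billion times in the blink of an eye" comparison.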

LG Innotek to Build FC-BGA Into 700 Million USD Business by 2030

LG Innotek unveiled the Dream Factory, a hub for the production of FC-BGAs (Flip Chip Ball Grid Arrays) and the company's next-generation growth engine, to the media for the first time, announcing it on April 30.

In 2022, LG Innotek announced its plans to launch a business producing FC-BGAs, high-value semiconductor substrates. To build the Dream Factory, the company acquired LG Electronics' Gumi 4 Factory and began full-scale mass production in February 2024.

Report: Global PC Shipments Up 6.7% YoY in Q1 2025 Amid US Tariff Anticipation

Global PC shipments grew 6.7% YoY in Q1 2025 to reach 61.4 million units, according to Counterpoint Research's preliminary data. The growth was mainly driven by PC vendors accelerating shipments ahead of US tariffs and the increasing adoption of AI-enabled PCs amid the end of Windows 10 support. However, this surge may be short-lived, as inventory levels are likely to stabilize in the next few weeks. The impact of the US tariffs is expected to dampen the growth momentum in 2025.

Apple and Lenovo delivered strong performances in the quarter, largely due to new product launches and market dynamics. Apple experienced 17% YoY growth in shipments, driven by its AI-capable M4-based MacBook series. Lenovo's 11% growth reflected its expansion into AI-enabled PCs and its diversified product portfolio. Lenovo remained the brand with the largest market share during the quarter. HP and Dell, on the other hand, benefited from the US market pull-ins during the quarter, with 6% and 4% YoY growth respectively, and maintained their second and third places in Q1. We also found that the pull-ins happened for other major brands too ahead of the tariff uncertainty, leading to the market share further consolidating around major brands.

Intel Arc Xe2 "BMG-G31" GPU Spotted in Shipping Manifest; "Battlemage" B770 Model's Fortunes Revived?

At the tail end of March, an interaction between Tomasz Gawroński (aka GawroskiT) and Jaykihn (jaykihn0) indicated that Intel had abandoned the development of higher-end Arc Xe2 "Battlemage" graphics cards—possibly back in late 2024. Months of silence—since the launch of the pleasingly wallet-friendly B580 and B570 models—instilled a sense of unease within segments of the PC gaming hardware community. Many watchdogs assumed that company engineers had simply moved on to devising futuristic Arc Xe3 "Celestial" equivalents. Following a discovery last week, hopes have risen for a potential expansion of Team Blue's "Battlemage" dGPU lineup. Haze2K1 highlighted an intriguing entry within an NBD shipping manifest: a "BMG-G31"-type GPU was transferred "for R&D purposes." Currently, the lower end of Intel's B-card series is populated by discrete solutions based on the smaller "BMG-G21" GPU design.

Tomasz Gawroński spent part of his Easter weekend poring over shipping documents; soon stumbling on an entry that mentioned a mysterious "IBC C32 SKT"—again, listed under "research and development" purposes. In a reply to Gawroński's social media bulletin, miktdt weighed in with a logical theory: "because of the BMG in the text the best I could believe is a reworked/restarted BMG G31. C32 could simply mean cores 32 which is a fully-enabled G31. This makes more sense to me." VideoCardz posits that these leaks do not necessarily signal the revival of fortunes for more potent Arc Xe2 "Battlemage" SKUs; Intel could be shipping "canceled project" prototypes to different locations. Going back to late summer 2023, a "BMG G10" GPU die was spotted by members of the press during a tour of Team Blue's Malaysian test lab. Back then, certain industry insiders believed that the whole "Battlemage" endeavor was going through "development hell." Fast-forward to the present day; OneRaichu reckons that there is still a likelihood of Team Blue's "B770" model turning up at some point in the future.

Bionic Bay Out Now on PC & PlayStation 5

Speedrunning scientists and challenge hunters, rejoice. After years of crafting, your invitation to Bionic Bay is now available on Steam and PlayStation 5. It began with a chance meeting on Reddit, and now the Bionic Bay story continues with you, our trusted Research Participants. In Bionic Bay you'll master the art of teleportation, control the very fabric of time itself, manipulate gravity and navigate a path filled with dangers. Our team will provide you with the tools you'll need to traverse this intricately crafted pixel art world. But don't let the stunning visuals fool you, this is no slow-moving narrative crawl. The best way to navigate Bionic Bay is with fast movements, precise controls, exceptional timing, puzzle-solving intuition and just maybe a little bit of luck.

What you've seen in the Bionic Bay demo is just the beginning. Bionic Bay v1.0 includes 23 painstakingly created levels, with an increasing challenge to simply stay alive through your journey. Importantly, however, the path can be as helpful as it is dangerous, and Research Participants will need to judge the way ahead accordingly. In addition to the main game, we've also developed the Bionic Bay Online mode, with a dedicated build for speed runners and those hungry for a challenge. In this demanding arena, you'll have the opportunity to customize your own scientist, and claim your place on our transient and global leaderboards.

Quantum Machines Anticipates Collaborative Breakthroughs at NVIDIA's New Research Center

Quantum Machines (QM), a leading provider of advanced quantum control solutions, today announced its intention to work with NVIDIA at its newly established NVIDIA Accelerated Quantum Research Center (NVAQC), unveiled at the GTC global AI conference. The Boston-based center aims to advance quantum computing research with accelerated computing, including integrating quantum processors with AI supercomputing to overcome significant challenges in the quantum computing space. As quantum computing rapidly evolves, the integration of quantum processors with powerful AI supercomputers becomes increasingly essential. These accelerated quantum supercomputers are pivotal for advancing quantum error correction, device control, and algorithm development.

Quantum Machines joins other quantum computing pioneers, including Quantinuum and QuEra, along with academic partners from Harvard and MIT, in working with NVIDIA at the NVAQC to develop pioneering research. Quantum Machines will work with NVIDIA to integrate its NVIDIA GB200 Grace Blackwell Superchips with QM's advanced quantum control technologies, including the OPX1000. This integration will facilitate rapid, high-bandwidth communication between quantum processors and classical supercomputers. QM and NVIDIA thereby lay the essential foundations for quantum error correction and robust quantum algorithm execution. By reducing latency and enhancing processing efficiency, QM and NVIDIA solutions will significantly accelerate practical applications of quantum computing.

Qualcomm Announces Acquisition of VinAI Division, Aims to Expand GenAI Capabilities

Qualcomm today announced the acquisition of MovianAI Artificial Intelligence (AI) Application and Research JSC (MovianAI), the former generative AI division of VinAI Application and Research JSC (VinAI) and a part of the Vingroup ecosystem. As a leading AI research company, VinAI is renowned for its expertise in generative AI, machine learning, computer vision, and natural language processing. Combining VinAI's advanced generative AI research and development (R&D) capabilities with Qualcomm's decades of extensive R&D will expand its ability to drive extraordinary inventions.

For more than 20 years, Qualcomm has been working closely with the Vietnamese technology ecosystem to create and deliver innovative solutions. Qualcomm's innovations in the areas of 5G, AI, IoT and automotive have helped to fuel the extraordinary growth and success of Vietnam's information and communication technology (ICT) industry and assisted the entry of Vietnamese companies into the global marketplace.

Quantum Machines Announces NVIDIA DGX Quantum Early Access Program

Quantum Machines (QM), the leading provider of advanced quantum control solutions, has recently announced the NVIDIA DGX Quantum Early Customer Program, with a cohort of six leading research groups and quantum computer builders. NVIDIA DGX Quantum, a reference architecture jointly developed by NVIDIA and QM, is the first tightly integrated quantum-classical computing solution, designed to unlock new frontiers in quantum computing research and development. As quantum computers scale, their reliance on classical resources for essential operations, such as quantum error correction (QEC) and parameter drift compensation, grows exponentially. NVIDIA DGX Quantum provides access to the classical acceleration needed to support this progress, advancing the path toward practical quantum supercomputers.

NVIDIA DGX Quantum leverages OPX1000, the best-in-class, modular high-density hybrid control platform, seamlessly interfacing with NVIDIA GH200 Grace Hopper Superchips. This solution brings accelerated computing into the heart of the quantum computing stack for the first time, achieving an ultra-low round-trip latency of less than 4 µs between quantum control and AI supercomputers, faster than any other approach. The NVIDIA DGX Quantum Early Customer Program is now underway, with selected leading academic institutions, national labs, and commercial quantum computer builders participating. These include the Engineering Quantum Systems group (equs.mit.edu) led by MIT Professor William D. Oliver, the Israeli Quantum Computing Center (IQCC), quantum hardware developer Diraq, the Quantum Circuit group (led by Ecole Normale Supérieure de Lyon Professor Benjamin Huard), and more.
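The sub-4 µs figure matters because classical feedback must complete well within a qubit's coherence window. A rough illustration (the 100 µs coherence time is a typical order of magnitude for superconducting qubits, our assumption rather than a figure from QM or NVIDIA):

```python
# Back-of-the-envelope reading of the latency figure.
rtt = 4e-6          # reported round-trip latency bound, in seconds
coherence = 100e-6  # assumed superconducting-qubit coherence time, in seconds

# Number of classical round trips that fit inside one coherence window:
feedback_ops = coherence / rtt
print(int(feedback_ops))  # 25
```

Even under these rough assumptions, a 4 µs round trip leaves room for multiple control-and-readout feedback cycles per coherence window, which is what makes real-time error correction plausible.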

TSMC Arizona Operations Only 10% More Expensive Than Taiwanese Fab Operations

A recent study by TechInsights is reshaping the narrative around the cost of semiconductor manufacturing in the United States. According to the survey, processing a 300 mm wafer at TSMC's Fab 21 in Phoenix, Arizona, is only about 10% more expensive than similar operations in Taiwan. This insight challenges earlier assumptions based on TSMC founder Morris Chang's comments, which suggested that high fab-building expenses in Arizona made US chip production financially impractical. G. Dan Hutcheson of TechInsights highlighted that the observed cost difference largely reflects the expenses associated with establishing a brand-new facility. "It costs TSMC less than 10% more to process a 300 mm wafer in Arizona than the same wafer made in Taiwan," he explained. The initial higher costs stem from constructing a fab in an unfamiliar market with a new, sometimes unskilled workforce—a scenario not typical for mature manufacturing sites.

A significant portion of the wafer production cost is driven by equipment, which accounts for well over two-thirds of the total expenses. Leading equipment providers like ASML, Applied Materials, and Lam Research charge similar prices globally, effectively neutralizing geographic disparities. Although US labor costs are higher than in Taiwan, the heavy automation in modern fabs means that labor represents less than 2% of the overall cost. Additional logistics for Fab 21, including the return of wafers to Taiwan for dicing, testing, and packaging, add complexity but only minimally affect the overall expense. With plans to expand domestic packaging capabilities, TSMC's approach is proving to be strategically sound. This fresh perspective suggests that the apparent high cost of US fab construction has been exaggerated. TSMC's $100B investment in American semiconductor manufacturing reflects a calculated decision informed by detailed cost analysis—demonstrating that location-based differences become less significant when the equipment dominates expenses.
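The article's cost logic can be made concrete with a toy model. The split below is illustrative and hypothetical — it is not TechInsights' or TSMC's actual breakdown — but it is consistent with the stated facts (equipment well over two-thirds of cost, labor under 2%, globally uniform tool pricing):

```python
# Hypothetical per-wafer cost split, as a percentage of the Taiwan baseline.
taiwan = {"equipment": 70.0, "materials": 18.0, "labor": 2.0, "other": 10.0}

# Equipment and materials cost roughly the same worldwide; assume US labor
# runs 3x Taiwan's and "other" (ramp-up, logistics, construction) runs 1.5x.
us = {
    "equipment": taiwan["equipment"] * 1.0,
    "materials": taiwan["materials"] * 1.0,
    "labor":     taiwan["labor"] * 3.0,
    "other":     taiwan["other"] * 1.5,
}

premium = sum(us.values()) / sum(taiwan.values()) - 1
print(f"{premium:.1%}")  # 9.0%
```

Even with US labor tripled, the premium lands near the ~10% the study reports, because the dominant equipment line item is the same everywhere — the arithmetic behind "location-based differences become less significant when equipment dominates expenses."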

Google Making Vulkan the Official Graphics API on Android

We're stepping up our multiplatform gaming offering with exciting news dropping at this year's Game Developers Conference (GDC). We're bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we'll be diving into all of the latest games coming to Play, plus new developer tools that'll help improve gameplay across the Android ecosystem.

We're sharing a closer look at what's new from Android. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplays. Check out our video, or keep reading below.

NVIDIA to Build Accelerated Quantum Computing Research Center

NVIDIA today announced it is building a Boston-based research center to provide cutting-edge technologies to advance quantum computing. The NVIDIA Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing. The NVAQC will help solve quantum computing's most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices.

Leading quantum computing innovators, including Quantinuum, Quantum Machines and QuEra Computing, will tap into the NVAQC to drive advancements through collaborations with researchers from leading universities, such as the Harvard Quantum Initiative in Science and Engineering (HQI) and the Engineering Quantum Systems (EQuS) group at the Massachusetts Institute of Technology (MIT).

ASUS Introduces New "AI Cache Boost" BIOS Feature - R&D Team Claims Performance Uplift

Large language models (LLMs) love large quantities of memory—so much so, in fact, that AI enthusiasts are turning to multi-GPU setups to make even more VRAM available for their AI apps. But since many current LLMs are extremely large, even this approach has its limits. At times, model data will spill out of VRAM into system memory, with the CPU picking up part of the processing, and when it does, the performance of your CPU cache and DRAM comes into play. All this means that when it comes to the performance of AI applications, it's not just the GPU that matters, but the entire pathway that connects the GPU to the CPU to the I/O die to the DRAM modules. It stands to reason, then, that there are opportunities to boost AI performance by optimizing these elements.
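The spill-over situation described above is easy to sketch. The helper below is our own hypothetical illustration (`offload_split`, its parameters, and the example sizes are ours, not an ASUS or AMD tool), showing why a large model inevitably exercises the CPU/DRAM path:

```python
def offload_split(model_gb, vram_gb, reserve_gb=1.0):
    """Return (GB kept in VRAM, GB spilled to system RAM) for a model.

    reserve_gb approximates VRAM set aside for activations and KV cache.
    """
    on_gpu = max(0.0, min(model_gb, vram_gb - reserve_gb))
    return on_gpu, model_gb - on_gpu

# A ~70B-parameter model quantized to 4 bits needs roughly 35 GB for weights;
# on a 24 GB consumer GPU, a third of it ends up in system memory.
on_gpu, in_ram = offload_split(model_gb=35.0, vram_gb=24.0)
print(on_gpu, in_ram)  # 23.0 12.0
```

Once a double-digit share of the weights lives in DRAM, every token generated touches the CPU cache hierarchy — which is the pathway the AI Cache Boost setting aims to speed up.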

That's exactly what we've found as we've spent time in our R&D labs with the latest AMD Ryzen CPUs. AMD just launched two new Ryzen CPUs with AMD 3D V-Cache Technology, the AMD Ryzen 9 9950X3D and Ryzen 9 9900X3D, pushing the series into new performance territory. After testing a wide range of optimizations in a variety of workloads, we uncovered a range of settings that offer tangible benefits for AI enthusiasts. Now, we're ready to share these optimizations with you through a new BIOS feature: AI Cache Boost. Available through an ASUS AMD 800 Series motherboard and our most recent firmware update, AI Cache Boost can accelerate performance up to 12.75% when you're working with massive LLMs.

Scientists Cast Doubt on Microsoft's Quantum "Breakthrough" with Majorana 1 Chip

Microsoft launched its Majorana 1 chip—the world's first quantum processor powered by a Topological Core architecture—last month. The debut of the Majorana design was celebrated as a significant milestone; in 2023, Microsoft's research department published an ambitious roadmap that set a tall Majorana-particle-based task: building a proprietary quantum supercomputer within a decade. Returning to the present day, outside parties have criticized Microsoft's February announcements. The Register published an investigative piece earlier today, based on quotes from key players in the field of quantum studies. Many propose only a theoretical existence for Majorana particles, while Microsoft R&D employees have claimed detection and utilization. The Register referred back to recent history: "(Microsoft) made big claims about Majorana particles before, but it didn't end well: in 2021 Redmond's researchers retracted a 2018 paper in which they claimed to have detected the particles."

As pointed out by Microsoft researcher Chetan Nayak, the latest paper was actually authored in March 2024 but only made public in recent weeks. Further details of progress are expected next week at the American Physical Society (APS) 2025 Joint March Meeting. The Register has compiled quotes from vocal critics, starting with Henry Legg, a lecturer in theoretical physics at the University of St Andrews, Scotland. The noted scholar believes, as divulged in a scientific online comment, that Microsoft's claimed quantum breakthrough "is not reliable and must be revisited." Similarly, collaborators from Germany's Forschungszentrum Jülich institute and the University of Pittsburgh released a joint video statement in which experimental physicist Vincent Mourik and Professor Sergey Frolov, respectively, warned of "distractions caused by unreliable scientific claims from Microsoft Quantum."

ASML and imec Sign Strategic Partnership Agreement to Support Semiconductor Research and Sustainable Innovation in Europe

ASML Holding N.V. (ASML) and imec, a leading research and innovation hub in nanoelectronics and digital technologies, today announce that they have signed a new strategic partnership agreement, focusing on research and sustainability. The agreement has a duration of five years and aims to deliver valuable solutions in two areas by bringing together ASML's and imec's respective knowledge and expertise. First, to develop solutions that advance the semiconductor industry and second, to develop initiatives focused on sustainable innovation.

The collaboration incorporates ASML's whole product portfolio, with a focus on developing high-end nodes, using ASML systems including 0.55 NA EUV, 0.33 NA EUV, DUV immersion, YieldStar optical metrology and HMI single- and multi-beam technologies. These tools will be installed in imec's state-of-the-art pilot line and incorporated in the EU- and Flemish-funded NanoIC pilot line, providing the most advanced infrastructure for sub-2 nm R&D to the international semiconductor ecosystem. Focus areas for R&D will also include silicon photonics, memory and advanced packaging, offering full stack innovation for future semiconductor-based AI applications in diverse markets.

Remedy Entertainment Updates Control Ultimate Edition - Adds New Ultra Ray Tracing Preset, HDR Support & more

Hello Director! We're happy to announce that all owners of Control Ultimate Edition will be receiving a free content update. The update is hitting PC players (Steam and Epic Games Store, GOG to follow later) today, including several improvements to support newer hardware. In the near future we will release this same update for PlayStation 5 and Xbox Series X|S platforms. We have a small team working on these updates so we want to space them out to give us time to fix potential issues that might come up. We appreciate your patience! Keep an eye on our social channels for the exact date and time, as well as update notes.

Control Ultimate Edition contains the main game and all previously released Expansions ("The Foundation" and "AWE") in one great value package. A corruptive presence has invaded the Federal Bureau of Control…Only you have the power to stop it. The world is now your weapon in an epic fight to annihilate an ominous enemy through deep and unpredictable environments. Containment has failed, humanity is at stake. Will you regain control?

China Doubles Down on Semiconductor Research, Outpacing US with High-Impact Papers

When the US imposed sanctions on Chinese semiconductor makers, China began the push for sovereign chipmaking tools. According to a study conducted by the Emerging Technology Observatory (ETO), Chinese institutions have dramatically outpaced their US counterparts in next-generation chipmaking research. Between 2018 and 2023, nearly 475,000 scholarly articles on chip design and fabrication were published worldwide. Chinese research groups contributed 34% of the output—compared to just 15% from the United States and 18% from Europe. The study further emphasizes the quality of China's contributions. Focusing on the top 10% of the most-cited articles, Chinese researchers were responsible for 50% of this high-impact work, while American and European research accounted for only 22% and 17%, respectively.

This trend shows China's lead isn't about numbers only, and suggests that its work is resonating strongly within the global academic community. Key research areas include neuromorphic and optoelectronic computing, and, of course, lithography tools. China is operating mainly outside the scope of US export restrictions that have, since 2022, restricted access to advanced chipmaking equipment—specifically, tools necessary for fabricating chips below the 14 nm process node. Although US sanctions were intended to limit China's access to cutting-edge manufacturing technology, the massive body of Chinese research suggests that these measures might eventually prove less effective, with Chinese institutions continuing to push forward with influential, high-citation studies. However, Chinese theoretical work is yet to be proven in the field, as only a single company—SMIC—currently manufactures on 7 nm and 5 nm nodes. Chinese semiconductor makers still need more advanced lithography solutions to reach high-volume manufacturing on more advanced nodes like 3 nm and 2 nm to create more powerful domestic chips for AI and HPC.

GlobalFoundries and MIT Collaborate on Photonic AI Chips

GlobalFoundries (GF) and the Massachusetts Institute of Technology (MIT) today announced a new master research agreement to jointly pursue advancements and innovations for enhancing the performance and efficiency of critical semiconductor technologies. The collaboration will be led by MIT's Microsystems Technology Laboratories (MTL) and GF's research and development team, GF Labs.

With an initial research focus on AI and other applications, the first projects are expected to leverage GF's differentiated silicon photonics technology, which monolithically integrates RF SOI, CMOS and optical features on a single chip to realize power efficiencies for datacenters, and GF's 22FDX platform, which delivers ultra-low power consumption for intelligent devices at the edge.

IBM Introduces New Multi-Modal and Reasoning AI "Granite" Models Built for the Enterprise

IBM today debuted the next generation of its Granite large language model (LLM) family, Granite 3.2, in a continued effort to deliver small, efficient, practical enterprise AI for real-world impact. All Granite 3.2 models are available under the permissive Apache 2.0 license on Hugging Face. Select models are available today on IBM watsonx.ai, Ollama, Replicate, and LM Studio, and expected soon in RHEL AI 1.5 - bringing advanced capabilities to businesses and the open-source community.