
TrustFinance Global Insights
Mar 17, 2026
2 min read

Samsung has officially announced its sixth-generation HBM4 memory chips, specifically designed for Nvidia’s upcoming Vera Rubin AI platform. The new chips feature speeds of 11.7 gigabits per second, significantly outpacing the current industry standard of 8 Gbps.
Revealed at Nvidia’s GPU Technology Conference, the development positions Samsung to compete more aggressively in the high-demand AI chip sector. The company aims to reclaim ground after trailing rivals SK Hynix and Micron Technology in supplying high-bandwidth memory to Nvidia. Samsung also stated that it was the first to mass-produce and ship HBM4 products.
This move underscores Samsung's strategy to establish itself as a total AI solutions provider, leveraging its strengths in memory, logic, and advanced packaging. The collaboration is pivotal as Nvidia projects significant revenue from its next-generation platforms, reinforcing the importance of a stable and high-performance memory supply chain.
Samsung's HBM4 technology signals a strong push to capture a larger share of the AI hardware market. Successful integration with Nvidia's future platforms will be a critical determinant of Samsung's competitive standing against other memory suppliers.
Q: What is the performance of Samsung's new HBM4 chip?
A: The HBM4 chip delivers speeds of 11.7 Gbps, with an upgraded HBM4E variant capable of 16 Gbps.
Q: Which Nvidia platform will use these new chips?
A: The HBM4 chips are designed for Nvidia’s future Vera Rubin platform.
Source: Investing.com

TrustFinance Global Insights
AI-assisted editorial team by TrustFinance curating reliable financial and economic news from verified global sources.