NVIDIA Brings Generative AI to Millions, With Tensor Core GPUs, LLMs, Tools for RTX PCs and Workstations


NVIDIA recently announced GeForce RTX™ SUPER desktop GPUs for supercharged generative AI performance, new AI laptops from every top manufacturer, and new NVIDIA RTX™-accelerated AI software and tools for both developers and consumers.


Building on decades of PC leadership, with over 100 million of its RTX GPUs driving the AI PC era, NVIDIA is now offering these tools to enhance PC experiences with generative AI: NVIDIA TensorRT™ acceleration of the popular Stable Diffusion XL model for text-to-image workflows, NVIDIA RTX Remix with generative AI texture tools, NVIDIA ACE microservices, and more games that use DLSS 3 technology with Frame Generation.
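For readers unfamiliar with the text-to-image workflow that TensorRT accelerates, here is a minimal sketch that runs Stable Diffusion XL on a local GPU using the Hugging Face diffusers library. This is a generic illustration only, under the assumption of a CUDA-capable RTX card with enough VRAM; it does not itself enable NVIDIA's TensorRT optimizations, which swap in pre-built engines behind a similar workflow.

```python
# Minimal local text-to-image sketch with Stable Diffusion XL (illustrative only;
# NVIDIA's TensorRT acceleration is a separate, optimized execution path).
import torch
from diffusers import DiffusionPipeline

# Load SDXL in half precision onto the local GPU (assumes a CUDA-capable RTX card).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Generate one image from a text prompt and save it locally.
image = pipe(prompt="a photo of an astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```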


AI Workbench, a unified, easy-to-use toolkit for AI developers, will be available in beta later this month. In addition, NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, an NVIDIA tech demo also releasing this month, allows AI enthusiasts to interact with their notes, documents and other content.
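To make the local-inference idea concrete, below is a minimal sketch of running an LLM on an RTX GPU with TensorRT-LLM's high-level Python API. The model ID, sampling settings, and prompt are illustrative assumptions, the LLM/SamplingParams interface may differ between TensorRT-LLM releases, and Chat with RTX itself ships as a packaged demo rather than through this API.

```python
# A minimal local-inference sketch with TensorRT-LLM's Python LLM API.
# Assumptions: a recent tensorrt_llm release that includes the high-level
# LLM/SamplingParams interface, and a Hugging Face model the library can
# build a TensorRT engine for (TinyLlama is used here purely as an example).
from tensorrt_llm import LLM, SamplingParams

# Build (or load a cached) TensorRT engine and run it on the local GPU.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Generation happens entirely on the local machine; no prompt data leaves the PC.
outputs = llm.generate(
    ["Summarize my meeting notes about next quarter's planning."],
    sampling_params,
)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```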


“Generative AI is the single most significant platform transition in computing history and will transform every industry, including gaming,” said Jensen Huang, founder and CEO of NVIDIA. “With over 100 million RTX AI PCs and workstations, NVIDIA is a massive installed base for developers and gamers to enjoy the magic of generative AI.”


Running generative AI locally on a PC is critical for privacy, latency and cost-sensitive applications. It requires a large installed base of AI-ready systems, as well as the right developer tools to tune and optimize AI models for the PC platform. To meet these needs, NVIDIA is delivering innovations across its full technology stack, driving new experiences and building on the 500+ AI-enabled PC applications and games already accelerated by NVIDIA RTX technology.


This is NVIDIA’s first, and very important, step toward the vision of “LLM as Operating System”: a locally running, heavily optimized AI assistant that integrates deeply with all your local files while preserving privacy. NVIDIA is going local even before OpenAI!

Read the full article at: nvidianews.nvidia.com
