Get Started With NVIDIA ACE
NVIDIA ACE is a suite of digital human technologies that bring game characters and digital assistants to life with generative AI. NVIDIA ACE encompasses technology for every aspect of the digital human—from speech and translation, to vision and intelligence, to realistic animation and behavior, to lifelike appearance.
Cloud Deployment: NVIDIA NIM
NVIDIA NIM™ microservices are easy-to-use inference services for LLMs, VLMs, ALMs, speech, animation, and more that accelerate AI deployment on any cloud or data center. Try out the latest NVIDIA NIM microservices today at ai.nvidia.com.
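NIM microservices expose an OpenAI-compatible API, so a hosted model can be queried with a standard client. Below is a minimal sketch, assuming an API key from the NVIDIA API catalog; the base URL and model name are illustrative, and self-hosted deployments would substitute their own endpoint.

```python
# Minimal sketch: calling a hosted NIM microservice through its
# OpenAI-compatible API. The base URL, API key placeholder, and model
# name are illustrative; substitute values for your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM endpoint
    api_key="nvapi-...",                             # your NVIDIA API key
)

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # any chat model from the catalog
    messages=[{"role": "user", "content": "Write a one-line greeting for a tavern keeper NPC."}],
    max_tokens=64,
)
print(completion.choices[0].message.content)
```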
Audio2Face-2D
Animate a person’s portrait photo using audio, with support for lip sync, blinking, and head pose animation.
Audio2Face-3D
For audio to 3D facial animation and lip sync. On-device coming soon.
NeMo Retriever
Seamlessly connect custom models to diverse business data and deliver highly accurate responses for AI applications using RAG.
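As a rough illustration of the retrieval step in RAG, the sketch below embeds a few passages and a query with an embedding service, then picks the best match as grounding context. The model name and the input_type parameter are assumptions about the catalog's embedding NIMs; check the documentation for your deployment.

```python
# Minimal retrieval sketch for RAG: embed passages and a query, then
# select the closest passage as grounding context for the LLM.
# The model name and "input_type" extra parameter are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key="nvapi-...")

passages = [
    "Returns are accepted within 30 days with a receipt.",
    "Support is available around the clock via chat and phone.",
]

def embed(texts, input_type):
    resp = client.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",        # illustrative embedding model
        input=texts,
        extra_body={"input_type": input_type},  # "passage" or "query"
    )
    return np.array([item.embedding for item in resp.data])

doc_vecs = embed(passages, "passage")
query_vec = embed(["What is the return policy?"], "query")[0]

# Cosine similarity against each passage; the top hit becomes RAG context.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(passages[int(scores.argmax())])
```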
Omniverse RTX Renderer
For streaming ultra-realistic visuals to any device.
Riva Neural Machine Translation
For text translation across up to 32 languages.
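A minimal translation call might look like the following sketch, assuming a running Riva server and the nvidia-riva-client Python package; the server address and language pair are illustrative, and the exact client signature may differ across Riva releases.

```python
# Minimal sketch: text translation against a running Riva server using
# the nvidia-riva-client package. Server address and language pair are
# illustrative; the client signature may vary across Riva releases.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # your Riva server
nmt = riva.client.NeuralMachineTranslationClient(auth)

response = nmt.translate(
    texts=["Hello, how can I help you today?"],
    model="",                # default NMT model for the deployment
    source_language="en",
    target_language="de",
)
print(response.translations[0].text)
```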
Unreal Engine 5 Renderer
This microservice allows you to use Unreal Engine 5.4 to customize and render your avatars.
Cloud Deployment: NVIDIA AI Blueprints
The Digital Human for Customer Service NVIDIA AI Blueprint is a cutting-edge solution that allows enterprises to create 2D and 3D animated digital human avatars, enhancing user engagement beyond traditional customer service methods.
Digital Human Customer Service AI Blueprint
Powered by NVIDIA ACE, Omniverse RTX™, Audio2Face™, and Llama 3.1 NIM microservices, this blueprint integrates seamlessly with existing generative AI applications built using RAG.
PC Deployment: In-Game Inferencing (IGI) SDK
The IGI SDK streamlines AI model deployment and integration for PC application developers. The SDK preconfigures the PC with the necessary AI models, engines, and dependencies. It orchestrates in-process AI inference for C++ games and applications and supports all major inference backends across different hardware accelerators (GPU, NPU, CPU).
PC Deployment: ACE On-Device Models
ACE on-device models enable agentic workflows for autonomous game characters. These characters can perceive their environment, understand multi-modal inputs, strategically plan a set of actions, and execute them all in real time, providing dynamic experiences for players.
Audio2Face-3D
For audio to 3D facial animation and lip sync. Features an all-new diffusion-based architecture for better lip sync, non-verbal cues, and emotions.
Mistral NeMo Minitron Instruct Family of Models
Agentic small language models that enable better role-play, retrieval-augmented generation (RAG), and function-calling capabilities.
Cosmos-Nemotron-4B-Instruct
Agentic multi-modal small language model that gives game characters visual understanding of real-world imagery and on-screen actions for more context-aware responses.
Riva
Transcribes human speech to text and synthesizes speech from text in real time.
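For the transcription side, here is a minimal offline-recognition sketch, again assuming a running Riva server and the nvidia-riva-client package; the file name and configuration values are illustrative.

```python
# Minimal sketch: offline speech-to-text against a running Riva server
# with the nvidia-riva-client package. File name and config values are
# illustrative; consult the Riva docs for your deployment.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open("utterance.wav", "rb") as audio_file:
    response = asr.offline_recognize(audio_file.read(), config)

print(response.results[0].alternatives[0].transcript)
```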
PC Deployment: Engine Plugins and Samples
Plugins and samples for Unreal Engine developers looking to bring their MetaHumans to life with generative AI on RTX PCs.
NVIDIA ACE Unreal Engine 5 Reference
The NVIDIA Unreal Engine 5 reference showcases NPCs that interact through natural language. The workflow includes an on-device Audio2Face plugin for Unreal Engine 5 alongside a configuration sample.
ACE Tools
Technologies for customization and simple deployment.
Autodesk Maya ACE
Streamline facial animation in Autodesk Maya or dive into the source code to develop your own plugin for the digital content creation tool of your choice.
Avatar Configurator
Build and configure custom characters with a choice of base model, hair, and clothing.
Unified Cloud Services Tools
Simplify deployment of multimodal applications.
ACE Examples
Get started with ACE microservices below. These video tutorials provide tips for common digital human use cases.