Nvidia has launched Alpamayo, a new family of open-source artificial intelligence models designed to help autonomous vehicles “think like a human.”
Announced at CES 2026, the technology promises safer and more explainable decision-making in complex driving environments.
Nvidia unveiled Alpamayo as a comprehensive suite of open AI models, simulation tools and datasets focused on training physical robots and autonomous vehicles. The goal is to enable machines to reason through real-world situations rather than simply react to sensor inputs.
Calling it a turning point, Nvidia CEO Jensen Huang described the launch as “the ChatGPT moment for physical AI,” where machines begin to understand, reason and act in the real world.
How Alpamayo helps vehicles ‘think’
At the core of the new family is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model built around chain-of-thought reasoning. Unlike traditional systems that map sensor inputs directly to control commands, it lets autonomous vehicles break down unfamiliar situations step by step.
This approach enables vehicles to handle rare edge cases, such as navigating a busy intersection during a traffic light outage, even without prior experience.
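The chain-of-thought idea can be illustrated with a toy sketch: the system first produces an explicit list of reasoning steps about the scene, then commits to an action. This is illustrative pseudocode-style Python, not Nvidia's API; `Observation` and `plan_with_reasoning` are made-up names.

```python
from dataclasses import dataclass

# Hypothetical sketch of chain-of-thought driving inference.
# The names and rules here are illustrative, not Alpamayo's real interface.

@dataclass
class Observation:
    scene: str
    traffic_light: str  # "red", "green", or "out"

def plan_with_reasoning(obs: Observation) -> tuple[list[str], str]:
    """Return (reasoning steps, chosen action) for an observation."""
    steps = [f"Scene: {obs.scene}", f"Traffic light state: {obs.traffic_light}"]
    if obs.traffic_light == "out":
        # Rare edge case: treat a dark signal like an all-way stop.
        steps.append("Signal is out -> treat intersection as all-way stop")
        action = "stop, yield in arrival order, then proceed"
    elif obs.traffic_light == "red":
        steps.append("Red light -> must stop")
        action = "brake to a stop at the line"
    else:
        steps.append("Green light -> clear to continue")
        action = "proceed at safe speed"
    return steps, action

steps, action = plan_with_reasoning(Observation("busy intersection", "out"))
```

The point is the shape of the output: a human-readable trace of why an action was chosen, produced before the action itself, which is what makes the decision explainable after the fact.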
Explaining decisions
Nvidia executives stressed that Alpamayo does more than control steering, braking and acceleration. According to Huang, the model reasons about the action it plans to take, explains why it chose that action, and then executes the driving trajectory.
Ali Kani, Nvidia’s vice president of automotive, said the system evaluates every possible outcome before selecting the safest path forward.
Open source and developer flexibility
Nvidia has made Alpamayo 1 available on Hugging Face. Developers can fine-tune the model, distill it into smaller and faster versions suited to in-vehicle deployment, or use it to train simpler driving systems.
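Distilling a large model into a smaller one works by using the big model's outputs as training labels for the small one. A minimal toy version of that teacher-student loop, in plain Python with no real framework (both models here are stand-ins, not Alpamayo):

```python
# Hedged sketch of teacher-student distillation: a large "teacher" model's
# outputs supervise a smaller "student". Pure-Python toy for illustration.

def teacher(x: float) -> float:
    # Stand-in for a large model's steering output given input x.
    return 0.5 * x + 0.1

def train_student(samples, lr=0.1, epochs=200):
    """Fit student weights (w, b) to mimic the teacher's outputs via SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in samples:
            y_teacher = teacher(x)     # label produced by the big model
            y_student = w * x + b      # small model's prediction
            err = y_student - y_teacher
            w -= lr * err * x          # gradient step on squared error
            b -= lr * err
    return w, b

w, b = train_student([-1.0, -0.5, 0.0, 0.5, 1.0])
```

After training, the student's weights closely match the teacher's behavior, which is the whole point: the cheap model inherits the expensive model's decisions.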
The model can also support tools such as auto-labeling systems that tag video data automatically or evaluators that assess whether a vehicle made a smart driving decision.
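An auto-labeling pass of the kind described above amounts to running a capable model over raw frames so humans do not have to tag them by hand. The sketch below substitutes toy rules for the model; `classify_frame` and the frame fields are invented for illustration, not Nvidia tooling.

```python
# Illustrative auto-labeling pass over video frames.
# `classify_frame` is a stand-in for a learned tagger, not a real API.

def classify_frame(frame: dict) -> list[str]:
    """Return tags for one video frame (toy rules in place of a model)."""
    tags = []
    if frame.get("pedestrians", 0) > 0:
        tags.append("pedestrian")
    if frame.get("speed_kph", 0) > 100:
        tags.append("highway")
    if frame.get("light") == "out":
        tags.append("signal-outage")  # rare edge case worth surfacing
    return tags

frames = [
    {"pedestrians": 2, "speed_kph": 30, "light": "green"},
    {"pedestrians": 0, "speed_kph": 120, "light": "out"},
]
labels = [classify_frame(f) for f in frames]
# labels -> [["pedestrian"], ["highway", "signal-outage"]]
```

In practice the tagger would be a model like Alpamayo rather than hand-written rules, but the pipeline shape, frames in and searchable tags out, is the same.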
Synthetic data and world models
Developers can combine Alpamayo with Nvidia’s Cosmos platform to generate synthetic data. Cosmos uses generative world models to create digital representations of physical environments, allowing systems to be trained and tested on both real and synthetic driving data.
This approach is designed to improve safety and performance without relying solely on real-world testing.
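Training on "both real and synthetic driving data" usually means sampling each batch from the two pools at a fixed ratio. A minimal sketch, assuming a simple list-of-samples format and a 30% synthetic mix (both are assumptions, not Nvidia's configuration):

```python
import random

# Sketch of blending real and synthetic samples into one training stream.
# The mix ratio, batch size, and sample format are illustrative assumptions.

def mixed_batches(real, synthetic, synth_ratio=0.3, batch_size=4, seed=0):
    """Yield training batches drawing synth_ratio of items from synthetic data."""
    rng = random.Random(seed)
    n_synth = int(batch_size * synth_ratio)
    while True:
        batch = rng.sample(synthetic, n_synth) + rng.sample(real, batch_size - n_synth)
        rng.shuffle(batch)  # avoid a fixed synthetic/real ordering in the batch
        yield batch

real = [f"real_{i}" for i in range(10)]
synth = [f"synth_{i}" for i in range(10)]
batch = next(mixed_batches(real, synth))
# with batch_size=4 and ratio 0.3, each batch holds 1 synthetic and 3 real samples
```

The synthetic pool is where a world model like Cosmos earns its keep: rare scenarios that would take years to capture on real roads can be generated on demand and folded into the same stream.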
As part of the rollout, Nvidia is releasing an open dataset containing more than 1,700 hours of driving data. The dataset spans diverse geographies, conditions and rare real-world scenarios.
The company is also launching AlpaSim, an open-source simulation framework available on GitHub. AlpaSim is designed to recreate real-world driving conditions, including sensors and traffic, enabling large-scale validation of autonomous driving systems.
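Closed-loop validation of the kind AlpaSim targets boils down to: run the driving policy inside a simulated scenario, step the world forward, and check a safety metric at the end. The toy below shows that loop shape only; `policy` and `simulate` are stand-ins, not AlpaSim's actual interface.

```python
# Toy closed-loop validation: run a policy against a simulated scenario
# (ego car approaching a stopped vehicle) and check a safety metric.
# All names and physics here are simplified illustrations, not AlpaSim.

def policy(gap_m: float) -> float:
    """Bang-bang controller stand-in: full brake inside 25 m, else coast."""
    return 1.0 if gap_m < 25.0 else 0.0  # brake command in [0, 1]

def simulate(initial_gap_m=30.0, ego_speed=15.0, steps=100, dt=0.1):
    """Step the scenario forward and return the minimum gap reached."""
    gap, speed, min_gap = initial_gap_m, ego_speed, initial_gap_m
    for _ in range(steps):
        speed = max(0.0, speed - policy(gap) * 8.0 * dt)  # 8 m/s^2 max braking
        gap -= speed * dt
        min_gap = min(min_gap, gap)
    return min_gap

min_gap = simulate()
assert min_gap > 0, "policy collided in simulation"
```

A real validation run would sweep thousands of scenarios, sensor configurations and traffic patterns, but each one reduces to this same simulate-then-score loop.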