NVIDIA Omniverse Gets Modular Libraries for Physical AI Integration
Caroline Bishop
Apr 08, 2026 16:37
NVIDIA releases standalone Omniverse libraries ovrtx, ovphysx, and ovstorage in early access, enabling developers to embed RTX rendering and physics simulation into existing applications.
NVIDIA is unbundling its Omniverse platform into standalone libraries, letting developers plug RTX rendering and physics simulation directly into existing applications without adopting the full container stack. The three new libraries—ovrtx, ovphysx, and ovstorage—hit early access on GitHub and NGC this week, marking a significant architectural shift for the company’s physical AI ambitions.
The move addresses a persistent pain point in industrial robotics and digital twin deployments: monolithic runtimes that complicate scaling, headless deployment, and CI/CD integration. Rather than forcing teams into wholesale platform migrations, NVIDIA is now exposing core Omniverse components as headless-first C APIs with Python and C++ bindings.
What Each Library Does
The ovrtx library handles high-fidelity RTX path-tracing and sensor simulation. Developers can render frames and generate synthetic data in roughly 10 lines of Python code, with DLPack support enabling zero-copy data exchange with PyTorch, NumPy, and Warp.
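The zero-copy claim rests on the DLPack protocol, which is standard and can be demonstrated without ovrtx itself. In the sketch below, a plain NumPy array stands in for a render buffer, since the ovrtx API is in early access and its exact call names aren't shown in the article; the exchange mechanics are real NumPy.

```python
import numpy as np

# Stand-in for a frame buffer that ovrtx might hand back; the real
# library's API is in early access, so a NumPy array plays its role here.
frame = np.zeros((480, 640, 4), dtype=np.float32)  # RGBA render target

# DLPack exchange: np.from_dlpack consumes any object implementing
# __dlpack__ and wraps the same memory rather than copying it.
view = np.from_dlpack(frame)

# Zero-copy means writes through one handle are visible through the other.
frame[0, 0, 0] = 1.0
assert view[0, 0, 0] == 1.0
assert view.__array_interface__["data"][0] == frame.__array_interface__["data"][0]
```

The same handoff works toward training frameworks: `torch.from_dlpack(frame)` would wrap the buffer for a PyTorch pipeline without a copy, which is what makes synthetic-data generation cheap to feed into training loops.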
The ovphysx library wraps the PhysX SDK for USD-native physics simulation, offering hardware-accelerated dynamics that run free of any UI dependencies. Its asynchronous, stream-ordered execution model gives applications explicit control over physics stepping, which is critical for deterministic robotics training.
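What "explicit control over physics stepping" buys you is reproducibility: the application, not a runtime loop, decides when and how far the simulation advances. The sketch below illustrates that pattern with a hand-rolled point-mass integrator rather than ovphysx, whose real API is still in early access.

```python
# Toy fixed-timestep stepping loop illustrating the explicit-control
# pattern; this is a hand-rolled integrator, not the ovphysx API.

def step(state, dt, gravity=-9.81):
    """Advance one semi-implicit Euler step for a falling point mass."""
    pos, vel = state
    vel = vel + gravity * dt
    pos = pos + vel * dt
    return (pos, vel)

def simulate(steps, dt=1.0 / 240.0):
    state = (1.0, 0.0)  # drop a point mass from 1 m at rest
    for _ in range(steps):
        state = step(state, dt)
    return state

# Identical inputs yield bit-identical trajectories -- the property
# deterministic RL training depends on.
a = simulate(240)
b = simulate(240)
assert a == b
```

With an implicit runtime loop, step timing can depend on frame rate or wall-clock scheduling; owning the loop makes every rollout exactly repeatable.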
The ovstorage library connects existing PLM systems and storage backends (S3, Azure) directly to Omniverse without requiring data migrations. It’s designed for Kubernetes deployment, letting teams scale microservices independently.
Isaac Lab Already Transitioning
NVIDIA is dogfooding the libraries. Isaac Lab 3.0 Beta, the company’s reinforcement learning simulation framework, has moved from the monolithic Kit framework to a modular architecture powered by ovphysx and ovrtx. The result: developers can now choose between a PhysX and a MuJoCo-Warp backend depending on requirements, while a pluggable renderer system supports multiple visualization options.
For Isaac Lab’s engineering team, the transition solves three bottlenecks: explicit execution control replacing Kit’s runtime loop, decoupled update frequencies for sensors running at different rates, and a minimal binary footprint for Linux cluster deployments.
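The second bottleneck, decoupled update frequencies, can be sketched in a few lines: physics steps on a fast cadence while a sensor samples on a slower one. The rates and scheduling below are illustrative, not Isaac Lab's actual scheduler.

```python
# Decoupled update rates: physics steps at 240 Hz while a camera
# "sensor" samples every 8th step (30 Hz). Illustrative only -- not
# Isaac Lab's real scheduling API.

PHYSICS_HZ = 240
CAMERA_HZ = 30
STEPS_PER_FRAME = PHYSICS_HZ // CAMERA_HZ  # 8 physics steps per camera frame

physics_steps = 0
camera_frames = []

for step_index in range(PHYSICS_HZ):      # one simulated second
    physics_steps += 1                     # physics advances every iteration
    if step_index % STEPS_PER_FRAME == 0:
        camera_frames.append(step_index)   # sensor fires on its own cadence

assert physics_steps == 240
assert len(camera_frames) == 30
```

In a monolithic runtime loop, every sensor is typically polled at the loop's frame rate; owning the cadence lets a slow camera coexist with fast joint-state readings without wasting render work.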
Industrial Partners Already Onboard
ABB Robotics is embedding Omniverse into RobotStudio for physical AI training and validation. PTC is connecting Onshape directly to Isaac Sim for cloud-native robot design workflows. Siemens, Adobe, Cadence, and Synopsys are also integrating the libraries—the common thread being the ability to add RTX rendering and PhysX simulation without architectural rewrites.
NVIDIA also announced MCP (Model Context Protocol) servers that expose Omniverse operations in machine-readable schemas, enabling LLM-based agents like Claude and Cursor to call simulation APIs safely. The NemoClaw infrastructure stack provides sandboxed execution with policy-based guardrails for autonomous agent deployment.
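In MCP, a "machine-readable schema" concretely means each exposed operation is described by a name, a description, and a JSON Schema for its arguments, which is what lets an agent construct validated calls. The tool name and parameters below are hypothetical; NVIDIA's actual Omniverse schemas are not published in the article.

```python
import json

# Hypothetical MCP tool descriptor for a simulation operation. The
# operation name ("step_simulation") and its parameters are invented
# for illustration; only the descriptor shape (name, description,
# inputSchema) follows the MCP convention.
step_simulation_tool = {
    "name": "step_simulation",
    "description": "Advance the physics scene by N fixed timesteps.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "steps": {"type": "integer", "minimum": 1},
            "dt": {"type": "number", "exclusiveMinimum": 0},
        },
        "required": ["steps"],
    },
}

# An MCP server would advertise this descriptor to connected agents.
schema_json = json.dumps(step_simulation_tool, indent=2)
```

Because the argument schema is explicit, a guardrail layer like the one described above can validate or reject an agent's call before any simulation state is touched.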
Early Access Caveats
APIs may change between releases during early access—NVIDIA is publishing migration notes and collecting feedback via GitHub and Discord. The company plans production releases with API stability and long-term support later in 2026.
The decision framework is straightforward: use libraries when embedding physical AI capabilities into existing 3D or CAD applications, or for lightweight headless deployments. Stick with the full Kit framework when building new feature-rich OpenUSD applications that need integrated UI and viewport coordination.
For robotics developers and industrial software teams who’ve been waiting for Omniverse capabilities without the platform lock-in, the libraries are available now on GitHub (ovrtx, ovphysx) and NGC (ovstorage).