RL training environments

Reinforcement learning for robotics needs massive environment diversity and realistic physics — especially for sim-to-real transfer. Rigyd converts any 3D catalog into SimReady assets calibrated for RL pipelines.

The problem

Why existing workflows fall short.

RL demands millions of training steps

Reinforcement learning runs for millions of steps across thousands of episodes. Every step whose physics deviates from reality compounds into a larger sim-to-real gap by the time the policy finishes training.
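A toy illustration of why per-step error matters (purely hypothetical numbers, not Rigyd-specific): a block sliding to a stop under friction, where the simulator's friction coefficient is off by 0.1. Even this single parameter error produces a measurable positional gap by the end of one short rollout.

```python
# Toy illustration: a small per-step physics error widens the
# sim-to-real gap over a rollout. A block decelerates under friction;
# the simulator's friction coefficient is miscalibrated by 0.1.

G = 9.81        # gravity, m/s^2
DT = 0.002      # physics step, s
MU_REAL = 0.5   # "real-world" friction coefficient (hypothetical)
MU_SIM = 0.6    # simulator's miscalibrated coefficient

def rollout(mu, v0=2.0, steps=500):
    """Integrate a sliding block until it stops; return final position."""
    x, v = 0.0, v0
    for _ in range(steps):
        if v <= 0.0:
            break
        v -= mu * G * DT        # friction decelerates the block
        x += max(v, 0.0) * DT   # advance position
    return x

gap = abs(rollout(MU_REAL) - rollout(MU_SIM))
print(f"positional gap after one rollout: {gap:.3f} m")
```

Multiply that drift across millions of training steps and the policy learns to exploit physics that does not exist in the real world.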

Environment diversity drives generalization

Policies trained on a narrow asset set overfit to specific geometries and fail on new objects. Diversity, not algorithmic tricks, is what produces robust policies in the real world.

Asset creation bottlenecks research

Research teams spend more time preparing assets than training policies. Every hour on collision-mesh generation or physics tuning is an hour not spent on algorithms or evaluation.

How Rigyd helps

AI-native infrastructure that automates the hard parts.

SimReady for Isaac Lab, Brax, MuJoCo

Rigyd outputs work with the major RL simulation stacks — Isaac Lab, Brax (MJX), MuJoCo, Gazebo. Physics properties survive format conversion, so training pipelines stay consistent.
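Because formats like MJCF store physics properties as plain XML attributes, a training pipeline can verify them after any conversion. A minimal sanity-check sketch, using a hypothetical asset file (not actual Rigyd output):

```python
# Minimal sanity check: read mass and friction back out of an exported
# MJCF file to confirm physics annotations survived conversion.
# The asset XML below is a hypothetical stand-in for converted output.
import xml.etree.ElementTree as ET

MJCF = """
<mujoco model="mug">
  <worldbody>
    <body name="mug">
      <inertial pos="0 0 0.05" mass="0.31" diaginertia="1e-4 1e-4 1e-4"/>
      <geom name="mug_collision" type="mesh" mesh="mug_hull"
            friction="0.62 0.005 0.0001"/>
    </body>
  </worldbody>
</mujoco>
"""

root = ET.fromstring(MJCF)
mass = float(root.find(".//inertial").get("mass"))
friction = [float(f) for f in root.find(".//geom").get("friction").split()]

assert mass > 0.0, "mass annotation missing or invalid"
assert len(friction) == 3, "MJCF expects sliding/torsional/rolling friction"
print(f"mass={mass} kg, sliding friction={friction[0]}")
```

The same spot-check pattern applies to Isaac Lab (USD) or Brax assets; only the parsing step changes per format.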

Domain-randomization-ready physics

Calibrated mass lands within 15–20% of ground truth and friction coefficients within 0.1 — inside typical domain-randomization variance ranges — so RL policies can center randomization on realistic baselines instead of arbitrary guesses.
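Centering domain randomization on a calibrated baseline can be sketched as follows (the baseline mass and friction values here are hypothetical, and the uniform-sampling scheme is one common DR choice, not a prescribed API):

```python
# Sketch: domain randomization centered on calibrated physics baselines.
# Each draw perturbs mass by up to +/-20% and friction by up to +/-0.1,
# matching the variance ranges mentioned above.
import random

def randomize(mass_kg, friction, mass_var=0.20, friction_var=0.1):
    """Sample one DR draw around a calibrated (mass, friction) baseline."""
    m = mass_kg * random.uniform(1.0 - mass_var, 1.0 + mass_var)
    mu = max(0.0, friction + random.uniform(-friction_var, friction_var))
    return m, mu

random.seed(0)
# Hypothetical calibrated baseline: a 0.31 kg mug with friction 0.62.
draws = [randomize(0.31, 0.62) for _ in range(1000)]
print(f"mass range: {min(d[0] for d in draws):.3f}"
      f"-{max(d[0] for d in draws):.3f} kg")
```

With an accurate baseline, the randomization band brackets the real object; with a guessed baseline, the real object may sit outside the band entirely, and the policy never trains on it.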

Scale object libraries for free

Convert thousands of unique objects with one bulk operation. Diverse training sets are no longer gated by asset-creation budgets or graduate-student weekends.

40%

better real-world transfer with physics-accurate RL training

1,000+

unique objects made practical for RL datasets

97%

faster than manual physics annotation

Train RL policies on realistic environments

Upload your object library and get SimReady assets calibrated for RL pipelines.

Starts at $29/month. 30 SimReady objects included.