Frequently Asked Questions¶
General¶
What is BotManifold?¶
BotManifold is a platform for evaluating robot control policies through simulation. You write a policy (a Python function that controls a robot), submit it, and we run it in a physics simulation to see how well it performs.
Is it free?¶
Yes, BotManifold is free to use during the beta period.
What programming languages are supported?¶
Currently, only Python is supported. Your policy must be a Python function that takes an observation dictionary and returns a numpy array of actions.
Policies¶
What should my policy.py file look like?¶
import numpy as np

def policy(observation: dict) -> np.ndarray:
    # Your logic here
    return np.zeros(7)  # 7D action vector
What packages can I use?¶
In ZIP upload mode:
- numpy (pre-installed)
- math (standard library)
- Python standard library
- Any pure Python code you include in your zip
In policy server mode (botmanifold serve):
- Any Python package installed in your environment
- PyTorch, TensorFlow, JAX, etc.
- GPU acceleration (if available on your machine)
Can I use machine learning models?¶
Yes, but the approach depends on your submission method:
- ZIP upload: Include model weights in your zip; only numpy-based inference; no GPU; must complete each step within ~100ms
- Policy server (botmanifold serve): Full framework support (PyTorch, TensorFlow, etc.); GPU acceleration; runs on your own hardware
How do I include multiple files?¶
Put them all in your zip:
policy.zip
├── policy.py # Entry point (required)
├── model.py # Your model code
├── weights.npy # Model weights
└── utils.py # Helper functions
Then import them in policy.py:
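With the layout above, policy.py can import its sibling modules directly, since they sit next to it at the zip root. A minimal sketch (the class and function names referenced in the comments are illustrative, not part of the platform):

```python
import os
import numpy as np

# Sibling modules in the zip can be imported directly, e.g.:
#   from model import MyModel      # defined in model.py (name illustrative)
#   from utils import preprocess   # defined in utils.py (name illustrative)

# Load bundled files with paths relative to this file, so the code works
# no matter where the server unpacks the zip.
WEIGHTS_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "weights.npy")

def policy(observation: dict) -> np.ndarray:
    # Placeholder logic; replace with inference using the bundled weights.
    return np.zeros(7)
```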
What's the action format?¶
A 7D numpy array:
action = np.array([
vx, # X velocity (-1 to 1)
vy, # Y velocity (-1 to 1)
vz, # Z velocity (-1 to 1)
roll, # Roll velocity (-1 to 1)
pitch, # Pitch velocity (-1 to 1)
yaw, # Yaw velocity (-1 to 1)
gripper # -1 = close, 1 = open
])
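For instance, a policy that moves straight down with the gripper open could build its action like this (a sketch; the sign convention for "down" is an assumption, and `np.clip` keeps values in range defensively):

```python
import numpy as np

def descend_action(speed: float = 0.5) -> np.ndarray:
    """Move straight down at `speed`, gripper open, no rotation."""
    action = np.array([
        0.0,     # vx
        0.0,     # vy
        -speed,  # vz: negative = downward (sign convention assumed)
        0.0,     # roll
        0.0,     # pitch
        0.0,     # yaw
        1.0,     # gripper open
    ])
    # Clip defensively so out-of-range values never reach the simulator.
    return np.clip(action, -1.0, 1.0)
```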
Submissions¶
How long does evaluation take?¶
Typically 1-5 minutes, depending on queue length:
- Queued: Waiting for a worker (0-2 min)
- Running: Simulation (~30-60 sec)
- Judging: AI evaluation (~30 sec)
What do the verdicts mean?¶
| Verdict | Meaning |
|---|---|
| SAFE | Policy passed safety verification |
| UNSAFE | Policy failed safety checks |
| PASS | Successfully completed the arena task |
| PARTIAL | Partially completed (some objectives met) |
| FAIL | Did not meet minimum requirements |
Can I see the simulation video?¶
Yes! Every submission generates a video showing your robot's performance. View it on the submission detail page or access it via report.video_url in the SDK.
How is the score calculated?¶
Each scenario has its own scoring formula. Generally:
- Task completion: Main source of points
- Time bonus: Faster completion = more points
- Penalties: Dropping objects, collisions, etc.
See the specific scenario documentation for details.
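As a purely illustrative sketch of how those terms might combine (the weights and formula below are invented; every scenario defines its own):

```python
def example_score(completed_objectives: int, total_objectives: int,
                  elapsed_s: float, time_limit_s: float,
                  num_collisions: int, num_drops: int) -> float:
    """Toy scoring formula: completion + time bonus - penalties."""
    # Illustrative weights only; real scenarios differ.
    completion = 70.0 * completed_objectives / total_objectives
    time_bonus = 20.0 * max(0.0, 1.0 - elapsed_s / time_limit_s)
    penalties = 5.0 * num_collisions + 5.0 * num_drops
    return max(0.0, completion + time_bonus - penalties)
```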
Why did my submission fail?¶
Common reasons:
- Code error: Check for syntax errors, missing imports
- Timeout: Policy too slow (>100ms per step)
- Invalid action: Wrong shape or out-of-range values
- Runtime error: Exception during execution
Check the submission detail page for error messages.
Scenarios¶
What scenarios are available?¶
See the Scenarios page for the current list. There are 15 scenarios spanning easy to hard difficulty.
Can I suggest a new scenario?¶
Yes! Open an issue on GitHub with your idea.
Can I test locally before submitting?¶
Yes! Use the SDK's local testing tools:
# Run a local policy server
botmanifold serve my_policy.py
# Start a mock simulation server for testing
botmanifold mock-server
You can also test your policy logic with mock observations before submitting.
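For that last point, a quick smoke test with a hand-built observation might look like this (the observation keys shown are assumptions, not the real schema; in practice you would import your own `policy` instead of the stand-in defined here):

```python
import numpy as np

# from policy import policy  # in practice, import your own entry point

def policy(observation: dict) -> np.ndarray:
    # Stand-in policy so this snippet runs on its own.
    return np.zeros(7)

# Hand-built observation; these keys are assumptions, not the real schema.
mock_observation = {
    "joint_positions": np.zeros(7),
    "gripper_state": 0.0,
}

action = policy(mock_observation)
assert action.shape == (7,), "policy must return a 7D action"
assert np.all(np.abs(action) <= 1.0), "actions must lie in [-1, 1]"
```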
Technical¶
What physics engine do you use?¶
MuJoCo (Multi-Joint dynamics with Contact), a high-performance physics simulator.
What robot is simulated?¶
A 7-DOF robotic arm with a parallel gripper, similar to a Franka Panda.
How does the AI judge work?¶
We use a Vision-Language Model (VLM) to evaluate the simulation video. It analyzes:
- Whether objectives were achieved
- Quality of execution
- Any errors or problems
Is my code private?¶
Yes. Your submitted code is:
- Used only for evaluation
- Not shared with other users
- Deleted after evaluation (only results are stored)
Account & Authentication¶
Do I need an API key?¶
Yes. All API requests require authentication via an API key. You can set it as an environment variable or pass it directly to the SDK.
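Concretely, both options might look like this (the variable name `BOTMANIFOLD_API_KEY` and the `Client(api_key=...)` constructor are assumptions; check the SDK reference for the real names):

```python
import os

# Option 1: environment variable (name assumed, not confirmed by the docs).
os.environ.setdefault("BOTMANIFOLD_API_KEY", "your-api-key")
api_key = os.environ["BOTMANIFOLD_API_KEY"]

# Option 2: pass the key directly when constructing the SDK client, e.g.
#   client = Client(api_key=api_key)  # constructor name assumed
```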
Will there be a paid tier?¶
Possibly, for features like:
- Priority queue
- More submissions per day
- Team features
The basic tier will remain free.
Troubleshooting¶
My policy works locally but fails on submission¶
Common causes:
- Different Python version: We use Python 3.12
- Missing files: Ensure all dependencies are in the zip
- Absolute paths: Use relative paths for any file access
The robot doesn't do anything¶
Check that:
- Your action values are non-zero
- Action values are within [-1, 1]
- You're returning a numpy array, not a list
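The checks above can be wrapped in a small helper you run locally before submitting (a sketch; the error messages and the warning are our own, not the simulator's):

```python
import numpy as np

def check_action(action) -> None:
    """Raise if an action would likely be rejected by the simulator."""
    if not isinstance(action, np.ndarray):
        raise TypeError(f"expected numpy array, got {type(action).__name__}")
    if action.shape != (7,):
        raise ValueError(f"expected shape (7,), got {action.shape}")
    if np.any(np.abs(action) > 1.0):
        raise ValueError("action values must lie in [-1, 1]")
    if np.all(action == 0):
        print("warning: all-zero action -- the robot will not move")
```

Call it on your policy's output, e.g. `check_action(policy(mock_observation))`.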
"Module not found" error¶
Ensure the module is either:
- Part of Python standard library
- Included in your zip file
- numpy (pre-installed)
Contact¶
- Bug reports: GitHub Issues
- Feature requests: GitHub Discussions
- General questions: Check this FAQ first!