Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs difficult, and has resulted in a fractured ecosystem.
To address this issue, we are excited to announce Shimmy, an API compatibility tool for converting external RL environments to the Gymnasium and PettingZoo APIs. This allows users to access a wide range of single and multi-agent environments, all under a single standard API.
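To give a sense of what such a compatibility layer does, here is a minimal, self-contained sketch of the pattern: an "external" environment with its own API is adapted to the Gymnasium `reset()`/`step()` contract. The `LegacyEnv` and `GymnasiumStyleWrapper` names are invented for illustration; they are not Shimmy's actual classes.

```python
# Sketch of the API-compatibility-wrapper pattern, in the spirit of Shimmy.
# LegacyEnv and GymnasiumStyleWrapper are hypothetical names for illustration.

class LegacyEnv:
    """A toy 'external' environment with a non-Gymnasium API:
    begin() returns an observation; act() returns (obs, reward, done)."""

    def begin(self):
        self._t = 0
        return 0.0

    def act(self, action):
        self._t += 1
        obs = float(self._t)
        reward = 1.0 if action == 1 else 0.0
        done = self._t >= 3
        return obs, reward, done


class GymnasiumStyleWrapper:
    """Adapts LegacyEnv to the Gymnasium API: reset() -> (obs, info),
    step() -> (obs, reward, terminated, truncated, info)."""

    def __init__(self, env):
        self.env = env

    def reset(self, seed=None):
        obs = self.env.begin()
        return obs, {}

    def step(self, action):
        obs, reward, done = self.env.act(action)
        # The legacy env exposes a single 'done' flag; map it to
        # 'terminated' and report no time-limit truncation.
        return obs, reward, done, False, {}


env = GymnasiumStyleWrapper(LegacyEnv())
obs, info = env.reset()
total = 0.0
terminated = False
while not terminated:
    obs, reward, terminated, truncated, info = env.step(1)
    total += reward
print(total)  # 3.0 after three rewarding steps
```

Once an environment is wrapped this way, any agent written against the standard API can interact with it unchanged, which is the core idea behind Shimmy.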
As detailed in our announcement blog post, the Farama Foundation’s greater goal is to create a unified and user-friendly ecosystem for open-source reinforcement learning software, for both research and industry. Shimmy plays an important role in this plan, by integrating popular external RL environments inside the Farama ecosystem.
We plan to maintain Shimmy for the long term, and we welcome new contributions to support additional APIs.
Shimmy includes API compatibility wrappers for the following environments:
Single-agent (Gymnasium wrappers):
Multi-agent (PettingZoo wrappers):
Shimmy’s documentation contains an overview of each environment, as well as full usage scripts and installation instructions, allowing users to easily load and interact with environments without digging through source code.
We include automated testing for each environment, to ensure that converted environments are fully functional and are held to the same standards as native Gymnasium and PettingZoo environments.
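The kind of contract check this testing involves can be sketched as follows. This is a simplified, hypothetical illustration, not Shimmy's actual test suite; `check_gymnasium_api` and `CountdownEnv` are invented names for the sketch.

```python
# Hypothetical, simplified API-conformance check, in the spirit of testing
# that a converted environment follows the Gymnasium step/reset contract.

def check_gymnasium_api(env, action, episodes=2, max_steps=50):
    """Assert that env follows the Gymnasium API contract, stepping with a
    fixed action for simplicity."""
    for _ in range(episodes):
        result = env.reset()
        assert isinstance(result, tuple) and len(result) == 2, \
            "reset() must return (observation, info)"
        _, info = result
        assert isinstance(info, dict), "reset() info must be a dict"
        for _ in range(max_steps):
            out = env.step(action)
            assert isinstance(out, tuple) and len(out) == 5, \
                "step() must return (obs, reward, terminated, truncated, info)"
            obs, reward, terminated, truncated, info = out
            assert isinstance(reward, (int, float)), "reward must be numeric"
            assert isinstance(terminated, bool) and isinstance(truncated, bool)
            assert isinstance(info, dict), "step() info must be a dict"
            if terminated or truncated:
                break


class CountdownEnv:
    """Minimal environment that satisfies the contract above."""

    def reset(self, seed=None):
        self._left = 5
        return self._left, {}

    def step(self, action):
        self._left -= 1
        return self._left, 1.0, self._left == 0, False, {}


check_gymnasium_api(CountdownEnv(), action=0)  # passes silently
```

A real check of this kind would also validate observation and action spaces; here the point is simply that converted environments are exercised end to end against the standard API.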
Furthermore, Shimmy provides full installation scripts for external environments which are not available in distributed releases (via PyPI or elsewhere), along with Dockerfiles that can be used to install them on any platform (DeepMind Lab, for example, does not support Windows or macOS).
For more information, see the Shimmy 1.0.0 release notes.