Metadata-Version: 2.1
Name: abmarl
Version: 0.2.8
Summary: Agent Based Simulation and MultiAgent Reinforcement Learning
Home-page: https://github.com/llnl/abmarl
Author: Edward Rusu
Author-email: rusu1@llnl.gov
License: BSD 3
Project-URL: Featured Usage, https://abmarl.readthedocs.io/en/latest/featured_usage.html
Project-URL: Documentation, https://abmarl.readthedocs.io/en/latest/index.html
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Natural Language :: English
Classifier: Operating System :: Unix
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.7, <3.11
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: importlib-metadata<5.0
Requires-Dist: numpy<1.24
Requires-Dist: gym
Requires-Dist: matplotlib
Requires-Dist: seaborn
Provides-Extra: rllib
Requires-Dist: tensorflow; extra == "rllib"
Requires-Dist: ray[rllib]==2.0.0; extra == "rllib"
Provides-Extra: open-spiel
Requires-Dist: open-spiel; extra == "open-spiel"

# Abmarl

Abmarl is a package for developing Agent-Based Simulations and training them with
MultiAgent Reinforcement Learning (MARL). We provide an intuitive command line
interface for engaging with the full workflow of MARL experimentation: training,
visualizing, and analyzing agent behavior. We define an Agent-Based Simulation
Interface and Simulation Manager that control which agents interact with the
simulation at each step. We support integration with popular reinforcement learning
simulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. We define
our own GridWorld Simulation Framework for creating custom grid-based Agent-Based
Simulations.
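
For orientation, here is a minimal sketch of a custom simulation written against
that interface and wrapped with a Simulation Manager. The names
(`AgentBasedSimulation`, `Agent`, `AllStepManager`) follow the Abmarl
documentation, but treat the details as assumptions and verify them against the
docs for your installed version:

```python
# Minimal sketch of a custom simulation against Abmarl's Agent-Based Simulation
# Interface, wrapped with a Simulation Manager. Names follow the Abmarl docs but
# should be treated as assumptions for your installed version.
from gym.spaces import Discrete

from abmarl.sim import Agent, AgentBasedSimulation
from abmarl.managers import AllStepManager


class MySim(AgentBasedSimulation):
    def __init__(self, agents=None, **kwargs):
        self.agents = agents  # dict: agent id -> Agent

    def reset(self, **kwargs):
        ...  # put the simulation in its starting state

    def step(self, action_dict, **kwargs):
        ...  # apply the actions of the agents the manager selected this step

    def render(self, **kwargs):
        ...  # optional visualization hook

    def get_obs(self, agent_id, **kwargs):
        ...  # observation for one agent

    def get_reward(self, agent_id, **kwargs):
        ...  # reward for one agent

    def get_done(self, agent_id, **kwargs):
        ...  # is this agent done?

    def get_all_done(self, **kwargs):
        ...  # is the whole episode done?

    def get_info(self, agent_id, **kwargs):
        return {}


agents = {
    f'agent{i}': Agent(
        id=f'agent{i}',
        observation_space=Discrete(5),
        action_space=Discrete(3),
    )
    for i in range(2)
}
# The Simulation Manager decides which agents interact with the simulation at
# each step; AllStepManager steps every agent every turn.
sim = AllStepManager(MySim(agents=agents))
```

Abmarl also ships wrappers in `abmarl.external` (the docs describe gym.Env and
MultiAgentEnv wrappers) so that a managed simulation can be handed to libraries
that expect those interfaces.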

Abmarl leverages RLlib’s framework for reinforcement learning and extends it to
more easily support custom simulations, algorithms, and policies. It enables
researchers to rapidly prototype MARL experiments and simulation designs, and it
lowers the barrier for pre-existing projects to prototype RL as a potential solution.

<p align="center">
  <img src="https://github.com/LLNL/Abmarl/actions/workflows/build-and-test.yml/badge.svg" alt="Build and Test Badge" />
  <img src="https://github.com/LLNL/Abmarl/actions/workflows/build-docs.yml/badge.svg" alt="Sphinx docs Badge" />
  <img src="https://github.com/LLNL/Abmarl/actions/workflows/lint.yml/badge.svg" alt="Lint Badge" />
</p>


## Quickstart

To use Abmarl, install via pip: `pip install abmarl`

To develop Abmarl, clone the repository and install via pip's development mode.

```
git clone git@github.com:LLNL/Abmarl.git
cd Abmarl
pip install -r requirements/requirements_all.txt
pip install -e . --no-deps
```

Train agents in a multicorridor simulation:
```
abmarl train examples/multi_corridor_example.py
```
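
The file passed to `abmarl train` is a plain Python module that defines the
experiment parameters. Here is a hedged sketch of its shape, modeled on the
bundled MultiCorridor example and the Abmarl documentation; the import paths and
field names are assumptions to check against your Abmarl and RLlib versions:

```python
# Sketch of an Abmarl training configuration module. The structure is modeled
# on the documented MultiCorridor example; exact import paths and fields are
# assumptions that may differ across Abmarl/RLlib versions.
from abmarl.examples import MultiCorridor   # assumed import path for the example sim
from abmarl.external import MultiAgentWrapper
from abmarl.managers import TurnBasedManager

sim = MultiAgentWrapper(TurnBasedManager(MultiCorridor()))
agents = sim.unwrapped.agents

policies = {
    'corridor': (
        None,                                # use the algorithm's default policy class
        agents['agent0'].observation_space,
        agents['agent0'].action_space,
        {},
    ),
}

params = {
    'experiment': {
        'title': 'MultiCorridor',
        'sim_creator': lambda config=None: sim,
    },
    'ray_tune': {
        'run_or_experiment': 'PG',           # RLlib algorithm name
        'checkpoint_freq': 50,
        'stop': {'episodes_total': 2000},
        'config': {
            'env': 'MultiCorridor',
            'env_config': {},
            'multiagent': {
                'policies': policies,
                'policy_mapping_fn': lambda agent_id, *args, **kwargs: 'corridor',
            },
        },
    },
}
```

`abmarl train` reads `params`, registers the simulation with RLlib, and launches
the run, writing checkpoints and logs under `~/abmarl_results/`, which is the
directory that `abmarl visualize` consumes below.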

Visualize trained behavior:
```
abmarl visualize ~/abmarl_results/MultiCorridor-2020-08-25_09-30/ -n 5 --record
```

Note: If you install with `conda`, then you must also include `ffmpeg` in your
virtual environment (for example, `conda install -c conda-forge ffmpeg`).

## Documentation

You can find the latest Abmarl documentation on
[our ReadTheDocs page](https://abmarl.readthedocs.io/en/latest/index.html).

[![Documentation Status](https://readthedocs.org/projects/abmarl/badge/?version=latest)](https://abmarl.readthedocs.io/en/latest/?badge=latest)


## Community

### Citation

[![DOI](https://joss.theoj.org/papers/10.21105/joss.03424/status.svg)](https://doi.org/10.21105/joss.03424)

Abmarl has been published in the Journal of Open Source Software (JOSS). It can
be cited with the following BibTeX entry:

```
@article{Rusu2021,
  doi = {10.21105/joss.03424},
  url = {https://doi.org/10.21105/joss.03424},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {64},
  pages = {3424},
  author = {Edward Rusu and Ruben Glatt},
  title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},
  journal = {Journal of Open Source Software}
}
```

### Reporting Issues

Please use our issue tracker to report any bugs or submit feature requests. Great
bug reports tend to have:
- A quick summary and/or background
- Steps to reproduce; sample code is best
- What you expected to happen
- What actually happens

### Contributing

Please submit contributions via pull requests from a forked repository. Find out
more about this process [here](https://guides.github.com/introduction/flow/index.html).
All contributions are under the BSD 3 License that covers the project.

## Release

LLNL-CODE-815883
