A Multi-Agent Deep Deterministic Policy Gradient (MADDPG) actor-critic reinforcement learning solution in Python for the Unity ML-Agents (Udacity) Tennis environment
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
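For orientation, here is a minimal interaction sketch, assuming the `unityagents` wrapper that the Udacity course ships with this environment; the 24-dimensional observation per agent and the 2-dimensional continuous actions are properties of this Tennis build:

```python
# Minimal interaction sketch; assumes the `unityagents` package and the
# Windows binary path used later in this README.
from unityagents import UnityEnvironment
import numpy as np

env = UnityEnvironment(file_name="./Tennis_Windows_x86_64/Tennis.exe")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=False)[brain_name]
num_agents = len(env_info.agents)        # 2 competing agents
states = env_info.vector_observations    # shape (2, 24)
scores = np.zeros(num_agents)

while True:
    # random stand-in policy: 2 continuous actions per agent, clipped to [-1, 1]
    actions = np.clip(np.random.randn(num_agents, 2), -1, 1)
    env_info = env.step(actions)[brain_name]
    scores += env_info.rewards           # +0.1 per hit over the net, -0.01 for a drop/out
    if np.any(env_info.local_done):
        break

print("Episode scores per agent:", scores)
env.close()
```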
To set up your Python environment and run the code in this repository, follow the instructions below.
Create (and activate) a new environment with Python 3.6.
- Linux or Mac:
  ```bash
  conda create --name ddpg-rl python=3.6
  source activate ddpg-rl
  ```
- Windows:
  ```bash
  conda create --name ddpg-rl python=3.6
  activate ddpg-rl
  ```
Clone the repository and install the dependencies:
```bash
git clone https://github.com/kotsonis/maddpg-tennis.git
cd maddpg-tennis
pip install -r requirements.txt
```
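To confirm the install resolved, a quick import check helps; this assumes `torch` and `unityagents` are among the pinned dependencies (see `requirements.txt` for the authoritative list):

```python
# Sanity check inside the ddpg-rl environment; torch and unityagents
# being in requirements.txt is an assumption, not verified here.
import torch
import unityagents

print("torch", torch.__version__)
```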
Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Linux.zip)
- Mac OSX: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis.app.zip)
- Windows (32-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Windows_x86.zip)
- Windows (64-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Windows_x86_64.zip)
(For Windows users) Check out [this link](https://support.microsoft.com/en-us/help/13443/windows-which-version-am-i-running) if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
Place the file in the `maddpg-tennis` folder, and unzip (or decompress) the file.
Edit `tennis_env.cfg` to set the `env` entry to point to the location of your downloaded environment. Example:
```
--env=./Tennis_Windows_x86_64/Tennis.exe
```
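Before training, it can be worth a quick smoke test that the configured `env` path actually loads. This is a hypothetical check reusing the `unityagents` wrapper and the example path above, not part of the repo:

```python
# Hypothetical smoke test: confirm the Unity binary at the configured path loads.
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="./Tennis_Windows_x86_64/Tennis.exe")
print("Brains:", env.brain_names)  # expect a single brain for this build
env.close()
```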
maddpg-tennis uses the Abseil library for logging and argument parsing. You can get the CLI options by running:
```bash
python tennis.py -h
```
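For context, here is a schematic of how Abseil flag handling typically looks. The `--env`, `--episodes`, `--play`, and `--load` flags all appear in examples in this README, but the defaults and help strings below are illustrative assumptions, not the repo's actual definitions:

```python
# Schematic Abseil CLI (defaults and help strings are assumptions).
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_string("env", None, "Path to the Unity Tennis binary")
flags.DEFINE_integer("episodes", 2000, "Number of episodes to run")
flags.DEFINE_boolean("play", False, "Watch a trained model instead of training")
flags.DEFINE_string("load", None, "Checkpoint to load, e.g. v2_model.pt")

def main(argv):
    del argv  # unused
    print(FLAGS.env, FLAGS.episodes, FLAGS.play, FLAGS.load)

if __name__ == "__main__":
    app.run(main)
```

Abseil's built-in `--flagfile` support is what lets files like `training.cfg` and `play.cfg` hold one `--flag=value` entry per line.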
To train the agents, `tennis.py` reads the hyperparameters from `training.cfg` and accepts command-line options to modify parameters and/or set saving options. You can train the agents with the standard parameters as follows:
```bash
python tennis.py --flagfile=training.cfg
```
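Schematically, training follows the usual MADDPG pattern: both agents act in parallel, joint transitions go to a replay buffer, and each agent's critic conditions on both agents' observations and actions. The loop below is a sketch with hypothetical names (`agent.act`, `agent.step`); the real classes live in this repo:

```python
# Sketch of an MADDPG training loop (method names are hypothetical).
import numpy as np

def train(env, brain_name, agent, n_episodes=2000):
    episode_scores = []
    for _ in range(n_episodes):
        env_info = env.reset(train_mode=True)[brain_name]
        states = env_info.vector_observations
        scores = np.zeros(len(env_info.agents))
        while True:
            actions = agent.act(states)              # one action row per agent
            env_info = env.step(actions)[brain_name]
            next_states = env_info.vector_observations
            # store the joint transition and (periodically) update both
            # actors and their centralized critics
            agent.step(states, actions, env_info.rewards,
                       next_states, env_info.local_done)
            scores += env_info.rewards
            states = next_states
            if np.any(env_info.local_done):
                break
        # episode score is the max over the two agents, matching how
        # this environment is conventionally evaluated
        episode_scores.append(np.max(scores))
    return episode_scores
```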
You can watch the agents play with a pre-trained model as follows:
```bash
python tennis.py --flagfile=play.cfg
```
You can also specify the number of episodes you want the agents to play, as well as load a non-default trained model, as follows:
```bash
python tennis.py --play --episodes 20 --load v2_model.pt
```
You can read about the implementation details and the results obtained in [Report.md](Report.md).