This is the public repository for the paper "Logic and the 2-Simplicial Transformer" published at ICLR 2020. The initial release contains the simplicial and relational agents, environment, training notebooks and videos of rollouts of the trained agents. This is research code and some assembly may be required.
Main files:
- Relational agent: agent/agent_relational.py
- Simplicial agent: agent/agent_simplicial.py
- Environment: env/bridge_boxworld.py
- Training notebooks: notebooks/
- Videos (see below)
- Trained agent weights (not yet available)
There is a brief training guide in notebooks/training.ipynb and brief installation instructions below. In notes-implementation.md we collect various notes about training agents with IMPALA in Ray RLlib that may be useful (but as the Ray codebase is evolving quickly, many of the class names in these notes may now be out of date). Note that we use patched versions of several files from RLlib; see the installation instructions for details.
For background on some of the ideas from neuroscience that partly inspired this work, see the talk "Building models of the world for behavioural control" by Tim Behrens from Cosyne 2018.
The video rollouts are provided for the best training run of the simplicial agent (simplicial agent A of the paper). The videos are organised by puzzle type, with 335C meaning the third episode sampled on puzzle type 335. Videos are not cherry-picked, and include episodes where the agent opens the bridge. There are three episodes of every puzzle type, and extras for the harder puzzle types 335 and 336. Figure 6 of the paper is step 8 of episode 335A, Figure 7 is step 18 of episode 325C, Figure 8 is step 13 of episode 335A, and Figure 9 is step 29 of episode 335E.
- Solution length one: 112A, 112B, 112C
- Solution length two: 213A, 213B, 213C, 214A, 214B, 214C, 223A, 223B, 223C, 224A, 224B, 224C
- Solution length three: 314A, 314B, 314C, 315A, 315B, 315C, 316A, 316B, 316C, 324A, 324B, 324C, 325A, 325B, 325C, 326A, 326B, 326C, 334A, 334B, 334C
- 335: 335A, 335B, 335C, 335D, 335E, 335F, 335G, 335H, 335I
- 336: 336A, 336B, 336C, 336D
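For convenience, here is a small sketch for grouping the rollout videos listed above by puzzle type; the videos/ directory name and the .mp4 extension are assumptions about how the files are laid out, not something fixed by this repository.

```
import re
from collections import defaultdict
from pathlib import Path

# Group rollout videos by puzzle type, assuming filenames like "335A.mp4".
videos = defaultdict(list)
for path in Path("videos").glob("*.mp4"):               # directory name is an assumption
    match = re.fullmatch(r"(\d{3})([A-Z])", path.stem)  # puzzle type + episode letter
    if match:
        videos[match.group(1)].append(path.name)

for puzzle_type, episodes in sorted(videos.items()):
    print(puzzle_type, sorted(episodes))
```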
The trained agent weights are not yet available. However, in the experiments folder we collect some checkpoints of the eight agents described in the paper. Reconstructing an agent from these checkpoints requires some expertise with Ray RLlib:
- simplicial agent A = 30-7-19-A
- simplicial agent B = 1-8-19-A
- simplicial agent C = 23-7-19-A
- simplicial agent D = 13-8-19-C
- relational agent A = 4-8-19-A
- relational agent B = 12-6-19-A
- relational agent C = 13-8-19-A
- relational agent D = 13-6-19-C
For some of the agents the very last checkpoint is "bad", in the sense that the win rate decreased from its converged value (this is due to our use of a fixed learning rate over the entire course of training), so we distribute the last good checkpoint as well as a sample of earlier checkpoints. We are happy to share the entire checkpoint history, but these files approach 500 MB for some of the agents and we do not currently have a good distribution method. Nonetheless, if you want the files, get in touch and we can work something out.
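As a rough starting point, restoring one of these checkpoints with RLlib looks something like the sketch below. The class names imported from this repository, the registration strings, and the checkpoint path are illustrative assumptions only; the actual model and environment names and the full config live in notebooks/training.ipynb, and the patched RLlib files mentioned above must be in place.

```
# Minimal sketch of restoring an IMPALA checkpoint in Ray RLlib (circa 0.7.0.dev2).
# The imported class names and registration strings are assumptions, not the
# exact identifiers used in this repository.
import ray
from ray.tune.registry import register_env
from ray.rllib.models import ModelCatalog
from ray.rllib.agents.impala import ImpalaTrainer  # called ImpalaAgent in some older Ray versions

from env.bridge_boxworld import BridgeBoxWorld       # hypothetical class name
from agent.agent_simplicial import SimplicialModel   # hypothetical class name

ray.init()
register_env("bridge_boxworld", lambda env_config: BridgeBoxWorld(env_config))
ModelCatalog.register_custom_model("simplicial", SimplicialModel)

trainer = ImpalaTrainer(
    env="bridge_boxworld",
    config={"model": {"custom_model": "simplicial"}},
)
trainer.restore("experiments/30-7-19-A/checkpoint-XXXX")  # illustrative path
```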
- The current implementation of the simplicial agent in agent_simplicial.py assumes one head of 2-simplicial attention.
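For orientation, the sketch below shows only the general shape of a single head of 2-simplicial attention: logits indexed by a query entity i and a pair (j, k) of key entities, with a joint softmax over pairs. The elementwise triple product and value combination used here are generic placeholders, not the scalar triple product of the paper; see agent/agent_simplicial.py for the actual operation.

```
import numpy as np

def two_simplicial_attention_sketch(q, k1, k2, v1, v2):
    """Schematic single-head attention over pairs of entities.

    q, k1, k2, v1, v2 all have shape (N, d). The triple interaction below is a
    plain elementwise product summed over features -- a placeholder standing in
    for the scalar triple product used in the paper -- chosen only to show the
    (i, j, k) indexing and the softmax over pairs.
    """
    d = q.shape[-1]
    # logits[i, j, k] = sum_d q[i, d] * k1[j, d] * k2[k, d]
    logits = np.einsum("id,jd,kd->ijk", q, k1, k2) / np.sqrt(d)
    # softmax jointly over the pair (j, k) for each query i
    weights = np.exp(logits - logits.max(axis=(1, 2), keepdims=True))
    weights /= weights.sum(axis=(1, 2), keepdims=True)
    # combine the two value vectors of each attended pair (again a placeholder)
    return np.einsum("ijk,jd,kd->id", weights, v1, v2)
```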
The following instructions assume you know how to set up TensorFlow, and cover the other aspects of setting up a blank GCP or AWS instance to the point where it can run our training notebooks. Our training was done under Ray version 0.7.0.dev2 and TensorFlow 1.13.1, and we make no assurances that the code will even run on later versions. As detailed in the paper, our head nodes (the ones on which we run the training notebooks) have either a P100 or K80 GPU, and the worker nodes have no GPU.
```
sudo apt-get update
sudo apt install python-pip
sudo apt install python3-dev python3-pip
sudo apt install cmake
sudo apt-get install zlib1g-dev
sudo apt install git
pip3 install --user tensorflow-gpu
pip3 install -U dask
pip3 install --user ray[rllib]
pip3 install --user ray[debug]
pip3 install jupyter
pip3 install -U matplotlib
pip3 install psutil
pip3 install --upgrade gym
sudo apt-get install ffmpeg
sudo apt-get install pssh
sudo apt-get install keychain
pip3 install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.7.0.dev2-cp36-cp36m-manylinux1_x86_64.whl
```
On the CPU-only machines use `pip3 install --user tensorflow` instead of `tensorflow-gpu`.
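At this point it may be worth confirming from python3 that the versions picked up match the ones quoted above:

```
# Quick sanity check of the installed versions (run under python3).
import ray
import tensorflow as tf

print("ray", ray.__version__)         # we trained with 0.7.0.dev2
print("tensorflow", tf.__version__)   # we trained with 1.13.1
```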
More installation:

```
git clone https://github.com/dmurfet/2simplicialtransformer.git
git clone https://github.com/kpot/keras-transformer.git
cd keras-transformer; pip3 install --user .
```
Reboot after this so that the updated PATH is picked up. You'll also need to open port 6379 for Redis and port 8888 for Jupyter in the console Security Groups tab, otherwise RLlib won't be able to initialise the cluster (resp. the Jupyter notebook will not be remotely accessible).
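For example, once port 6379 is open, a notebook on the head node can connect to a running Ray cluster along these lines (the IP address is a placeholder; in Ray 0.7.x the keyword is redis_address, which later versions rename to address):

```
import ray

# Connect to an already-running Ray cluster via Redis on port 6379.
# "10.0.0.1" is a placeholder for the head node's internal IP.
# (The cluster itself is typically started beforehand, e.g. with
#  `ray start --head --redis-port=6379` on the head node.)
ray.init(redis_address="10.0.0.1:6379")
```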
Jupyter setup (for head nodes only): To set up Jupyter as a remote service, follow these instructions (including making a keypair), except that you need to use c.NotebookApp.ip = '0.0.0.0' rather than c.NotebookApp.ip = '*' as they say.
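A minimal jupyter_notebook_config.py consistent with that setting might look as follows; the certificate and key paths are placeholders from the keypair step, and `c` is the config object Jupyter provides when it loads the file.

```
# ~/.jupyter/jupyter_notebook_config.py -- minimal sketch; paths are placeholders.
# The object `c` is injected by Jupyter when this config file is loaded.
c.NotebookApp.ip = '0.0.0.0'        # listen on all interfaces (not '*')
c.NotebookApp.port = 8888
c.NotebookApp.open_browser = False
c.NotebookApp.certfile = '/home/ubuntu/ssl/mycert.pem'  # from the keypair step
c.NotebookApp.keyfile = '/home/ubuntu/ssl/mykey.key'
```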
To get Jupyter to run on startup you'll need to first create an rc.local file (on Ubuntu 18 this is no longer shipped as standard); see this. Then add the following line to rc.local:

```
cd /home/ubuntu && su ubuntu -c "/home/ubuntu/.local/bin/jupyter notebook &"
```