GIF Preview: https://gfycat.com/NippyTidyCassowary
YouTube Preview: https://www.youtube.com/watch?v=FJfskjJOQKc
Software, written in Go, for computing a Predator-Prey Agent-Based Model (ABM) of prey colour polymorphism (CP).
Written specifically to assist the research by Dr. James Dale in evaluating hypotheses for the evolutionary maintenance of extreme colour polymorphism.
Server-side computation, client-side GUI controls and visualisation of the running ABM in a web browser.
Generalise the system so that any `abm` package with local model conditions, agent design, etc. can be hot-swapped in (linked via a command-line flag) and run.
Install Go (https://golang.org/dl/).
Download my repo: `go get -u github.com/benjamin-rood/abm-cp`

Download the external dependencies:

- `go get -u golang.org/x/net/websocket`
- `go get -u github.com/benjamin-rood/gobr`
- `go get -u github.com/pquerna/ffjson`
- `go get -u github.com/davecgh/go-spew/spew`
- `go get -u github.com/spf13/cobra`
(All dependencies will be vendored into the `abm-cp` package from v1.0.0 onwards.)
Change the current directory to `$GOPATH/src/github.com/benjamin-rood/abm-cp` and run `go install`. From there, calling `abm-cp run` from the shell prompt will start the websocket server.
Point your web browser at `localhost:8000`. The current version has only been tested in Safari on OS X.
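For convenience, the above steps can be run as a single shell session (assuming a standard Go workspace with `$GOPATH` set; this is just a consolidation of the commands listed above):

```sh
# fetch abm-cp and the external dependencies
go get -u github.com/benjamin-rood/abm-cp
go get -u golang.org/x/net/websocket
go get -u github.com/benjamin-rood/gobr
go get -u github.com/pquerna/ffjson
go get -u github.com/davecgh/go-spew/spew
go get -u github.com/spf13/cobra

# build and install the abm-cp command, then start the websocket server
cd $GOPATH/src/github.com/benjamin-rood/abm-cp
go install
abm-cp run
```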
Base requirements completed.
- A simple interface for the CP Prey ABM, with the visualisation on the left and the contextual parameter controls on the right. ✅
- Use P5js for the render viewport. ✅
- Browser receives drawing instructions from the ABM, which get loaded into the P5 instance's draw loop. ✅
- Responsive design: the visualisation (render viewport) scales to the dimensions of the available browser real estate. ✅
- Server handles concurrent bi-directional connections, running an independent ABM for each user, with data sent between client and server using Web Sockets (see the websocket sketch after this list). ✅
- Server cleans up after user disconnections. ✅
- Server receives newly submitted contextual parameters as the signal to start/restart the ABM. ✅
- Serialisation of data messages using JSON (prohibitively slow for anything approaching 10,000 agents). ✅
- CP Prey agent implementation:
  - Rule-Based Behaviour. ✅
  - Asexual Reproduction. ✅
  - Mutation (colouration). ✅
- Visual Predator implementation:
  - Rule-Based Behaviour. ✅
  - Exhaustive Prey Search (very slow). ✅
  - Colour Imprinting (needs tweaking, no baseline yet established). ✅
- Simple concurrent execution of Predator/Prey Action. ✅
-
Unit tests for
geometry
,calc
,render
packages. ✅
-
Dispatch errors along channels, use external handlers who receive the errors and process/print them. ✅
-
Essential unit tests for
abm
package ✅ -
Show live population and timeline statistics inside P5js viewport. ✅
- Visual Predator implementation:
  - Find baseline params for Colour Imprinting. ✅
  - Adaptation in response to hunger. ✅
  - Starvation ⟹ Death. ✅
  - Sexual Reproduction. ✅
- General modelling and interactions between agent types in place, with all baseline parameters set for end use. ✅
- Toggle Visualisation on/off. ✅
- Record data from a running model to log files as JSON for statistical analysis, as a concurrent process independent of the Visualisation and the model computation. Toggleable. ✅
- User-determined frequency of data logging on the client side. ✅
- Better model for Predator specialisation through colour imprinting, which directly gives a search/attack advantage rather than being decided randomly. ✅
- Complete tests for the `abm` package. ✅
- Use `ffjson`-generated custom Marshal/Unmarshal JSON methods for a ~2X speedup when serialising render messages to the client (see the `ffjson` sketch after this list). ✅
- Better Prey Search using 2-dimensional search trees.
- Browser-side input validation.
- Begin switch to the `spf13/cobra` CLI system. ✅
- Use a k-dimensional tree for spatial partitioning of the model environment, permitting optimal search (see the 2-d tree sketch after this list). (A general implementation in Go is already done, using trees connected to pointers to elements in slices.)
- Web-side input validation for contextual parameters.
- Have complete control over ABM computation, logging and visualisation from the command line, rather than just starting up a web server and controlling everything through the browser.
- Use uncompressed JSON-formatted logging for debug only.
- Switch to a compressed binary encoding for log files – or try FlatBuffers?
- Switch data serialisation to Protocol Buffers (protobuf) for a ~10X speedup. Marshalling drawing instructions to JSON is currently the single most expensive action!
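As a rough illustration of the per-user websocket handling noted above, here is a minimal sketch using the listed `golang.org/x/net/websocket` dependency. The `/ws` route, the `params` fields and the handler body are assumptions for illustration only, not the actual abm-cp implementation:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/websocket"
)

// params stands in for the contextual parameters submitted by the client;
// the real field set in abm-cp is not reproduced here.
type params struct {
	PreyPopulationStart int `json:"prey-pop-start"`
}

// abmSession runs once per connected client, so each user gets an
// independent model: receive parameters, (re)start the ABM, stream results.
func abmSession(ws *websocket.Conn) {
	defer ws.Close() // clean up when the user disconnects
	for {
		var p params
		if err := websocket.JSON.Receive(ws, &p); err != nil {
			return // connection closed or bad message: end this session
		}
		// A newly submitted parameter set is the signal to (re)start the model.
		// Render messages would be streamed back with websocket.JSON.Send(ws, msg).
		log.Printf("restarting ABM with %+v", p)
	}
}

func main() {
	http.Handle("/ws", websocket.Handler(abmSession))
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```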
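The error-dispatch item follows a common Go pattern: worker goroutines push errors onto a channel and a dedicated handler goroutine drains and prints them. A minimal, self-contained sketch (the names are illustrative, not the abm-cp API):

```go
package main

import (
	"fmt"
	"log"
	"sync"
)

func main() {
	errCh := make(chan error, 64) // buffered so model goroutines rarely block on reporting

	// external handler: receives errors and processes/prints them
	done := make(chan struct{})
	go func() {
		defer close(done)
		for err := range errCh {
			log.Println("abm:", err)
		}
	}()

	// model goroutines dispatch errors along the channel instead of handling them locally
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			errCh <- fmt.Errorf("goroutine %d: example failure during a model turn", id)
		}(i)
	}

	wg.Wait()
	close(errCh)
	<-done
}
```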
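The `ffjson` speedup comes from generating reflection-free `MarshalJSON`/`UnmarshalJSON` methods ahead of time. A sketch of how a render-message type might be wired up via `go generate`; the `render.Msg` struct and its fields are hypothetical:

```go
package render

//go:generate ffjson $GOFILE

// Msg is a hypothetical drawing instruction sent to the browser's P5js draw loop.
// Running `go generate` (which invokes ffjson on this file) writes a companion
// *_ffjson.go file containing the generated MarshalJSON/UnmarshalJSON methods.
type Msg struct {
	Type   string  `json:"type"`
	X      float64 `json:"x"`
	Y      float64 `json:"y"`
	Colour string  `json:"colour"`
}
```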
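For the k-dimensional-tree item, a minimal 2-d tree sketch, independent of the abm-cp types: build by median split, then nearest-neighbour search that prunes any subtree which cannot contain a closer point. The `Point` type and function names are illustrative only:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Point is a 2-D position in the model environment.
type Point struct{ X, Y float64 }

// node of a 2-d tree: splits on X at even depths and Y at odd depths.
type node struct {
	p           Point
	left, right *node
}

// build constructs a balanced 2-d tree by recursively splitting on the median coordinate.
func build(pts []Point, depth int) *node {
	if len(pts) == 0 {
		return nil
	}
	axis := depth % 2
	sort.Slice(pts, func(i, j int) bool {
		if axis == 0 {
			return pts[i].X < pts[j].X
		}
		return pts[i].Y < pts[j].Y
	})
	m := len(pts) / 2
	return &node{p: pts[m], left: build(pts[:m], depth+1), right: build(pts[m+1:], depth+1)}
}

func sqDist(a, b Point) float64 {
	dx, dy := a.X-b.X, a.Y-b.Y
	return dx*dx + dy*dy
}

// nearest returns the closest stored point to q (squared distance), pruning
// subtrees that cannot contain anything closer than the current best.
func nearest(n *node, q Point, depth int, best Point, bestD float64) (Point, float64) {
	if n == nil {
		return best, bestD
	}
	if d := sqDist(n.p, q); d < bestD {
		best, bestD = n.p, d
	}
	diff := q.X - n.p.X
	if depth%2 == 1 {
		diff = q.Y - n.p.Y
	}
	near, far := n.left, n.right
	if diff > 0 {
		near, far = n.right, n.left
	}
	best, bestD = nearest(near, q, depth+1, best, bestD)
	if diff*diff < bestD { // splitting plane is closer than the best match: search the far side too
		best, bestD = nearest(far, q, depth+1, best, bestD)
	}
	return best, bestD
}

func main() {
	prey := []Point{{0.1, 0.2}, {-0.5, 0.9}, {0.7, -0.3}, {0.0, 0.0}}
	tree := build(prey, 0)
	p, d := nearest(tree, Point{0.6, -0.2}, 0, Point{}, math.Inf(1))
	fmt.Printf("nearest prey: %+v (distance %.3f)\n", p, math.Sqrt(d))
}
```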
### 1.0.0 – Late 2016?
Use Amazon Web Services and switch to a model of cloud (distributed) computation and storage for all log files, taking the entire burden of hardware costs in the modelling off the user. Whilst the need for CPU and memory optimisation, along with data compression over the wire, remains essential, the scale of the model environment and populations could become entirely unrestricted.

- Switch all public HTML files to templated/generated ones based on contextual parameters, etc.
- Switch to `gopherjs` for all front-end code?
- Allow the end-user to switch between different browser layouts: Visualisation only, Standard and Statistical? ⟸ Could use Jupyter to present graphing in the browser?
- Start the ABM computation remotely and keep it running after disconnection? I.e. start the model running, leave it, then reconnect based on the session UUID at a later time to check up or review results.
- Batch processing.
- Email the user when a model session finishes.
- Enable use in a distributed environment.
- Complete testing suite, including integration tests.
- Allow hot-swapping of different `abm` variant packages.
- Store modelling sessions to a server database, along with the statistical data, for later retrieval.
- Fluid ABM timescale controls? Doable, but probably not without switching to gopherjs so I can integrate it all within the same codebase.
- Optional recording of the Visualisation to an SVG frame sequence using the `ajstarks/svgo` package (see the sketch below).
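If the optional SVG recording is implemented with `ajstarks/svgo`, a single visualisation frame could be written out roughly like this; the agent data, colours and file naming are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os"

	svg "github.com/ajstarks/svgo"
)

func main() {
	// one hypothetical frame of agent positions and colours
	agents := []struct {
		X, Y, R, G, B int
	}{
		{40, 60, 200, 30, 30},
		{120, 90, 30, 30, 200},
	}

	f, err := os.Create("frame-000001.svg")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	canvas := svg.New(f)
	canvas.Start(500, 500) // viewport dimensions in pixels
	for _, a := range agents {
		// draw each agent as a small filled circle
		canvas.Circle(a.X, a.Y, 4, fmt.Sprintf("fill:rgb(%d,%d,%d)", a.R, a.G, a.B))
	}
	canvas.End()
}
```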