Ostrichinator aims to provide a web platform for discovering adversarial examples and, potentially, for collecting human responses, which may fundamentally help improve current deep learning algorithms. The frontend of this project is based on Python Flask, which handles requests from users and passes jobs to the backend; the backend is implemented in MATLAB on top of MatConvNet and minConf.
Like most other simple Flask-based web applications, the frontend of Ostrichinator is concisely implemented in two files: `frontend.py` and `template/index.html`.
User requests are indexed by UUIDs, and the source files (original images) and result files (hacked images and logs) are named correspondingly.
While source and result images are served directly from `static/`, logs are placed in `backend/log/`.
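For illustration, the naming scheme looks roughly like the following; the exact file extensions and suffixes here are assumptions, not necessarily the project's actual choices:

```python
# Sketch of the UUID-based naming scheme described above; extensions
# and suffixes are illustrative assumptions.
import os
import uuid

job_id = str(uuid.uuid4())
source_path = os.path.join('static', job_id + '.png')       # uploaded original image
result_path = os.path.join('static', job_id + '_res.png')   # hacked image from the backend
log_path = os.path.join('backend', 'log', job_id + '.txt')  # progress / result log
```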
Except for the secret keys for Flask and Flask-WTF reCAPTCHA (i.e. `SECRET_KEY`, `RECAPTCHA_PUBLIC_KEY`, and `RECAPTCHA_PRIVATE_KEY`, which in our case are kept in `keys.py`), choosing whether to run the backend locally or distributedly (i.e. without or with a job queue) should be the only thing that needs to be configured here.
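For concreteness, `keys.py` only needs to define the three values named above; a minimal sketch (with placeholder values, obviously not real keys):

```python
# keys.py -- the key names come from Flask and Flask-WTF reCAPTCHA;
# the values below are placeholders for illustration only.
SECRET_KEY = 'replace-with-a-long-random-string'
RECAPTCHA_PUBLIC_KEY = 'your-recaptcha-site-key'
RECAPTCHA_PRIVATE_KEY = 'your-recaptcha-secret-key'
```

Such a module can then be loaded in `frontend.py` with, for example, `app.config.from_object('keys')`.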
`frontend.py` by default uses a Celery plus Redis job queue for requests. However, users can easily comment/uncomment blocks in `frontend.py` to make it run locally, following the instructions in the file.
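As a rough illustration of the two modes (the task name `process` and its argument are assumptions, not the actual API of `backend/run.py`):

```python
# Hypothetical sketch of the two dispatch modes in frontend.py;
# `process` and its argument are assumed names.
from backend.run import process

# Distributed mode (default): push the job onto the Celery + Redis
# queue and return to the user immediately.
process.delay(job_id)

# Local mode: comment out the .delay() call above and run the job
# synchronously in the request handler instead.
# process(job_id)
```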
Simply executing `python frontend.py` launches the Flask frontend with the gevent WSGI server. However, other combinations (e.g. Flask with uWSGI and Nginx, as used by the demo site) may be preferable.
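For reference, serving Flask under gevent boils down to something like the following; the `app` object name and the port are assumptions:

```python
# Minimal sketch of what `python frontend.py` does, assuming the Flask
# application object inside frontend.py is named `app` (port assumed).
from gevent.pywsgi import WSGIServer
from frontend import app

WSGIServer(('0.0.0.0', 5000), app).serve_forever()
```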
The backend of Ostrichinator is a compiled executable that takes input images directly from the `static/` directory and parameters from command-line arguments. While the backend is running, log files recording the execution progress and the final results are generated inside `backend/log/`; when it finishes, the result images are written into the `static/` directory as well.
The log files have a fixed structure: in each log, the first lines describe the task, the fourth- and third-to-last lines give the original and final class labels, the second-to-last line gives the exit flag, and the final line is "DONE".
For now, no explicit mechanism for pushing results to users has been implemented.
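Given the layout above, a job's status can be checked by reading the tail of its log, along these lines (a hedged sketch; the label and exit-flag field formats are assumptions):

```python
# Hedged sketch of reading a backend log, based only on the layout
# described above; field formats within each line are assumptions.
def read_log(path):
    with open(path) as f:
        lines = [line.rstrip('\n') for line in f]
    if not lines or lines[-1] != 'DONE':
        return None                       # job still running (or aborted)
    return {'original_label': lines[-4],  # fourth-to-last line
            'final_label': lines[-3],     # third-to-last line
            'exit_flag': lines[-2]}       # second-to-last line
```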
Configuring the backend of Ostrichinator involves compiling the MATLAB code located in `backend/src/` into an executable. We used MATLAB R2014b, but any MATLAB version after R2013a should be fine as well.
The first thing users need to do is install and set up MatConvNet, which is fairly simple and quick. While MatConvNet v1.0-beta7 is used and included in this project, newer versions should work as well.
The main MATLAB file is `backend/src/demo.m`, which should be ready to run once MatConvNet is correctly installed and both MatConvNet and `backend/src/minConf` are on MATLAB's search path. Please follow `demo.m` to compile the executable, and place the generated `demo` and `run_demo.sh` under `backend/`.
If installing MATLAB is not an option, users can check out our precompiled executables as well.
After obtaining the executable, users also need to make the MATLAB runtime libraries available under `backend/MCR/`. Users with a full MATLAB installation can do this with a symbolic link, e.g. `backend/MCR/v84 -> /usr/local/MATLAB/R2014b/` (created with `ln -s /usr/local/MATLAB/R2014b/ backend/MCR/v84`). Users without a full MATLAB installation who decide to use our precompiled executable should download and install the MATLAB Compiler Runtime into `backend/MCR/`.
Also remember to download the pretrained deep learning networks [1,2,3] into `backend/networks/`.
Lastly, adjust `backend/run.py` and `backend/info.py` for running locally or distributedly as well.
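A minimal sketch of what the distributed wiring in `backend/run.py` might look like; the broker URL, task name, and argument list are assumptions made for illustration, while the module path `backend.run` and the `run_demo.sh` wrapper come from this document:

```python
# backend/run.py -- hypothetical sketch; broker URL, task name, and
# arguments are illustrative assumptions.
import subprocess
from celery import Celery

app = Celery('backend.run', broker='redis://localhost:6379/0')

@app.task
def process(job_id):
    # Wrapper scripts generated by the MATLAB compiler take the MCR root
    # as their first argument, followed by the executable's own arguments.
    subprocess.call(['./backend/run_demo.sh', './backend/MCR/v84', job_id])
```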
If users decide to run the backend distributedly, simply remember to start Redis and Celery, e.g. with `redis-server` and `celery worker -A backend.run --loglevel=info` (the Celery command should be run from the main directory, not `backend/`). Otherwise, nothing needs to be done.