Future Work
This page outlines the planned features and methods for this pipeline.
Place a config.json in the input directory; all parameters used during the pipeline should be read from it.
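A minimal sketch of what such a config.json could look like. Every key name and value here is an assumption for illustration, not the pipeline's actual schema:

```shell
# Write a hypothetical config.json; all keys are placeholder assumptions.
cat > config.json <<'EOF'
{
  "input_dir": "/data/input",
  "output_dir": "/data/output",
  "watershed_nuclei": true,
  "mask_cytoplasm": false,
  "threads": 4
}
EOF
```

The static parameters listed below would then move into this file instead of being hardcoded.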
Static parameters currently in use that should be made dynamic:
- TBA
- TBA
- TBA
Use $ id
to check the current user's user and group IDs.
These should be passed to Docker to avoid permission conflicts.
When running the pipeline, the user should be able to choose the branch (and possibly the commit) they want to use on the fly. Maybe use a $ read
prompt for that?
$ read
$ git clone
...
$ git checkout
...
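One way to sketch that prompt, assuming the repository URL is supplied in a placeholder variable `REPO_URL` and that an empty answer falls back to `main` (both assumptions):

```shell
# Ask which branch or commit to check out; default to "main" on empty input.
choose_ref() {
  read -r -p "Branch or commit to use [main]: " ref
  echo "${ref:-main}"
}
# Usage sketch (not executed here):
#   git clone "$REPO_URL" pipeline && cd pipeline
#   git checkout "$(choose_ref)"
```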
- Watershed Nuclei (Y/N)
- Masking out Cytoplasm (Y/N)
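These yes/no toggles could be collected with a small prompt helper; the function name and prompt wording are assumptions:

```shell
# Return "Y" or "N" for a yes/no question; anything not starting with y/Y is "N".
ask_yn() {
  read -r -p "$1 (Y/N): " ans
  case "$ans" in
    [Yy]*) echo "Y" ;;
    *)     echo "N" ;;
  esac
}
# Usage sketch:
#   watershed=$(ask_yn "Watershed Nuclei")
#   mask=$(ask_yn "Masking out Cytoplasm")
```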
Currently the length of potential filenames is hardcoded, which causes problems when using arbitrary data. This should be made dynamic.
The pipeline needs more detailed cout
and error messages.
These should also be logged to a text file with a timestamp.
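A minimal sketch of timestamped logging to both console and file; the log file name and message texts are assumptions:

```shell
# Append every message, prefixed with a timestamp, to a log file and stdout.
LOGFILE="pipeline_run.log"   # placeholder name
log() {
  printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" | tee -a "$LOGFILE"
}
log "segmentation started"
log "ERROR: could not open input image"
```

Routing the same messages through one `log` function keeps the console output and the log file in sync.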
The pipeline needs more result files, in terms of both quality and quantity. These files should also be translated to English.
The metadata.csv
file links experiment data with metadata for use by sophisticated algorithms developed at the IUF in Düsseldorf, Germany.
This output does not yet work reliably and should be improved.
With every run, a protocol
file should be created, detailing the parameters used.
This helps reproducibility.
It should be possible to feed this file back into a new run as input parameters.
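A sketch of that round trip, assuming a simple key=value protocol format (the actual format is still to be decided; config.json could serve equally well):

```shell
# Write this run's parameters to a timestamped protocol file...
PROTOCOL="protocol_$(date +%Y%m%d_%H%M%S).txt"
cat > "$PROTOCOL" <<'EOF'
threads=4
watershed_nuclei=Y
mask_cytoplasm=N
EOF
# ...and read it back in a later run; key=value lines are shell-sourceable.
. "./$PROTOCOL"
echo "reusing threads=$threads"
```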
There should be an option to dynamically adjust the number of threads used for concurrent operations.
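This could default to the machine's core count with an environment override; `PIPELINE_THREADS` is an assumed variable name, not one the pipeline defines yet:

```shell
# Use PIPELINE_THREADS if set, otherwise the number of available cores (nproc).
threads="${PIPELINE_THREADS:-$(nproc)}"
echo "running with $threads threads"
```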
NeuronJ can save manual neurite tracings to a file. This data structure must be reverse-engineered so that the pipeline can output the calculated skeleton in the same format for use within NeuronJ.
This pipeline should also run on Windows.