Future Work
This page outlines the planned features and methods for this pipeline.
Place a `config.json` in the input directory from which all parameters used during the pipeline are read.
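A minimal sketch of what this could look like, assuming `jq` is available in the launch script; all parameter names are hypothetical, not existing keys:

```bash
# Hypothetical layout for input/config.json; every key here is an
# assumption, not an existing parameter name.
cat > input/config.json <<'EOF'
{
    "threads": 4,
    "min_nucleus_area": 50,
    "max_nucleus_area": 5000
}
EOF

# Read the parameters in the launch script (requires jq).
threads=$(jq -r '.threads' input/config.json)
min_area=$(jq -r '.min_nucleus_area' input/config.json)
```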
Currently used static parameters that should be made dynamic:
- TBA
- TBA
- TBA
Add an output that lists the coordinates of every branching point and endpoint of the skeletons.
Use `id` to check your user and group IDs.
These should be fed to the Docker container to avoid permission conflicts.
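A minimal sketch of how this could be wired up; the image name `pipeline` and the mount paths are placeholders:

```bash
# Pass the host user's UID and GID into the container so that output
# files are not owned by root (the image name "pipeline" is a placeholder).
docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v "$PWD/input:/input" \
    -v "$PWD/output:/output" \
    pipeline
```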
When the filename length does not match a certain number of characters, the toolbox throws a segmentation fault.
The pipeline should be able to deal with an arbitrary filename length.
Example filename that throws the error: `RK3_20200522_MDN_A_01_Alexa488_01`
The corrected filename that works: `RK3_20200522_MDN_A_0_01_Alexa488_01`
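One way to avoid fixed-length parsing would be to split at the underscore delimiter instead of relying on character offsets; a sketch, assuming the fields are underscore-separated as in the examples above:

```bash
# Split a filename of arbitrary length at the underscores instead of
# relying on fixed character offsets.
name="RK3_20200522_MDN_A_01_Alexa488_01"
IFS='_' read -ra fields <<< "$name"
echo "number of fields: ${#fields[@]}"
```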
When running the pipeline, the user should be able to choose the branch (and commit?) they want to use on the fly.
Maybe use a `read` prompt for that?
```bash
$ read -p "Branch to use: " branch
$ git clone ...
$ cd ...
$ git checkout "$branch"
```
Other options that should be selectable at runtime (see the sketch below):
- Watershed nuclei (Y/N)
- Masking out cytoplasm (Y/N)
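These choices could be collected with `read` prompts at startup; a sketch with hypothetical variable names:

```bash
# Collect the runtime options interactively (variable names are assumptions).
read -p "Watershed nuclei? (Y/N) " watershed
read -p "Mask out cytoplasm? (Y/N) " mask_cytoplasm
[[ "$watershed" =~ ^[Yy]$ ]] && echo "watershed enabled"
```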
Currently the length of potential filenames is hardcoded, which leads to problems when trying to use arbitrary data. This should be made more dynamic!
Sometimes channels have artifacts in them. These might originate from air bubbles, staining errors, or out-of-focus areas.
The pipeline should be able to deal with them. The following options should be implemented (see the sketch after this list):
- Upper and lower bounds for nucleus area sizes
- An optional binary mask for every input image, indicating areas that the algorithm should ignore.
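A sketch of how the optional mask could be discovered, assuming a hypothetical `*_mask.tif` naming convention next to each input image:

```bash
# Look for an optional ignore-mask next to each input image; the
# *_mask.tif naming convention is an assumption.
for img in input/*.tif; do
    mask="${img%.tif}_mask.tif"
    if [[ -f "$mask" ]]; then
        echo "using ignore-mask $mask for $img"
    fi
done
```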
The pipeline needs more detailed `cout` and error messages.
These should also be logged to a text file with timestamps.
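Timestamped logging could be handled by a small helper in the launch script; a sketch, with `pipeline.log` as a placeholder file name:

```bash
# Print a message to the console and append it, timestamped, to a log file.
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a pipeline.log
}
log "starting nucleus segmentation"
```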
The pipeline needs more result files, in terms of both quality and quantity. These files should also be translated into English.
The `metadata.csv` file is used to link experimental data with metadata for use by sophisticated algorithms developed at the IUF in Düsseldorf, Germany.
This output does not work correctly yet and should be improved.
With every run, a protocol file should be created detailing the parameters used.
This helps reproducibility.
It should be possible to feed this file back into a new run as input parameters.
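A sketch, assuming the parameters live in `config.json` as proposed above (all file names are placeholders):

```bash
# Archive the parameters of this run as a protocol file ...
run_id=$(date '+%Y%m%d_%H%M%S')
cp input/config.json "protocol_${run_id}.json"

# ... and feed an earlier protocol back into a new run
# (the file name below is just an example).
cp protocol_20200522_120000.json input/config.json
```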
There should be an option to dynamically adjust the number of threads used for concurrent operations.
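A sketch of a sensible default, assuming a hypothetical `--threads` flag that does not exist yet:

```bash
# Default to the number of available cores, but allow an override
# via an environment variable.
threads="${THREADS:-$(nproc)}"
./pipeline --threads "$threads"   # --threads is a proposed flag, not an existing one
```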
NeuronJ can save manual neurite tracings to a file. This data structure must be reverse engineered, and the pipeline should output the calculated skeleton in the same format so it can be used within NeuronJ.
This pipeline should also run on Windows.