
Future Work

Nils edited this page Feb 24, 2021 · 6 revisions

This page outlines the planned features and methods for this pipeline.

Dynamic Parameter Reading via file

Place a config.json in the input directory from which all parameters used during the pipeline are read. Static parameters currently in use that should be made dynamic:

  • TBA
  • TBA
  • TBA
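A minimal sketch of what reading parameters from such a file could look like. The parameter names (min_nucleus_area, threads) are hypothetical examples, and a real implementation would use a proper JSON parser such as jq rather than the sed one-liner shown here:

```shell
#!/bin/sh
# Sketch: read pipeline parameters from a config.json in the input directory.
INPUT_DIR=${1:-.}
CONFIG="$INPUT_DIR/config.json"

# Create a sample config for demonstration (keys are hypothetical).
cat > "$CONFIG" <<'EOF'
{
  "min_nucleus_area": 50,
  "threads": 4
}
EOF

# Minimal numeric-value extraction without external tools;
# a real pipeline should use a JSON parser (e.g. jq).
get_param() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p" "$CONFIG"
}

echo "min_nucleus_area=$(get_param min_nucleus_area)"
echo "threads=$(get_param threads)"
```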

Branching- and Terminal-Point Coordinates

Add an output that lists the coordinates of every branching point and end point of the skeletons.

Dynamic Docker User Permissions

Use $ id to check the current user and group IDs. These should be passed to the Docker container to avoid file-permission conflicts.
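A sketch of how this could work, assuming the container image is called pipeline-image (a placeholder) and that input/output directories are mounted as volumes. The docker run call is shown commented out so the snippet does not require a Docker daemon:

```shell
#!/bin/sh
# Sketch: run the container as the host user so that files written to
# the mounted output directory are not owned by root.
uid=$(id -u)
gid=$(id -g)
echo "running as ${uid}:${gid}"

# Hypothetical invocation; "pipeline-image" is a placeholder name.
# docker run --rm \
#   --user "${uid}:${gid}" \
#   -v "$PWD/input:/input" -v "$PWD/output:/output" \
#   pipeline-image
```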

Dynamic File Name Lengths

When the filename length does not match a certain number of characters, the toolbox crashes with a segmentation fault. The pipeline should be able to handle filenames of arbitrary length.

Example filename that throws the error: RK3_20200522_MDN_A_01_Alexa488_01

The corrected filename that works: RK3_20200522_MDN_A_0_01_Alexa488_01

User Branch Selection

When running the pipeline, the user should be able to choose the branch (and possibly the commit) they want to use on the fly, e.g. via read:

$ read -p "Branch to use: " branch

$ git clone...

$ git checkout "$branch"

Dynamic Algorithm Toggle

  • Watershed Nuclei (Y/N)
  • Masking out Cytoplasm (Y/N)
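One way such toggles could be wired up is via environment variables or config entries holding Y/N values. The variable names WATERSHED_NUCLEI and MASK_CYTOPLASM below are hypothetical:

```shell
#!/bin/sh
# Sketch: Y/N toggles for optional algorithm steps, read from
# hypothetical environment variables with defaults.
WATERSHED_NUCLEI=${WATERSHED_NUCLEI:-Y}
MASK_CYTOPLASM=${MASK_CYTOPLASM:-N}

run_step() {
  # $1: toggle value (Y/N), $2: step name
  case "$1" in
    [Yy]*) echo "running $2" ;;
    *)     echo "skipping $2" ;;
  esac
}

run_step "$WATERSHED_NUCLEI" "watershed nuclei"
run_step "$MASK_CYTOPLASM" "cytoplasm masking"
```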

Dynamic Filename Support

Currently the length of potential filenames is hardcoded. This leads to problems when trying to use arbitrary data. This should be made more dynamically!

Dealing with Artifacts

Sometimes channels have artifacts in them. These might originate from bubbles of air, staining errors or areas that are out of focus.

The pipeline should be able to handle these cases. The following options should be implemented:

  • Upper and lower bounds for nucleus area sizes
  • An optional binary mask for every input image, indicating areas that the algorithm will ignore

Expansive Logging and Error Handling

The pipeline needs more detailed console (cout) and error messages. These should also be logged to a text file with timestamps.
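A minimal sketch of a logging helper that prints to the console and appends timestamped lines to a log file (the name pipeline.log is a placeholder):

```shell
#!/bin/sh
# Sketch: print messages to the console and append them, with an
# ISO-8601 timestamp, to a log file.
LOGFILE=${LOGFILE:-pipeline.log}

log() {
  printf '%s %s\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$*" | tee -a "$LOGFILE"
}

log "pipeline started"
log "ERROR: example error message"
```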

Updated Results File

The pipeline needs more result files, in terms of both quality and quantity. These files should also be translated into English.

Updated Metadata File handling

The metadata.csv file is used to link experiment data with metadata for use by sophisticated algorithms developed at the IUF in Düsseldorf, Germany. This output does not yet work reliably and should be improved.

Parameter Reproducibility

With every run, a protocol file should be created detailing the parameters used. This helps reproducibility. The file should also be accepted as input for a new run, so the same parameters can be reused.
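A sketch of the round trip, assuming a simple KEY=VALUE protocol format (the file name protocol.txt and the parameter names are hypothetical):

```shell
#!/bin/sh
# Sketch: write the parameters of a run to a protocol file, then
# source that file to reuse the same parameters in a new run.
PROTOCOL=protocol.txt

# Record parameters as simple KEY=VALUE lines (names are examples).
cat > "$PROTOCOL" <<'EOF'
MIN_NUCLEUS_AREA=50
THREADS=4
EOF

# Feed the protocol of a previous run back in as input parameters.
. "./$PROTOCOL"
echo "MIN_NUCLEUS_AREA=$MIN_NUCLEUS_AREA"
echo "THREADS=$THREADS"
```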

Load Reduction

There should be an option to dynamically adjust the number of threads used for concurrent operations.
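This could look roughly like the following, defaulting to the number of available cores via nproc (GNU coreutils); the THREADS variable name is a hypothetical example:

```shell
#!/bin/sh
# Sketch: let the user override the thread count via an environment
# variable, falling back to the core count (or 1 if nproc is missing).
THREADS=${THREADS:-$(nproc 2>/dev/null || echo 1)}
echo "using $THREADS threads"
```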

Back-Integration with NeuronJ

NeuronJ can save manual neurite tracings to a file. This data structure must be reverse engineered so that the pipeline can output the calculated skeleton in the same format for use within NeuronJ.

Windows Support

This pipeline should also run on Windows.