This project presents an experimental approach to the challenge of data scarcity in a target task: it leverages existing annotated datasets from related NLP tasks. We train a single base model, such as BERT, with multiple heads, one per task, and optimize all of them simultaneously during training. We call these additional tasks "supporting tasks." The goal is to share knowledge across domains and improve the model's performance and robustness.
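As a rough illustration of the idea (a minimal sketch, not the exact code in `models/multiHeadModel.py`; the base model name, task names, and label counts below are placeholders), a shared encoder with one classification head per task could look like this:

```python
import torch.nn as nn
from transformers import AutoModel

class MultiHeadModel(nn.Module):
    """A shared BERT encoder with one lightweight classification head per task."""

    def __init__(self, task_num_labels, base_model_name="bert-base-uncased"):
        super().__init__()
        # Shared encoder reused by every task.
        self.encoder = AutoModel.from_pretrained(base_model_name)
        hidden_size = self.encoder.config.hidden_size
        # One head per task, e.g. {"main_task": 2, "supporting_task": 3}.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_size, num_labels)
            for task, num_labels in task_num_labels.items()
        })

    def forward(self, input_ids, attention_mask, task):
        # Encode once, then route the [CLS] representation to the head of `task`.
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]
        return self.heads[task](cls_repr)
```

Because every head shares the same encoder weights, the supporting tasks can act as regularizers for the representation learned for the data-scarce main task.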
Branches:
- Medical tasks can be found in the `main` branch.
- The GLUE (General Language Understanding Evaluation) tasks can be found in the `glue_tasks` branch.
Please note that this project is experimental, and the results may vary based on the specific task and datasets used. While the approach shows promise, it is essential to interpret the outcomes with caution. The aim of sharing this experiment is to encourage collaborative exploration and discussions on dealing with data scarcity in machine learning projects.
We welcome contributions and feedback from the community to further refine and improve this experimental approach. Together, let's explore innovative methods to overcome data limitations and advance the field of machine learning. 🌟
The multi-head model can be viewed in `models/multiHeadModel.py`.
The multi-head training loop can be viewed in `train.py`.
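As a hedged sketch of how joint training over several heads can be organized (the loader structure, batch keys, and summed-loss strategy are assumptions for illustration, not necessarily what `train.py` does):

```python
import torch.nn as nn

def train_one_epoch(model, task_loaders, optimizer, device):
    """Joint training: draw one batch per task, sum the per-head losses, then update."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    # task_loaders is assumed to be a dict {task_name: DataLoader}.
    for batches in zip(*task_loaders.values()):
        optimizer.zero_grad()
        total_loss = 0.0
        for task, batch in zip(task_loaders.keys(), batches):
            input_ids = batch["input_ids"].to(device)
            attention_mask = batch["attention_mask"].to(device)
            labels = batch["labels"].to(device)
            # Each task's batch goes through the shared encoder and its own head.
            logits = model(input_ids, attention_mask, task=task)
            total_loss = total_loss + criterion(logits, labels)
        # A single backward pass over the summed losses updates the shared encoder
        # with gradients from the main task and all supporting tasks at once.
        total_loss.backward()
        optimizer.step()
```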
Install the dependencies:
`pip install -r requirements.txt`
Run:
`python train.py --batch_size <batch size> --epochs <number of epochs> --device <device>`
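For example, an illustrative invocation (the values here are arbitrary placeholders, not tuned settings) might be `python train.py --batch_size 16 --epochs 3 --device cuda`.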
For the rest of the arguments, please see `train.py`.