This anonymous repo contains the data and some of the models finetuned and pretrained during our experiments.
- The Data_NER directory contains the data used for finetuning the models during our experiments.
- The data_sample_continual directory contains a sample of the data used for the continual pretraining of our model AdminBERT; the training file contains 30,000 text fragments and the test file 6,000 text fragments.
- The Models directory contains zip files of our PLM AdminBERT in its large and small versions, along with some of the finetuned models; we could not upload NERmemBERT-Large 3 entities and Wikineural-NER for storage reasons. A minimal loading sketch follows this list.
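
The snippet below is a minimal sketch of how an extracted AdminBERT archive could be loaded with the Hugging Face `transformers` library. It assumes the zip unpacks to a standard Hugging Face checkpoint directory; the path `Models/AdminBERT-base` and the example sentence are hypothetical and should be adjusted to your local setup.

```python
# Sketch only: assumes the AdminBERT zip unpacks to a standard
# Hugging Face checkpoint directory. "Models/AdminBERT-base" is a
# hypothetical extraction path -- adjust it to your local layout.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_dir = "Models/AdminBERT-base"  # hypothetical path after unzipping
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForMaskedLM.from_pretrained(model_dir)

# Quick sanity check: fill a masked token in an administrative-style sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"Le conseil municipal a adopté la {tokenizer.mask_token}."))
```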