Table 3 Fine-tuning hyperparameters of the tested models

From: Hybrid natural language processing tool for semantic annotation of medical texts in Spanish

| Model | B | Mx Ep | LR | Optim | Pat | Seed |
|---|---|---|---|---|---|---|
| Bi-LSTM-CRF (FLAIR) | 16 | 100 | 0.1 | SGD | 5 | Random |
| RoBERTa, EriBERTa, mBERT and mDeBERTa v3 | 16 | 20 | 2e-05 | Adam | 5 | {100, 200, 300, 400, 500} |
| CLIN-X-ES | 8 | 30 | 2e-05 | Adam | 5 | {100, 200, 300, 400, 500} |

  1. B: ‘batch’; Mx Ep: ‘maximum number of epochs’; LR: ‘fine-tune learning rate’; Optim: ‘optimizer’; Pat: ‘patience’
  2. SGD: ‘stochastic gradient descent’
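As a rough illustration, the settings in Table 3 could be expressed in code as plain configuration dictionaries, together with the patience-based early stopping that the Pat column implies. This is a hypothetical sketch: the names `CONFIGS` and `should_stop` and the stopping criterion (monitoring validation loss) are assumptions, not details given in the paper.

```python
# Hypothetical encoding of the Table 3 fine-tuning hyperparameters.
# All identifiers are illustrative; the paper does not specify a training loop.

CONFIGS = {
    "Bi-LSTM-CRF (FLAIR)": {
        "batch": 16, "max_epochs": 100, "lr": 0.1,
        "optimizer": "SGD", "patience": 5, "seed": "random",
    },
    "RoBERTa/EriBERTa/mBERT/mDeBERTa-v3": {
        "batch": 16, "max_epochs": 20, "lr": 2e-05,
        "optimizer": "Adam", "patience": 5,
        "seeds": [100, 200, 300, 400, 500],
    },
    "CLIN-X-ES": {
        "batch": 8, "max_epochs": 30, "lr": 2e-05,
        "optimizer": "Adam", "patience": 5,
        "seeds": [100, 200, 300, 400, 500],
    },
}

def should_stop(val_losses, patience=5):
    """Stop when the best validation loss has not improved for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best = min(val_losses)
    last_improvement = val_losses.index(best)  # epoch of the best loss so far
    return len(val_losses) - 1 - last_improvement >= patience
```

For example, with patience 5, a run whose validation loss last improved at epoch 1 and then worsened for five consecutive epochs would be stopped: `should_stop([0.5, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45])` returns `True`.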