
Table 1 LLMs and their context lengths

From: Optimizing biomedical information retrieval with a keyword frequency-driven prompt enhancement strategy

| Model | Context length | Max fit |
| --- | --- | --- |
| GPT-3 | 4K tokens ≈ 16K chars | 8 chunks (7 context + 1 answer*) |
| GPT-4 | 8K tokens ≈ 32K chars | 16 chunks (15 context + 1 answer*) |

\* Assuming the generated answer length is ≤ 2000 characters
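The "Max fit" column follows from simple budget arithmetic. A minimal sketch, assuming (as the table implies) 1 token ≈ 4 characters, 2000-character chunks, and one chunk-sized slot reserved for the generated answer:

```python
# Chunk-budget arithmetic behind Table 1.
# Assumptions (not stated explicitly in the source): 1 token ~ 4 characters,
# each chunk is 2000 characters, and one chunk-sized slot is reserved for
# the generated answer.
CHARS_PER_TOKEN = 4
CHUNK_CHARS = 2000

def max_fit(context_tokens: int) -> tuple[int, int]:
    """Return (total chunks, context chunks) for a given context window."""
    total_chunks = (context_tokens * CHARS_PER_TOKEN) // CHUNK_CHARS
    return total_chunks, total_chunks - 1  # one slot kept for the answer

# GPT-3: 4K tokens -> 16K chars -> 8 chunks (7 context + 1 answer)
print(max_fit(4000))   # (8, 7)
# GPT-4: 8K tokens -> 32K chars -> 16 chunks (15 context + 1 answer)
print(max_fit(8000))   # (16, 15)
```

The same arithmetic also holds for the power-of-two window sizes (4096 and 8192 tokens), since the extra characters are less than one chunk.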