Why is it taken for granted that LLM models will keep improving?

admin
Aug 24, 2024 03:55 PM
It is commonly assumed that Large Language Models (LLMs) will keep improving because empirical scaling laws show smooth power-law relationships: test loss falls predictably as parameters, training data, and compute grow. No hard ceiling has appeared yet, and continuing innovations in data collection, compute, and algorithms suggest further gains. Historical engineering trends, however, caution against extrapolating any power law indefinitely.
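To make the power-law claim concrete, here is a minimal sketch of a Chinchilla-style scaling curve, loss(N) = E + A / N^alpha, where N is the parameter count. The constants below are illustrative placeholders, not values fitted to any real model:

```python
# Hypothetical power-law scaling curve in the style of published
# scaling-law fits: irreducible loss E plus a term that shrinks as a
# power law in parameter count N. Constants are illustrative only.
E, A, ALPHA = 1.7, 400.0, 0.34

def predicted_loss(n_params: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters."""
    return E + A / n_params ** ALPHA

# Each 10x increase in parameters yields a smaller absolute improvement,
# and the curve flattens toward the irreducible loss E.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

Note how the curve never crosses E: under this functional form, scaling alone buys diminishing returns, which is one reason extrapolation beyond the fitted regime is risky.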
