Big LLMs weights and their evolution

The discussion around 'Big LLMs' takes a nuanced view of the current generation of large language models, distinguishing between models of different sizes and humorously proposing coffee-cup labels like 'Tall', 'Grande', and 'Venti' as size categories. Users reminisce about the simpler capabilities of earlier models such as text-davinci, which could generate directly usable links and return structured outputs. The commentary centers on treating LLM weights as 'artificial memory' rather than pure intelligence. An analogy to Vannevar Bush's Memex frames the goal as efficient information retrieval modeled on how human memory works, and calls for better memory mechanisms in machine learning systems.