#4: The mind as an LLM
"There is no limit on how much we can improve what we start with. There is no limit on better."- Kevin Kelly
Here's an interesting quote from AssemblyAI's blog on Emergent Abilities of Large Language Models:
While the fact that LLMs gain these abilities as they scale is remarkable, it is the manner in which they appear that is especially interesting. In particular, many abilities of Large Language Models appear to be emergent. That is, as LLMs grow in size, they increase from near-zero performance to sometimes state-of-the-art performance at incredibly rapid paces and at unpredictable scales.
By analogy, consider a growing child who is unable to draw coherent pictures. As he grows, his brain smoothly increases in size and his fine motor skills smoothly improve; however, upon hitting a certain critical age there is a discontinuous “jump” in his ability to draw. This jump renders the child suddenly able to draw incredible portraits despite the fact that his fine motor skills show gradual improvement.
So just by reading a lot more data, the model (a neural net) experiences a discontinuous jump in ability at a certain (previously unknown) scale, even though there is a smooth increase in the amount of data read.
Also “the performance of Language Models on their training objective steadily improves with scale” and keeps on improving even after several orders of magnitude of data have been fed to the model. Does that imply that the returns from increasing knowledge accumulation do not decrease?
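The two observations above - steady improvement on the training objective, yet abrupt jumps on downstream tasks - can coexist. Here is a toy numerical sketch (entirely made-up functions, not real LLM measurements): loss declines smoothly as a power law of scale, while a task that requires several sub-skills to come together at once behaves like a sharp threshold.

```python
# Toy illustration of emergence (invented numbers, for intuition only):
# training loss improves smoothly with scale, but accuracy on a
# composite task jumps from near zero to near one past a threshold.
import math

def training_loss(scale):
    # smooth power-law decline in loss (illustrative exponent)
    return 4.0 * scale ** -0.1

def task_accuracy(scale):
    # a task needing many sub-skills at once looks like a sharp
    # sigmoid in log-scale: near-zero, then a sudden jump (~1e9 here)
    capability = math.log10(scale)
    return 1 / (1 + math.exp(-8 * (capability - 9)))

for scale in [1e6, 1e7, 1e8, 1e9, 1e10, 1e11]:
    print(f"{scale:8.0e}  loss={training_loss(scale):.2f}  "
          f"acc={task_accuracy(scale):.3f}")
```

The loss column shrinks gradually at every row, while the accuracy column sits near 0.000 and then flips to near 1.000 within a single order of magnitude - the "unpredictable scale" the quote describes.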
Do human brains behave the same way? Will your ability to perform a task suddenly rise exponentially just by gaining a lot more knowledge? How would you test this objectively?
For LLMs, scaling alone improves next-token prediction. Better training datasets give more useful results. Incisive prompts give better outputs. Chain-of-thought reasoning - working through a problem step-by-step - avoids common errors.
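To make "next-token prediction" concrete, here is a minimal bigram sketch (my own toy model, nothing like a real LLM): it counts which word follows which, then greedily predicts the most frequent follower. Both "more data" and "better data" show up directly as better-estimated counts.

```python
# Toy next-token predictor: a bigram count model.
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedy next-token prediction: return the most frequent follower."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran on the grass")
print(predict_next(model, "the"))  # "cat" - seen twice, vs once each for "mat"/"grass"
```

Real LLMs replace the counting with a neural net over subword tokens, but the objective is the same: given everything so far, predict what comes next.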
Since both speaking and writing are forms of next-token prediction, the guidance for a better (useful? productive? fun?) brain seems obvious:
Read & learn (by doing) as much as possible, continuously
Discard low-quality reading, learning, and conversation; discard challenges/tests you’ve already completed; discard anything below 5 stars
Multimodal inputs - there’s too much emphasis on learning through language, at the cost of underutilising other modes. The ears, nose, and skin collect sensory input 24x7, unlike the eyes, which get no external input during sleep. Tyler Cowen exemplifies multimodal learning -
reading immensely
listening to lots of different kinds of music (especially classical)
appreciating art and architecture
traveling and varied food (sense of smell!)
There’s a 5th sense - ‘touch’ - which perhaps could be better cultivated through crafting physical objects and playing musical instruments.
Every interpersonal interaction is a prompt - pick well, all the more because you cannot cancel a generation once it starts, and every prompt leads to more prompts.
Work through problems step by step. Do the next thing, then the next, then the next.
Alignment training - listen repeatedly to people who live the values you admire
RLHF - record thoughts, review and rate them periodically
Occasionally, go on scheduled maintenance.

I’m reading Shop Class as Soulcraft and wondering if the lack of focus in the current era has to do with the disappearance of manual work - knowledge work can happen at less than 100% focus, but being distracted while working with tools, machines, or electrical equipment can be injurious. There’s no scope to think about the Lewis-Max rivalry when operating a machine saw.
Quote in subtitle is from Kevin Kelly’s marvellous book Excellent Advice for Living.