
MIT researchers have proposed a mechanistic explanation for why large language model performance scales so reliably with model size. The answer comes down to a phenomenon called superposition.
