Researchers at Google have released a new language model called 123B. The model is trained on a dataset of remarkable size, consisting of textual data drawn from a wide range of sources. The aim of this research is to explore what happens when language models are scaled to very large sizes and to demonstrate the benefits that can arise from such an approach.