
Training Small Language Models

by Morgan Stevens

Researchers at Microsoft have created a dataset of short stories to better train small language models, which require less computing power and fewer resources than larger models. The team used GPT-3.5 and GPT-4 to generate short, simple stories that use only words a three- to four-year-old child could understand. They then used the dataset to train small language models to produce stories comparable to those produced by larger models.
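
To make the generation step concrete, here is a minimal sketch of how synthetic stories like these could be requested from a large model, assuming access to the OpenAI chat completions API. The prompt wording and parameters are illustrative assumptions, not the researchers' actual setup.

```python
# Sketch of the data-generation step: ask a large model for a short story
# restricted to words a young child would understand. The prompt text and
# parameters are illustrative, not those used by the Microsoft team.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write a short story of three to five paragraphs using only simple "
    "words that a three- to four-year-old child would understand. The "
    "story should have a clear beginning, middle, and end."
)

def generate_story(model: str = "gpt-4") -> str:
    """Request one synthetic short story from the chat completions API."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # some randomness so repeated calls vary
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Generate a few samples; building a full training dataset would mean
    # looping many times and varying the prompt to keep stories diverse.
    for _ in range(3):
        print(generate_story(), end="\n\n---\n\n")
```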

Get the data.

Image credit: Flickr user David Masters
