Tina: Tiny Reasoning Models via LoRA

Shangshang Wang, Julian Asilis, Ömer Faruk Akgül, Enes Burak Bilgin, Ollie Liu, Willie Neiswanger

2025-04-24

Summary

This paper introduces Tina, a family of very small and efficient AI models that solve reasoning problems almost as well as much bigger and more expensive models, thanks to a parameter-efficient training method called LoRA.

What's the problem?

The problem is that most AI models that are good at reasoning require a lot of computer power and money to train, which makes it hard for smaller labs or individuals to use or improve them.

What's the solution?

The researchers started with a small language model and applied LoRA, a technique that freezes the original model weights and trains only a pair of small added matrices, in combination with reinforcement learning. This let them quickly and cheaply teach the model to reason much better, without retraining the whole model from scratch.
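To see why this is so cheap, here is a minimal NumPy sketch of the LoRA idea (illustrative only, not the authors' code): the large pretrained weight matrix `W` stays frozen, and only two small matrices `A` and `B` are trained, whose product forms a low-rank update.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4            # r << d: the low-rank bottleneck
alpha = 8                             # LoRA scaling factor

W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection (zero init)

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but the full-rank
    # update is never materialized; only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size           # what full fine-tuning would update
lora_params = A.size + B.size  # what LoRA updates
print(f"full fine-tune: {full_params} params, LoRA: {lora_params} params")
```

With these example sizes, LoRA trains 512 parameters instead of 4,096, and the gap grows much larger at real model scale, which is why the reinforcement-learning updates become affordable on modest hardware.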

Why it matters?

This matters because it shows that strong reasoning skills in AI don't have to come from huge, expensive models. With the right approach, even tiny models can perform really well, making advanced AI tools more accessible and affordable for everyone.

Abstract

Tina, a family of tiny reasoning models, achieves high reasoning performance at minimal computational cost through LoRA during reinforcement learning.