Budget Breakthrough: Affordable AI Reasoning Model Takes the Stage

USA | Thu Feb 06 2025
A world where cutting-edge AI models aren't reserved for big tech companies is now a reality. Researchers at Stanford and the University of Washington have developed an AI reasoning model, called S1, for less than $50 in cloud compute credits. It performs comparably to reasoning models from big names in the industry, such as OpenAI and DeepSeek, and is available on GitHub.

The process was straightforward. The team started with an off-the-shelf base model, a small model from Alibaba's Qwen lab, and used distillation, a method in which one model learns from another model's answers. The teacher in this case was Google's reasoning model, Gemini 2.0 Flash Thinking Experimental; the same approach has been used before to create other reasoning models.

The researchers curated a dataset of 1,000 carefully selected questions and trained the model on it. Training took less than 30 minutes on 16 Nvidia H100 GPUs, at a cost of around $20, and the resulting model performs well on AI benchmarks. To improve it further, the researchers used a simple trick: they told the model to wait before answering, which led it to more accurate answers by extending its reasoning.

There's a lot at stake here. The big AI companies, which have invested millions in developing their models, aren't happy. OpenAI has accused DeepSeek of using its data to create a competing model, a serious accusation in the AI world.

The researchers behind S1 wanted to find the simplest way to achieve strong reasoning performance, and to get a model to think more before answering a question. The S1 paper suggests that reasoning models can be distilled from a relatively small dataset through supervised fine-tuning (SFT), a process far cheaper than large-scale reinforcement learning.

The major AI companies still plan to invest hundreds of billions of dollars in AI infrastructure to train next-generation models, and that level of investment may be necessary to push the envelope of AI innovation. Distillation, however, has proven to be a good method for cheaply re-creating an existing AI model's capabilities, though it doesn't produce new models vastly better than what's available today.
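To make the distillation step concrete, here is a minimal sketch of supervised fine-tuning in Python, assuming the Hugging Face transformers library. The model name, the tiny example dataset, and the hyperparameters are illustrative placeholders rather than the S1 team's actual setup; in the real training run, roughly 1,000 curated questions were paired with reasoning traces from the teacher model.

```python
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "Qwen/Qwen2.5-0.5B"  # placeholder: any small off-the-shelf base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each example pairs a question with the teacher model's reasoning trace and
# final answer; this is the "distilled" supervision signal.
examples = [
    {"question": "What is 17 * 24?",
     "teacher_trace": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408. Answer: 408."},
    # ... roughly 1,000 curated examples in the real setup
]

class DistillDataset(Dataset):
    def __init__(self, rows, tok, max_len=512):
        self.rows, self.tok, self.max_len = rows, tok, max_len

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        text = f"Question: {row['question']}\nAnswer: {row['teacher_trace']}"
        enc = self.tok(text, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": input_ids, "attention_mask": attention_mask,
                "labels": labels}

args = TrainingArguments(output_dir="s1-distilled", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=1e-5,
                         logging_steps=10)
Trainer(model=model, args=args,
        train_dataset=DistillDataset(examples, tokenizer)).train()
```

The "wait" trick can likewise be sketched as a small test-time loop: whenever the model tries to wrap up its reasoning, the word "Wait" is appended and generation continues, giving it more steps to catch its own mistakes. The `generate` callable, prompt format, and stop strings below are hypothetical stand-ins, not the S1 implementation.

```python
def reason_with_waits(generate, question, num_waits=2):
    """Force extra reasoning by appending "Wait" whenever the model tries to stop.

    `generate(prompt, stop)` is assumed to return the model's continuation of
    `prompt`, halting when it emits the `stop` string.
    """
    prompt = f"Question: {question}\nThinking:"
    trace = generate(prompt, stop="Final answer:")
    for _ in range(num_waits):
        # Each time the model winds down, nudge it to keep thinking.
        prompt = prompt + trace + " Wait,"
        trace = generate(prompt, stop="Final answer:")
    # Only then let the model commit to an answer.
    return generate(prompt + trace + "\nFinal answer:", stop=None)
```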
https://localnews.ai/article/budget-breakthrough-affordable-ai-reasoning-model-takes-the-stage-d94d8b46
