Smarter AI, Less Memory: Liquid's Groundbreaking Debut

Massachusetts Institute of Technology (MIT), Cambridge, USA | Tue Oct 01 2024
Liquid AI, a startup co-founded by former researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), has made a significant breakthrough in artificial intelligence. Rather than relying on the transformer architecture, Liquid has developed a new class of AI models, called Liquid Foundation Models (LFMs), designed to be more efficient and adaptable. The first LFMs, available in three sizes, have already demonstrated performance superior to transformer-based models of comparable size. The smallest, LFM-1.3B, has outperformed Meta's Llama 3.2-1.2B and Microsoft's Phi-1.5 on many leading third-party benchmarks, including the Massive Multitask Language Understanding (MMLU) test.

What sets Liquid's models apart is their ability to process sequential data, including video, audio, text, time series, and signals, while using significantly less memory than traditional transformer-based models. This makes them well suited for deployment on edge devices, such as smartphones and smart home devices, where memory is limited.

Liquid's approach to building post-transformer AI models rests on a blend of "computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra." This has allowed the company to develop models that are not only more efficient but also more adaptable, adjusting in real time without requiring additional computational power.
https://localnews.ai/article/smarter-ai-less-memory-liquids-groundbreaking-debut-6924ea6d

questions

    Do the authors of the article have a vested interest in promoting the new models?
    Can the new models truly outperform transformer-based models, or is this just marketing hype?
    Do the newly introduced Liquid Foundation Models rely solely on transformer architecture?

actions