
One Model to Rule Them All? The Dilemma of Super-Sized Language Models

Max Lu

Feb 29, 2024

In the ever-evolving world of artificial intelligence and natural language processing, we have witnessed an unprecedented surge in the size and complexity of language models. Large Language Models (LLMs) have become the darlings of tech giants like Microsoft, Google, and others, with the race to build the biggest and most versatile model taking center stage. But is the pursuit of an all-encompassing "one model to rule them all" really the right approach?


The Gigantic Leap: LLMs Are Getting Larger and Larger

The race for the largest language model has led to astounding breakthroughs. Models like GPT-3 have demonstrated remarkable capabilities in a wide range of applications, from text generation to translation and summarization. It's no surprise that big tech companies are eager to harness the power of these super-sized models. It’s not hard to envision a future where a single model can handle everything from chatbots to content creation, transcending language barriers and simplifying AI development.

However, the pursuit of super-sized LLMs is not without its issues. These colossal models come with a plethora of challenges, including:

  • Resource Intensiveness: Training and deploying such large models require substantial computational power and resources. This can be a significant barrier for smaller organizations and startups.

  • Diminishing Returns: As models grow larger, the gains in performance become increasingly marginal. The Law of Diminishing Returns suggests that, beyond a certain point, increasing model size does not necessarily translate to proportional improvements in accuracy or capability.

  • Lack of Determinism: Query one of these models several times with the same input and you may get different responses. Because decoding is probabilistic (the model samples from a distribution over possible next tokens) and the models are trained on vast, evolving datasets, outputs are highly sensitive to subtle changes in the prompt or the model's underlying parameters, and can vary even when nothing changes at all (see the short sketch after this list). This lack of determinism can be a significant hurdle when consistency and reliability are required. Of course, some variability is inherent to generative models, so this is an issue that needs to be tackled separately (perhaps complemented by more traditional AI / ML methods).

  • Ethical Concerns: The sheer size of these models raises ethical concerns about environmental impact and energy consumption. The environmental footprint of training and running these models at scale is a growing issue in the AI community.
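
To make the determinism point concrete, here is a minimal, self-contained sketch of why sampled decoding can return different outputs for the same input while greedy decoding does not. The token list and logits below are invented purely for illustration; a real LLM scores tens of thousands of candidate tokens at every step, but the sampling mechanics are the same.

```python
import numpy as np

# Toy next-token candidates and logits (made up for illustration only).
tokens = ["print", "return", "pass"]
logits = np.array([2.0, 1.5, 0.3])

def sample_next_token(temperature, rng):
    """Sample one token from a temperature-scaled softmax distribution."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return tokens[rng.choice(len(tokens), p=probs)]

rng = np.random.default_rng()

# With temperature > 0, repeated runs on the *same* input can differ.
print([sample_next_token(temperature=1.0, rng=rng) for _ in range(5)])

# Greedy decoding (the temperature -> 0 limit) always picks the top-scoring
# token, which restores determinism at the cost of variety.
print(tokens[int(np.argmax(logits))])
```

Turning the temperature down buys back consistency, but not necessarily correctness, which is why the problem often has to be addressed through the model and its training data rather than decoding settings alone.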



Task-Specific LLMs

The pursuit of super-sized LLMs raises questions about the effectiveness of one-size-fits-all models. There's a compelling argument for moving towards more specialized, task-specific LLMs. These smaller models can be fine-tuned for a particular domain or function, resulting in several advantages:

  • Potentially Higher Accuracy: Task-specific LLMs are tailored to excel in their designated domains. With the right fine-tuning, they should be able to produce better and more consistent results. 

  • Resource Efficiency: Smaller models are more resource-efficient and cost-effective, making them accessible to a wider range of organizations.

  • Faster Inference: For a well-scoped task, a smaller fine-tuned model is often sufficient in place of a large, generic one. The reduced model complexity translates to faster inference times, making these models more suitable for real-time applications.

Here’s a real-world analogy: Apple’s Siri assistant and Comcast's voice-control remote. Siri comes preinstalled on every iPhone and was meant to be a general-purpose virtual assistant, capable of answering a wide range of questions but sometimes falling short; getting Siri to say random things became an internet trend.

Comcast's remote, on the other hand, was designed specifically for controlling the television, so it excels at that task. While Siri might provide answers to general knowledge questions, the Comcast remote is optimized for the video-viewing scenario, prioritizing accuracy and user satisfaction for commands such as “Play/Pause video”.

Another analogy is the debate between monolithic codebases and microservices in software architecture. A monolith offers a certain simplicity in development and deployment, but it can be challenging to scale and maintain, especially across widely different products. Microservices, on the other hand, break the application into smaller, specialized components, offering more agility and scalability.

Even in our daily lives, consider the example of a language translation task. Imagine a user wants to translate a specific phrase from English to French. They can ask either a general-purpose language model, similar to Siri, or a tool designed specifically for translation, akin to Google Translate. More often than not, users go directly to Google Translate because it was built to excel at language translation and is likely to consistently provide accurate, reliable translations for the same input.




The Quest for Determinism and Efficiency

In the real world, many tasks require models to be deterministic, accurate, and efficient in both cost and time. Achieving all three is a difficult challenge, particularly for generative AI models, and the trade-off between generality and specificity must be carefully navigated. Much of the value may come from fine-tuning on relevant, carefully curated data, which in turn may determine the shape and form of the underlying models.
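
As a rough illustration of what such fine-tuning can look like (a generic sketch, not any particular vendor's pipeline), the snippet below fine-tunes a small open-source causal language model on a curated, domain-specific dataset using the Hugging Face transformers and datasets libraries. The base model name and the data file are placeholders, and the data is assumed to be JSON lines with a "text" field.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"                    # placeholder: any small base model
train_file = "domain_examples.jsonl"   # placeholder: curated, task-specific text

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Curated, highly relevant data is where most of the value comes from.
dataset = load_dataset("json", data_files=train_file, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-small-model",
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    # mlm=False configures standard causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

Most of the real work sits in assembling the training file: a smaller model fine-tuned on well-curated, task-relevant text is often what delivers the consistency and speed discussed above.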

At CodeComplete, we both offer a (reasonably sized) general-purpose model and work closely with our customers to scope out and train specific models for individual products, languages, and use cases. What matters most is the knowledge base upon which these models are fine-tuned. By striking this balance between generality and specificity, we can deliver a more effective and efficient AI-powered coding experience.

The pursuit of "one model to rule them all" is an ambitious goal, but it's important to consider the trade-offs and challenges it presents. Smaller, task-specific LLMs offer a pragmatic approach to achieving greater accuracy and efficiency. The future of AI may lie in finding the right mix of general knowledge and task-specific expertise.

