The LLM Triad: Tune, Prompt, Reward

As language models become increasingly common, it is crucial to employ a broad set of strategies and tools to fully unlock their potential. Foremost among these is prompt engineering: the careful selection and arrangement of words in a prompt or query to guide the model toward the desired response. If you’ve tried to coax a particular output from ChatGPT or Stable Diffusion, you’re already one step closer to becoming a proficient prompt engineer.
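To make the idea concrete, here is a minimal sketch of one common prompt-engineering pattern, few-shot prompting. The task, examples, and template below are invented for illustration; the assembled string would be sent to whatever model API a team uses.

```python
# Illustrative sketch of few-shot prompting: an instruction, a few worked
# examples, then the real query. All content here is hypothetical.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[("Loved every minute of it.", "positive"),
              ("A complete waste of time.", "negative")],
    query="Surprisingly good for a sequel.",
)
print(prompt)
```

The worked examples show the model the exact output format to imitate, which is often the difference between a usable and an unusable response.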

At the other end of the tuning spectrum lies Reinforcement Learning from Human Feedback (RLHF), an approach that proves most effective when a model must handle a wide range of inputs and demands the utmost accuracy. RLHF is widely used in fine-tuning the general-purpose models that power ChatGPT, Google’s Bard, Anthropic’s Claude, and DeepMind’s Sparrow.

Strategies to help you get the most out of foundation models.

For most teams, the best option is to take an established model and hone it to fit a particular task or dataset. The process begins with a large language model (LLM) that has been trained on a vast corpus of text. While many LLMs remain proprietary and accessible only through APIs, the emergence of open-source datasets, academic papers, and even model code allows teams to refine these resources for their specific domains and applications.
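The "start from a pre-trained model, then adapt it" idea can be sketched with a deliberately tiny stand-in. Real fine-tuning updates neural-network weights by gradient descent on domain data; here a bigram count model and both corpora are invented for illustration, but the shape of the process is the same: inherit broad knowledge, then continue training on domain text.

```python
# Toy illustration of fine-tuning: "pre-train" a bigram model on a broad
# corpus, then continue training it on a small domain corpus. The corpora
# are invented; real fine-tuning adjusts network weights, not counts.
from collections import Counter

def bigram_counts(text: str) -> Counter:
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

# "Pre-training" on a general corpus.
general = "the cat sat on the mat . the dog sat on the rug ."
model = bigram_counts(general)

# "Fine-tuning": keep the base counts, add domain-specific ones on top.
domain = "the model sat in the data center . the model learned fast ."
model.update(bigram_counts(domain))

def next_word(model: Counter, word: str) -> str:
    """Predict the most likely continuation of `word` under the model."""
    candidates = {b: c for (a, b), c in model.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else ""

print(next_word(model, "the"))  # after fine-tuning: "model"
```

Before the update, the model knows only general continuations of "the"; after seeing the domain corpus, the domain-specific continuation dominates, which is exactly the behavior shift fine-tuning aims for.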

Another intriguing trend is the emergence of more manageable foundation models, such as LLaMA and Chinchilla, which open up possibilities for more mid-sized models in the future. Selecting the right model to fine-tune requires teams not only to consider the quantity of domain-specific data available but also to check that the model’s (open-source) license is compatible with their requirements.

A simple playbook for fine-tuning foundation models.

As our understanding of the practical applications of foundation models expands, bespoke tools are emerging to refine these models prior to their deployment. A growing set of tools and resources now supports fine-tuning and customizing language models.

Although RLHF has gained traction among teams building cutting-edge language models, its accessibility remains limited by a lack of available tools. Furthermore, RLHF requires developing a reward function that is vulnerable to misalignment and other issues, and it remains a specialized technique that only a few teams have mastered. Prompt engineering, while useful, falls short of producing a reliable foundation model optimized for specific tasks and domains. And although some teams may choose to build their own models from scratch, the cost of training from scratch means they are unlikely to do so often. The trend, therefore, leans toward fine-tuning pre-trained models.
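The reward-function step mentioned above can be sketched in miniature. In real RLHF, a reward model is a trained network that scores whole responses, and human labelers produce the (chosen, rejected) preference pairs used to train it; here a hypothetical hand-written heuristic stands in for the learned model purely to show the data flow, and its brittleness hints at why misalignment is a concern.

```python
# Sketch of the RLHF preference step, with a hand-written stand-in for a
# learned reward model. The heuristic and example responses are invented.

def reward(response: str) -> float:
    """Toy stand-in for a reward model: prefer polite, concise text."""
    score = 0.0
    if "please" in response.lower():
        score += 1.0
    score -= 0.01 * len(response.split())  # mild length penalty
    return score

def preference_pair(resp_a: str, resp_b: str) -> tuple[str, str]:
    """Order two candidate responses as (chosen, rejected)."""
    return (resp_a, resp_b) if reward(resp_a) >= reward(resp_b) else (resp_b, resp_a)

chosen, rejected = preference_pair(
    "Please find the report attached.",
    "Here. Figure it out yourself, I have better things to do.",
)
```

A policy optimized against such a scorer will exploit whatever the scorer actually measures (here, the literal word "please"), not what its designers intended, which is the misalignment risk in a nutshell.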

Ultimately, teams need simple and versatile tools that let them apply various techniques to create custom models. Although fine-tuning can produce strong models, further adjustment with RLHF may be needed before deployment. For instance, a recent study by Anthropic indicates that prompting methods may help LLMs trained with RLHF produce less harmful outputs.

Fine-tuning has some advantages over prompt engineering or training from scratch.

If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter.
