As language models become increasingly common, teams need a broad set of strategies and tools to fully unlock their potential. Foremost among these is prompt engineering: the careful selection and arrangement of the words in a prompt or query to guide the model toward the desired response. If you’ve tried to coax a particular output from ChatGPT or Stable Diffusion, you’re already one step closer to becoming a proficient prompt engineer.
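To make the idea concrete, here is a minimal sketch of one common prompt-engineering pattern, the few-shot template. The task, instruction, and example reviews are invented for illustration; the assembled string could be sent to any completion or chat API.

```python
# A hypothetical few-shot prompt for sentiment classification.
# The instruction, examples, and labels are invented for illustration;
# the same pattern works with any chat or completion API.

EXAMPLES = [
    ("The battery lasts two full days on a single charge.", "positive"),
    ("The screen cracked within a week of normal use.", "negative"),
]

def build_prompt(review: str) -> str:
    """Assemble an instruction, a few worked examples, and the new input."""
    lines = ["Classify each review as positive or negative. Answer with one word.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

# The assembled string is what you would send as the model's input.
print(build_prompt("Setup took five minutes and everything just worked."))
```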
At the other end of the tuning spectrum lies Reinforcement Learning from Human Feedback (RLHF), an approach best suited to models that must handle a wide range of inputs with a high degree of accuracy. RLHF is widely used to fine-tune the general-purpose models behind ChatGPT, Google’s Bard, Anthropic’s Claude, and DeepMind’s Sparrow.
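Under the hood, RLHF starts by training a reward model on human preference comparisons and then optimizes the language model against that reward. The sketch below covers only the pairwise reward-modelling step, and it substitutes random placeholder feature vectors and a small PyTorch MLP for a real LLM backbone.

```python
# Minimal sketch of the reward-modelling step at the heart of RLHF.
# Real systems score (prompt, response) pairs with a fine-tuned LLM backbone;
# here a small MLP over placeholder feature vectors stands in for that model.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Placeholder features for responses a human labeller preferred vs. rejected.
chosen = torch.randn(8, 768)
rejected = torch.randn(8, 768)

# Bradley-Terry style pairwise loss: push the preferred response's score
# above the rejected one's. The trained reward model later provides the
# signal that a policy-gradient step (e.g. PPO) optimizes the LLM against.
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()
loss.backward()
optimizer.step()
```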

For most teams, the best option is to take an established model and hone it to fit a particular task or dataset. The process begins with a large language model (LLM) that has been trained on a vast corpus of text data. While many LLMs are presently proprietary and accessible only through APIs, the emergence of open-source datasets, academic papers, and even model code allows teams to refine these resources for their specific domains and applications.
Another intriguing trend is the emergence of more manageable foundation models, such as LLaMA and Chinchilla, which open up possibilities for mid-sized models in the future. Selecting the appropriate model to fine-tune requires teams not only to consider the quantity of domain-specific data available but also to assess whether the model’s (open-source) license is compatible with their specific requirements.

As our understanding of the practical applications of foundation models expands, bespoke tools are emerging to refine these models prior to their deployment. Here are some tools and resources for fine-tuning and customizing language models:
- Hugging Face tutorial for fine-tuning (the company also recently released tools for tuning LLMs with RLHF).
- OpenAI guide to fine-tuning
- co:here guide for how to create a custom model
- AI21 Labs tutorial for creating custom models
- A new set of tools will allow for fine-tuning without writing code. There are already instances of no-code tools available that enable the creation of custom models in NLP (John Snow Labs) and computer vision (Matroid). I anticipate that comparable tools will be developed to refine foundation models.
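As a rough illustration of what the Hugging Face route above looks like in code, here is a minimal fine-tuning sketch built on the transformers Trainer API. The base checkpoint (distilgpt2), the two-line stand-in corpus, and the hyperparameters are all placeholders to be swapped for your own model and domain data.

```python
# A minimal sketch of domain fine-tuning with the Hugging Face Trainer.
# "distilgpt2" and the example texts are placeholders; substitute your own
# base checkpoint and domain corpus. Hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # placeholder open-source checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stand-in domain corpus; in practice this is your proprietary text data.
corpus = Dataset.from_dict({"text": [
    "Q: What does clause 4.2 cover? A: Limitation of liability.",
    "Q: When does the warranty expire? A: Twelve months after delivery.",
]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-model")
```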
Although RLHF has gained traction among teams building cutting-edge language models, its accessibility remains limited by the lack of available tools. Furthermore, RLHF requires the development of a reward function that is vulnerable to misalignment and other issues, and it remains a specialized technique that only a few teams have mastered. Prompt engineering, while useful, falls short of producing a reliable foundation model optimized for specific tasks and domains. And while some teams may choose to build their own models from scratch, the cost of doing so means they are unlikely to do it often. The trend, therefore, leans toward fine-tuning pre-trained models.
Ultimately, teams need simple and versatile tools that let them apply a mix of these techniques to create custom models. Although fine-tuning can produce strong models, further adjustment with RLHF may be necessary before deploying them. For instance, a recent study by Anthropic indicates that prompting methods may help LLMs trained with RLHF produce less harmful outputs.

If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter: