Exploring the Boundaries of LLMs: Learning from Versatile Developer Tools

To facilitate enterprise adoption of LLMs, we need to make it easy for teams to experiment and build simple apps that use them. A new crop of developer tools, equipped with features designed to smooth over the nuances of working with LLMs, is fast becoming indispensable. These tools offer the flexibility to switch between different LLMs with minimal code changes, providing much-needed agility in development. They support a wide array of programming languages and integrate cleanly with other tools and data sources, broadening the scope of what can be achieved with LLMs.
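To make "switch LLMs with minimal code changes" concrete, here is a minimal plain-Python sketch of the pattern these tools rely on: application code depends on a small interface, and each provider is a thin adapter behind it. The `LLMClient`, `EchoModel`, and `summarize` names are illustrative inventions, not part of any particular framework; a real adapter would call a hosted or on-premise model API.

```python
from typing import Protocol


class LLMClient(Protocol):
    """Anything that can complete a prompt."""

    def complete(self, prompt: str) -> str: ...


class EchoModel:
    """Stand-in for a hosted model; a real client would call a provider API."""

    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


def summarize(text: str, model: LLMClient) -> str:
    # Application code depends only on the LLMClient interface, so
    # swapping providers touches a single constructor call, not this logic.
    return model.complete(f"Summarize: {text}")


print(summarize("quarterly report", EchoModel("model-a")))
print(summarize("quarterly report", EchoModel("model-b")))
```

Frameworks like LangChain and Semantic Kernel generalize exactly this kind of indirection across dozens of model providers.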

In addition to providing functional packages (or “skills”) to streamline common tasks, these tools incorporate capabilities such as call chaining for context maintenance, integration with external services and data, orchestration for decision-making, and prompt abstraction for intuitive interactions. They use semantic memory management to maintain coherence in generated text and can process multimodal data, including text, images, and audio, for richer applications.
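Of these capabilities, call chaining is the easiest to illustrate. A rough plain-Python sketch of the idea, with `make_chain` and the two toy steps as hypothetical stand-ins for prompt-backed LLM calls: each step receives the accumulated context and its output is appended to that context for the next step.

```python
# Minimal sketch of call chaining: every step's output is appended to a
# shared context, which is then fed to the next step in the chain.
def make_chain(steps):
    def run(user_input: str) -> str:
        context = [user_input]
        for step in steps:
            # A real step would send the accumulated context to an LLM;
            # here each step is just a function over the joined context.
            context.append(step(" | ".join(context)))
        return context[-1]
    return run


# Toy steps standing in for prompt-backed LLM calls.
extract = lambda ctx: f"keywords({ctx})"
answer = lambda ctx: f"answer({ctx})"

chain = make_chain([extract, answer])
result = chain("What is RAG?")
print(result)
```

The second step sees both the original question and the first step's output, which is the essence of context maintenance across chained calls.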

A prototypical example is LangChain – an open source project that facilitates rapid exploration and prototyping, functioning as an ideal platform for ideation and quick testing of concepts. Its versatility enables developers to switch between different LLMs, hosted either on-premise or through external cloud services, with minimal code changes. Furthermore, it integrates seamlessly with numerous existing tools and data sources, amplifying its usefulness and application breadth. LangChain’s undisputed value for prototyping and exploring the possibilities of LLMs underscores its popularity.

As best I can tell, LangChain is used mainly for exploration, and its use in production environments remains limited. More generally, we’re still in the early phases of building out AI developer tools, so you should be exploring and learning from a variety of related open source projects. Experimenting with a diverse set of frameworks like LangChain, LlamaIndex, Griptape, LLM, Aim, and others lets you learn the strengths and weaknesses of different approaches and architectures. This understanding is a valuable guide as you develop robust tools for safely and efficiently productionizing your LLM-enabled applications.

Which brings me to Semantic Kernel (SK), an open-source software development kit designed for developers looking to integrate AI & LLM services into their applications using programming languages like Python and C#. What I’ve found particularly enjoyable about SK in my LLM experiments is its simple programming model, which facilitates the seamless combination of AI models and plugins to craft new user experiences. Furthermore, its connectors allow easy addition of memories and models, and its ability to integrate AI plugins extends application capabilities.

Semantic Kernel enables function composition by letting users combine native and semantic functions into a single pipeline.
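A rough plain-Python sketch of that composition idea (not Semantic Kernel’s actual API): a “native” function is ordinary code, a “semantic” function wraps a prompt template plus a model call, and because both share a signature they slot into one pipeline. The function names and the `llm_output<…>` placeholder are illustrative assumptions.

```python
def to_uppercase(text: str) -> str:
    """A 'native' function: plain code, no model involved."""
    return text.upper()


def make_semantic_fn(template: str):
    """A 'semantic' function: a prompt template around a model call."""
    def fn(text: str) -> str:
        prompt = template.format(input=text)
        # Placeholder for an LLM invocation; a real kernel would call a model.
        return f"llm_output<{prompt}>"
    return fn


def pipeline(*functions):
    """Compose functions left to right into a single callable."""
    def run(text: str) -> str:
        for f in functions:
            text = f(text)
        return text
    return run


shout_then_summarize = pipeline(
    to_uppercase,
    make_semantic_fn("Summarize briefly: {input}"),
)
result = shout_then_summarize("mixing native and semantic steps")
print(result)
```

Because native and semantic steps are interchangeable, a pipeline can freely mix deterministic code with model-backed transformations, which is what makes this programming model feel simple.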

While its documentation is not yet as extensive as that of some other projects, Microsoft is actively working to expand it. A standout feature of SK is its proven use in real-world production environments, with deployments not only within Microsoft but also among large-scale Microsoft Azure customers. SK has already been used successfully for tasks such as chat over proprietary data and building AI agents that accomplish specific tasks. Another project I’ve taken a liking to is Griptape, which targets enterprise-grade applications. I’m enjoying using both Semantic Kernel and Griptape, and I highly recommend giving them a try.

Join us at the AI Conference in San Francisco (Sep 26-27) to network with the pioneers of LLM development and learn about the latest advances in Generative AI and ML.

If you enjoyed this post please support our work by encouraging your friends and colleagues to subscribe to our newsletter: