Knowledge graphs are not new, but I expect their usage to grow in the coming years as language models and AI applications gain traction. AI applications benefit from knowledge graphs because they provide a structured, interconnected representation of data: an explicit network of relationships between entities that helps algorithms understand the context of the data points they are working with.
The use of knowledge graphs enables AI systems to solve problems more effectively and more efficiently, without getting lost in a sea of data. For example, imagine trying to find information about a specific person. A knowledge graph can provide a quick overview of that person’s background, relationships, and relevant facts without having to search through countless pages of unorganized information. Knowledge graphs are a key component of digital assistants and search engines, and they contribute to a wide range of AI applications, including link prediction, entity relationship prediction, recommendation systems, and question answering systems.
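To make the person example concrete, here is a minimal sketch in Python using networkx. The entities and relations are invented for illustration; a production knowledge graph would typically live in a graph database with a richer schema.

```python
# A minimal sketch of the "person lookup" example above, using networkx.
# The entities and relationships are invented for illustration.
import networkx as nx

# Build a tiny knowledge graph: nodes are entities, edges carry a relation label.
kg = nx.MultiDiGraph()
kg.add_edge("Ada Lovelace", "Charles Babbage", relation="collaborated_with")
kg.add_edge("Ada Lovelace", "Analytical Engine", relation="wrote_program_for")
kg.add_edge("Ada Lovelace", "London", relation="born_in")

# One graph traversal answers "what do we know about this person?" --
# no full-text search over countless unorganized pages required.
person = "Ada Lovelace"
for _, neighbor, data in kg.out_edges(person, data=True):
    print(f"{person} --{data['relation']}--> {neighbor}")
```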
Knowledge graphs can also provide complementary, real-world factual information to augment the limited labeled data used to train a machine learning algorithm. Recent research and deployed industrial systems have shown promising results from machine learning algorithms that combine training data with a knowledge graph, and knowledge graphs are used to enrich input data for applications such as recommendation and community detection.
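As a rough sketch of what "combining training data with a knowledge graph" can look like in practice, the snippet below derives simple graph features (degree centrality and community membership) and appends them to an entity's existing feature vector. The graph, entities, and features are placeholders, not a specific system from the research mentioned above:

```python
# A hedged sketch: enrich limited labeled data with features derived from a
# graph. The graph here (karate club) stands in for a real knowledge graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.karate_club_graph()  # placeholder for an actual knowledge graph

# Graph-derived signals: how central an entity is, and which community it
# belongs to (community detection, as mentioned above).
centrality = nx.degree_centrality(g)
communities = greedy_modularity_communities(g)
community_of = {n: i for i, c in enumerate(communities) for n in c}

def graph_features(entity_id):
    """Graph-derived features to append to an entity's training row."""
    return [centrality[entity_id], community_of[entity_id]]

# Enrich an entity's existing (hypothetical) labeled features before training.
base_features = [0.7, 1.3]
enriched = base_features + graph_features(0)
print(enriched)
```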

Knowledge graphs have already garnered the attention of numerous healthcare and financial services firms, as well as enterprises with intricate supply chains. I expect that as language models and foundation models gain popularity, more companies will create and invest in knowledge graphs.
This is because more firms will opt to train their own large models. For many enterprises, especially those operating in heavily regulated industries, API access to public foundation models may prove insufficient. A company's level of technical investment, the amount of data it possesses, and its comfort with shipping data to an API or third party will all influence that decision. It is unlikely that a few GPT-like models will monopolize the market; organizational dynamics and considerations such as trust and cost will play a critical role.
While startups are developing tools to simplify the training of large models, there has been less investment in knowledge graph tooling that caters to AI teams constructing hybrid [neural / KG / retrieval-based] models. As my friend Paco Nathan has long lamented, “It would be nice to be able to train language models on connected data!” For your next project or venture, consider developing tools that streamline the creation, management, maintenance, and scaling of these graphs.
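One lightweight way to put connected data in front of a language model is to linearize knowledge graph triples into sentences that can join a fine-tuning corpus or a retrieval index. The triples and templates below are invented for illustration, not a prescribed pipeline:

```python
# A sketch of the "train language models on connected data" idea: turn
# knowledge-graph triples into natural-language sentences for a training
# corpus or retrieval index. Triples and templates are illustrative only.
triples = [
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Ada Lovelace", "wrote_program_for", "Analytical Engine"),
]

TEMPLATES = {
    "collaborated_with": "{s} collaborated with {o}.",
    "wrote_program_for": "{s} wrote a program for {o}.",
}

def linearize(subject, relation, obj):
    """Turn one (subject, relation, object) triple into a sentence."""
    template = TEMPLATES.get(relation, "{s} is related to {o}.")
    return template.format(s=subject, o=obj)

corpus = [linearize(*t) for t in triples]
print("\n".join(corpus))  # feed these into fine-tuning or a retrieval index
```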
If you enjoyed this post, please support our work by encouraging your friends and colleagues to subscribe to our newsletter: