Gauging the popularity of a new class of tools for machine learning.
Individuals and teams who build machine learning models need tools to keep track of and analyze the experiments they run. It can be very challenging to organize all the information needed to evaluate, identify, and reproduce specific experiments. In the course of building a single model, teams may need to try different model architectures, hyperparameters, and training/test data sets, and they may need to tweak the code or the computation environment they use.
Fortunately there are now several open source and commercial tools designed to help ML teams during the model development phase. Experiment tracking and experiment management tools log all relevant metadata and results, and most of these tools include collaboration and visualization features to make ML experiments easier to manage and analyze.
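To make concrete what "logging all relevant metadata and results" means, here is a minimal sketch of the kind of per-run record these tools maintain. The `log_experiment` helper and its local JSON store are hypothetical illustrations, not the API of any of the tools discussed below; real tools add collaboration, visualization, and artifact storage on top of records like this.

```python
import json
import time
import uuid
from pathlib import Path

def log_experiment(run_dir, params, metrics, tags=None):
    """Append one experiment run's metadata to a local JSON store.

    A toy stand-in for what experiment trackers record per run:
    hyperparameters, result metrics, and free-form tags.
    """
    run = {
        "run_id": uuid.uuid4().hex,
        "timestamp": time.time(),
        "params": params,    # e.g. learning rate, batch size
        "metrics": metrics,  # e.g. validation accuracy
        "tags": tags or {},  # e.g. git commit, dataset version
    }
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    with open(run_dir / f"{run['run_id']}.json", "w") as f:
        json.dump(run, f, indent=2)
    return run["run_id"]

# Record one run; a second run with different hyperparameters
# would get its own file, making runs easy to compare later.
rid = log_experiment(
    "runs",
    params={"lr": 0.01, "batch_size": 64},
    metrics={"val_accuracy": 0.91},
    tags={"git_commit": "abc123", "dataset": "v2"},
)
```

Tagging each run with the code version and dataset version is what makes a specific experiment reproducible later.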
The purpose of this post is to compare several popular experiment tracking and management solutions using an index that measures popularity. Beyond tools that collect and store information related to experiments, we also include visualization tools used during model development, and software used to version machine learning models and related elements. As with our previous post on BI tools, we use an index that relies on public data and is modeled after TIOBE’s programming language index. Our index comprises the following components:
- Search: We use a subset of the sources from TIOBE’s list (Google, Wikipedia, Amazon) and add Reddit, Twitter, and Stack Overflow into the mix.
- Supply (of talent): This component is based on the number of people who have listed a specific experiment tracking tool as a skill on their LinkedIn profiles.
- Demand (for talent): We examine the number of U.S. online job postings that mention a specific experiment tracking tool.
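The post does not publish the exact weighting used to combine these components, but a TIOBE-style index can be sketched as follows: normalize each component against the largest value in that component, then average the normalized scores. The function, the equal weighting, and the numbers below are illustrative assumptions, not the actual index data.

```python
def popularity_index(components):
    """Combine raw per-tool component counts into a 0-100 index.

    Each component (search activity, LinkedIn skills, job postings)
    is scaled by that component's maximum, so no single noisy source
    dominates; the scaled scores are then averaged equally.
    """
    tools = {tool for comp in components.values() for tool in comp}
    maxima = {name: max(comp.values()) for name, comp in components.items()}
    scores = {}
    for tool in tools:
        normalized = [
            comp.get(tool, 0) / maxima[name]
            for name, comp in components.items()
        ]
        scores[tool] = 100 * sum(normalized) / len(components)
    return scores

# Toy counts for illustration only
components = {
    "search": {"MLflow": 1200, "TensorBoard": 1100, "Comet": 150},
    "supply": {"MLflow": 900, "TensorBoard": 800, "Comet": 60},
    "demand": {"MLflow": 400, "TensorBoard": 350, "Comet": 30},
}
scores = popularity_index(components)  # MLflow leads at 100.0
```

Normalizing per component matters because the raw scales differ by orders of magnitude: search counts would otherwise swamp the sparser labor-market signals.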
In comparison to our previous BI tool index, this fairly young category has relatively sparse data for a number of the tools we examined. But while scores exhibit more volatility compared to mature categories, the following key segments are quite stable:
- MLflow and TensorBoard are by far the most popular tools.
- Amazon SageMaker Experiments is the sole solution in the middle tier.
- Three startups (Weights & Biases, Neptune AI, Comet) occupy the next tier, and a set of startups and open source projects trail closely below.
We also plot the supply and demand sides of the labor market for each tool. Many of these tools are so new that labor market data for them is quite sparse. But it’s clear that MLflow and TensorBoard are the most popular from a talent perspective and, therefore, have the most developed user ecosystems.
We plan to use our index to monitor trends in machine learning tools. Let us know (using the form below) what tools you want us to include in future editions.
- The Business Intelligence Index: Measuring the popularity of BI tools
- Ranking Low-code Development Platforms
- Machine Learning Trends You Need to Know
- Distributed Computing for AI: A Status Report
- The AI $100M Revenue Club
To stay up to date, subscribe to the Gradient Flow newsletter.