What machine learning engineers need to know

[A version of this post appears on the O’Reilly Radar.]

The O’Reilly Data Show Podcast: Jesse Anderson and Paco Nathan on organizing data teams and next-generation messaging with Apache Pulsar.

In this episode of the Data Show, I spoke with Jesse Anderson, managing director of the Big Data Institute, and my colleague Paco Nathan, who recently became co-chair of Jupytercon. This conversation grew out of a recent email thread the three of us had on machine learning engineers, a new job role that LinkedIn recently pegged as the fastest growing job in the U.S. In our email discussion, there was some disagreement on whether such a specialized job role/title was needed in the first place. As Eric Colson pointed out in his beautiful keynote at Strata Data San Jose, creating specialized roles too soon can slow down your data team.

We recorded this conversation at Strata San Jose, while Anderson was in the middle of teaching his very popular two-day training course on real-time systems. We closed the conversation with Anderson’s take on Apache Pulsar, a very impressive new messaging system that is starting to gain fans among data engineers.

Here are some highlights from our conversation:

Why we need machine learning engineers

Jesse Anderson: One of the issues I’m seeing as I work with teams is that they’re trying to operationalize machine learning models, and the data scientists are not the ones to productionize these. They simply don’t have the engineering skills. Conversely, the data engineers don’t have the skills to operationalize them either. So, we’re seeing this gap between data science and data engineering, and the way I’m seeing it filled is through a machine learning engineer.

… I disagree with Paco that generalization is the way to go. I think it’s hyper-specialization, actually. This is coming from my experience having taught a lot of enterprises. At a startup, I would say that hyper-specialization is probably not going to be as possible, but at an enterprise, you are going to have to have a team that specializes in big data, and that is apart from a team, even a software engineering team, that doesn’t work with data.

Putting Apache Pulsar on the radar of data engineers

Key features of Apache Pulsar. Image by Karthik Ramasamy, used with permission.


Jesse Anderson: A lot of my time, since I’m really teaching data engineering, is spent on data integration and data ingestion. How do we move this data around efficiently? For a lot of that time, Kafka was really the only open source game in town. But now there’s another technology called Apache Pulsar. I’ve spent a decent amount of time going through Pulsar, and there are some things I see in it that Kafka will either have difficulty doing or won’t be able to do.

… Apache Pulsar separates pub-sub from storage. When I first read about that, I didn’t quite get it. I didn’t quite see why this is so important or so interesting. It’s because you can scale your pub-sub and your storage resources independently. Now you’ve got something. Now you can say, “Well, we originally decided we wanted to store data for seven days. All right, let’s spin up some more BookKeeper processes and now we can store fourteen days, now we can store twenty-one days.” I think that’s going to be a pretty interesting addition. The other side of that, the corollary, is, “Okay, we’re hitting Black Friday, and it’s not that we have much more data coming through; we have way more consumption, way more things hitting our pub-sub. We can spin up more pub-sub capacity for that.” This separation allows for some interesting use cases.
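The scaling point above can be made concrete with some back-of-the-envelope arithmetic. This is only a sketch: the ingest rates and per-node capacities below are hypothetical numbers chosen to show that, with serving (brokers) and storage (BookKeeper "bookies") separated, the two pools scale along independent axes.

```python
# Hypothetical capacity-planning sketch for a Pulsar-style architecture
# where serving (brokers) and storage (bookies) scale independently.

def bookies_needed(ingest_gb_per_day, retention_days, bookie_capacity_gb=1000):
    """Storage nodes scale with retention, independent of consumer load."""
    total_gb = ingest_gb_per_day * retention_days
    return -(-total_gb // bookie_capacity_gb)  # ceiling division

def brokers_needed(peak_msgs_per_sec, broker_capacity_msgs_per_sec=100_000):
    """Serving nodes scale with pub-sub traffic, independent of retention."""
    return -(-peak_msgs_per_sec // broker_capacity_msgs_per_sec)

# Extending retention from 7 to 21 days only grows the bookie pool:
print(bookies_needed(500, 7))    # 4
print(bookies_needed(500, 21))   # 11
# A Black Friday consumption spike only grows the broker pool:
print(brokers_needed(250_000))   # 3
print(brokers_needed(900_000))   # 9
```

In a coupled system, either change would force you to resize a single pool that handles both concerns; here each dimension is provisioned on its own.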


How to train and deploy deep learning at scale

[A version of this post appears on the O’Reilly Radar.]

The O’Reilly Data Show Podcast: Ameet Talwalkar on large-scale machine learning.

In this episode of the Data Show, I spoke with Ameet Talwalkar, assistant professor of machine learning at CMU and co-founder of Determined AI. He was an early and key contributor to Spark MLlib and a member of AMPLab. Most recently, he helped conceive and organize the first edition of SysML, a new academic conference at the intersection of systems and machine learning (ML).

We discussed using and deploying deep learning at scale. This is an empirical era for machine learning, and, as I noted in an earlier article, as successful as deep learning has been, our level of understanding of why it works so well is still lacking. In practice, machine learning engineers need to explore and experiment using different architectures and hyperparameters before they settle on a model that works for their specific use case. Training a single model usually involves big (labeled) data and big models; as such, exploring the space of possible model architectures and parameters can take days, weeks, or even months. Talwalkar has spent the last few years grappling with this problem as an academic researcher and as an entrepreneur. In this episode, he describes some of his related work on hyperparameter tuning, systems, and more.

Here are some highlights from our conversation:

Deep learning

I would say that you hear a lot about the modeling problems associated with deep learning. How do I frame my problem as a machine learning problem? How do I pick my architecture? How do I debug when things go wrong? … What we’ve seen in practice is that, maybe somewhat surprisingly, the biggest challenges ML engineers face are actually due to the lack of tools and software for deep learning. These problems are hybrid systems/ML problems, very similar to the sorts of research that came out of the AMPLab.

… Things like TensorFlow and Keras, and a lot of those other platforms that you mentioned, are great and they’re a great step forward. They’re really good at abstracting low-level details of a particular learning architecture. In five lines, you can describe how your architecture looks and then you can also specify what algorithms you want to use for training.

There are a lot of other systems challenges associated with actually going end to end, from data to a deployed model. The existing software solutions don’t really tackle a big set of these challenges. For example, regardless of the software you’re using, it takes days to weeks to train a deep learning model. There are real open challenges in how to best use parallel and distributed computing, both to train a particular model and to tune the hyperparameters of different models.
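Talwalkar's hyperparameter-tuning research includes the bandit-based Hyperband algorithm. Its core subroutine, successive halving, is easy to sketch: give many configurations a small training budget, keep the best fraction, and retrain the survivors with more budget. Everything below (the toy objective, the budget scheme, the numbers) is illustrative, not from the episode.

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=3):
    """Give every config a small budget, keep the best 1/eta of them,
    then give the survivors eta times the budget; repeat to one winner."""
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy stand-in for "validation loss after `budget` epochs" at learning
# rate lr; lower is better, minimized near lr = 0.1.
def toy_loss(lr, budget):
    return (lr - 0.1) ** 2 + 1.0 / budget

random.seed(0)
candidates = [10 ** random.uniform(-4, 0) for _ in range(27)]
best = successive_halving(candidates, toy_loss)
print(round(best, 3))  # a learning rate close to 0.1
```

The appeal for deep learning is that most bad configurations are eliminated after only a cheap partial training run, so the expensive full-budget runs are spent on the few promising candidates.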

Using machine learning to monitor and optimize chatbots

[A version of this post appears on the O’Reilly Radar.]

The O’Reilly Data Show Podcast: Ofer Ronen on the current state of chatbots.

In this episode of the Data Show, I spoke with Ofer Ronen, GM of Chatbase, a startup housed within Google’s Area 120. With tools for building chatbots becoming accessible, conversational interfaces are becoming more prevalent. As Ronen highlights in our conversation, chatbots are already enabling companies to automate many routine tasks (mainly in customer interaction). We are still in the early days of chatbots, but if current trends persist, we’ll see bots deployed more widely and taking on more complex tasks and interactions. Gartner recently predicted that by 2021, companies will spend more on bots and chatbots than on mobile app development.

Like any other software application, as bots get deployed in real-world applications, companies will need tools to monitor their performance. For a single, simple chatbot, one can imagine developers manually monitoring log files for errors and problems. Things get harder as you scale to more bots and as the bots get more complex. As with other machine learning applications, when companies start deploying many more chatbots, automated tools for monitoring and diagnostics become essential.
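A minimal sketch of what such automated monitoring might look like: instead of a developer eyeballing log files, a tool flags intents whose fallback (unhandled-message) rate exceeds a threshold. The one-record-per-message log format, intent names, and threshold below are all hypothetical, not from Chatbase.

```python
from collections import Counter

def flag_problem_intents(log_lines, max_fallback_rate=0.2):
    """Return the intents whose share of fallback outcomes is too high."""
    totals, fallbacks = Counter(), Counter()
    for line in log_lines:
        intent, outcome = line.strip().split(",")
        totals[intent] += 1
        if outcome == "fallback":
            fallbacks[intent] += 1
    return sorted(i for i in totals
                  if fallbacks[i] / totals[i] > max_fallback_rate)

logs = ["order_status,ok", "order_status,ok", "order_status,fallback",
        "refund,fallback", "refund,fallback", "refund,ok",
        "greeting,ok", "greeting,ok"]
print(flag_problem_intents(logs))  # ['order_status', 'refund']
```

The same aggregation scales from one bot to a fleet, which is the point at which manual log review stops being feasible.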

The good news is relevant tools are beginning to emerge. In this episode, Ronen describes a tool he helped build: Chatbase is a chatbot analytics and optimization service that leverages machine learning research and technologies developed at Google. In essence, Chatbase lets companies focus on building and deploying the best possible chatbots.

Here are some highlights from our conversation:

Democratization of tools for bot developers

It’s been hard to get natural language processing to work well and to recognize all the different ways people might say the same thing. There’s been an explosion of tools that leverage machine learning and natural language processing (NLP) engines to make sense of all that’s being asked of bots. But with increased capacity and capability to process data, there are now better third-party tools that any company can take advantage of to build a decent bot out of the box.

… I see three levels of bot builders out there. There’s the non-technical kind, where marketing or sales might create a prototype using a user interface like Chatfuel, which requires no programming, to create a basic experience. Or they might even create some sort of decision-tree bot that is not flexible, but is good for basic lead-gen experiences. But these often can’t handle type-ins; they’re often button-based. So, that’s one level, the non-technical folks.

Unleashing the potential of reinforcement learning

[A version of this post appears on the O’Reilly Radar.]

The O’Reilly Data Show Podcast: Danny Lange on how reinforcement learning can accelerate software development and how it can be democratized.

In this episode of the Data Show, I spoke with Danny Lange, VP of AI and machine learning at Unity Technologies. Lange previously led data and machine learning teams at Microsoft, Amazon, and Uber, where his teams were responsible for building data science tools used by other developers and analysts within those companies. When I first heard that he was moving to Unity, I was curious as to why he decided to join a company whose core product targets game developers.

As you’ll glean from our conversation, Unity is at the forefront of some of the most exciting, practical applications of deep learning (DL) and reinforcement learning (RL). Realistic scenery and imagery are critical for modern games. GANs and related semi-supervised techniques can ease content creation by enabling artists to produce realistic images much more quickly. In a previous post, Lange described how reinforcement learning opens up the possibility of training/learning rather than programming in game development.

Lange explains why simulation environments are going to be important tools for AI developers. We are still in the early days of machine intelligence, and I am looking forward to more tools that can democratize AI research (including future releases by Lange and his team at Unity).

Here are some highlights from our conversation:

Why reinforcement learning is so exciting

I’m a huge fan of reinforcement learning. I think it has incredible potential, not just in game development but in a lot of other areas, too. … What we are doing at Unity is basically making reinforcement learning available to the masses. We have shipped open source software on GitHub called Unity ML Agents, which includes the basic frameworks for people to experiment with reinforcement learning. Reinforcement learning is really creating a machine learning-driven feedback loop. Recall the example I previously wrote about, of the chicken crossing the road; yes, it gets hit thousands and thousands of times by these cars, but every time it gets hit, it learns that’s a bad thing. And every time it manages to pick up a gift package on the way over the road, it learns that’s a good thing.

Over time, it gains superhuman capabilities in crossing this road, and that is fantastic because there’s not a single line of code going into that. It’s pure simulation, and through reinforcement learning it learns a method to cross the road, and you can take that into many different aspects of games. There are many different methods you can train. You can add two chickens—can they collaborate to do something together? We are looking at what we call multi-agent systems, where two or more of these reinforcement learning-trained agents act together to achieve a goal.
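The feedback loop Lange describes can be sketched with tabular Q-learning on a toy version of the chicken example. This is purely illustrative (it is not the Unity ML Agents API): a five-lane road where moving forward risks being hit (negative reward), gift packages give small positive rewards, and reaching the far side pays off; all rewards, probabilities, and hyperparameters are made up.

```python
import random

N_LANES, HIT_PROB = 5, 0.1
FORWARD, WAIT = 0, 1

def step(lane, action, rng):
    """One step of the toy environment: returns (next_lane, reward, done)."""
    if action == FORWARD:
        if rng.random() < HIT_PROB:   # hit by a car: episode over
            return 0, -1.0, True
        lane += 1
        if lane == N_LANES:           # made it across the road
            return lane, 1.0, True
        return lane, 0.1, False       # gift package picked up en route
    return lane, -0.01, False         # waiting wastes a little time

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_LANES)]
    for _ in range(episodes):
        lane, done = 0, False
        while not done:
            a = rng.randrange(2) if rng.random() < eps else \
                max((FORWARD, WAIT), key=lambda x: q[lane][x])
            nxt, r, done = step(lane, a, rng)
            target = r if done else r + gamma * max(q[nxt])
            q[lane][a] += alpha * (target - q[lane][a])
            lane = nxt
    return q

q = train()
print([round(q[lane][FORWARD], 2) for lane in range(N_LANES)])  # per-lane values
```

No crossing strategy is coded anywhere; the policy emerges entirely from the reward signal, which is the "training rather than programming" idea in miniature.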