Introducing RLlib: A composable and scalable reinforcement learning library

[A version of this post appears on the O’Reilly Radar.]

RISE Lab’s Ray platform adds libraries for reinforcement learning and hyperparameter tuning.

In a previous post, I outlined emerging applications of reinforcement learning (RL) in industry. I began by listing a few challenges facing anyone wanting to apply RL, including the need for large amounts of data and the difficulty of reproducing research results and deriving the error estimates needed for mission-critical applications. Nevertheless, the success of RL in certain domains has been the subject of much media coverage. This coverage has sparked interest, and companies are beginning to explore some of the use cases and applications I described in my earlier post. Many tasks and professions, including software development, are poised to incorporate some form of AI-powered automation. In this post, I'll describe how RISE Lab's Ray platform continues to mature and evolve just as companies are examining use cases for RL.

Assuming one has identified suitable use cases, how does one get started with RL? Most companies considering RL for pilot projects will want to build on existing libraries rather than start from scratch.

[Figure: RL training nests many types of computation. Image courtesy of Richard Liaw and Eric Liang, used with permission.]
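To make the nesting in the figure concrete, here is a minimal, dependency-free sketch of an RL training loop: an outer optimization loop that, on each iteration, fans out into many inner simulation rollouts before taking a policy-improvement step. The environment, policy, and update rule below are toy stand-ins of my own invention, not RLlib's API; a real library like RLlib distributes exactly these nested computations across a cluster.

```python
def rollout(policy, env_step, horizon=10):
    """Inner computation: simulate one episode and return its total reward."""
    state, total = 0.0, 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = env_step(state, action)
        total += reward
    return total

def train(num_iters=5, num_rollouts=4):
    """Outer loop: each iteration nests many rollouts (in a real system,
    these run in parallel), then applies a toy policy-improvement step."""
    theta = 0.0  # single policy parameter (toy)
    history = []
    for _ in range(num_iters):
        policy = lambda s, t=theta: t            # policy just emits theta
        env_step = lambda s, a: (s + a, -abs(1.0 - a))  # reward peaks at a=1
        returns = [rollout(policy, env_step) for _ in range(num_rollouts)]
        history.append(sum(returns) / len(returns))
        theta += 0.2 * (1.0 - theta)             # toy update toward the optimum
    return history

print(train())  # average return improves across iterations
```

In a real workload, each of those rollouts might itself contain expensive neural-network inference, and the update step a distributed gradient computation, which is why a framework that schedules heterogeneous, nested tasks is a natural fit.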

Several open source projects can help one get started. From a technical perspective, there are a few things to keep in mind when considering a library for RL.