Gradient Flow #35: Optimizing Inference, Workflow Tools, RL in Large Enterprises


This edition has 510 words and will take you about 3 minutes to read.

“The thing about machine learning scientists is that they never admit defeat because all of their problems can be solved with more data.” – William Tunstall-Pedoe

Data Exchange podcast

  • Why You Should Optimize Your Deep Learning Inference Platform   As companies deploy deep learning in critical products and services, the number of predictions their models must render can easily reach millions per day (even hundreds of trillions, in the case of Facebook). I speak with Yonatan Geifman, CEO and co-founder of Deci, and with Ran El-Yaniv, Chief Scientist and co-founder of Deci and Professor of Computer Science at Technion. We take a deep dive into tools for systematically optimizing inference platforms.
  • The Future of Machine Learning Lies in Better Abstractions   Travis Addair previously led the team responsible for building Uber’s deep learning infrastructure. Travis is deeply involved in two popular open source deep learning projects: he is a maintainer of Horovod, a distributed deep learning training framework, and a co-maintainer of Ludwig, a toolbox that lets users train and test deep learning models without writing code.

[Image from pxhere]

Data & Machine Learning tools and infrastructure

[Image: Shoreditch 2015, by SGL]


Closing Short → The return of in-person events:

❛ Convenience and time savings were key factors for using video for events in our survey, but respondents in every country were adamant in their preference that events like concerts and religious services be in-person going forward. Virtual options were welcomed for those who needed a distraction or when attending in person was not an option.

If you enjoyed this newsletter, please support our work by encouraging your friends and colleagues to subscribe.