While deep learning continues to set records across a variety of tasks and benchmarks, the amount of computing power it requires is becoming prohibitive. A recent paper – “The Computational Limits of Deep Learning” – from MIT, Yonsei University, and the University of Brasilia estimates the amount of computation, the economic costs, and the environmental impact that come with increasingly large and more accurate deep learning models. One of the highlights of the paper is Figure 4, a table in which the authors “extrapolate the estimates from different domains to understand the projected computational power needed to hit various benchmarks”.
In this short post, I take a subset of that table and translate it into two charts (both scaled logarithmically). The first chart shows the estimated computational power required (in gigaflops); the second shows the estimated cost of running those computations (in US dollars). The authors computed estimates for three sets of error rates: Today (the error rate observed at the time of publication) and two lower Target error rates. The estimates cover the following popular benchmarks:
- ImageNet: image classification (computer vision)
- MS COCO: object detection (computer vision)
- SQuAD 1.1: question answering (NLP)
- CoNLL 2003: named-entity recognition (NLP)
- WMT 2014 (English to French): machine translation
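To illustrate why a logarithmic scale is the natural choice for these charts, here is a minimal matplotlib sketch. The compute values below are placeholders spanning several orders of magnitude, NOT the paper's actual estimates; the benchmark names match the list above.

```python
# Sketch of a log-scaled bar chart of projected compute per benchmark.
# All numeric values are illustrative placeholders, not the paper's figures.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

benchmarks = ["ImageNet", "MS COCO", "SQuAD 1.1", "CoNLL 2003", "WMT 2014"]
gflops_today = [1e9, 1e8, 1e7, 1e6, 1e8]        # placeholder values
gflops_target = [1e17, 1e14, 1e12, 1e11, 1e13]  # placeholder values

fig, ax = plt.subplots()
x = range(len(benchmarks))
ax.bar([i - 0.2 for i in x], gflops_today, width=0.4, label="Today")
ax.bar([i + 0.2 for i in x], gflops_target, width=0.4, label="Target")
ax.set_yscale("log")  # values span ~11 orders of magnitude
ax.set_xticks(list(x))
ax.set_xticklabels(benchmarks, rotation=30, ha="right")
ax.set_ylabel("Estimated compute (gigaflops)")
ax.legend()
fig.tight_layout()
fig.savefig("compute_estimates.png")
```

On a linear axis, the smaller bars would be invisible next to the largest one; the log scale keeps every benchmark readable on the same chart.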
The estimated economic and environmental costs of attaining some of these target error rates are truly astounding. For example, achieving Target 1 for ImageNet alone has a projected economic cost of $100 billion. A subset of the ML research community needs to start seriously investigating alternatives to neural models that do not require as much computation!