In 2018, I sat down and listed companies (mainly based in the US and China) that were offering specialized hardware for deep learning. There were plenty of startups in the hardware space at that time, but things have changed and that particular list is now outdated: many companies have pivoted, gone bust, or been acquired.
I’ve long wanted to update my landscape map, and thankfully a recent paper from MIT’s Lincoln Laboratory Supercomputing Center has a much more up-to-date list. The authors catalog companies and research groups that offer specialized hardware for accelerating matrix operations, kernels, methods, or functions. Accelerators also need to factor in constraints (including size, weight, and power) that apply both in embedded applications and in data centers. The paper lists and organizes companies and research groups that offer accelerators for deep neural networks, ranging from extremely low power, through embedded and autonomous applications, to data center class accelerators for inference and training.
Here are the companies grouped into the following categories:
- Very Low Power for speech processing, very small sensors, etc.
- Embedded for cameras, small UAVs and robots, etc.
- Autonomous for driver assist services, autonomous driving, and autonomous robots.
- Data Center Chips and Cards: CPUs and GPUs; programmable FPGA solutions; dataflow processors / ASICs (custom-designed processors for neural network inference and training).
- Data Center Systems (single-node data center systems)
- Neuromorphic and Neuromorphic-inspired chips: circuits designed to model biological and physiological mechanisms in brains.
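The first four categories above are essentially power envelopes. As a minimal sketch, they can be expressed as a simple classifier; the wattage thresholds below are my own illustrative assumptions, not figures from the MIT paper:

```python
def classify_accelerator(peak_power_watts: float) -> str:
    """Bucket an AI accelerator by its power envelope.

    Thresholds are illustrative assumptions for this sketch,
    not values taken from the MIT Lincoln Laboratory paper.
    """
    if peak_power_watts < 1:
        return "Very Low Power"      # speech processing, tiny sensors
    elif peak_power_watts < 30:
        return "Embedded"            # cameras, small UAVs and robots
    elif peak_power_watts < 300:
        return "Autonomous"          # driver assist, autonomous driving
    else:
        return "Data Center"         # inference and training at scale

# Example: a sub-watt speech chip vs. a high-power training card
print(classify_accelerator(0.5))   # Very Low Power
print(classify_accelerator(400))   # Data Center
```

In practice the boundaries blur (some "embedded" parts scale into automotive use), which is why the paper plots accelerators on a continuous performance-versus-power axis rather than in hard bins.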
The MIT paper also includes a scatterplot of peak performance versus power for publicly announced AI accelerators and processors, which I include below. The authors gathered “performance and power information from publicly available materials including research papers, technical trade press, company benchmarks, etc.”
At the end of the day, great hardware alone isn’t enough to sway users. Developers and companies also evaluate accelerators based on the accompanying software tools. So it’s no surprise that the older hardware companies have invested heavily in their software libraries: Nvidia has CUDA and related libraries, Intel has MKL and other libraries, and AMD has ROCm. In fact, if you visit the websites of the startups listed above, many of them showcase their software tools alongside their hardware offerings.