I was able to catch the morning sessions of the PyTorch developer conference in San Francisco, and just like last year the event was well-organized and packed. The official blog post has details on the many announcements pertaining to the version 1.3 release, but here are a few that caught my eye:
- Growth: The scrolling list they displayed of companies and organizations using PyTorch was much longer than last year’s. In a recent post, I highlighted growing interest among researchers and it seems that’s being accompanied by adoption among practitioners as well.
- Very promising new libraries are rolling out alongside version 1.3, though their maturity varies: Captum (model explainability); CrypTen (privacy-preserving machine learning via secure multi-party computation); and Detectron2 (object detection and segmentation, built on the work of Facebook’s AI Research group).
- Named tensors: Originally proposed by Alexander Rush when he was at Harvard, I think many developers will appreciate being able to access a tensor’s dimensions by name.
- Other noteworthy developer tools that were introduced or significantly improved: experimental support for 8-bit model quantization, and TorchScript + JIT (“the path for PyTorch in production”), which continue to expand to cover more Python programs.
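To give a flavor of the named-tensors bullet above, here is a minimal sketch of the experimental API that shipped with 1.3: dimensions are given string names at construction, and later operations can refer to those names instead of positional indices.

```python
import torch

# Named tensors (experimental in PyTorch 1.3): dimensions carry names.
imgs = torch.zeros(2, 3, 32, 32, names=('N', 'C', 'H', 'W'))

# Reductions can name the dimensions to collapse rather than use
# integer axes, which makes the intent self-documenting.
per_channel = imgs.sum(('H', 'W'))                 # names: ('N', 'C')

# Rearranging dimensions by name instead of a permutation of indices.
channels_last = imgs.align_to('N', 'H', 'W', 'C')
```

Compared with remembering that "dim 1 is channels," code like this survives refactors that change dimension order.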
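The 8-bit quantization mentioned above can be tried post-training with dynamic quantization; a small sketch (the toy `nn.Sequential` model is just for illustration):

```python
import torch
import torch.nn as nn

# A toy model standing in for a real trained network.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

# Dynamic quantization (experimental in 1.3): weights of the listed
# module types (here nn.Linear) are stored as 8-bit integers, and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

out = quantized(torch.randn(1, 16))
```

The quantized model is a drop-in replacement for inference, typically with a smaller memory footprint and faster CPU execution.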
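As for TorchScript, the production path noted above, the basic idea is compiling Python functions (including control flow) into a serializable representation that can run without a Python interpreter; a quick sketch with a hypothetical `clipped_sum` function:

```python
import torch

# torch.jit.script compiles this function, control flow included,
# into TorchScript; the result can be saved and loaded from C++.
@torch.jit.script
def clipped_sum(x: torch.Tensor) -> torch.Tensor:
    total = x.sum()
    if total > 10:
        total = torch.tensor(10.0)
    return total
```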
The PyTorch community is growing fast and the developers behind the tools ecosystem are doing a fine job responding to user requests. All in all it was an impressive event to celebrate the progress in PyTorch.