PyTorch v1.0 release: Tensors and Dynamic neural networks in Python
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
You can reuse your favorite Python packages, such as NumPy, SciPy, and Cython, to extend PyTorch when needed.
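As a minimal sketch of that NumPy interoperability: `torch.from_numpy` wraps a NumPy array without copying, so in-place PyTorch operations are visible from the NumPy side.

```python
import numpy as np
import torch

a = np.arange(6, dtype=np.float32).reshape(2, 3)
t = torch.from_numpy(a)  # shares memory with the NumPy array, no copy
t.mul_(2)                # in-place op on the tensor mutates `a` too
print(a[0, 1])           # 2.0
```

`Tensor.numpy()` goes the other way, again sharing the underlying buffer for CPU tensors.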
At a granular level, PyTorch is a library that consists of the following components:
| Component | Description |
| --- | --- |
| torch | a Tensor library like NumPy, with strong GPU support |
| torch.autograd | a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
| torch.nn | a neural networks library deeply integrated with autograd, designed for maximum flexibility |
| torch.multiprocessing | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training. |
| torch.utils | DataLoader, Trainer and other utility functions for convenience |
| torch.legacy(.nn/.optim) | legacy code that has been ported over from torch for backward compatibility reasons |
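A small sketch of the tape-based autograd described above: operations on tensors with `requires_grad=True` are recorded, and `backward()` replays the tape to compute gradients.

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
y = (x * x + 3).sum()  # forward ops are recorded on the tape
y.backward()           # replay the tape backwards

# d(sum(x^2 + 3))/dx = 2x, so every gradient entry is 2.0
print(x.grad)
```

Because the tape is rebuilt on every forward pass, control flow in ordinary Python (loops, branches) differentiates naturally.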
Usually, PyTorch is used either as:
- a replacement for NumPy that harnesses the power of GPUs
- a deep learning research platform that provides maximum flexibility and speed
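To illustrate the first use case, here is a minimal sketch of GPU-accelerated computation; it falls back to the CPU when no CUDA device is available, so the same code runs anywhere.

```python
import torch

x = torch.randn(1024, 1024)
y = torch.randn(1024, 1024)

# Use the GPU when available; identical code runs on CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
z = (x.to(device) @ y.to(device)).cpu()  # matrix multiply on `device`
print(z.shape)  # torch.Size([1024, 1024])
```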
Changelog v1.0
- Highlights
- JIT
- Brand New Distributed Package
- C++ Frontend [API Unstable]
- Torch Hub
- Breaking Changes
- Additional New Features
- N-dimensional empty tensors
- New Operators
- New Distributions
- Sparse API Improvements
- Additions to existing Operators and Distributions
- Bug Fixes
- Serious
- Backwards Compatibility
- Correctness
- Error checking
- Miscellaneous
- Other Improvements
- Deprecations
- CPP Extensions
- Performance
- Documentation Improvements
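The JIT listed under the highlights above can be exercised via tracing: `torch.jit.trace` runs a function once on example inputs and records the executed operations into a standalone, optimizable graph. A minimal sketch (the function `f` here is an illustrative placeholder, not from the release notes):

```python
import torch

def f(x):
    # ordinary eager-mode function to be traced
    return x * 2 + 1

# Record the ops executed for an example input into a ScriptModule-like graph
traced = torch.jit.trace(f, torch.randn(3))

out = traced(torch.ones(3))
print(out)  # tensor([3., 3., 3.])
```

Traced functions can be serialized with `save()` and loaded in a Python-free environment such as the C++ frontend, which is the main motivation for the JIT in this release.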