PyTorch v0.3.0 release: Tensors and Dynamic neural networks in Python


PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

At a granular level, PyTorch is a library that consists of the following components:

  • torch: a Tensor library like NumPy, with strong GPU support
  • torch.autograd: a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
  • torch.nn: a neural-network library deeply integrated with autograd, designed for maximum flexibility
  • torch.multiprocessing: Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training.
  • torch.utils: DataLoader, Trainer, and other utility functions for convenience
  • torch.legacy (.nn/.optim): legacy code ported over from Torch for backward-compatibility reasons
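The "tape-based autograd" mentioned above can be illustrated with a minimal pure-Python sketch (this is only a conceptual analogy, not the actual PyTorch implementation): each forward operation records a backward closure on a tape, and calling backward() replays the tape in reverse, accumulating gradients via the chain rule.

```python
class Var:
    """Toy scalar variable with tape-based autograd (hypothetical sketch)."""

    _tape = []  # global tape of backward closures, in forward order

    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():
            # Chain rule: propagate out.grad to each input of the multiply.
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
        Var._tape.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():
            # Addition passes the gradient through unchanged.
            self.grad += out.grad
            other.grad += out.grad
        Var._tape.append(backward)
        return out

    def backward(self):
        # Seed d(out)/d(out) = 1, then replay the tape in reverse.
        self.grad = 1.0
        for fn in reversed(Var._tape):
            fn()

# y = x*x + 3*x, so dy/dx = 2x + 3 = 7 at x = 2
x = Var(2.0)
y = x * x + Var(3.0) * x
y.backward()
print(x.grad)  # 7.0
```

Because the tape is rebuilt on every forward pass, the graph can change from one iteration to the next; that is the essence of the "dynamic neural networks" in the title.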

Usually, PyTorch is used either as:

  • a replacement for NumPy, to use the power of GPUs
  • a deep learning research platform that provides maximum flexibility and speed

Changelog v0.3.0

  • Breaking changes: removed reinforce()
  • New features
    • Unreduced losses
    • A profiler for the autograd engine
    • More functions support higher-order gradients
    • New features in Optimizers
    • New layers and nn functionality
    • New Tensor functions and features
    • Other additions
  • API changes
  • Performance improvements
    • Big reduction in framework overhead (helps small models)
    • 4x to 256x faster Softmax/LogSoftmax
    • More…
  • Framework Interoperability
    • DLPack Interoperability
    • Model Exporter to ONNX (ship PyTorch to Caffe2, CoreML, CNTK, MXNet, TensorFlow)
  • Bug Fixes (a lot of them)
