
Seminario "Large-scale multi-gpu training"

On April 14th at 11:00, Giuseppe Fiameni (NVIDIA) will give an invited talk on "Large-scale multi-gpu training". The event is organized as part of the "Computer Vision and Cognitive Systems" course (Prof. R. Cucchiara, L. Baraldi).

The event can be attended at this link.

Abstract

The computational requirements of the deep neural networks that enable AI applications such as object captioning or identification are enormous. A single training cycle can take weeks on a single GPU, or even months for larger datasets like those used in computer vision research. Using multiple GPUs can significantly shorten the time required to train on such large amounts of data, making it feasible to solve complex problems with deep learning.

This two-hour seminar will introduce you to using multiple GPUs to train neural networks. You'll learn:
•  Approaches to multi-GPU training (see the sketch below)
•  Algorithmic and engineering challenges in large-scale training
Upon completion, you'll be able to effectively parallelize the training of deep neural networks.
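For context, the snippet below is a minimal sketch of one common approach, data-parallel training with PyTorch's DistributedDataParallel. It is not taken from the seminar material: the toy model, synthetic dataset, hyperparameters, and the torchrun launcher are illustrative assumptions only.

```python
# Minimal data-parallel multi-GPU training sketch (illustrative, not seminar code).
# Launch with one process per GPU, e.g.: torchrun --nproc_per_node=4 train.py
# (the file name train.py is hypothetical).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and synthetic data stand in for a real network and dataset.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 128),
                            torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # DDP all-reduces gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process trains on its own shard of the data while gradients are averaged across GPUs during the backward pass, so all replicas stay synchronized.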

Speaker Bio

Giuseppe Fiameni, PhD, is a Solution Architect for AI and Accelerated Computing at NVIDIA, helping researchers optimize deep learning workloads on High Performance Computing systems. He is the technical lead of the Italian NVIDIA Artificial Intelligence Technology Centre.

Related pages

NVIDIA AI Technology Centre