Beyond the Continuum: The Importance of Quantization in Deep Learning
11-30, 14:10–14:40 (Europe/Amsterdam), Ernst-Curie

In this talk, we will explore different types of quantization techniques and discuss how they can be applied to deep learning models, all in a Jupyter Notebook you can try at home.


Quantization is the process of mapping continuous values to a finite set of discrete values. It is a powerful technique that can significantly reduce the memory footprint and computational requirements of deep learning models, making them more efficient and easier to deploy on resource-constrained devices. In this talk, we will explore different types of quantization techniques and discuss how they can be applied to deep learning models. In addition, we will cover the basics of NNCF and the OpenVINO Toolkit and see how they work together to achieve outstanding performance, all in a Jupyter Notebook you can try at home.
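As a taste of what the notebook covers, below is a minimal sketch of uniform (affine) 8-bit quantization in plain NumPy. It only illustrates the core idea of mapping continuous values to discrete levels; it is not the NNCF or OpenVINO API, and all names in it are chosen for illustration.

import numpy as np

def quantize_uniform(x: np.ndarray, num_bits: int = 8):
    """Map continuous float values to a finite set of discrete integer levels."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # The scale spreads the observed value range over the integer grid.
    scale = (x.max() - x.min()) / (qmax - qmin)
    # The zero point shifts the grid so that x.min() maps to qmin.
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Approximate the original continuous values from the discrete levels."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(1000).astype(np.float32)  # stand-in for a layer's weights
q, scale, zp = quantize_uniform(weights)
reconstructed = dequantize(q, scale, zp)
print("max absolute rounding error:", np.abs(weights - reconstructed).max())

Storing uint8 instead of float32 already gives a 4x memory saving; the printed rounding error is exactly what quantization techniques like those in NNCF aim to keep small.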


Prior Knowledge Expected

No previous knowledge expected

AI Software Evangelist at Intel. Adrian graduated from the Gdansk University of Technology in Computer Science seven years ago and then began his career in computer vision and deep learning. For the two years before joining Intel, he led a team of data scientists and Android developers responsible for an application that takes a professional photo (for an ID card or passport) without the user leaving home. He is a co-author of the LandCover.ai dataset, the creator of the OpenCV Image Viewer Plugin, and occasionally lectures on Deep Learning. His current role is to educate people about the OpenVINO Toolkit. In his free time, he's a traveler. You can also talk with him about finance, especially investments.