Convolutional Neural Networks

KNN, academic year 2023/2024, summer semester, 5 credits

Solutions based on machine learning are gradually replacing hand-designed solutions in many areas of software development, especially in perceptual tasks focused on extracting information from unstructured sources such as cameras and microphones. Today, the dominant methods in machine learning are neural networks and their convolutional variants. These approaches are at the core of many commercially successful applications, and they push forward the frontiers of artificial intelligence.

Language of instruction

Czech

Completion

Classified Credit

Time span

  • 26 hrs lectures
  • 26 hrs projects

Assessment points

  • 35 pts mid-term test (written part)
  • 65 pts projects

Learning objectives

Basic knowledge of convolutional neural networks, their capabilities and limitations. Practical applications, mostly in computer vision tasks, complemented by tasks from speech recognition and language processing. To enable students to design complete solutions using convolutional networks in practical applications, including network architecture, optimization, data collection, testing and evaluation.
Students will gain basic knowledge of convolutional neural networks, their training (optimization), their building blocks, and the tools and software frameworks used to implement them. Students will gain insight into which factors determine the accuracy of networks in real applications, including data sets, loss functions, network structure, regularization, optimization, overfitting and multi-task learning. They will receive an overview of state-of-the-art networks for a range of computer vision tasks (classification, object detection, segmentation, identification), speech recognition, language understanding, data generation and reinforcement learning.
Students will acquire teamwork experience during the project and basic knowledge of Python libraries for linear algebra and machine learning.
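The loss-function and optimization topics named above can be illustrated with a minimal plain-Python sketch of gradient descent on a one-parameter logistic-regression model. This is an illustrative sketch only; the function names are invented for this example and are not part of the course materials.

```python
import math

# Sigmoid output of a one-parameter linear model.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Binary cross-entropy loss for one example with predicted probability p.
def bce_loss(p, y):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# One gradient-descent step on weight w for input x, label y.
# For this model, d(loss)/dw simplifies to (sigmoid(w*x) - y) * x.
def sgd_step(w, x, y, lr=0.1):
    p = sigmoid(w * x)
    grad = (p - y) * x
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = sgd_step(w, x=2.0, y=1.0)

# The loss decreases as w moves toward predicting y = 1.
assert bce_loss(sigmoid(w * 2.0), 1.0) < bce_loss(sigmoid(0.0), 1.0)
```

The same loop structure, with vectors in place of scalars and a network in place of `sigmoid(w * x)`, is what deep-learning frameworks automate.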

Why is the course taught

This course is for you whether your goal is to work as an AI expert in a large multinational corporation such as Google or Facebook, to push forward the frontiers of artificial intelligence as part of a top academic team, or simply to broaden your horizons with the state of the art in machine learning. Neural networks are at the core of many commercial applications ranging from speech recognition, content-based image search and intelligent surveillance systems to question-answering systems and autonomous cars. At the same time, neural networks are the enabling factor behind the current rapid advances in artificial intelligence. This course will enable you to use this powerful tool in practical applications.

Prerequisite knowledge and skills

Basics of linear algebra (multiplication of vectors and matrices), differential calculus (partial derivatives, chain rule), Python, and an intuitive understanding of probability (e.g. conditional probability). Any knowledge of machine learning and image processing is an advantage.
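As a self-check of the prerequisites above, the following plain-Python sketch exercises matrix-vector multiplication and verifies a chain-rule derivative numerically. The helper names are invented for this example.

```python
import math

# Linear algebra prerequisite: matrix-vector multiplication.
def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [3.0, 4.0]]
x = [1.0, -1.0]
y = matvec(A, x)  # [-1.0, -1.0]

# Calculus prerequisite: the chain rule.
# For f(x) = sin(x**2), the chain rule gives df/dx = cos(x**2) * 2*x.
def f(x):
    return math.sin(x * x)

def df(x):
    return math.cos(x * x) * 2 * x

# Central-difference check of the analytic derivative.
h = 1e-6
x0 = 0.7
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(numeric - df(x0)) < 1e-6
```

If both checks feel routine, the mathematical prerequisites should pose no problem.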

Study literature

  • Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, 2016.
  • Li, Fei-Fei, et al.: CS231n: Convolutional Neural Networks for Visual Recognition. Stanford, 2018.
  • Bishop, C. M.: Pattern Recognition, Springer Science + Business Media, LLC, 2006, ISBN 0-387-31073-8.

Fundamental literature

  • Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, 2016.

Syllabus of lectures

  1. Introduction, linear models, loss functions, learning algorithms and evaluation. (organization, NN review)
  2. Fully connected networks, loss functions for classification and regression. (presentation)
  3. Convolutional networks, locality and equivariance of computation, weight initialization, batch normalization. (presentation, weight-initialization tutorial)
  4. Network architectures for image classification. (presentation)
  5. Generalization, regularization, data augmentation, multi-task learning, semi-supervised learning, active learning, self-supervised learning. (presentation)
  6. Object detection: MTCNN face detector, R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD. (presentation including image segmentation)
  7. Semantic and instance segmentation. Connections to estimation of depth, surface normals, shading and motion.
  8. Learning similarity and embedding. Person identification. (presentation)
  9. Recurrent networks and sequence processing (text and speech). Connectionist Temporal Classification (CTC). Attention networks. (presentation)
  10. Language models. Basic image captioning networks, question answering and language translation. (presentation)
  11. Generative models. Autoregressive factorization. Generative Adversarial Networks (GAN, DCGAN, CycleGAN). (presentation)
  12. Reinforcement learning. Deep Q-network (DQN) and policy gradients. (presentation)
  13. Graph neural networks. (slides)
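The core operation behind the convolutional networks covered in the lectures above can be sketched in a few lines of plain Python: a valid-mode 2D cross-correlation (the "convolution" used in CNN layers), with no padding and stride 1. This is an illustrative sketch, not course material.

```python
# Slide a kernel over an image and sum elementwise products at each position.
def conv2d(image, kernel):
    H, W = len(image), len(image[0])
    kH, kW = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kH + 1):
        row = []
        for j in range(W - kW + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kH) for dj in range(kW))
            row.append(s)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel applied to an image with a step at column 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
response = conv2d(image, kernel)  # [[3, 3], [3, 3]]: strong edge response
```

A convolutional layer applies many such kernels in parallel, with the kernel weights learned by gradient descent rather than hand-designed.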

Syllabus - others, projects and individual work of students

Team project (2-3 students).
Assignments are proposed by students and approved by the teacher. Components:

  • Problem formulation, team formation.
  • Research of existing solutions and useful tools.
  • Baseline solution and evaluation proposal.
  • Data collection.
  • Experiments, testing and gradual improvement.
  • Final report and presentation of the project.

Progress assessment

  • Project concluded by public presentation - 65 points.
  • Two tests during the semester - 35 points.


Exam prerequisites

Acquiring at least 50 points.
