Probabilistic Neural Networks and Deep Learning

Probability Theory, Neural Networks, Regression, and Representation Learning

Author(s)

  • Stefanus Benhard, S.Kom.

The Managing Department

  • Digital Content Development
  • Universitas
  • Knowledge System & Learning Product
  • Computer Science

Category

  • Development

Language

English

Course Description

This course is designed to equip students with a basic-to-advanced understanding of the concepts and applications of deep learning. Students will learn the probabilistic foundations that underlie machine learning, including probability theory, standard distributions, and their parameters. The discussion then turns to single-layer networks for regression and classification, providing insight into how simple models connect to probability theory and loss functions.
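As a flavor of that connection (a minimal NumPy sketch, not the course's own material): for a single-layer regression model with Gaussian noise, maximizing the likelihood is equivalent to minimizing the sum-of-squares loss, so the closed-form least-squares solution is also the maximum-likelihood estimate.

```python
import numpy as np

# Synthetic 1-D regression data (assumed example values)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = 2.0 * x - 0.5 + rng.normal(0.0, 0.1, size=50)  # true slope 2.0, bias -0.5

# Single-layer model y(x, w) = w^T phi(x) with phi(x) = [1, x]
Phi = np.column_stack([np.ones_like(x), x])

# Normal equations: w = (Phi^T Phi)^{-1} Phi^T y
# Under Gaussian noise this least-squares solution is the MLE.
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# Sum-of-squares error at the solution (proportional to the negative log-likelihood)
sse = np.sum((Phi @ w - y) ** 2)
print(w, sse)
```

The recovered weights land close to the generating bias and slope, illustrating why the squared-error loss falls out of the Gaussian likelihood rather than being an arbitrary choice.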

Next, students explore deep neural networks, focusing on the multilayer perceptron (MLP) architecture, non-linear activation functions, and how network depth increases representational capacity. The course also addresses important issues such as the curse of dimensionality, regularization, and decision theory for making optimal predictions. In the final sessions, students are introduced to the concepts of representation learning, transfer learning, and the various error functions relevant to modern model development.
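The role of non-linear activations can be shown in a few lines (a hand-picked illustrative sketch, with assumed toy weights): two stacked linear layers without an activation collapse into a single linear map, while inserting a ReLU lets the same two layers compute a genuinely non-linear function such as the absolute value.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Toy weights chosen so the result is easy to verify by hand:
# hidden units compute relu(x) and relu(-x); the output sums them.
W1 = np.array([[1.0], [-1.0]])   # 2 hidden units, 1 input
W2 = np.array([[1.0, 1.0]])      # 1 output unit

def mlp(x):
    # relu(x) + relu(-x) = |x|, which no single linear layer can represent
    return (W2 @ relu(W1 @ np.atleast_2d(x)))[0, 0]

# Without the activation, the composition W2 @ W1 is just one linear map
collapsed = W2 @ W1              # here it is [[0.0]]

print(mlp(1.0), mlp(-1.0), mlp(0.0))  # 1.0 1.0 0.0 -- the absolute-value function
```

This is the sense in which non-linear activations (and, by extension, depth) increase representational capacity: without them, any stack of layers is equivalent to a single linear model.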

Through a combination of mathematical and probabilistic theory and practical implementation, this course provides comprehensive skills for understanding, designing, and evaluating artificial neural network architectures. By the end of the course, participants are expected to be able to explain the basic principles of deep learning, implement regression and classification models, and understand the benefits of deep networks for representation learning and knowledge transfer.

Course Topics

  1. Overview
  2. Deep Learning Revolution
  3. Probabilistic (Bayesian & Densities)
  4. Probabilistic (Characteristic & Bayesian Link)
  5. Standard Distributions (Discrete Variables)
  6. Standard Distributions (Continuous Variables)
  7. Single-Layer Networks: Regression 1
  8. Single-Layer Networks: Regression 2
  9. Single-Layer Networks: Classification 1
  10. Single-Layer Networks: Classification 2
  11. Deep Neural Networks (MLPs & Activations)
  12. Advanced Deep Networks (Depth, Representations & Error Functions)