AI-1.0Z: Introduction to Fundamentals of Machine Learning



Course Details



What was the AI-1.0Z ML course in 2019?

AI-1.0Z was first offered in June 2019. It was intended to be a 28-week, mathematically rigorous course. The 2019 course covered several topics in Machine Learning with graduate-level mathematical depth.

What is the AI-1.0Z ML course at present?

The first 6 weeks of content from the 2019 AI-1.0Z Machine Learning course have been expanded to 7 weeks, and this material is offered from 2020 onward as a machine learning course that is non-exhaustive in the number of topics it covers.

This 7-week course serves as a prerequisite (covering machine learning mathematics) for some of the Category-II courses offered by DeepEigen.

Registrants can take this course to be trained deeply in machine learning mathematics and to develop the ability to read further books in machine learning on their own. Once registrants have worked through the mathematical rigour of this short 7-week course, we are of the opinion that they will be ready to read and understand machine learning books to further expand their knowledge.

What about some topics that are mentioned in the introductory lectures?

The lectures were recorded in 2019 with the intent of providing the full course. Since the course has been shortened, some topics from the 2019 course are mentioned in the lectures (in terms of what will be covered in the course), but they are no longer part of the course being offered.

What if I take longer to understand the content?

This is a self-paced course. Typically, it should not take you more than 3 months to complete. However, you have 6 months from the date of registration to complete the course. Beyond that period, please check the details in the Access section below.

What are the prerequisites?

This course assumes that you have basic background knowledge of

  • Linear Algebra
  • Probability
  • (Multivariate-) Calculus

Furthermore, registrants are expected to know at least one computer programming language. The assignments will be in Python, and thus we expect registrants to learn Python programming concepts on their own.

We also expect that registrants can learn libraries like numpy on their own and can implement mathematical programs in Python.
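For instance, the following minimal sketch (using made-up toy data, purely illustrative and not part of the assignments) shows the kind of numpy-based mathematical programming we have in mind: fitting a small least-squares model, a topic covered early in the course.

    import numpy as np

    # Toy data: targets generated from a known weight vector plus Gaussian noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=50)

    # Least-squares fit; np.linalg.lstsq is a numerically stable alternative to
    # forming the normal equation (X^T X) w = X^T y explicitly.
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w_hat)  # should be close to w_true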

Syllabus:

Week 1

1. Introduction to Machine Learning

2. Linear Regression: Motivation for Supervised Learning and Linear Regression

3. Linear Regression: Feature Functions and Non-Linear Mapping

4. Linear Regression: Loss Function: Least Squares Loss

5. Linear Regression: Normal Equation Solution of Least Squares

6. Linear Regression: Gradient Descent Algorithm for Least Squares

Week 2

1. Linear Regression: Probabilistic Interpretation of Least Squares

2. Linear Regression: Maximum Likelihood Estimate and Gaussian Prior on Noise

3. Logistic Regression: Motivation for Classification

4. Logistic Regression: MLE Estimate

5. Logistic Regression: Gradient Ascent Algorithm for Solving MLE Estimate

Week 3

1. Background: Probabilistic Interpretation and Gaussian and Laplacian Distributions

2. Background: MLE vs Prior Distribution on Learnable Parameters

3. Background: Motivation and What is Regularization

4. Background: Expectation of Target Values with Gaussian Noise in the Data-Set

Week 4

1. Bayes Theorem, Basics of Probability, and Marginalization

2. Prior Distributions on Parameters and Joint Optimization

3. Posterior Distribution of the Learnable Parameters and Dependence on Data-Set

4. Interpretation of the Posterior Distribution

5. Regularization: Bayesian Linear Regression and Maximum A Posteriori Probability Estimate

6. Regularization: Gaussian and Laplacian Priors for L2-Norm and L1-Norm Regularization

7. Regularization: Feature Function Selection and Link to Regularization

8. Regularization: L2-Norm Regularization Solution via Normal Equation and Gradient Descent

Week 5

1. Convex Optimization: Convex Functions, Convex Sets, and Extended Value Extension

2. Convex Optimization: First Order Condition of Convexity and Proof

3. Convex Optimization: Second Order Condition of Convexity

4. Convex Optimization: First Order Condition of Optimality

5. Convex Optimization: Operations that Preserve Convexity of Functions and Sets

Week 6

1. Machine Learning and Convex Interpretation of the Loss Functions

2. Subgradients and Sub-Differential Set

3. First Order Condition of Optimality for Non-Differentiable Convex Functions

4. Regularization: L1-Norm Regularization and Subgradient Descent Algorithm

Week 7

1. Regularization: Iterative Soft Thresholding Algorithm (ISTA) for L1 Regularization

2. Derivation of ISTA: Majorization-Minimization

3. Derivation of ISTA: Least Squares Solution Using Majorization-Minimization

4. Derivation of ISTA: Combining the Least Squares and Regularization Losses to Get ISTA

5. ISTA Pseudo Code and Summary
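For a flavour of where the course ends up, here is a minimal illustrative sketch of ISTA in numpy, assuming the standard least-squares-plus-L1-norm objective; it is a sketch for orientation only, not the course's own pseudo code.

    import numpy as np

    def soft_threshold(v, tau):
        # Element-wise soft thresholding: the proximal operator of the L1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, n_iters=500):
        # Minimize (1/2)||A x - b||^2 + lam * ||x||_1 via iterative soft thresholding.
        t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L, L = Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)         # gradient of the least-squares term
            x = soft_threshold(x - t * grad, t * lam)
        return x

Each iteration takes a gradient step on the least-squares term and then applies soft thresholding, which is what drives many of the learned weights exactly to zero under L1 regularization.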

Access:

6 months to finish from the date of registration.

After 6 months, registrants will have to pay ₹1000 for every 2-month extension, to cover charges related to the server, maintenance, and assignment evaluation.

No refund is offered for this course, as the lectures are available immediately upon registration.

Assignment:


Course Information:

  • Instructor’s Name: Sanjeev Sharma
  • Free videos: Click Here
  • Course Type: Self-Paced
  • Fee (India): ₹999
  • Fee (Foreign): ₹1999
  • First Offered: 2019
  • Current Status: Starts Upon Registration
  • Expected Course Engagement: 10 Hrs/Week