Meet the faculty and get an overview of the programme, including certificate requirements, the learning platform and support resources. Receive an introduction to Python and learn about the capstone competition
Become familiar with the fundamental components of, and approaches to, machine learning problems, and learn to classify problems along the major dividing lines of the ML landscape
Understand how to differentiate ML from statistics
Examine real-world applications of machine learning across a variety of industries
Learn how to calculate absolute, conditional and total probabilities and understand how to classify independent versus dependent events
Discover how to run simulations using the random number generation libraries in Python and NumPy
Explore the difference between discrete and continuous random variables and learn how to compute probabilities and values related to the binomial distribution
Understand how to conduct maximum likelihood estimations using existing data
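A minimal sketch of these probability objectives, assuming NumPy as the simulation library and purely illustrative numbers: simulate binomial draws, compare a simulated frequency with the exact binomial probability, and recover the maximum likelihood estimate of p from the simulated data.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Simulate 10,000 draws from Binomial(n=20, p=0.3)
n, p = 20, 0.3
samples = rng.binomial(n, p, size=10_000)

# Exact probability of exactly k successes, compared with the simulated frequency
k = 6
exact = comb(n, k) * p**k * (1 - p) ** (n - k)
print("P(X = 6) exact:", exact, "simulated:", np.mean(samples == k))

# Maximum likelihood estimate of p for a binomial sample: sample mean / n
p_hat = samples.mean() / n
print("MLE of p:", p_hat)
```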
Understand how to detect outliers and calculate regression or correlation coefficients for a data set
Examine the potential consequences of eliminating outliers from a data set
Explore the validity of statements about data sets and relationships among data
Learn how to compute confidence intervals
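A brief illustration of the statistics objectives above, assuming NumPy and a hypothetical synthetic data set: compute a Pearson correlation coefficient and a normal-approximation 95% confidence interval for a mean.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)   # hypothetical related variables

# Pearson correlation coefficient between the two variables
r = np.corrcoef(x, y)[0, 1]

# 95% confidence interval for the mean of y (normal approximation)
mean = y.mean()
sem = y.std(ddof=1) / np.sqrt(len(y))
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(f"r = {r:.3f}, 95% CI for mean(y) = ({ci[0]:.3f}, {ci[1]:.3f})")
```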
Recognise when it is feasible to draw meaningful conclusions from data
Learn how to calculate the probability of selecting the correct model from a set of sample data
Examine key components of generalisation bounds
Learn how to estimate the fit of a selected model on a new data set
Learn how to use performance measures to evaluate regression problems with a numerical output variable
Understand how to apply a confusion matrix to evaluate classification problems with a categorical output variable
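A small sketch of both performance-measure objectives, using only NumPy and made-up predictions: a 2x2 confusion matrix for a categorical output variable, and MSE/MAE for a numerical output variable.

```python
import numpy as np

# Hypothetical true and predicted labels for a binary classifier
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# 2x2 confusion matrix: rows = actual class, columns = predicted class
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()
print(cm, "accuracy:", accuracy)

# Performance measures for a regression problem with a numerical output
y_num = np.array([3.0, 2.5, 4.1, 5.0])
y_hat = np.array([2.8, 2.7, 3.9, 5.4])
mse = np.mean((y_num - y_hat) ** 2)
mae = np.mean(np.abs(y_num - y_hat))
print("MSE:", mse, "MAE:", mae)
```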
Examine challenges that can be addressed by machine learning competitions across a variety of industries
Explore effective applications of oversampling to a machine learning problem
Understand how to apply oversampling to classification problems
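One way to apply oversampling to an imbalanced classification problem, shown as a hedged NumPy sketch with hypothetical data (the course may use a dedicated resampling library instead): randomly resample minority-class rows with replacement until the classes are balanced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical imbalanced binary classification data: 90 negatives, 10 positives
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

# Random oversampling: resample minority-class rows with replacement
minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=80, replace=True)

X_balanced = np.vstack([X, X[extra]])
y_balanced = np.concatenate([y, y[extra]])
print(np.bincount(y_balanced))   # class counts are now roughly balanced
```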
Learn how to estimate the performance of a given predictor using the k-fold cross-validation algorithm
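A compact sketch of k-fold cross-validation with NumPy; the "predictor" here is a deliberately trivial stand-in (it predicts the training mean) so the focus stays on the fold logic.

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

# Hypothetical data and a trivial predictor
X = np.arange(20, dtype=float)
y = 2 * X + np.random.default_rng(1).normal(size=20)

scores = []
folds = k_fold_indices(len(X), k=5)
for i, test_idx in enumerate(folds):
    # Train on every fold except fold i, evaluate on fold i
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    prediction = y[train_idx].mean()              # stand-in for a fitted model
    scores.append(np.mean((y[test_idx] - prediction) ** 2))

print("cross-validated MSE estimate:", np.mean(scores))
```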
Learn how to calculate distance functions for k-nearest neighbour methods and understand how to apply normalisation methods to scale data sets
Learn how to use validation and test sets to predict and select the value of k for regression and classification problems
Explore real-life applications of k-nearest neighbour methods that take into consideration the advantages and shortcomings of such methods
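A hedged sketch of the k-nearest neighbour objectives above, assuming Euclidean distance, min-max normalisation and majority voting; the data and helper names are hypothetical.

```python
import numpy as np

def min_max_scale(X):
    """Scale each feature to [0, 1] so no single feature dominates the distance."""
    mins, maxs = X.min(axis=0), X.max(axis=0)
    return (X - mins) / (maxs - mins)

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by majority vote among its k nearest neighbours."""
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))   # Euclidean distance
    nearest = np.argsort(dists)[:k]
    votes = y_train[nearest]
    return np.bincount(votes).argmax()

# Hypothetical two-feature data set on very different scales
X = np.array([[1.0, 200.0], [2.0, 180.0], [8.0, 10.0], [9.0, 30.0]])
y = np.array([0, 0, 1, 1])
Xs = min_max_scale(X)
query = Xs[0]                      # query a point already in scaled space
print(knn_predict(Xs, y, query, k=3))
```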
Discover how a decision tree makes predictions and understand the difference between measuring purity in categorical models with entropy and with the Gini Index
Explore how a computer constructs a decision tree and examine strategies for pruning a classification tree
Recognise the differences between constructing regression and classification trees
Learn how to select the most appropriate tree depth for making a prediction
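A short NumPy sketch of the purity measures mentioned above: entropy and the Gini index for a categorical label array, plus the information gain of one hypothetical split.

```python
import numpy as np

def entropy(labels):
    """Entropy of a categorical label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gini(labels):
    """Gini index of a categorical label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

# Purity before a split versus the weighted purity after it
parent = np.array([0, 0, 0, 1, 1, 1, 1, 1])
left, right = parent[:3], parent[3:]          # one hypothetical split
gain = entropy(parent) \
       - (len(left) / len(parent)) * entropy(left) \
       - (len(right) / len(parent)) * entropy(right)
print("information gain:", gain, "Gini(parent):", gini(parent))
```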
Become familiar with the concepts of interpretability, fairness and non-discrimination
Explore the functionality of the k-nearest neighbour and decision tree methods
Understand key components of Bayes’ theorem and learn how to use Bayes' theorem to calculate conditional probabilities
Recognise when it is better to predict a probability instead of an actual value
Explore real-life applications of the Naïve Bayes classifier
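A worked example of Bayes' theorem with illustrative numbers (a diagnostic-test scenario), showing the total probability in the denominator and the resulting conditional probability.

```python
# Bayes' theorem: P(disease | positive test) from the prior, the sensitivity
# and the false-positive rate (hypothetical numbers for illustration only)
p_disease = 0.01            # prior P(D)
p_pos_given_disease = 0.95  # sensitivity P(+ | D)
p_pos_given_healthy = 0.05  # false-positive rate P(+ | not D)

# Total probability of a positive test
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior via Bayes' theorem
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")   # roughly 0.161
```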
Understand the use of surrogate models as well as their applications and pitfalls
Explore machine learning algorithm parameters and the most common surrogate methods used for tuning
Examine trade-offs between exploration and exploitation in Bayesian optimisation
Discover when continued parameter tuning is no longer worthwhile
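A toy Bayesian optimisation loop illustrating the objectives above, assuming scikit-learn's GaussianProcessRegressor as the surrogate model (one possible choice, not necessarily the course's) and an upper-confidence-bound acquisition to trade off exploration against exploitation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy 1-D objective we pretend is expensive to evaluate
def objective(x):
    return -(x - 2.0) ** 2 + np.sin(5 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 4, size=(3, 1))          # a few initial evaluations
y = objective(X).ravel()

grid = np.linspace(0, 4, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)   # surrogate model
    mean, std = gp.predict(grid, return_std=True)
    ucb = mean + 1.96 * std                  # high mean = exploit, high std = explore
    x_next = grid[np.argmax(ucb)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)].item(), "best value:", y.max())
```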
Compare logistic and linear regression to understand the categorical output and best use cases of logistic regression for binary classification
Using a real-life data set, apply the maximum likelihood method to fit a logistic regression
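A minimal NumPy sketch of fitting a logistic regression by maximum likelihood, using gradient ascent on the log-likelihood; synthetic data stands in here for the real-life data set used in the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary-classification data: intercept column plus one feature
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_w = np.array([-0.5, 2.0])
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Maximise the log-likelihood by gradient ascent
w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w)
    grad = X.T @ (y - p)          # gradient of the log-likelihood
    w += lr * grad / len(y)

print("estimated coefficients:", w)
```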
Select an approach for the final code base of your capstone project that aligns with your professional goals
Understand the concepts of linear separation through hyperplanes
Learn about hard-margin and soft-margin support vector machines
Gain knowledge of the kernel trick and discover how support vector machines can be applied to classification problems with more than two classes
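A brief illustration of a soft-margin SVM with the kernel trick, assuming scikit-learn's SVC (an assumption, not necessarily the course's toolkit) on a synthetic data set that is not linearly separable.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data: the class depends on distance from the origin,
# so no single hyperplane separates it in the original feature space
X = rng.normal(size=(300, 2))
y = (np.linalg.norm(X, axis=1) > 1.2).astype(int)

# Soft-margin SVM with an RBF kernel; C controls the margin/violation trade-off
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```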
Explore similarity measures between samples and clusters
Learn about hierarchical and k-means clustering
Discover common concerns that arise with cluster analysis
Examine real-life applications of clustering
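A hedged sketch of k-means clustering implemented directly in NumPy (Lloyd's algorithm) on two hypothetical blobs; hierarchical clustering would follow a different, agglomerative procedure.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two hypothetical clusters of points
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)
```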
Delve deeper into calculating the direction and location of principal components
Learn how to determine the number of principal components to use
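A short NumPy sketch of principal component analysis: eigendecomposition of the covariance matrix gives the directions of the principal components, and the explained-variance ratios guide how many components to keep (the 95% threshold below is illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # correlated features

# Centre the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                     # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]         # largest first

# Choose the number of components from the explained-variance ratios
explained = eigvals / eigvals.sum()
n_components = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
print("explained variance ratios:", np.round(explained, 3))
print("components needed for 95% of the variance:", n_components)

# Project the data onto the leading principal components
Z = Xc @ eigvecs[:, :n_components]
```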
Become familiar with the history of deep learning and discover the five building blocks of deep learning
Understand the function approximation technique
Learn how the backpropagation algorithm is used to train a neural network using the chain rule
Understand the backward pass component of backpropagation
Discover how to optimise parameters using stochastic gradient descent
Understand how to fine-tune machine learning models using a hyperparameter search
Discover how to use regularisation techniques to make adjustments that will improve model performance
Receive an introduction to using PyTorch
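A minimal PyTorch training loop tying several of the objectives above together: a small network as a function approximator, backpropagation via loss.backward(), stochastic gradient descent, and weight decay as a simple regulariser; the data and hyperparameters are illustrative.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical regression data: y = 3x - 1 plus noise
X = torch.randn(256, 1)
y = 3 * X - 1 + 0.1 * torch.randn(256, 1)

# A small fully connected network as a function approximator
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()

# SGD with weight decay as an L2 regulariser; lr is a tunable hyperparameter
optimiser = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-4)

for epoch in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()        # backpropagation: gradients via the chain rule
    optimiser.step()       # stochastic gradient descent update

print("final loss:", loss.item())
```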
Explore examples of neural networks gone wrong
Learn to interpret the output of a neural network
Understand how to design machine learning models with interpretability in mind
Receive an introduction to convolutional neural networks
See the role convolutions play in computer vision
Examine the building blocks of the LeNet-5 architecture
Learn how to build convolutional neural networks using PyTorch
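A hedged PyTorch sketch of a LeNet-5-style convolutional network for 32x32 single-channel images; the layer sizes follow the classic architecture, but details may differ from the course implementation.

```python
import torch
from torch import nn

class LeNet5(nn.Module):
    """A LeNet-5-style network for 32x32 single-channel images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
dummy = torch.randn(4, 1, 32, 32)          # batch of 4 single-channel images
print(model(dummy).shape)                  # torch.Size([4, 10])
```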
Understand the differences between model-based and model-free approaches
Learn how to combine reinforcement and supervised learning
Discover how to build systems to create reinforcement learning experiences
Discover how to construct appropriate surrogate models
Understand how to balance exploration and exploitation
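A toy, model-free reinforcement learning sketch in NumPy: an epsilon-greedy multi-armed bandit that balances exploration (trying random arms) against exploitation (choosing the best estimate so far); the pay-off probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple multi-armed bandit: each arm pays out with an unknown probability
true_probs = np.array([0.2, 0.5, 0.7])
estimates = np.zeros(3)
counts = np.zeros(3)
epsilon = 0.1                       # fraction of steps spent exploring

for t in range(5000):
    if rng.uniform() < epsilon:
        arm = int(rng.integers(3))              # explore: pick a random arm
    else:
        arm = int(np.argmax(estimates))         # exploit: pick the best arm so far
    reward = float(rng.uniform() < true_probs[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean

print("estimated pay-offs:", np.round(estimates, 2))
```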