
Quick Facts

Medium of Instructions: English
Mode of Learning: Self Study
Mode of Delivery: Video and Text Based

Course Overview

The Visual Perception for Self-Driving Cars certification course introduces candidates to the key perception tasks in autonomous driving, including static and dynamic object detection, and surveys popular computer vision methods for robotic perception. By the end of the course, candidates will be able to work with the pinhole camera model; detect, describe, and match image features; perform intrinsic and extrinsic camera calibration; and build their own convolutional neural networks. They will then apply these methods to object detection and tracking, visual odometry, and semantic segmentation for drivable surface estimation.
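For readers unfamiliar with the pinhole camera model mentioned above, the short Python sketch below illustrates the basic idea of projecting a 3D point onto the image plane. It is not taken from the course material; the intrinsic matrix values and the 3D point are assumptions chosen purely for the example.

```python
import numpy as np

# Illustrative sketch only (not course material): project a 3D point expressed
# in the camera frame onto the image plane using the pinhole camera model.
# The intrinsics below (focal lengths, principal point) are assumed values.
K = np.array([[800.0,   0.0, 640.0],   # fx, skew, cx
              [  0.0, 800.0, 360.0],   # fy,       cy
              [  0.0,   0.0,   1.0]])

point_cam = np.array([2.0, -0.5, 10.0])   # (X, Y, Z) in metres, camera frame

uvw = K @ point_cam                        # homogeneous image coordinates
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]    # perspective divide by depth Z

print(f"Pixel coordinates: ({u:.1f}, {v:.1f})")
```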

For the final project of the Visual Perception for Self-Driving Cars training course, candidates will build algorithms that delineate the limits of the drivable surface and produce bounding boxes for objects in the scene. Students learn to work with both synthetic and real image data on a realistic dataset.

The Visual Perception for Self-Driving Cars online course is the third of four courses in a self-driving car specialisation programme. The specialisation gives a detailed understanding of state-of-the-art engineering approaches used in the self-driving automotive industry.

The Highlights

  • Advanced-level course
  • Third course of the self-driving car specialisation
  • Learn on your own schedule
  • Flexible deadlines
  • Approximately 31 hours to complete
  • Certification by Coursera 
  • Offered by University of Toronto

Programme Offerings

  • Online
  • Quizzes
  • Assignments
  • Practice Exercises

Courses and Certificate Fees

Certificate Availability: Yes
Certificate Providing Authority: University of Toronto, Toronto; Coursera

The Visual Perception for Self-Driving Cars certification fee is charged on a monthly basis. The fee breakup is given below:

Visual Perception for Self-Driving Cars Fee Structure

Description                      Amount
1 month, 20+ hours per week      Rs. 6,634
3 months, 11 hours a week        Rs. 13,268
6 months, 5 hours a week         Rs. 19,903


Eligibility Criteria

Education 

Candidates should have basic knowledge of deep learning, computer vision, linear algebra, and Python 3.0

Certification Qualifying Details

The certificate is issued by Coursera only after successful completion of the Visual Perception for Self-Driving Cars course. To get full access to the programme and receive a certificate of completion, candidates must subscribe to it by paying the required fee.

What you will learn

Robotic skills

After completing the Visual Perception for Self-Driving Cars certification syllabus, candidates will be able to do the following:

  • Candidates will work with pinhole camera models and perform intrinsic as well as extrinsic camera calibration. They will detect, describe, and match image features, build their own convolutional neural networks, and apply these methods to object detection and tracking, visual odometry, and semantic segmentation for drivable surface estimation (a minimal feature-matching sketch follows this list).
  • Using the open-source simulator CARLA, candidates work with realistic data sets from an autonomous vehicle (AV) through hands-on projects.
  • Candidates hear from industry experts working at companies such as Oxbotica and Zoox as they share insights into autonomous technology and how it fuels job growth in the field.
  • Candidates benefit from a highly realistic driving environment that features 3D pedestrian modelling and environmental conditions.
  • They will be able to create their own self-driving software stack and apply for jobs in the autonomous vehicle industry.
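As a flavour of the feature detection and matching skills listed above, here is a minimal illustrative Python/OpenCV sketch. It is not part of the course material; the image file names are hypothetical, and ORB with brute-force matching is just one common way to implement this step in a visual odometry pipeline.

```python
import cv2

# Illustrative sketch only (not course material): detect and match ORB features
# between two consecutive frames, the kind of step used in visual odometry.
# "frame1.png" and "frame2.png" are hypothetical file names.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)           # feature detector + descriptor
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints and binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance; cross-check rejects ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

if matches:
    print(f"{len(matches)} matches; best Hamming distance = {matches[0].distance}")
```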

Who it is for


Admission Details

Candidates applying for the Visual Perception for Self-Driving Cars classes should follow these steps:

Step 1: Visit the website.

https://www.coursera.org/learn/visual-perception-self-driving-cars

Step 2: Click on “Enrol For Free”.

Step 3: Candidates will be asked to sign up or log in. Sign up or log in with a Google, Facebook, or Apple account.

Step 4: Candidates can audit the course for free or subscribe to the whole specialisation to receive a certificate. Select the option accordingly.

Step 5: Candidates get a 7-day free trial, and payment is due only after the 7 days. The free trial can be cancelled at any time with no penalty.

The Syllabus

Videos
  • Welcome to the Self-Driving Cars Specialization!
  • Welcome to the course
  • Meet the Instructor, Steven Waslander
  • Meet the Instructor, Jonathan Kelly
Readings
  • Course Prerequisites
  • How to Use Discussion Forums
  • How to Use Supplementary Readings in This Course
  • Recommended Textbooks
Discussion Prompt
  • Get to Know Your Classmates

Videos
  • Lesson 1 Part 1: The Camera Sensor
  • Lesson 1 Part 2: Camera Projective Geometry
  • Lesson 2: Camera Calibration
  • Lesson 3 Part 1: Visual Depth Perception - Stereopsis
  • Lesson 3 Part 2: Visual Depth Perception - Computing the Disparity
  • Lesson 4: Image Filtering
Readings
  • Supplementary Reading: The Camera Sensor
  • Supplementary Reading: Camera Calibration
  • Supplementary Reading: Visual Depth Perception
  • Supplementary Reading: Image Filtering
Assignment
  • Module 1 Graded Quiz

Programming Assignment
  • (Submission) Applying Stereo Depth to a Driving Scenario
Ungraded Labs
  • Practice Assignment: Applying Stereo Depth to a Driving Scenario
  • (Solution) Applying Stereo Depth to a Driving Scenario

Videos
  • Lesson 1: Introduction to Image features and Feature Detectors
  • Lesson 2: Feature Descriptors
  • Lesson 3 Part 1: Feature Matching
  • Lesson 3 Part 2: Feature Matching: Handling Ambiguity in Matching
  • Lesson 4: Outlier Rejection
  • Lesson 5: Visual Odometry

Readings
  • Supplementary Reading: Feature Detectors and Descriptors
  • Supplementary Reading: Feature Matching
  • Supplementary Reading: Feature Matching
  • Supplementary Reading: Outlier Rejection
  • Supplementary Reading: Visual Odometry
Programming Assignment
  • Visual Odometry for Localization in Autonomous Driving
Ungraded Lab
  • Visual Odometry for Localization in Autonomous Driving

Videos
  • Lesson 1: Feed Forward Neural Networks
  • Lesson 2: Output Layers and Loss Functions
  • Lesson 3: Neural Network Training with Gradient Descent
  • Lesson 4: Data Splits and Neural Network Performance Evaluation
  • Lesson 5: Neural Network Regularization
  • Lesson 6: Convolutional Neural Networks
Readings
  • Supplementary Reading: Feed-Forward Neural Networks
  • Supplementary Reading: Output Layers and Loss Functions
  • Supplementary Reading: Neural Network Training with Gradient Descent
  • Supplementary Reading: Data Splits and Neural Network Performance Evaluation
  • Supplementary Reading: Neural Network Regularization
  • Supplementary Reading: Convolutional Neural Networks
Assignment
  • Feed-Forward Neural Networks

Videos
  • Lesson 1: The Object Detection Problem
  • Lesson 2: 2D Object detection with Convolutional Neural Networks
  • Lesson 3: Training vs. Inference
  • Lesson 4: Using 2D Object Detectors for Self-Driving Cars
Readings
  • Supplementary Reading: The Object Detection Problem
  • Supplementary Reading: 2D Object detection with Convolutional Neural Networks
  • Supplementary Reading: Training vs. Inference
  • Supplementary Reading: Using 2D Object Detectors for Self-Driving Cars
Assignment
  • Object Detection For Self-Driving Cars

Videos
  • Lesson 1: The Semantic Segmentation Problem
  • Lesson 2: ConvNets for Semantic Segmentation
  • Lesson 3: Semantic Segmentation for Road Scene Understanding
Readings
  • Supplementary Reading: The Semantic Segmentation Problem
  • Supplementary Reading: ConvNets for Semantic Segmentation
  • Supplementary Reading: Semantic Segmentation for Road Scene Understanding
Assignment
  • Semantic Segmentation For Self-Driving Cars

Videos
  • Project Overview: Using CARLA for object detection and segmentation
  • Final Project Hints
  • Final Project Solution
  • Congratulations for completing the course!
Programming Assignment
  • Environment Perception For Self-Driving Cars
Discussion Prompt
  • Your Learning Journey
Ungraded Lab
  • Environment Perception For Self-Driving Cars

Instructors

Steven Waslander and Jonathan Kelly, University of Toronto, Toronto

Frequently Asked Questions (FAQs)

1: Is university credit provided to candidates?

This course doesn’t come with university credit. Certification is done by Coursera itself.

2: What are the benefits of the subscription plan?

Candidates opting for a subscription plan have unlimited access to the course. They get access to all content, including assignments, and receive a certificate after completion.

3: Can I learn this course for free?

A free audit option is available. Candidates will have access to the course content, but graded items such as assignments are not included, and no certificate is issued on completion.

4: When will I get access to the content of the course?

Access is enabled after enrolment and varies according to the mode of enrolment. Only subscribed candidates gain access to all the content.

5: Will I get a scholarship for studying the Visual Perception for Self-Driving Cars online course?

Coursera provides financial aid. Candidates who cannot afford the fee can apply for financial aid; the review process takes at least 15 days.
