Introduction to Machine Learning

Rated 5 out of 5: Love it! I enjoyed this course and I learned a lot. Exhaustive with a lot of great examples.
Date published: 2021-02-09
Rated 5 out of 5: Excellent for New and Experienced Users. I found this course to be an excellent introduction to machine learning. The organization of the course provided a progression into the subject matter that I felt made it easy to follow, given that it is a complex subject. Most of all, I found the Python Colab examples and assignments the most valuable in terms of understanding the material. The Jupyter notebooks were well documented and broke the problem down into digestible chunks that followed the lectures. The lectures with the Colab examples gave me the skills and tools to start engineering my own machine learning applications. I highly recommend this course for anyone interested in machine learning.
Date published: 2021-01-23
Rated 1 out of 5: Disappointing. This was an awkwardly produced introduction to a fascinating topic. The lecturer clearly knew and loved his stuff, but he presented his material so unhelpfully that my wife and I dropped out in lecture 4. His good-humored nature was appreciated but just couldn't compensate for the way the subject was presented. An abstract subject often benefits from a concrete example, which he provided initially by explaining the issues involved in green-screening and determining which pixels are foreground vs. background. But too often he'd introduce an element without adequate explanation. And though I am familiar with Python, the course, at least in the early lectures, benefited little from the Python examples, which added enough extraneous detail that they detracted from the primary content. At some point, sure, working code is essential to mastering the topic yourself and it's helpful along the way for learning, but until a basic foundation has been laid, the excursions into coding were premature. The production of the series was lacking as well. Far too often the lecturer was staring at the wrong camera, which gets to you after a while, but more importantly suggests that if such a basic production element was fumbled, maybe the content itself is lacking as well. I have no reason to believe this is so and wish the series had been of sufficient quality to remain worth my time, but sadly it was not.
Date published: 2021-01-10
Rated 5 out of 5: Great overview course. This class is a broad overview of a wide variety of machine learning techniques. The presentations are accessible to those with an understanding of algebra; the instructor works around the need for calculus. However, what makes this course stand out are the full example programs using Google Colab. The examples are not toys but full-fledged machine learning applications with comments. The instructor explains how to access the examples, how the examples work in detail, and how to edit and save your own versions of the examples. Nothing helps me understand a topic better than some hands-on examples, and this course excels in this area.
Date published: 2020-12-28
Rated 5 out of 5: Great course. This course gave me greater insight into machine learning. Having tinkered a bit with the subject but lacking the theoretical understanding, I was searching for more knowledge. This course was what I wanted and expected from a university course about the subject. Great theoretical overview and nicely explained. An enjoyment to listen to.
Date published: 2020-12-22
Rated 5 out of 5: Very Conceptual. I’ve watched a few lectures so far and have really enjoyed it. I’m a college student who avidly looks at YouTube videos and other resources for coding, such as DataCamp, and I would still recommend this course. Professor Littman shares professional experience and insights that would be hard to find elsewhere.
Date published: 2020-12-22
Rated 5 out of 5: Excellent introductory technical course on ML. This is a great course, but please be aware that it's NOT some abstract overview of the concept and future of machine learning. It's a technical course in current machine learning techniques. It IS an introductory course in machine learning, but it assumes some basic programming understanding (control flow logic: for loops, if/else statements, etc.) and maybe a bit of math. Keep in mind that programming, calculus, linear algebra, and probability (as in the field of mathematics) are necessary aspects of practical machine learning, but you don't need them here. He's not going to throw math problems at you. That right there makes this a lot more accessible. (He will throw dad jokes at you, however.) The coding that happens in this course is fairly straightforward, and the source code is included. I think a lot of the people grousing about this course wanted some non-technical overview or something, but that's not the point of it. It is geared towards people with a little bit of technical knowledge who are interested in this sub-field. I am new to ML but have experience in programming and data science, so for me it was a great combination: low-key technical but still fundamentals. I took it alongside other, more technical online courses in ML and it was extremely helpful. If you are interested in actually working with this technology, or even adjacent to it, I highly recommend this course.
Date published: 2020-12-22
Rated 5 out of 5: Excellent Introduction to ML, despite neg reviews. I think there is a great misunderstanding of what is required and expected of a student before they can effectively jump into an introductory course for machine learning. As I mentioned to another reviewer, what would you expect from a course called "Introduction to Calculus" or "Introduction to Complex Analysis"? Think you'd be alright jumping in with an elementary knowledge of algebra? Probably not. You need a solid understanding of algebra, geometry, and trigonometry. Machine learning is NOT an "anybody can do it, whatever your educational background" sort of topic, despite the number of YouTube videos that proclaim just that. ML is an extremely complex and math-heavy field. Sure, you can probably (and I do mean probably) learn to build and train a model without linear algebra, calculus, and logical analysis, but it would be akin to learning a few "getting by" phrases in Japanese without learning the language. Sure, you'll be able to just get by, but you won't be able to converse or build meaningful relationships. ML is the same way. Those who claim you can learn ML without the math are basically saying, "You can learn some basic phrases to get by so you can perform a few very limited tricks." If you are one of those reviewers, let me just say this: you have been misled by YouTubers and others whose basic mission was to get your clicks on their videos and articles. They are interested in getting your clicks and views, not in YOU learning ML. If they were truly interested in getting you to understand AI/ML research and development, they would say, "Listen, this is an extremely complex topic. It's going to take a LOT of personal work to build a foundation of knowledge to be able to even start learning AI/ML. BUT it is worth it! Even if you need two or three years to prep, it will be worth it!
Just know that it's going to be a bit of a journey; after all, ML defines entire careers. It's not a simple field you can just jump into and learn. If you're willing to put the time and energy into this, here are my recommended resources to get you started on the fundamentals of what you'll need to know." ... but they didn't say that, did they ... If you gave this class a poor rating, I would suggest that you give those YouTubers and tech-article authors the negative rating, and reconsider your rating for this course. After all, this course, "Introduction to Machine Learning", is EXACTLY what it's supposed to be, not what those click-bait clowns and view-driven toobers are telling you it should be.
Date published: 2020-12-21
Introduction to Machine Learning
Course Trailer
1: Telling the Computer What We Want

Professor Littman gives a bird’s-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.

31 min
2: Starting with Python Notebooks and Colab

The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run the programming examples from your web browser, which saves installation headaches on your own computer and gives you more processing power than a typical home computer provides.

17 min
3: Decision Trees for Logical Rules

Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I—as in “diet” and “weigh”? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood that an individual will develop diabetes based on eight health factors.

31 min
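For a concrete sense of what the decision-tree lectures build toward, here is a minimal sketch in plain Python: it tries every feature-and-threshold question and keeps the one that best separates the labels. The two "health factors" (glucose, BMI) and all the values below are invented for illustration; they are not the course's eight-factor diabetes dataset.

```python
def majority(labels):
    # Most common label in a list.
    return max(set(labels), key=labels.count)

def best_split(rows, labels):
    # Try every (feature, threshold) question and keep the one whose
    # two-way split leaves the fewest misclassified examples when each
    # side predicts its majority label.
    best, best_err = None, len(rows) + 1
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            err = (sum(l != majority(left) for l in left)
                   + sum(l != majority(right) for l in right))
            if err < best_err:
                best_err, best = err, (f, t)
    return best

# Toy rows: (glucose, BMI) pairs with a made-up diabetes label.
rows = [(85, 22), (90, 30), (160, 28), (180, 35), (95, 24), (150, 33)]
labels = [0, 0, 1, 1, 0, 1]
feature, threshold = best_split(rows, labels)
```

A full tree repeats this split recursively on each side; the single question above is just the root node.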
4: Neural Networks for Perceptual Rules

Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.

30 min
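A neural network is built from many simple units. As a sketch of the "nudge the weights" style of learning the lecture describes, the code below trains a single unit (a perceptron) to compute logical AND; the learning rate and epoch count are arbitrary choices for this toy.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    # One artificial neuron: weighted sum + threshold. After each wrong
    # answer, nudge the weights toward the correct one.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Real networks stack many such units in layers, but the update rule keeps the same flavor: move the weights a little in the direction that reduces the error.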
5: Opening the Black Box of a Neural Network

Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.

29 min
6: Bayesian Models for Probability Prediction

A program need not understand the content of an email to know with high probability that it’s spam. Discover how machine learning does so with the Naïve Bayes approach, which is a simplified application of Bayes’ theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backwards from effects (in this case, words) to their causes (spam).

29 min
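The effects-to-causes idea can be sketched in a few lines: estimate how often each word appears in spam versus ordinary mail, then score a new message by adding per-word log-probabilities. The tiny word lists are invented stand-ins for real training mail, and the smoothing here is deliberately crude.

```python
import math

# Toy "training mail": invented three-message corpora.
spam_mail = ["win money now", "free money offer", "win free prize"]
ham_mail = ["meeting agenda attached", "lunch at noon", "project status update"]

def word_prob(mails):
    # Return a function giving a word's (crudely smoothed) frequency.
    words = " ".join(mails).split()
    total = len(words)
    return lambda w: (words.count(w) + 1) / (total + 1)

p_spam, p_ham = word_prob(spam_mail), word_prob(ham_mail)

def is_spam(message):
    # Naive Bayes with equal priors: compare summed log-probabilities,
    # treating each word as independent given the class.
    score_spam = sum(math.log(p_spam(w)) for w in message.split())
    score_ham = sum(math.log(p_ham(w)) for w in message.split())
    return score_spam > score_ham
```

The "naive" part is the independence assumption: each word's probability is multiplied in without regard to the words around it.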
7: Genetic Algorithms for Evolved Rules

When you encounter a new type of problem and don’t yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.

28 min
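As a sketch of the evolutionary loop, the toy genetic algorithm below evolves bit strings toward a fixed target: fitness is the number of matching bits, the fitter half survives each generation, and survivors produce mutated children. The target, population size, mutation rate, and generation count are all arbitrary choices for illustration.

```python
import random

random.seed(0)  # make the toy run repeatable
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Number of bits that agree with the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=60):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: the fitter half lives
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children        # next generation
    return max(pop, key=fitness)

best = evolve()
```

Real genetic algorithms usually add crossover (recombining two parents), but selection plus mutation alone already shows the optimize-by-evolution idea.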
8: Nearest Neighbors for Using Similarity

Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.

29 min
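The nearest-neighbor rule fits in a few lines: label a new reading by copying the label of the closest labeled example. The comfort labels and (temperature, humidity) readings below are made up for illustration.

```python
def nearest_label(point, examples):
    # examples: list of ((temperature, humidity), label) pairs.
    # Squared Euclidean distance suffices for picking a minimum.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    closest = min(examples, key=lambda ex: dist2(point, ex[0]))
    return closest[1]

# Invented readings: (temperature in C, relative humidity in %).
readings = [((21, 45), "comfortable"), ((30, 80), "muggy"),
            ((18, 40), "comfortable"), ((32, 70), "muggy")]
```

In practice the features are scaled first so that one unit of humidity does not swamp one degree of temperature, and the k nearest neighbors vote rather than a single one.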
9: The Fundamental Pitfall of Overfitting

Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.

28 min
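Overfitting is easy to demonstrate in a few lines: the "model" below simply memorizes its training pairs. Because the labels in this synthetic dataset are pure noise, it scores perfectly on data it has seen and roughly at chance on fresh data; the data is invented, not the lecture's diabetes set.

```python
import random

random.seed(1)

# Synthetic data: random inputs with random (pure-noise) 0/1 labels,
# so there is no real signal to learn.
train = [(random.random(), random.randint(0, 1)) for _ in range(50)]
test = [(random.random(), random.randint(0, 1)) for _ in range(50)]

memorized = dict(train)  # the "model": a lookup table of training pairs

def predict(x):
    # Return the memorized label if seen; otherwise guess 0.
    return memorized.get(x, 0)

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
```

The gap between the two accuracies is the signature of overfitting: performance on training data says little about performance on data the model has never seen.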
10: Pitfalls in Applying Machine Learning

Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.

28 min
11: Clustering and Semi-Supervised Learning

See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.

27 min
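The clustering step can be sketched with a one-dimensional k-means: assign each point to its nearest center, move each center to the mean of its group, and repeat. Attaching a name to each cluster afterwards from a handful of labeled examples is the semi-supervised recipe the lecture describes; the numbers below are invented.

```python
def kmeans(points, centers, steps=10):
    # Lloyd's algorithm in one dimension.
    for _ in range(steps):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            groups[i].append(p)
        # Update step: move each center to its group's mean.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Two obvious clumps of unlabeled measurements.
unlabeled = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7, 1.1, 9.2]
centers = kmeans(unlabeled, centers=[0.0, 10.0])
```

With the two cluster centers in hand, a single labeled example near each one is enough to name every point in its cluster.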
12: Recommendations with Three Types of Learning

Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman’s dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix’s celebrated one-million-dollar prize for an improved recommender algorithm.

30 min
13: Games with Reinforcement Learning

In 1959, computer pioneer Arthur Samuel popularized the term “machine learning” for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today’s sophisticated algorithms for improving play—at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.

30 min
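Game players in this tradition learn from reward. The sketch below runs tabular Q-learning on a five-state corridor with a reward at one end: vastly simpler than checkers or Othello, but the same style of learn-from-reward update. All the constants are arbitrary choices for this toy.

```python
import random

random.seed(0)
N, ALPHA, GAMMA = 5, 0.5, 0.9  # corridor length, step size, discount

# Q[(state, action)] estimates the long-run reward of that move.
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def step(s, a):
    # Move left or right, clamped to the corridor; reward only at the end.
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0)

for _ in range(500):
    s = random.randrange(N - 1)  # explore from a random non-goal state
    a = random.choice((-1, 1))
    s2, r = step(s, a)
    target = r + GAMMA * max(Q[(s2, b)] for b in (-1, 1))
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])  # move Q toward the target

# Greedy policy: at each state, take the action with the higher Q value.
policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N)}
```

After enough random exploration, the learned policy walks right toward the reward from every interior state, even though no one ever told it which direction was "correct."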
14: Deep Learning for Computer Vision

Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.

27 min
15: Getting a Deep Learner Back on Track

Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It’s an amazing feeling when you finally get the program working well.

30 min
16: Text Categorization with Words as Vectors

Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you badly misspell it. Focus on the concept of word embeddings, which “define” the meanings of words using vectors in high-dimensional space—a method that involves techniques from linear algebra.

30 min
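The words-as-vectors idea can be illustrated with cosine similarity: related words should point in similar directions. The 3-dimensional "embeddings" below are invented for illustration; real embeddings have hundreds of dimensions learned from large text corpora.

```python
import math

# Invented toy vectors; real embeddings are learned, not hand-written.
vectors = {
    "king": [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine of the angle between two vectors: 1 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Under any reasonable embedding, cosine("king", "queen") comes out much larger than cosine("king", "apple"), which is exactly what lets a spell-guesser or search engine treat related words as interchangeable.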
17: Deep Networks That Output Language

Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!

29 min
18: Making Stylistic Images with Deep Networks

One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.

29 min
19: Making Photorealistic Images with GANs

A new approach to image generation and discrimination pits both processes against each other in a “generative adversarial network,” or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.

30 min
20: Deep Learning for Speech Recognition

Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the machine-learning algorithms behind today’s sophisticated speech recognition systems. Get a taste of the technology by training deep-learning software to recognize simple words. Finally, look ahead to the prospect of conversing computers.

30 min
21: Inverse Reinforcement Learning from People

Not much of a programmer? Machine learning can learn from a demonstration, predict what you want, and suggest improvements. Inverse reinforcement learning turns the tables on the logical rule “if you are a horse and like carrots, go to the carrot,” reading it instead as “if you see a horse go to the carrot, it might be because the horse likes carrots.”

29 min
22: Causal Inference Comes to Machine Learning

Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods—the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.

30 min
23: The Unexpected Power of Over-Parameterization

Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses’ original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.

30 min
24: Protecting Privacy within Machine Learning

Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.

31 min
25: Mastering the Machine Learning Process

Finish the course with a lightning tour of meta-learning—algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.

34 min
Michael L. Littman

Join me to understand the mind-bending and truly powerful ways that machine learning is shaping our world and our future.

ALMA MATER

Brown University

INSTITUTION

Brown University

About Michael L. Littman

Michael L. Littman is the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. He earned his bachelor’s and master’s degrees in Computer Science from Yale University and his PhD in Computer Science from Brown University.

 

Professor Littman’s teaching has received numerous awards, including the Robert B. Cox Award from Duke University, the Warren I. Susman Award for Excellence in Teaching from Rutgers University, and both the Philip J. Bray Award for Excellence in Teaching in the Physical Sciences and the Distinguished Research Achievement Award from Brown University. His research papers have been honored for their lasting impact, earning him the Association for the Advancement of Artificial Intelligence (AAAI) Classic Paper Award at the Twelfth National Conference on Artificial Intelligence and the International Foundation for Autonomous Agents and Multiagent Systems Influential Paper Award at the Eleventh International Conference on Machine Learning.

 

Professor Littman is the codirector of the Humanity Centered Robotics Initiative at Brown University. He served as program cochair for the 26th International Conference on Machine Learning, the 27th AAAI Conference on Artificial Intelligence, and the 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making. He is a fellow of the AAAI, the Association for Computing Machinery, and the Leshner Leadership Institute for Public Engagement with Science.

 

Professor Littman gave two TEDx talks on artificial intelligence, and he appeared in the documentary We Need to Talk about A.I. He also hosts a popular YouTube channel with computer science research videos and educational music videos.
