Machine learning is an essential part of being a Data Scientist. In simplest terms, machine learning uses algorithms to discover patterns and make predictions. It’s one of the more popular methods used to process large amounts of raw data and will only increase in popularity as more companies try to make data-driven decisions.
Machine learning encompasses a vast set of ideas, tools, and techniques that Data Scientists and other professionals use. We’ve explained these concepts more broadly, but this time, let’s take a look at some of the specific components, and how they can be used to solve problems.
Supervised Machine Learning
The most straightforward tasks fall under the umbrella of supervised learning.
In supervised learning, we have access to examples of correct input-output pairs that we can show to the machine during the training phase. The common example of handwriting recognition is typically approached as a supervised learning task. We show the computer a number of images of handwritten digits along with the correct labels for those digits, and the computer learns the patterns that relate images to their labels.
Learning how to perform tasks in this way, by explicit example, is relatively easy to understand and straightforward to implement, but it has a crucial limitation: we can only do it if we have access to a dataset of correct input-output pairs. In the handwriting example, this means that at some point we need a human to classify the images in the training set. This is laborious work and often infeasible, but where the data does exist, supervised learning algorithms can be extremely effective at a broad range of tasks.
Regression and Classification
Supervised machine learning tasks can be broadly classified into two subgroups: regression and classification. Regression is the problem of estimating or predicting a continuous quantity. What will be the value of the S&P 500 one month from today? How tall will a child be as an adult? How many of our customers will leave for a competitor this year? These are examples of questions that would fall under the umbrella of regression. To solve these problems in a supervised machine learning framework, we would gather past examples of “right answer” input/output pairs that deal with the same problem. For the inputs, we would identify features that we believe would be predictive of the outcomes that we wish to predict.
For the first problem, we might gather as features the historical prices of stocks in the S&P 500 on given dates, along with the value of the index one month later. This would form our training set, from which the machine would try to determine some functional relationship between the features and eventual S&P 500 values.
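As a minimal sketch of this setup, here is a linear regression fit by ordinary least squares. The features and outputs are synthetic stand-ins (real market data would replace them), but the shape of the problem is the same: learn a function from feature vectors to a continuous target, then predict on new inputs.

```python
import numpy as np

# Hypothetical training set: each row of X holds features for one date
# (made-up numbers standing in for real market data), and y holds the
# "right answer" output for that date, e.g. the index value a month later.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 past dates, 3 features
true_w = np.array([2.0, -1.0, 0.5])                # hidden relationship
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy observed outputs

# Fit a linear model by ordinary least squares: find w minimizing ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Use the learned relationship to predict the outcome for a new, unseen input
x_new = np.array([1.0, 0.0, -1.0])
prediction = x_new @ w
```

Because the synthetic data really was generated by a linear rule, the fitted weights recover it closely; with real financial data, of course, no such clean relationship is guaranteed.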
Classification deals with assigning observations into discrete categories, rather than estimating continuous quantities. In the simplest case there are two possible categories, a setting known as binary classification. Many important questions can be framed in terms of binary classification. Will a given customer leave us for a competitor? Does a given patient have cancer? Does a given image contain a hot dog? Algorithms for binary classification are particularly important because many algorithms for the more general case, where there are arbitrary labels, are built from binary classifiers working together. For instance, a simple solution to the handwriting recognition problem is to train one binary classifier per digit: a 0-detector, a 1-detector, a 2-detector, and so on, each outputting its certainty that the image shows its respective digit. The combined classifier then outputs the digit whose detector reports the highest certainty.
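This one-detector-per-digit idea can be sketched directly with scikit-learn's built-in handwritten-digit dataset. The explicit loop below mirrors the description above (scikit-learn can also do one-vs-rest automatically, so this is for illustration, not the recommended production approach):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Handwritten digit images (8x8 pixels) with their correct labels
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train ten binary detectors: "is this a 0?", "is this a 1?", and so on
detectors = {}
for d in range(10):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train == d)        # binary labels: digit d or not
    detectors[d] = clf

# Each detector reports its certainty that an image shows its digit;
# the combined classifier picks the digit with the highest certainty.
certainties = np.column_stack(
    [detectors[d].predict_proba(X_test)[:, 1] for d in range(10)]
)
predictions = certainties.argmax(axis=1)
accuracy = (predictions == y_test).mean()
```

Even this simple scheme classifies the held-out digits with high accuracy, which is why binary classifiers make such useful building blocks.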
Unsupervised Machine Learning
On the other hand, there is an entirely different class of tasks referred to as unsupervised learning. Supervised learning tasks find patterns where we have a dataset of “right answers” to learn from. Unsupervised learning tasks find patterns where we don’t. This may be because the “right answers” are unobservable, or infeasible to obtain, or maybe for a given problem, there isn’t even a “right answer” per se.
Clustering and Generative Modeling
A large subclass of unsupervised tasks is the problem of clustering. Clustering refers to grouping observations together in such a way that members of a common group are similar to each other, and different from members of other groups. A common application here is in marketing, where we wish to identify segments of customers or prospects with similar preferences or buying habits. A major challenge in clustering is that it is often difficult or impossible to know how many clusters should exist, or how the clusters should look.
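A minimal clustering sketch, using k-means on two invented customer features (annual spend and visit frequency are assumptions for illustration, not real data), might look like this; note that no labels are ever provided to the algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customers described by two made-up features:
# annual spend (dollars) and visits per month.
rng = np.random.default_rng(1)
low_spend = rng.normal(loc=[20, 2], scale=1.0, size=(50, 2))
high_spend = rng.normal(loc=[80, 12], scale=1.0, size=(50, 2))
customers = np.vstack([low_spend, high_spend])

# Group the customers into k=2 clusters based only on similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
labels = kmeans.labels_
```

Here the two segments are well separated, so k-means recovers them cleanly; with real customer data, choosing the number of clusters k is exactly the hard judgment call described above.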
A very interesting class of unsupervised tasks is generative modeling. Generative models are models that imitate the process that generates the training data. A good generative model would be able to generate new data that resembles the training data in some sense. This type of learning is unsupervised because the process that generates the data is not directly observable – only the data itself is observable.
Recent developments in this field have led to startling and occasionally horrifying advances in image generation. One well-known demonstration trained a kind of unsupervised learning model called a deep convolutional generative adversarial network (DCGAN) to generate images of faces, then asked it for images of a smiling man.
Reinforcement Learning, Hybrids, and More
A newer type of learning problem that has gained a great deal of traction recently is called reinforcement learning. In reinforcement learning, we do not provide the machine with examples of correct input-output pairs, but we do provide a method for the machine to quantify its performance in the form of a reward signal. Reinforcement learning methods resemble how humans and animals learn: the machine tries a bunch of different things and is rewarded when it does something well.
Reinforcement learning is useful in cases where the solution space is enormous or infinite, and typically applies in cases where the machine can be thought of as an agent interacting with its environment. One of the first big success stories for this type of model was by a small team that trained a reinforcement learning model to play Atari video games using only the pixel output from the game as input. The model was eventually able to outperform human players at three of the games, and the company that created the model was acquired by Google for over $500M shortly thereafter.
To apply supervised learning to the problem of playing Atari video games, we would need a dataset containing millions or billions of example games played by real humans for the machine to learn from. By contrast, reinforcement learning works by giving the machine a reward according to how well it performs its task. Simple video games are well suited to this approach, since the score works well as a reward. The machine proceeds to learn by simulation which patterns of play maximize its reward.
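The reward-driven loop can be illustrated with a toy far simpler than Atari: tabular Q-learning on a five-state corridor. This is a generic sketch of the reinforcement-learning idea, not the Atari team's method; the agent is never shown a correct move, only the reward it collects.

```python
import random

# Toy environment: states 0..4 in a row. The agent starts at state 0 and
# receives a reward of +1 only upon reaching state 4. Actions move it
# right (+1) or left (-1); walls clamp it inside the corridor.
N_STATES, ACTIONS = 5, (1, -1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # Mostly act greedily, but explore a random action 10% of the time
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward
        # (immediate reward + discounted best future value)
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy: which action looks best from each state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

After training, the greedy policy moves right from every state, because the machine discovered through trial, error, and reward which behavior pays off, exactly the dynamic that, at vastly greater scale, let a model learn Atari games from raw pixels.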
Often, hybrid approaches lead to good results. For instance, an important task in some areas is the task of anomaly detection. An anomaly detection algorithm monitors some signal and indicates when something weird happens. A good example is fraud detection. We want an algorithm that monitors a stream of credit card transactions and flags weird ones. But what does weird mean? This problem is suited to a sort of hybrid supervised/unsupervised approach. There are certainly some known patterns that we would like the algorithm to be able to detect, and we can train a supervised learning model by showing it examples of the known fraud patterns. But we also want to be able to detect previously unknown examples of potential fraud or otherwise abnormal activity, which might be accomplished by methods of unsupervised learning.
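The unsupervised half of such a hybrid might be sketched with an isolation forest, one common method for flagging observations that look unlike the bulk of the data without any fraud labels. The transaction amounts and times below are invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up transaction stream: two features per transaction,
# dollar amount and hour of day.
rng = np.random.default_rng(2)
normal = np.column_stack([
    rng.normal(50, 10, size=200),   # typical amounts around $50
    rng.normal(14, 3, size=200),    # typical purchase hour around 2pm
])
weird = np.array([[5000.0, 3.0]])   # a huge purchase at 3am
transactions = np.vstack([normal, weird])

# An isolation forest scores points by how easily they can be separated
# from the rest; predict() returns -1 for outliers and +1 for inliers.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit(transactions).predict(transactions)
```

The 3am $5,000 transaction is flagged as anomalous without anyone ever defining "weird" in advance; a supervised model trained on labeled fraud cases would then handle the known patterns.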
Machine Learning Basics Can Have a Big Impact
Many of the most advanced tools require a great deal of sophisticated knowledge in advanced mathematics, statistics, and software engineering. For a beginner wanting to get started, this might seem overwhelming, especially if you want to work with some of the exciting new models.
The good news is that you can do a lot with the basics, which are widely accessible. A variety of supervised and unsupervised learning models are implemented in R and Python, which are freely available and straightforward to set up on your own computer, and even simple models like linear or logistic regression can be used to perform interesting and important machine learning tasks.