For the next 2.5 weeks, we will be looking at how to build predictive models,
with a focus on text and image processing. It will be a nice self-contained
introduction, but also a good introduction to my *MATH389: Introduction to
Statistical Learning* course that is being offered this Spring.

Statistical learning, synonymous with machine learning, is the process of extracting knowledge from data automatically, usually with the goal of making predictions on new, unseen data.

A classical example is a spam filter, for which a user labels incoming mails as either spam or not spam. A machine learning algorithm then "learns" a predictive model from data that distinguishes spam from normal emails, a model which can predict for new emails whether they are spam or not.

Here are some explicit examples:

- using physical characteristics of animals to predict whether they are carnivores
- estimate how much a house is worth given properties such as number of bedrooms, square footage and its address
- predict whether a flight will be delayed given the carrier, scheduled departure time, arrival and departure airports
- a crime has been reported at a specific place and time in Chicago; what type of crime is it?
- here is a picture of a flower, what kind of flower is it?
- given two sentences of text, predict which President used it in a public speech
- how many page views will a specific Wikipedia page receive tomorrow?


In my experience, most learning algorithms fall into one of two broad categories:

- nearest neighbors (local): estimate values of new points by finding previously observed points close to the new ones
- linear models (global): estimate weights for each parameter; classify new points by summing up these weights

Within these classes, I typically find the need to use only some combination of the following four algorithms:

- k-nearest neighbors: a straightforward application of nearest neighbors
- gradient boosted trees: adaptively implement nearest neighbors by determining which directions "matter"
- elastic net: a linear model with controls on the sizes of the weights
- neural networks: iteratively apply collections of elastic nets to learn a hierarchy of increasingly complex weights

If some of these concepts seem hazy at the moment, that is perfectly natural. We'll go into much more detail throughout the next few weeks.
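To make the nearest-neighbors idea a little more concrete, here is a minimal 1-nearest-neighbor sketch in plain Python. The data points and labels here are made up purely for illustration:

```python
# A minimal 1-nearest-neighbor classifier: predict the label of the
# single closest previously observed point (toy data for illustration).

def nearest_neighbor(train_x, train_y, new_x):
    # index of the training point closest to new_x
    best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - new_x))
    return train_y[best]

train_x = [1.0, 2.0, 8.0, 9.0]   # a single numeric feature
train_y = [0, 0, 1, 1]           # class labels

print(nearest_neighbor(train_x, train_y, 1.5))  # → 0 (near the 0-labeled points)
print(nearest_neighbor(train_x, train_y, 8.5))  # → 1 (near the 1-labeled points)
```

The other three algorithms build on the same two ingredients, distances and weights, in more elaborate ways.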

Today I want you to become familiar with the basic mechanics of building predictive models. Here is my rough schedule for the next few classes:

- 2018-10-18 (today): mechanics of predictive models
- 2018-10-23: using matrices in predictive models; introduction to sklearn
- 2018-10-25: unsupervised learning and dimensionality reduction
- 2018-10-30: neural networks I; introduction to keras
- 2018-11-01: neural networks II; application to Wikipedia

That should take us to working on your final project for the semester.

Let's consider an example with two variables: the number of capital letters in a text message and a classification of whether the message is spam (1) or not (0).

In [1]:

```
caps = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 4, 5, 5, 6, 6, 8, 8]
spam = [0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]
```

And let's consider a model that simply classifies a message as being spam based on a threshold for how many capital letters it contains:

$$ f(\text{caps}) = \begin{cases} 0, & \text{caps} \leq \alpha \\ 1, & \text{caps} > \alpha \end{cases} $$

For some threshold $\alpha$. How might we select the best value of $\alpha$? One approach would be to test a bunch of values and determine what the best value is.

So, consider all of the sensible values:

In [2]:

```
alpha_vals = list(range(0, 9))
alpha_vals
```

Out[2]:

And then we can test for each value how many predictions are correct:

In [3]:

```
for alpha in alpha_vals:
    error = []
    for x, y in zip(caps, spam):
        error.append(int(int(x > alpha) == y))
    print("alpha={0:d} percent correct={1:f}".format(alpha, sum(error) / len(error)))
```

So, the best choice appears to be to predict that anything with more than 1 capital letter is "spam" and that anything else is not. With this threshold we are correct 72.2% of the time.

There is one way we are cheating in this simple example. We are using the same dataset to pick the value of $\alpha$ that we are using to validate that it is a good choice. This doesn't matter very much here, but becomes an issue when comparing models of different complexities. It will always seem — if we use the approach here — that a more complex model is better.

A common method to avoid this issue is to split the dataset into two groups:

- A *training* set, which is used to select the parameters of the model (e.g., the $\alpha$ above).
- A *testing* set, which is used to test how well a model performs on new data.

In some cases you may create these datasets yourself; in others, they may be given to you by an external source.
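As a minimal sketch of creating such a split ourselves on the spam data from above (the shuffle and the 70/30 proportion are arbitrary choices, not a rule):

```python
import random

random.seed(0)  # fix the shuffle so the split is reproducible

caps = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 4, 5, 5, 6, 6, 8, 8]
spam = [0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]

# shuffle the indices, then carve off roughly 70% for training
idx = list(range(len(caps)))
random.shuffle(idx)
cut = int(0.7 * len(idx))

train = [(caps[i], spam[i]) for i in idx[:cut]]   # used to pick alpha
test = [(caps[i], spam[i]) for i in idx[cut:]]    # used only to evaluate

print(len(train), len(test))  # → 12 6
```

The important point is that the test pairs play no role at all in choosing $\alpha$; they are only looked at once, at the end, to measure performance.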

In the spam example, the thing we were trying to predict was a categorical variable.
It has just two values, 'spam' and 'not spam', and we want to guess which of these
two states a new message belongs to. There are also other prediction tasks that involve
predicting a continuous variable. For example, predicting the price of a house. The
first type of problem is called *classification* and the second is called
*regression*. The biggest difference between these for us will be how we measure how
good a model is at predicting the response. For classification, just computing the
percentage of correct guesses is a good start. For regression, we typically use the
absolute error:

$$ | y - \widehat{y} | $$

Or squared error:

$$ | y - \widehat{y} |^2 $$

Different choices lead to different models, something that we discuss in depth in MATH389.
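To see why the choice matters, here is a small made-up example where the two measures rank two sets of predictions differently: one predictor is always off by a little, the other is usually exact but occasionally off by a lot.

```python
import numpy as np

y = np.array([10.0, 10.0, 10.0, 10.0])       # true values (made up)
yhat_a = np.array([11.0, 11.0, 11.0, 11.0])  # always off by 1
yhat_b = np.array([10.0, 10.0, 10.0, 13.0])  # exact, except one miss by 3

for name, yhat in [("a", yhat_a), ("b", yhat_b)]:
    mae = np.mean(np.abs(y - yhat))  # mean absolute error
    mse = np.mean((y - yhat) ** 2)   # mean squared error
    print("{0}: absolute={1:.2f} squared={2:.2f}".format(name, mae, mse))
```

Absolute error prefers predictor b (0.75 vs. 1.00), while squared error prefers predictor a (1.00 vs. 2.25), because squaring penalizes the single large miss much more heavily.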

One Python library that will be very useful for us in building predictive models
is **numpy**. I've already used it a bit in the Python files created for the class,
but we haven't worked with it directly yet. I'll explain more on Tuesday, but here is a
very minimal introduction.

Typically, numpy is imported under the abbreviation **np**:

In [4]:

```
import numpy as np
```

The key function for us here is called `np.array`. It converts a Python list into
a numpy array. At first it will not appear that much has happened:

In [5]:

```
caps = np.array(caps)
caps
```

Out[5]:

In [6]:

```
spam = np.array(spam)
spam
```

Out[6]:

But, the big difference is that we can now perform vector arithmetic on the data. That is,
we can operate on the object as a whole without needing to write a bunch of `for` loops:

In [7]:

```
caps + 1
```

Out[7]:

And:

In [8]:

```
spam / (caps + 1)
```

Out[8]:

There are also a bunch of numeric functions that operate on an entire vector,
such as `np.mean`, `np.log`, and `np.round`:

In [9]:

```
np.mean(spam)
```

Out[9]:

And, here is how we would rewrite the code that checks each alpha value:

In [10]:

```
for alpha in alpha_vals:
    error = np.mean((caps > alpha) == spam)
    print("alpha={0:d} percent correct={1:f}".format(alpha, error))
```

On the whole, this is quite a bit easier to write and read. This will be even more important, as we will see, when building more complex models.
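Just as a sketch of where this vectorized style leads (not something we need yet), we could also let numpy pick out the best threshold for us:

```python
import numpy as np

caps = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 4, 5, 5, 6, 6, 8, 8])
spam = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1])
alpha_vals = np.arange(0, 9)

# accuracy of the threshold rule for each candidate alpha
acc = np.array([np.mean((caps > alpha) == spam) for alpha in alpha_vals])

best = alpha_vals[np.argmax(acc)]  # threshold with the highest accuracy
print(best, acc.max())
```

This recovers the same answer as before: a threshold of 1 capital letter, correct 72.2% of the time.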