# Class 11: Numeric Summaries

## Learning Objectives

• Describe what the mean of a numeric variable measures.
• Describe what the quantiles of a numeric variable measure.
• Apply the definitions of the variance and standard deviation to a numeric variable.
• Plot group-wise summary statistics using a violin plot or boxplot.

## Motivation

Graphics are an excellent way of summarizing and presenting the information contained in a dataset. In many cases it can be useful to combine these with purely numeric summaries. These summaries are colloquially called statistics, though I prefer to avoid this terminology.

In these notes, I will use the msleep dataset in order to show various numerical summaries. Here is an example of the dataset:
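A minimal sketch of loading and previewing the data, assuming the ggplot2 package (which ships the msleep dataset) is installed:

```r
# load ggplot2, which provides the msleep dataset
library(ggplot2)

# show the first few rows of the dataset
head(msleep)
```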

Each row corresponds to a type of mammal, and gives basic numeric values that describe their sleeping cycles.

## Mean

The first statistical summary that most people learn about is the mean, also commonly known as an average. It is calculated by adding all of a variable's values together and dividing by the total number of values. If we have a dataset of n points with a variable x (denoting the first value by $x_1$, the second by $x_2$, and so forth), the mean can be formally defined as:

$$ \bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n} \sum_{i=1}^{n} x_i $$

The notation of using x with a line above it ($\bar{x}$) to represent the mean is very common throughout the sciences and social sciences; it is often used in textbooks and papers without even being defined. To calculate means in R, as we have already seen, we can use the mean function. Here is an illustration that mean behaves as expected, using the sum and nrow functions for comparison.
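The comparison might look like this (a sketch using the awake variable, which has no missing values in msleep):

```r
library(ggplot2)  # provides the msleep dataset

# the built-in mean function
mean(msleep$awake)

# the same value computed from the definition:
# the sum of all values divided by the number of values
sum(msleep$awake) / nrow(msleep)
```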

Note that the second line is simply a description of what the mean is doing, shown here to verify that it works. Do not copy and use the second form in your own work.

## Quantiles

There are also a number of functions that allow us to compute quantiles, a generalization of percentiles. For example, the deciles function splits the dataset into 10 equally sized buckets:
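Assuming the smodels package (which provides the deciles function) is installed, the call might look like:

```r
library(ggplot2)   # msleep dataset
library(smodels)   # deciles function (assumed installed)

# split the awake variable into 10 equally sized buckets
deciles(msleep$awake)
```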

This shows that about 1/2 of the mammals are awake less than 14.20 hours and about 1/2 are awake more than 14.20 hours. I use the word “about” here due to subtleties regarding ties and repeated values; for all practical purposes this is generally not important. Similarly, we see that roughly 1/10 of the mammals are awake less than 8.12 hours and 1/10 are awake more than 20.8 hours. We also see that the sleepiest mammal is awake for only 4.1 hours and that one mammal is awake 22.1 hours of the day.

Note that the 50th percentile has a special name that you have probably heard before: the median.

We can also calculate what are called quartiles, splitting the data into four equally sized groups using the quartiles function:
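A sketch of the quartiles call, again assuming the smodels package is installed:

```r
library(ggplot2)   # msleep dataset
library(smodels)   # quartiles function (assumed installed)

# split the awake variable into 4 equally sized buckets
quartiles(msleep$awake)
```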

Notice that four buckets require five numbers, and that three of these line up with the deciles above. There are also functions ventiles (splitting the data into 20 buckets) and percentiles (100 buckets) that can be quite useful:
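These follow the same pattern, assuming the smodels package is installed:

```r
library(ggplot2)   # msleep dataset
library(smodels)   # ventiles and percentiles functions (assumed installed)

# split the awake variable into 20 equally sized buckets
ventiles(msleep$awake)

# split the awake variable into 100 equally sized buckets
percentiles(msleep$awake)
```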

Ventiles are a bit esoteric, but I have found in my work that they can be very useful in practice. Percentiles are often useful when we want to look at the extreme values, such as the 97th, 98th, and 99th percentiles.

## Deviation

Once we have defined the mean, we can then define the deviation of a data value as the difference between the value and the mean:

$$ d_i = x_i - \bar{x} $$

There is not a special R function for deviations because they are very easy to calculate using the mean function. As an example, here is how to create them:
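A minimal sketch, using the awake variable:

```r
library(ggplot2)  # msleep dataset

# deviation: each value minus the mean of the variable
dev <- msleep$awake - mean(msleep$awake)

# show the first few deviations
head(dev)
```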

Typically we will not need the deviations directly, but they are used in the calculation of quantities measuring the variation about a mean.

## Variance

We can use deviations to measure the spread of a variable by adding up the squared deviations. Why squares? For one thing, squaring makes negative deviations positive; though, the same effect would come from applying the absolute value function. The specific reason for choosing the square is a bit too technical for our discussion.

The sum of squared deviations is calculated by the following formula:

$$ \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 $$

This can be computed in R as:
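A sketch of the computation for the awake variable:

```r
library(ggplot2)  # msleep dataset

# sum of squared deviations about the mean
sum((msleep$awake - mean(msleep$awake))^2)
```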

The sum of squares cannot be used directly to compare datasets of different sizes because it grows with the number of points. In order to compare across datasets, we use a measurement called the variance, which is basically just an average of the sum of squares:

$$ s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 $$

The notation of using s^2 to represent the variance of a dataset is quite common.

Why do we use (n-1) rather than (n) to take the average? The technical reason is that if we want to measure the variance of a population using a sample from that population, we need to divide by (n-1) in order to have an unbiased estimate of the population value. The short answer is not to worry about it, which I strongly suggest at this point.

The variance can be computed manually as follows:
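A sketch of the manual calculation for the awake variable:

```r
library(ggplot2)  # msleep dataset

# variance: sum of squared deviations divided by (n - 1)
sum((msleep$awake - mean(msleep$awake))^2) / (nrow(msleep) - 1)
```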

Or, using the var function as follows:
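The built-in function gives the same value:

```r
library(ggplot2)  # msleep dataset

# the var function computes the variance directly
var(msleep$awake)
```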

Note: as with the mean function, you should use only the var function in your own work. I show the other form simply as a demonstration of the definition.

## Standard deviation

We often work with a quantity called the standard deviation, defined simply as the square root of the variance:

$$ s = \sqrt{s^2} = \sqrt{ \frac{1}{n - 1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 } $$

Why bother taking the square root? For one thing, it is a matter of units. In our example, the variance is given in “squared hours” (a nearly meaningless quantity), but the standard deviation is given in “hours”, just like the variable itself. We can calculate the standard deviation for the awake variable using the function sd:
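A sketch of the calculation:

```r
library(ggplot2)  # msleep dataset

# standard deviation of the awake variable
sd(msleep$awake)

# equivalently, the square root of the variance
sqrt(var(msleep$awake))
```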

## Graphing Variation

Finally, we can use these measurements of distribution and variation in graphical form. Typically, this comes up when we have a categorical grouping variable and another numeric variable of interest.

A boxplot shows, for each group on the x-axis, the distribution of the variable on the y-axis. The solid bar indicates the y-axis variable’s median, and the height of the box and the “whiskers” indicate measurements of variation (see the geom_boxplot documentation for more information about the different ways these can be computed).
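A sketch of a boxplot of awake hours grouped by diet type, using ggplot2:

```r
library(ggplot2)  # msleep dataset and plotting functions

# boxplot of awake hours for each diet type
ggplot(msleep, aes(x = vore, y = awake)) +
  geom_boxplot()
```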

Similarly, a violin plot is a newer twist on the boxplot that attempts to show more detail about the distribution by varying the width of the plot with the density of the points:
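The same grouping, drawn as a violin plot:

```r
library(ggplot2)  # msleep dataset and plotting functions

# violin plot: the width of each shape follows the density of points
ggplot(msleep, aes(x = vore, y = awake)) +
  geom_violin()
```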

### Changing the Unit of Observation

Often, it is useful to change the unit of observation within a dataset. The most common way of doing this is to group the dataset by a combination of variables and aggregate the numeric variables by taking sums, means, or some other summary statistics. Some common examples include:

• aggregating individual shot attempts in soccer to summary statistics about each player
• aggregating census tract data to a county or state level

We have seen a few simple ways of doing this already within a plot (such as counting occurrences in a group with geom_bar). Now, we will see how to do this with the group_summarize command.

### Summarizing data

The group_summarize command comes from the smodels package. Applying it to a dataset with no additional options yields a new dataset with just a single row. Variables in the new dataset summarize the numeric variables in the raw data.
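A sketch of the ungrouped call, assuming the smodels package is installed:

```r
library(ggplot2)  # msleep dataset
library(smodels)  # group_summarize function (assumed installed)

# with no grouping variables, the whole dataset collapses to one row
group_summarize(msleep)
```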

Specifically, we see the following summaries for each numeric variable (the new names add a suffix to the original variable name):

• mean: the average of all the observations
• median: if we ordered all observations from smallest to largest, the middle value
• sd: the standard deviation, a measurement of how much the values vary across observations (more on this after the break)
• sum: the sum of all the observations

There is also a variable just called n at the end, giving the total number of observations in the entire dataset.

### Group Summarize

The magic of the group_summarize command comes from specifying other variables in the function call. If we specify a grouping variable, here I’ll use vore, the summarizing will be done within each category. So, here, the new dataset has one row summarizing each diet type:
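A sketch of the grouped call, assuming the smodels package is installed:

```r
library(ggplot2)  # msleep dataset
library(smodels)  # group_summarize function (assumed installed)

# one summary row for each diet type
group_summarize(msleep, vore)
```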

This dataset can then be used in further visualizations.

### Summarize By Multiple Variables

By supplying multiple variables to the group_summarize command, we can group by all of them simultaneously. Here we have a unique row for each observed combination of vore and order:
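A sketch of grouping by two variables, assuming the smodels package is installed:

```r
library(ggplot2)  # msleep dataset
library(smodels)  # group_summarize function (assumed installed)

# one summary row for each observed combination of vore and order
group_summarize(msleep, vore, order)
```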

As you can imagine, summarizing data can quickly allow for very complex graphics and analyses.