Class 18: Lilacs and Tigerlilies and Buttercups! Oh My!
Today we will look at another image dataset. The data consists
of photos of flowers. There are 17 types of flowers and the task
is to recognize the flower from the image. We will look at just
10 of the classes in the notes today; your lab will look at working
with the entire set of 17. If you are curious, the original paper
of the dataset can be found here:
It was constructed by the Visual Geometry Group (VGG) at Oxford
University. Keep this in mind as that name will come up again in
our study of image processing.
Once again, the data will be read into R in two parts. The first
containing the photo metadata and the second an array consisting
of the image pixel values.
These images are 64 pixels by 64 pixels, four times larger than
the thumbnails we used previously. Class “1” consists of the
snowdrop flower. Let’s look at a few of the images to get a
sense of what this flower looks like:
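A display sketch with rasterImage, assuming the pixel values sit in an (n, 64, 64, 3) array scaled to [0, 1]; a small random array stands in for the real data here.

```r
# Draw a few images side by side; x is a stand-in for the real array.
x <- array(runif(4 * 64 * 64 * 3), dim = c(4, 64, 64, 3))
par(mfrow = c(1, 4), mar = rep(0.2, 4))
for (i in 1:4) {
  plot(0, 0, type = "n", axes = FALSE, xlab = "", ylab = "",
       xlim = c(0, 1), ylim = c(0, 1))
  rasterImage(x[i, , , ], 0, 0, 1, 1)   # 64 x 64 x 3 slice as a raster
}
```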
We can look at these and instantly see the similarities. The difficulty
is going to be teaching the computer to understand these as well. Let’s
now look at a representative from all 10 classes of flowers:
Notice that color will be useful for telling some of these apart, but
not sufficient for distinguishing all classes. Crocuses and irises,
for example, look very similar.
Collapse into data matrix
To start, we will use the same trick we tried last time of flattening
the array into a matrix and applying the elastic net to the data.
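The flattening step can be sketched as follows; a small random array stands in for the real data. Because R stores arrays in column-major order, `matrix()` keeps each image in its own row.

```r
# Collapse an (n, 64, 64, 3) array into an n-row matrix with
# 64 * 64 * 3 = 12288 columns, one per pixel/channel combination.
n <- 4
x <- array(runif(n * 64 * 64 * 3), dim = c(n, 64, 64, 3))
X <- matrix(x, nrow = n)   # rows index the images
dim(X)
```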
The resulting model, even at lambda.1se, has over 300
non-zero components. This is a dense model that has included
many of the pixel values. Evaluating the model we see that it
heavily overfits to the training data:
There are ten classes here, so a 36% classification rate is
not terrible. I think we can probably do better though!
One difficulty with using the red, green, and blue pixel values
is that these do not map very well into a “natural” meaning of
color. They are useful for digital screens to display images but
not ideal for much else.
Instead, we can use a different color space model that translates
red, green, and blue into a different set of variables. One popular
choice in statistical learning is the hue, saturation, value (HSV) space.
These three values range from 0 to 1. A good picture helps a lot to
understand what the values mean:
Value tells how dark a pixel is, saturation how much color it
has (with a low value being close to grey), and hue gives the specific
point on a color wheel. Usually a hue of 0 indicates red. Notice that
hue is a circular variable, so that a hue of 0.99 is close to a
hue of 0.01.
We can convert into HSV space with the base R function rgb2hsv:
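For example, rgb2hsv takes red, green, and blue values (on a 0–255 scale by default) and returns a three-row matrix of hue, saturation, and value, each on the [0, 1] scale:

```r
# An orange pixel: hue 0.1, full saturation, full value.
rgb2hsv(255, 153, 0)
```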
To make sure we understand exactly what these values mean,
let’s plot some values in R. The hsv function maps a set
of HSV coordinates into the name of a color. Here we look
at a set of greys with varying values (hue is set to 1 and
saturation to zero), as well as a set of 10 hues where
saturation and value are both set to 1.
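A sketch of those swatches using the base R hsv function:

```r
# Five greys with varying value, then ten fully saturated hues.
greys <- hsv(h = 1, s = 0, v = 0:4 / 4)
hues  <- hsv(h = 0:9 / 10, s = 1, v = 1)
par(mar = rep(0.5, 4))
plot(1:15, rep(1, 15), pch = 19, cex = 4, col = c(greys, hues),
     axes = FALSE, xlab = "", ylab = "")
```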
A good trick with HSV space is to discretize the pixels into
a small set of fixed colors. We will start by using the buckets
defined in the previous plot.
We will do that by creating a vector called color, set to “#000000”
(pure black), and then changing the color depending on the HSV
coordinates. If the saturation is less than 0.2, the pixel is too
washed out to have a recognizable color; we set it to a shade of
grey depending on its value, split into five buckets. If the
saturation is higher than 0.2 and the value is higher than 0.2
(i.e., it is not too dark), we bucket the hue into ten buckets.
Pixels with a low value are all kept at the default of black.
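This bucketing scheme can be sketched on three toy pixels (an orange, a dark grey, and a saturated blue):

```r
# HSV coordinates for three toy pixels.
hsv_mat <- rgb2hsv(c(255, 40, 10), c(153, 40, 10), c(0, 40, 250))
h <- hsv_mat["h", ]; s <- hsv_mat["s", ]; v <- hsv_mat["v", ]

color <- rep("#000000", length(h))       # default: pure black
index <- which(s < 0.2)                  # washed out: use a grey
color[index] <- hsv(0, 0, round(v[index] * 4) / 4)
index <- which(s >= 0.2 & v >= 0.2)      # colorful and bright enough
color[index] <- hsv(round(h[index] * 10) / 10, 1, 1)
color
```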
For the one test image, we see that the dominant color is “#FF9900”,
an orange, followed by “#0066FF”, a blue.
We can use these counts as features to tell us about a given flower.
Let’s cycle over the entire dataset and grab these features.
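The loop can be sketched as below; a tiny simulated array stands in for the real data, and the helper `discretize` (my name, not from the notes) applies the bucketing described above before counting each color.

```r
set.seed(1)
n <- 3
x <- array(runif(n * 64 * 64 * 3), dim = c(n, 64, 64, 3))

discretize <- function(img) {            # img: 64 x 64 x 3 in [0, 1]
  m <- rgb2hsv(t(apply(img, 3, c)), maxColorValue = 1)
  h <- m["h", ]; s <- m["s", ]; v <- m["v", ]
  color <- rep("#000000", length(h))
  color[s < 0.2] <- hsv(0, 0, round(v[s < 0.2] * 4) / 4)
  keep <- s >= 0.2 & v >= 0.2
  color[keep] <- hsv(round(h[keep] * 10) / 10, 1, 1)
  color
}

# Count how often each fixed color appears in each image.
color_vals <- c(hsv(0, 0, 0:4 / 4), hsv(0:9 / 10, 1, 1))
X_hsv <- t(sapply(seq_len(n), function(i) {
  as.numeric(table(factor(discretize(x[i, , , ]), levels = color_vals)))
}))
dim(X_hsv)
```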
The 8th column is the orange color and the 9th a greenish color,
both popular from the flowers and the background greenery.
We can use this new matrix to fit another elastic net. The matrix
is small enough that we could use other techniques too, but I’ll
keep it consistent here.
We see some expected patterns here. Snowdrops have a large
white coefficient (“#FFFFFF”) and bluebells have a large value
for blue (“#3300FF”) and purple (“#CC00FF”). Sunflowers have a
large coefficient for orange (“#FF9900”).
This model is slightly more predictive, but importantly is not
nearly as overfit.
The reason for this is that the first elastic net likely
approximated the kind of analysis we did here, but in doing so
overfit to the way hue, saturation, and value looked in the
training data.
We can improve our model by including more colors. We don’t need
any more greys, but let’s include a set of 100 hues. This will give
us more information about the particular colors for each flower.
We will use the elastic net again here. With the increased
set of colors, let’s set alpha to 0.2 to spread the weights
out over similar colors.
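A fitting sketch, assuming the glmnet package and a stand-in feature matrix; a small alpha pushes the penalty toward ridge, so weight spreads across correlated columns such as neighboring hue buckets.

```r
library(glmnet)

set.seed(1)
X <- matrix(runif(120 * 50), ncol = 50)   # stand-in color counts
y <- factor(sample(letters[1:10], 120, replace = TRUE))

# Multinomial elastic net with a ridge-leaning penalty.
model <- cv.glmnet(X, y, family = "multinomial", alpha = 0.2, nfolds = 4)
```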
The model is more predictive with the more granular set of colors.
We can create an interesting visualization of these values by
showing the weights as a function of the actual colors for each
class. Tigerlilies are very red, whereas bluebells, crocuses, and
irises have a blue/purple color.
At the very least, I think it visually looks very neat even if
it is not particularly helpful from a predictive standpoint.
If we want to improve our model further, we need to include information
beyond just the color of the flower. When we look at the images, our
brains also use information about shape and texture. Let’s try to find
a way to measure this in the image.
I will start by taking a sample flower image and creating a black and
white version of it. A simple way to do this is to average the red,
green, and blue pixels.
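The averaging step can be sketched as:

```r
# Greyscale: average the three channels of a 64 x 64 x 3 image.
img <- array(runif(64 * 64 * 3), dim = c(64, 64, 3))
bw <- (img[, , 1] + img[, , 2] + img[, , 3]) / 3
dim(bw)
```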
To detect texture we can take the brightness of each pixel and
subtract it from the brightness of the pixel to its lower right.
We can do this in a vectorized fashion as such:
The resulting image roughly detects edges in the image. Notice
that it has only 63-by-63 pixels because we cannot compute this
measurement on the rightmost or bottommost edges of the image.
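The shift-and-subtract computation can be sketched as follows, with a stand-in greyscale matrix:

```r
# Difference each pixel with the one to its lower right by dropping
# the last row/column of one copy and the first row/column of the other.
bw <- matrix(runif(64 * 64), 64, 64)   # stand-in greyscale image
edge <- abs(bw[-64, -64] - bw[-1, -1])
dim(edge)
```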
We’ll do this for each image, and save the number of pixels that
have an edge value greater than 0.1. You could of course play around
with this cutoff, or save a number of different cutoff values. This
number will tell us roughly how much of the image consists of edges.
A low number indicates a smooth petal and a high one indicates
a grassy texture to the flower.
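The feature itself is just the share of edge values above the cutoff:

```r
# Proportion of pixels whose edge value exceeds 0.1.
edge <- matrix(runif(63 * 63), 63, 63)   # stand-in edge map
mean_edge <- mean(edge > 0.1)
```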
A boxplot shows that there are differences between the flowers
in this measurement. Crocuses in particular have a lot of edges.
Most of the photos have a flower in the middle, but the background
may include grass, sky, or other unrelated elements. Let’s repeat
the edge detector, but now compute the degree of edge-ness only
for the middle of the image.
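A middle-crop sketch, taking an assumed central 32-by-32 block of the 63-by-63 edge map:

```r
bw <- matrix(runif(64 * 64), 64, 64)        # stand-in greyscale image
edge <- abs(bw[-64, -64] - bw[-1, -1])
mid <- edge[17:48, 17:48]                   # central block (assumed crop)
mean_edge_mid <- mean(mid > 0.1)
```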
This shows a clear differentiation of the flowers. Fritillaries
have a lot of edges due to the spots in the middle of
the photo. Notice that the patterns here are quite different
from those in the whole image.
We will create a data matrix by putting together the color information
with the mean_edge and mean_edge_mid metrics.
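The combination is a simple cbind; everything is simulated here so the snippet stands alone:

```r
# Bind the color counts with the two edge metrics per image.
n <- 5
X_hsv <- matrix(runif(n * 105), n, 105)   # stand-in color features
mean_edge <- runif(n); mean_edge_mid <- runif(n)
X <- cbind(X_hsv, mean_edge, mean_edge_mid)
dim(X)
```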
I’ve included the cross-validation curve because it is a
perfect textbook example of what the curve should look like
(but rarely does so nicely). The resulting model performs much
better than the color alone.
A confusion matrix shows us that only a few flowers are still
difficult to differentiate.
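A confusion matrix is just a cross-tabulation of predicted against actual classes; the labels below are simulated, and with a glmnet model `pred` would come from predict(model, X, type = "class").

```r
set.seed(1)
actual <- sample(c("snowdrop", "daffodil", "bluebell"), 30, replace = TRUE)
pred <- actual
pred[1:4] <- "daffodil"   # inject a few mistakes for illustration
table(pred, actual)       # rows: predicted, columns: actual
```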
We won’t have time here, but the next step would be to figure out
what features would help distinguish the “snowdrop”, “daffodil”,
and “bluebell” flowers from the others, as false positives and
negatives from these groups are causing a large portion of the
remaining errors.