This is a story of a moth and a butterfly, two insects often found living side by side in the wild. Much to my surprise, a moth is often mistaken for a butterfly. This happened to me when I was young, and I always wondered what sets a moth apart from a butterfly. They are related, but they do have some differences (features, not opinions!!).

Visually, most of the moths I have seen share a few features:
- a more feathery appearance compared to butterflies
- mostly nocturnal habits
- more powdery residue when you touch them (don't do that)
These still hold true, with a few exceptions which you can surely find on the internet. Also, the powdery residue is due to the millions of tiny scales covering their wings. Anyway, I was just going through the fast.ai v3 course, and the first lesson describes a simple image recognition architecture which can distinguish between different breeds of cats and dogs in Google Colaboratory. Can this architecture be applied to other datasets as well? Why not!

At this point I was picking random datasets in my mind when I decided to go with the Moth vs Butterfly task – simple, easy and fun. I scraped images of butterflies and moths from the internet and saved them in target-specific folders, i.e. all butterfly images go into a folder named butterfly. This layout makes it easy to use the ImageDataBunch data structure in the fast.ai v1 framework to access the butterfly and moth folders and perform other operations on them. The next task is to create a Convolutional Neural Network for classifying the object in the image as either moth or butterfly (a 2-class problem). Here are a few sample moth and butterfly images.
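For the curious, here is a minimal sketch of how such a folder layout can be loaded with fastai v1. The data path, image size and batch size are placeholders of my own choosing, not values from the original experiment:

```python
from fastai.vision import *

# Hypothetical folder layout (the path 'data' is a placeholder):
#   data/train/butterfly, data/train/moth
#   data/valid/butterfly, data/valid/moth
path = Path('data')

# Build an ImageDataBunch from the class-named folders; the labels
# are inferred from the folder names. The transforms, image size and
# batch size below are typical defaults, assumed for this sketch.
data = ImageDataBunch.from_folder(
    path, train='train', valid='valid',
    ds_tfms=get_transforms(), size=224, bs=16
).normalize(imagenet_stats)

print(data.classes)  # -> ['butterfly', 'moth']
```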

There are two folders in the data: one for training the architecture (train) and the other for validating (valid) its performance on unseen data. The CNN used here is the famous ResNet architecture, and all the major deep learning frameworks provide ResNet APIs with different numbers of layers. The CNN used for moth vs butterfly classification contains 34 parametric layers, which are learned through back-propagation with a gradient-descent-based optimizer on the classification loss function employed for this task. Although the train and valid sets are small (30 and 10 images respectively), they still help to gauge the architecture's performance in terms of prediction effectiveness and class probability. Luckily, fast.ai already has an API which shows the actual label, predicted label, loss and class probability for each validation image. It can also generate a heat-map of the region in the image where the neural network believes the predicted class features are most likely to be.
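Continuing from the `data` bunch above, this is roughly how a ResNet-34 learner can be trained and inspected in fastai v1. The number of epochs is an assumption for the sketch, not a value from my run:

```python
# Create a learner with a ResNet-34 backbone pre-trained on ImageNet;
# the classification head is learned for our 2-class problem.
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)  # 4 epochs is an arbitrary choice for this sketch

# Inspect the worst validation predictions: each plot is titled with
# actual label / predicted label / loss / class probability, and
# heatmap=True overlays the activation regions driving the prediction.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(10, 10), heatmap=True)
```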

The leftmost image has the lowest class probability among all validation images. It does resemble a butterfly at first glance, but it is a moth. The highlighted regions show what drove each class prediction. Most of the time, a butterfly tends to have more colourful patterns and thin antennae ending in a small bulb compared to a moth (some exceptions do apply).

This task of classifying moths and butterflies is simple, and it can be extended to different species of butterfly and moth for a larger number of classes. I would suggest readers visit the fast.ai courses and start applying deep learning foundations to their beloved datasets. Lazy, but fun!!
