So, the next thing that I was curious about was finding out how I could do some facial recognition on people’s Instagram posts. This was based on some discussions that I’d had with my research partners about meaning, interaction and engagement, and the direction of that engagement. Of course, a lot of this has already been covered (in much greater detail than I can or will) but I think it’s an interesting starting point for discussions about social movement learning.
In this case, I was curious about whether a selfie would lead to greater levels of engagement with the post. This is part of my thinking about the different kinds of posts that social movements make use of, and how that alters the level of engagement and interaction. So, my thinking here is that it might be interesting to take 100 posts from someone’s Instagram feed and process them to count the number of faces that can be recognised in each post. Once I’ve worked that out, I could plot that against the number of likes and the number of comments, to see if there is a positive correlation.
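To make that concrete, here’s a rough sketch of the analysis I have in mind once the counts exist. It assumes the face counts, likes and comments have already been pulled together into a CSV with columns faces, likes and comments; the file name and column names are placeholders of mine, not anything that exists yet.

# Sketch of the planned analysis (file and column names are placeholders)
import pandas as pd
import matplotlib.pyplot as plt

posts = pd.read_csv("post_engagement.csv")

# Pearson correlation between face count and each engagement measure
print(posts["faces"].corr(posts["likes"]))
print(posts["faces"].corr(posts["comments"]))

# Quick scatter plot of face count against likes
posts.plot.scatter(x="faces", y="likes")
plt.savefig("faces_vs_likes.png")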
-
Trying and failing with R
My first idea was based on the work that I had been doing with R. I had seen that Microsoft’s Azure could be integrated with R Studio to do some facial recognition. This was something that I discovered on a LinkedIn Learning course. However, I really struggled to make this work; while I could access the image and download it, I couldn’t get the facial recognition to process. Time for plan B.
-
Setting up OpenCV
Undeterred, I decided that I would return to Python and the OpenCV library. Fortunately, there are plenty of options here. I found this article useful to get started: https://www.digitalocean.com/community/tutorials/how-to-detect-and-extract-faces-from-an-image-with-opencv-and-python
The code is below:
import cv2
import sys

imagePath = sys.argv[1]
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,
    minNeighbors=3,
    minSize=(30, 30)
)

print("[INFO] Found {0} Faces!".format(len(faces)))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

status = cv2.imwrite('faces_detected.jpg', image)
print("[INFO] Image faces_detected.jpg written to filesystem: ", status)
To call this app:
python app.py path/to/input_image
The app takes in a single argument and assigns it to imagePath. It then reads in the image and converts it to grayscale, which makes it easier to detect faces (apparently this is a thing: Haar cascades work on pixel intensity, so colour doesn’t add much).
It then uses a Haar cascade to identify the faces in the image. It’s not perfect, but it’s pretty good. Once the faces are found, it draws rectangles around them on the image and saves the result to ‘faces_detected.jpg’.
A couple of points: the imread and imwrite calls are pretty important here. In addition, this only works on a single image, passed in when the app is run. That will need to be changed.
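One thing worth noting: the results depend quite a bit on the scaleFactor and minNeighbors parameters passed to detectMultiScale, and OpenCV ships several other frontal-face cascades besides the default one. Below is a rough sketch of how a couple of them could be compared on a single image; the cascade file names are ones bundled with opencv-python, and the parameter values are only there to experiment with, not recommendations.

# Sketch: comparing face counts from two of the cascades bundled with OpenCV
import cv2
import sys

image = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for cascade_file in ["haarcascade_frontalface_default.xml",
                     "haarcascade_frontalface_alt.xml"]:
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + cascade_file)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
    print(cascade_file, len(faces))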
-
Making it work with more than one image
Okay, the next step is making it run with more than one image. Let’s assume these are all stored in a folder called /images.
import cv2
import sys
import os

path = "images"                 # the /images folder of downloaded posts
outpath = "images/processed"    # where the marked-up copies will go

for image_path in os.listdir(path):
    # create the full path and read it in
    input_path = os.path.join(path, image_path)
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.3,
        minNeighbors=3,
        minSize=(30, 30)
    )

    print("[INFO] Found {0} Faces!".format(len(faces)))

    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # save the marked-up copy into the processed folder
    fullpath = os.path.join(outpath, 'rectangle_' + image_path)
    status = cv2.imwrite(fullpath, image)
    print("[INFO] Image {0} written to filesystem: ".format(fullpath), status)
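A quick caveat about the loop above (this is an extra guard of mine, not part of the tutorial): os.listdir returns everything in the folder, and cv2.imread returns None for anything it can’t read, which will make cvtColor fall over on stray non-image files. It’s also worth making sure the processed folder exists before writing to it. Something like this would cover both:

# Extra guards for the folder loop (my additions, not from the tutorial)
import os
import cv2

path = "images"
outpath = "images/processed"
os.makedirs(outpath, exist_ok=True)   # make sure the output folder exists

for image_path in os.listdir(path):
    image = cv2.imread(os.path.join(path, image_path))
    if image is None:
        # skip anything OpenCV can't read (e.g. a stray .txt or .DS_Store)
        continue
    # ... detection, drawing and saving as in the loop above ...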
-
Writing to the CSV
Okay, assuming that works, the next challenge is to write the number of faces found in each image out to a CSV file, so it can be lined up with the likes and comments later.
import cv2
import sys
import os
import csv

path = "images"                 # the /images folder of downloaded posts
outpath = "images/processed"    # where the marked-up copies will go

with open('text.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['image', 'faces'])

    for image_path in os.listdir(path):
        # create the full path and read it in
        input_path = os.path.join(path, image_path)
        image = cv2.imread(input_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor=1.3,
            minNeighbors=3,
            minSize=(30, 30)
        )

        print("[INFO] Found {0} Faces!".format(len(faces)))

        for (x, y, w, h) in faces:
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # save the marked-up copy and record the face count for this image
        fullpath = os.path.join(outpath, 'rectangle_' + image_path)
        status = cv2.imwrite(fullpath, image)
        print("[INFO] Image {0} written to filesystem: ".format(fullpath), status)

        writer.writerow([image_path, len(faces)])
Okay, with a bit of work, I’ve now got this to work! Excellent news.