Previously, we worked on facial expression recognition for a single custom image. Additionally, we can detect multiple faces in an image and then apply the same facial expression recognition procedure to each of them. As a matter of fact, we can do this continuously on streaming data. These additions can be handled without a huge effort.
Face Detection
The easiest way to detect a face is to use the haar cascade classifier that comes with OpenCV.
import cv2

face_cascade = cv2.CascadeClassifier('C:/ProgramData/Anaconda3/envs/tensorflow/Library/etc/haarcascades/haarcascade_frontalface_default.xml')

img = cv2.imread('/data/friends.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #transform image to gray scale

faces = face_cascade.detectMultiScale(gray, 1.3, 5)

for (x, y, w, h) in faces:
	cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2) #draw a rectangle around each detected face

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
There are several face detection solutions. OpenCV offers haar cascade and Single Shot Multibox Detector (SSD). Dlib offers Histogram of Oriented Gradients (HOG) and a CNN-based Max-Margin Object Detection (MMOD). Finally, Multi-task Cascaded Convolutional Networks (MTCNN) is another common solution for face detection.
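If you want to try the SSD detector mentioned above, a minimal sketch with OpenCV's DNN module might look like the following. The model file names (deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel) are the ones commonly shipped with OpenCV's face detector samples; their paths on your machine are an assumption here.

#hypothetical sketch: SSD face detection with OpenCV's DNN module
#model file paths are assumptions - download them from OpenCV's face detector samples
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'res10_300x300_ssd_iter_140000.caffemodel')

img = cv2.imread('/data/friends.jpg')
h, w = img.shape[:2]

#SSD expects a 300x300 mean-subtracted blob
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
	confidence = detections[0, 0, i, 2]
	if confidence > 0.5: #filter weak detections
		box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
		(x1, y1, x2, y2) = box.astype(int)
		cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)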
Here, you can watch how to use different face detectors in Python.
Streaming Data
What if the source were a webcam instead of a still image? We can get help from OpenCV again.
cap = cv2.VideoCapture(0) #capture from the default webcam

while(True):
	ret, img = cap.read()

	#apply same face detection procedures
	gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
	faces = face_cascade.detectMultiScale(gray, 1.3, 5)

	for (x, y, w, h) in faces:
		cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

	cv2.imshow('img', img) #display the current frame
	if cv2.waitKey(1) & 0xFF == ord('q'): #press q to quit
		break

cap.release()
cv2.destroyAllWindows()
No matter what the source is (a still image or a webcam), we can detect faces. Once the coordinates of the detected faces are calculated, we can extract them from the original image. The following code should be put inside the for loop over detected faces. We also need a gray scale, 48×48 resized version of each face, because these are the input requirements of the facial expression recognition model.
detected_face = img[int(y):int(y+h), int(x):int(x+w)] #crop detected face
detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY) #transform to gray scale
detected_face = cv2.resize(detected_face, (48, 48)) #resize to 48x48
Expression Analysis
In a previous post, we constructed a model and trained it to recognize facial expressions. We will use the same pre-constructed model and its pre-trained weights here.
from keras.models import model_from_json

model = model_from_json(open("facial_expression_model_structure.json", "r").read())
model.load_weights('facial_expression_model_weights.h5') #load weights
Now, we can classify the facial expression of the detected faces in an image.
import numpy as np
from keras.preprocessing import image

img_pixels = image.img_to_array(detected_face)
img_pixels = np.expand_dims(img_pixels, axis = 0)
img_pixels /= 255 #normalize pixel values to [0, 1]

predictions = model.predict(img_pixels)

#find max indexed array
max_index = np.argmax(predictions[0])

emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
emotion = emotions[max_index]

#write the emotion label above the detected face
cv2.putText(img, emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
Evaluation
Applying both the face detection and facial expression recognition procedures to an image seems very successful.
Also, applying the same procedures to video stream data gives satisfactory results.
Besides, we can apply this to a webcam stream and try to act out all of the emotion candidates.
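Putting the pieces together, a minimal end-to-end sketch looks like the following. Loading the cascade from OpenCV's bundled data path (cv2.data.haarcascades) is an assumption here; you can keep the explicit path used earlier instead.

import cv2
import numpy as np
from keras.models import model_from_json
from keras.preprocessing import image

#load the pre-constructed model and its pre-trained weights
model = model_from_json(open("facial_expression_model_structure.json", "r").read())
model.load_weights('facial_expression_model_weights.h5')
emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while(True):
	ret, img = cap.read()
	if not ret:
		break

	gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
	faces = face_cascade.detectMultiScale(gray, 1.3, 5)

	for (x, y, w, h) in faces:
		cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

		detected_face = gray[y:y+h, x:x+w] #crop and prepare the face
		detected_face = cv2.resize(detected_face, (48, 48))

		img_pixels = image.img_to_array(detected_face)
		img_pixels = np.expand_dims(img_pixels, axis = 0)
		img_pixels /= 255

		predictions = model.predict(img_pixels)
		emotion = emotions[np.argmax(predictions[0])]
		cv2.putText(img, emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

	cv2.imshow('img', img)
	if cv2.waitKey(1) & 0xFF == ord('q'): #press q to quit
		break

cap.release()
cv2.destroyAllWindows()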
So, we had already recognized facial expressions of human beings. Today, we used OpenCV to process stream data and detect faces in an image. Finally, we merged the two and processed stream data to detect emotions. The code of the project is pushed to GitHub. You can also find the pre-constructed model and pre-trained weights in the same repository.
Bonus
You can apply both face recognition and facial attribute analysis, including age, gender and emotion, in Python with a few lines of code. All pipeline steps such as face detection, face alignment and analysis are handled in the background.
DeepFace is an open source framework for Python and it is available on PyPI as well.
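For instance, a minimal sketch looks like the following once the package is installed from PyPI; the image path is just an illustration.

#pip install deepface
from deepface import DeepFace

#facial attribute analysis: age, gender and emotion (image path is illustrative)
result = DeepFace.analyze(img_path = "friends.jpg", actions = ["age", "gender", "emotion"])
print(result)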
Anti-Spoofing and Liveness Detection
What if DeepFace is given fake or spoofed images? This becomes a serious issue if it is used in a security system. To address this, DeepFace includes an anti-spoofing feature for face verification or liveness detection.
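A rough sketch, assuming a recent DeepFace release where extract_faces accepts an anti_spoofing flag, looks like this; each detected face object is then expected to carry an is_real flag.

from deepface import DeepFace

#spoof check while extracting faces (requires a recent DeepFace release)
face_objs = DeepFace.extract_faces(img_path = "friends.jpg", anti_spoofing = True)
for face_obj in face_objs:
	print(face_obj["is_real"])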
Hey Sefik, I'm a beginner in computer vision and I want to know how I can use SSD with this method.
Please read this post: https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/
I've already read it, and yet it doesn't work. Can you do a tutorial applying SSD with facial expressions?