Real Time Facial Expression Recognition on Streaming Data

Previously, we worked on facial expression recognition for a single custom image. We can also detect multiple faces in an image and then apply the same facial expression recognition procedure to each of them. In fact, we can do this continuously on streaming data. These additions can be handled without much effort.

Face Detection

OpenCV enables us to detect human faces with just a few lines of code.

import cv2

#load the haar cascade for frontal face detection (path depends on your installation)
face_cascade = cv2.CascadeClassifier('C:/ProgramData/Anaconda3/envs/tensorflow/Library/etc/haarcascades/haarcascade_frontalface_default.xml')

img = cv2.imread('/data/friends.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #transform image to gray scale

faces = face_cascade.detectMultiScale(gray, 1.3, 5)

for (x,y,w,h) in faces:
 cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2) #draw a rectangle around each face

cv2.imshow('img',img)
cv2.waitKey(0) #keep the window open until a key is pressed
cv2.destroyAllWindows()

In this way, we can detect human faces easily.
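As a side note, the second and third arguments of detectMultiScale are the scale factor and the minimum neighbor count. The same call can be written with keyword arguments to make that explicit; the values below are just the ones used above, not tuned constants.

#same call with named parameters for readability
#scaleFactor: how much the image is shrunk at each pyramid step
#minNeighbors: how many overlapping detections a candidate needs to be kept
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)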

[Image: friends-face-detection — detected faces on a still from Friends (1994)]

Streaming Data

What if the source were a webcam instead of a still image? We can get help from OpenCV again.

cap = cv2.VideoCapture(0) #capture from the default webcam

while(True):
 ret, img = cap.read()

 #apply the same face detection procedure to each frame
 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 faces = face_cascade.detectMultiScale(gray, 1.3, 5)

 for (x,y,w,h) in faces:
  cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

 cv2.imshow('img',img) #display the processed frame

 if cv2.waitKey(1) & 0xFF == ord('q'): #press q to quit
  break

cap.release()
cv2.destroyAllWindows()

No matter what the source is (still image or webcam), we can detect faces. Once the coordinates of a detected face are calculated, we can extract it from the original image. The following code should be placed inside the for loop over faces. The facial expression recognition model expects gray scale, 48×48 input, so we also convert and resize the cropped face accordingly.

detected_face = img[int(y):int(y+h), int(x):int(x+w)] #crop detected face
detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY) #transform to gray scale
detected_face = cv2.resize(detected_face, (48, 48)) #resize to 48x48
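To make the placement concrete, here is a minimal sketch of the faces loop with the cropping steps inlined, using the same variable names as above.

for (x,y,w,h) in faces:
 cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2) #mark the face on the frame

 detected_face = img[int(y):int(y+h), int(x):int(x+w)] #crop detected face
 detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY) #transform to gray scale
 detected_face = cv2.resize(detected_face, (48, 48)) #resize to 48x48
 #detected_face is now ready to be fed to the expression recognition model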

Expression Analysis

In the previous post, we constructed a model and trained it to recognize facial expressions. Here, we will use the same pre-constructed model and its pre-trained weights.

from keras.models import model_from_json
model = model_from_json(open("facial_expression_model_structure.json", "r").read()) #load model structure
model.load_weights('facial_expression_model_weights.h5') #load weights

Now, we can classify the facial expression of each detected face in an image.

import numpy as np
from keras.preprocessing import image

img_pixels = image.img_to_array(detected_face)
img_pixels = np.expand_dims(img_pixels, axis = 0)

img_pixels /= 255 #normalize pixel values to [0, 1]

predictions = model.predict(img_pixels)

#find the index of the class with the highest probability
max_index = np.argmax(predictions[0])

emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
emotion = emotions[max_index]

#write the predicted emotion above the detected face
cv2.putText(img, emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2)
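Putting it all together, here is a minimal end-to-end sketch that merges the streaming, detection and classification steps. The cascade and model file paths are assumptions matching the snippets above; adapt them to your own setup.

import cv2
import numpy as np
from keras.models import model_from_json
from keras.preprocessing import image

#assumed paths, same as in the snippets above
face_cascade = cv2.CascadeClassifier('C:/ProgramData/Anaconda3/envs/tensorflow/Library/etc/haarcascades/haarcascade_frontalface_default.xml')
model = model_from_json(open("facial_expression_model_structure.json", "r").read())
model.load_weights('facial_expression_model_weights.h5')

emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')

cap = cv2.VideoCapture(0)

while(True):
 ret, img = cap.read()
 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 faces = face_cascade.detectMultiScale(gray, 1.3, 5)

 for (x,y,w,h) in faces:
  cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)

  #crop, gray scale and resize the face as the model expects
  detected_face = img[int(y):int(y+h), int(x):int(x+w)]
  detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY)
  detected_face = cv2.resize(detected_face, (48, 48))

  img_pixels = image.img_to_array(detected_face)
  img_pixels = np.expand_dims(img_pixels, axis = 0)
  img_pixels /= 255

  predictions = model.predict(img_pixels)
  emotion = emotions[np.argmax(predictions[0])]

  cv2.putText(img, emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2)

 cv2.imshow('img',img)
 if cv2.waitKey(1) & 0xFF == ord('q'): #press q to quit
  break

cap.release()
cv2.destroyAllWindows()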

Evaluation

Applying both the face detection and facial expression recognition procedures to an image seems very successful.

[Image: kid-expressions — different expressions of a kid]

Applying the same procedures to streaming data also seems very satisfactory.

So, we have already recognized facial expressions of human beings. Today, we used OpenCV to process streaming data and detect faces in an image. Finally, we merged these steps and processed streaming data to detect emotions. The code of the project is pushed to GitHub. You can also find the pre-constructed model and pre-trained weights in the same repository.
