OpenCV has become a de facto standard for image processing studies. The library also offers some legacy techniques for face recognition: local binary patterns histograms (LBPH), EigenFace and FisherFace methods are covered in the package. It is a fact that these conventional face recognition algorithms are no longer state-of-the-art techniques. Nowadays, CNN based deep learning approaches outperform these old-fashioned methods. Interestingly, its competitor library dlib adopts a deep learning based approach. Still, it is a useful baseline study for newcomers. In this post, we will mention how to apply face recognition with OpenCV in Python.
Vlog
You can either watch the following vlog or follow this blog post. They both cover the same topic.
🙋♂️ You may consider enrolling in my top-rated machine learning course on Udemy
Prerequisites
Face recognition in OpenCV requires the following Python packages to be installed. If you install just the opencv-python package, you will get the exception “AttributeError: module ‘cv2.cv2’ has no attribute ‘face’”. Installing opencv-contrib-python solves this issue.
pip install --user opencv-python==4.3.0
pip install --user opencv-contrib-python==4.3.0.36
Pre-built face recognition models
OpenCV supports local binary patterns histograms (LBPH for short), EigenFace and FisherFace methods. We can run them all within OpenCV.
import cv2
import numpy as np
import matplotlib.pyplot as plt

model = cv2.face.LBPHFaceRecognizer_create() #Local Binary Patterns Histograms
#model = cv2.face.EigenFaceRecognizer_create()
#model = cv2.face.FisherFaceRecognizer_create()
Face detection
We will feed the images in our database to the model in the next step, but focusing on the facial area will increase the accuracy of the model. We will also apply the same preprocessing to the target image we are searching for. That’s why I will create a generic face detection function.
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml") #haar cascade face detector (loading path is an assumption)

def detect_face(img_path):
    img = cv2.imread(img_path)
    detected_faces = faceCascade.detectMultiScale(img, 1.3, 5)
    x, y, w, h = detected_faces[0] #focus on the 1st face in the image
    img = img[y:y+h, x:x+w] #focus on the detected area
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return img
Face recognition models in OpenCV expect same-sized grayscale images.
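As a quick sanity check (a minimal sketch, assuming one of the dataset images listed below is available locally), you can confirm that detect_face returns a 224x224 single-channel image:

sample = detect_face("deepface/tests/dataset/img1.jpg")
print(sample.shape) #(224, 224)
print(sample.dtype) #uint8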
Face detection can be done with several models. OpenCV offers haar cascade and single shot multibox detector (SSD). Dlib offers Histogram of Oriented Gradients (HOG) and Max-Margin Object Detection (MMOD). Finally, Multi-task Cascaded Convolutional Networks (MTCNN) is a common solution for face detection. You can see the detection performance of those models in the following video.
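For reference, here is a minimal sketch of running OpenCV’s SSD-based face detector. The deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel file names are assumptions and those files must be downloaded separately.

import cv2
import numpy as np

#SSD based face detector (model files are assumed to be downloaded locally)
ssd_net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel")

img = cv2.imread("deepface/tests/dataset/img1.jpg")
base_h, base_w = img.shape[:2]

blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
ssd_net.setInput(blob)
detections = ssd_net.forward() #shape (1, 1, N, 7)

for i in range(detections.shape[2]):
    score = detections[0, 0, i, 2]
    if score > 0.90: #confidence threshold (assumption)
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([base_w, base_h, base_w, base_h])).astype(int)
        print("face found at", (x1, y1, x2, y2), "with score", score)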
Facial Database
I will store the following images in the database. You can find them in the unit tests of the deepface package.
face_db = [
    "deepface/tests/dataset/img1.jpg",
    "deepface/tests/dataset/img3.jpg",
    "deepface/tests/dataset/img8.jpg",
    "deepface/tests/dataset/img13.jpg",
    "deepface/tests/dataset/img30.jpg",
    "deepface/tests/dataset/img44.jpg"
]
Training
We’ve built and initialized the LBPH face recognizer in the previous steps. Now, we will train the model with the facial database images. The recognizer expects grayscale facial images and index labels in numpy format. The detect_face function already handles the image preprocessing.
faces = []
for img_path in face_db:
    print(img_path)
    img = detect_face(img_path)
    faces.append(img)

ids = np.array([i for i in range(0, len(faces))])
Now, we have facial images and indexes in numpy format. We can start the training.
model.train(faces, ids)
model.save("model.yml")
Training finds the histograms of the images in the facial database. That might take a while. That’s why, if you have already computed and saved the histograms, you can re-use them instead of training again.
model.read("model.yml")
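Putting the two together, a minimal sketch of the train-or-reuse logic could look like this:

import os

if os.path.exists("model.yml"):
    model.read("model.yml") #re-use previously computed histograms
else:
    model.train(faces, ids)
    model.save("model.yml")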
Histograms
The trained model object stores the histograms of the images in the facial database. It will later compare the histogram of the target image with them.
histograms = model.getHistograms()

for i in range(0, len(face_db)):
    histogram = histograms[i][0]
    #histogram = histogram[0:100]
    axis_values = np.array([i for i in range(0, len(histogram))])

    fig = plt.figure(figsize=(10, 5))

    ax1 = fig.add_subplot(1,2,1)
    plt.imshow(cv2.imread(face_db[i])[:,:,::-1])
    plt.axis('off')

    ax2 = fig.add_subplot(1,2,2)
    plt.bar(axis_values, histogram)

    plt.show()
Face recognition
We will feed a target image to the LBPH model, and it will look for it in the facial database. The predict function serves this duty. It returns both the index of the closest match in the facial database and a confidence score.
target_file = "deepface/tests/dataset/img7.jpg"
img = detect_face(target_file)

idx, confidence = model.predict(img)

print("Found: ", face_db[idx])
print("Confidence: ", confidence)

#--------------------

fig = plt.figure()

ax1 = fig.add_subplot(1,2,1)
#plt.imshow(img[:,:,::-1])
plt.imshow(cv2.imread(target_file)[:,:,::-1])
plt.axis('off')

ax2 = fig.add_subplot(1,2,2)
#plt.imshow(faces[id], cmap='gray')
plt.imshow(cv2.imread(face_db[idx])[:,:,::-1])
plt.axis('off')

plt.show()
Some test results are shown below. They seem satisfactory for a legacy method.
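Note that LBPH reports a distance as the confidence, so lower values mean closer matches. If you also want to reject faces that are not in the database, a minimal sketch is to apply a threshold; the value of 50 below is an assumption and should be tuned on your own data.

if confidence < 50: #distance threshold (assumption)
    print("match:", face_db[idx])
else:
    print("no match found in the facial database")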
Interpretability
We’ve shown the histograms of the facial database images before. Besides, the recognizer can find Angelina Jolie in the database correctly. How did this happen?
def displayHistogram(target_file):
    img = detect_face(target_file)

    tmp_model = cv2.face.LBPHFaceRecognizer_create()
    tmp_model.train([img], np.array([0]))

    histogram = tmp_model.getHistograms()[0][0]
    #histogram = histogram[0:100]
    axis_values = np.array([i for i in range(0, len(histogram))])

    plt.bar(axis_values, histogram)
    plt.show()
Let’s visualize the histogram of this target image with the calls below. Indeed, both the Angelina Jolie image in the database and the target image produce similar histograms.
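A minimal usage sketch, re-using the variables defined above:

displayHistogram(target_file) #histogram of the target image
displayHistogram(face_db[idx]) #histogram of the matched database image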
Conclusion
So, we’ve mentioned how to apply face recognition with OpenCV in Python in this blog post. OpenCV covers legacy face recognition techniques, and they are not state-of-the-art solutions anymore. Interestingly, its competitor package dlib covers modern techniques for face recognition. Still, this is a solid baseline study for beginners.
You should adopt CNN-based deep learning models to achieve state-of-the-art face recognition performance.
You can run a modern face recognition application with a few lines of code in Python as well.
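For example, the deepface package (whose test images we used above) wraps modern CNN-based models behind a simple API. A minimal sketch, assuming deepface is installed via pip:

from deepface import DeepFace

#verify whether two face photos belong to the same person
result = DeepFace.verify(img1_path = "deepface/tests/dataset/img1.jpg", img2_path = "deepface/tests/dataset/img3.jpg")
print(result["verified"])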
I pushed the source code of this study to GitHub as a notebook. You can support this study by starring the repo.
Support this blog if you like it!