Race and Ethnicity Prediction In The Perspective of AI Ethics

We can already predict demographic and cultural information about people, including age, gender and ethnicity, from their photos. Among these, race and ethnicity prediction might be the most fragile field: it can disturb or offend people because one of its potential use cases is racism, and it raises privacy concerns as well. In this post, we will focus on products in the market that cover facial attribute prediction, including race and ethnicity. I will also share my thoughts on these products from the perspective of AI ethics.

BTW, you can find my race and ethnicity prediction application here. This post covers the subject from an ethics perspective.


🙋‍♂️ You may consider enrolling in my top-rated machine learning course on Udemy

Decision Trees for Machine Learning

Commercial applications

Face++, Clarifai and Kairos are commercial products that support facial attribute prediction, including race and ethnicity. For example, this site collected the demographic information of Marvel universe characters in the Black Panther movie from some popular APIs and SDKs.

demography-in-black-panther
Demography in Black Panther

The current documentation of Face++, Clarifai and Kairos mentions that they still support race prediction.

Face++ mentions ethnicity as a face-related attribute.

face-plus-plus-cover
Face++ documentation

Clarifai refers to ethnicity as multicultural appearance.

clarifai-cover
Clarifai API

Furthermore, Kairos predicts ethnicity probabilities as a face attribute.

kairos-cover
Kairos API

Formerly, Kairos released a web interface to predict races. Its results were shared with the hashtag #DiversityRecognition on Twitter. They later shut this project down. Notice that they still support this attribute in their commercial API.

diversity-recognition
Web interface solution of Kairos

Besides, Ever AI mentioned some face-related capabilities, including face verification, emotion, age, gender and ethnicity, in a poster. They currently call this capability phenotype detection on their home page.

ever-ai-cover
Ever ai

Recently, the Russia-based start-up NtechLab announced that they would soon recognize people's ethnicity. However, its current home page does not mention ethnicity, so they might have given up on adding this feature.

NtechLab-cover
NtechLab announcement

Academic research

Moreover, a research group at The University of Melbourne (Australia) announced Biometric Mirror. It basically detects some demographic and cultural attributes from faces.

biometric-mirror-demo
Biometric mirror

I’ve found the FairFace and UTKFace open data sets, which include face photos labeled with race and ethnicity. FairFace stores 97K instances whereas UTKFace stores 10K instances. FairFace includes East Asian, Southeast Asian, Black, White, Latino-Hispanic and Middle-Eastern as race labels. In UTKFace, on the other hand, East Asian and Southeast Asian are labeled as just Asian, while Latino-Hispanic and Middle-Eastern are grouped under Others.
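As a minimal sketch of working with such data, UTKFace encodes its labels directly in the file name as [age]_[gender]_[race]_[date&time].jpg, where gender is 0 (male) or 1 (female) and race is 0 = White, 1 = Black, 2 = Asian, 3 = Indian, 4 = Others. The helper below is illustrative only; verify the encoding against the data set's own documentation before relying on it.

```python
# Parse demographic labels from a UTKFace-style file name.
# Assumed format: [age]_[gender]_[race]_[date&time].jpg

RACES = {0: "White", 1: "Black", 2: "Asian", 3: "Indian", 4: "Others"}
GENDERS = {0: "male", 1: "female"}

def parse_utkface_filename(filename: str) -> dict:
    # The first three underscore-separated fields carry the labels.
    age, gender, race = filename.split("_")[:3]
    return {
        "age": int(age),
        "gender": GENDERS[int(gender)],
        "race": RACES[int(race)],
    }

print(parse_utkface_filename("26_1_2_20170116174525125.jpg"))
# {'age': 26, 'gender': 'female', 'race': 'Asian'}
```

Iterating such a parser over the image folder gives you the label distribution of the data set, which is useful for spotting class imbalance before training.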

Foreseeable use cases

Kinship recognition based on facial images is a challenging task. Its foreseeable use cases include finding missing children, search investigations, the modern-day refugee crisis, genealogy research and social media recommendations (To Recognize Families In The Wild: A Machine Vision Tutorial).

human-trafficking
Missing children

Race and ethnicity prediction could be an important component of kinship recognition.

Besides, genetic factors in racial and ethnic differences in health and disease are currently the focus of intense scrutiny.

In the perspective of AI Ethics

No ethical or lawful process may evaluate you based on your demographic or cultural information. Still, even job applications that include an anti-discrimination notice expect you to declare some of your demographic information, such as age, gender, race, ethnicity and veteran status. So, if your demographic information will not affect the process, why are you expected to declare it?

This is because organizations need to validate that they are not discriminating against people based on their backgrounds. Otherwise, some demographic groups could be eliminated at a higher rate than others.

In my opinion, deep learning offers a fairer evaluation process. Here, official authorities must be in charge of regulations to prevent abuse. This is all science. Otherwise, it would be like blaming Einstein for the atomic bombing of Hiroshima. Similarly, a knife can be used either to slice bread or to stab someone.


Support this blog if you like it!

Buy me a coffee