
Face Recognition in Constrained Environment


Images from surveillance networks are used extensively for monitoring and criminal investigations. However, it is often difficult to recognize faces in surveillance data because the resolution of the captured faces is too low. Further, low-resolution images vary widely in facial feature content because of variations in illumination, pose, resolution, and the distance from which the image is captured. A single face recognition solution therefore struggles to recognize faces efficiently in both high- and low-resolution images, since the wide variation in facial feature content makes it difficult for one model to classify the features. We present a deep-CNN-based architecture called mpdCNN to solve the problem of face recognition in both low- and high-resolution images with high accuracy and robustness. The proposed mpdCNN achieves 88.6% accuracy on the SCface dataset, a substantial improvement over state-of-the-art algorithms, and above 99% accuracy on normal- to high-resolution face recognition datasets.

Loss functions have become an active topic of research in deep-learning-based face recognition, because a loss function designed around the challenges of face recognition can significantly increase accuracy. Hardmining loss is one such generic loss function: it can be combined with any basic loss function and enhances the face recognition accuracy of that loss. Hardmining loss mines the hard examples in the training set. Hard examples are the samples that, because they are sparse in number, tend to be ignored by the deep learning model during training and are consequently not learned; the samples that form the majority and are learned readily are called easy (soft) examples. Identifying the hard examples so that the model also learns these sparse samples is called Hardmining. Hardmining loss improves accuracy by imposing a larger penalty on hard examples. However, the problem with Hardmining loss is that the easy examples are allocated …
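As a rough illustration of the "larger penalty for hard examples" idea described above, the sketch below re-weights a per-sample cross-entropy loss so that low-confidence (hard) samples contribute more to the total loss. This is only an assumed, focal-loss-style approximation of the concept, not the book's actual Hardmining loss formulation; the function name, the gamma parameter, and the PyTorch setup are illustrative choices.

import torch
import torch.nn.functional as F

def hard_example_weighted_ce(logits, targets, gamma=2.0):
    """Cross-entropy re-weighted so that hard (low-confidence) samples
    receive a larger penalty. Illustrative sketch only; not the book's
    Hardmining loss formulation."""
    # Per-sample cross-entropy, kept unreduced so each sample can be re-weighted.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Model's probability for the true class; a low value marks a hard example.
    p_true = torch.exp(-ce)
    # Weight grows as confidence falls, so hard examples dominate the loss.
    weight = (1.0 - p_true) ** gamma
    return (weight * ce).mean()

# Toy usage: logits from any face-recognition backbone, integer identity labels.
logits = torch.randn(8, 100)              # batch of 8 faces, 100 identities
targets = torch.randint(0, 100, (8,))
loss = hard_example_weighted_ce(logits, targets)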

Title Unavailable: Out of Print
Product Details
Author: Nayaneesh Kumar Mishra
ISBN: 8196431554 / 9788196431556
Format: Paperback / softback
Publication date: 05/07/2023
Pages: 160
Dimensions: 152 x 229 mm, 222 grams
Audience: General (US: Trade)