
Automatic Attendance System with Face Recognition Using Machine Learning

Chapter 1 Introduction

1.1 Introduction

Face detection and recognition is one of the most popular research areas in computing, and it is among the most advanced, algorithm-based technologies for analysing images. It is mainly used in offices, both for attendance and for security. This study evaluates systems for multiple face detection, and a major objective is to evaluate the algorithms behind them. Face recognition is mainly formulated over still images: identifying one or more faces against images stored in a database. Face detection is now regarded as one of the more successful and advanced technologies in computing; it detects the exact locations of the human faces held in digital images, picking out the faces while ignoring their surroundings. Face recognition nonetheless remains one of the hardest tasks in computing: an algorithm and a good deal of up-to-date supporting software are required to run such an application, and the main challenge the application raises is implementing new techniques and innovations in the system.

1.2 Rationale

There are several sources of difficulty: variation in scale, orientation, exact location, expression, lighting conditions, and more. Fundamentally, face recognition is the process of establishing the identity of an individual from physical and other attributes of the human being; in modern society, such biometrics are used for large-scale identification within management systems. A face recognition system uses many algorithms, which creates many difficulties in obtaining values, and the algorithms used in this particular application are complicated to work through (Samarasinghe, 2016). The Haar cascade recogniser is used as the ranking algorithm in face recognition. Most of the older face recognition processes are slow and cannot process multiple images at a single time; image quality is often poor, and there is little related research that highlights this impact directly. The Haar cascade recogniser and the LBPH detector involve many variables and mathematical formulas, which makes it difficult to obtain exact values for a study; many assumptions are made for both, which may prevent the mathematical formulas from yielding exact values.

1.3 Background and definition

Based on these two algorithms, the multiple face detection process will be developed in this study. A significant portion of face detection work is formulated over images and videos stored in the system's database. Face recognition is one of the most active research areas in computing, drawing on both image sensors and algorithmic properties. It rests on still images: verifying two or more people against records stored in the database management system (Huang et al. 2015). The technology has various applications in image indexing and information retrieval: it can be used to search for images containing people, to associate faces with names and, via clustering, to identify the primary person. Face recognition is also used to determine user attention; for example, on a public-facing screen, once a face is detected, the person's age and sex can be estimated for advertising purposes. The application is also useful in biometric processes, being among the first advanced technologies in which evaluating the face is required.

Chapter 2 Existing Methods

2.1 Existing systems

Face recognition technology has adopted many kinds of methods over the last few years, but among them the classical methods remain prevalent. The established approaches can be classified into principal component analysis, discriminant analysis and discrete transformation. Principal component analysis is taken as the primary factor in face recognition technology, and the eigenfaces method built on it has been used by many researchers. Eigenfaces are the principal components of a set of face images, and the method essentially discriminates many input variables into several classes (Li & Hua, 2015). The original image data can be extracted using the PCA (Principal Component Analysis) formulation, and one of PCA's essential features is that the original image can be reconstructed from the training set using the eigenfaces. Eigenfaces are therefore a central element of the technology: they represent the main features of the faces, which may not be apparent in the original form of the image.
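The PCA step described above can be sketched in a few lines of NumPy; this is a minimal illustration in which random data stands in for a set of flattened face images, and the matrix shapes and component count are illustrative assumptions, not values from the study:

```python
import numpy as np

def eigenfaces(images, n_components):
    """Compute eigenfaces: principal components of flattened face images.

    images: (n_samples, n_pixels) array, one flattened face per row.
    Returns (mean_face, components), components shaped (n_components, n_pixels).
    """
    mean_face = images.mean(axis=0)
    centred = images - mean_face
    # SVD of the centred data; rows of vt are the principal axes (eigenfaces)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:n_components]

def reconstruct(image, mean_face, components):
    """Project a face onto the eigenfaces and reconstruct it."""
    weights = components @ (image - mean_face)
    return mean_face + components.T @ weights
```

With all components retained, a training image is reconstructed exactly; keeping only the leading components gives the compressed representation the text describes.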

Some of the networks used at large scale are artificial neural networks, and the Viola-Jones algorithm has also been improvised upon. Filtering methods involving false positives give insight into the different colours used for false face detection. All of these existing systems are quite old and lack proper specification and detail, which needs to change. In the next part of the study, therefore, the advantages of multiple face recognition following multi-scale LBPs will be stated.

2.2 Disadvantages of existing systems

Facial recognition has several disadvantages relating to image quality, size, facing angle and processing. First, image quality fundamentally affects how well the facial recognition algorithms work: the quality of scanned video frames is deficient compared to a digital camera, and poor images degrade the entire detection process. Storage and processing also pose significant difficulties. The facing angle matters for recognising the real image of a person (Minaee & Wang, 2015); to obtain an appropriate face, recognition software must cope with many angles, which creates a massive problem in the detection process. Systems have typically used 2D facial photographs, and in that format multiple faces cannot always be detected at once. A person in motion yields inaccurate images, which again disrupts the facial recognition system; for more accuracy, updated software is required, which is very costly on the market. Hazy images likewise create problems in the detection process, and the angle of the camera also affects the facial recognition technology.


Figure 1: Disadvantages of face recognition

(Source: Created by Author)

2.3 Proposed system and advantages

The primary advantage of facial recognition in a computer system is that the integration process is very smooth and easy. Local binary patterns consist of an LBP operator with a firm texture measure for the exact complexity. The proposed system will automatically measure multiple faces' data at the same time, based on previously loaded images and videos (Richardson et al. 2017). This study will be based on Linear Discriminant Analysis (LDA) operated over local binary patterns. Through this process the original image is converted to uniform LBP (LBP u2, 8-neighbour) images; alongside the original image, the face recognition system also loads normalised and cropped versions. LBP images are stored according to their different patterns of black, white and dark grey spots. An experimental setup will be carried out to increase the effectiveness of the process.
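The LBP conversion described above can be illustrated with a minimal implementation. This sketch computes the basic 3x3 LBP code for each pixel; the proposed system would use the circular, bilinear-interpolated variant (as OpenCV does), and the threshold convention below is one common choice:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code built by
    comparing its 8 neighbours against the centre value."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set this neighbour's bit wherever neighbour >= centre
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out
```

A flat region produces the all-ones code (255 here), while a centre brighter than all its neighbours produces 0; the distinct codes in between are the texture patterns the system stores.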

An automatic time tracker is used for time tracking, and with this method there is no need to monitor the system 24 hours a day; with its automated formula, error is largely dismissed. Such a tracker also supports an attendance system built on facial recognition technology (Siddiqi et al. 2015). 3D technology is nowadays used to detect more than one face at a time, and with 3D technology the accuracy level is remarkably higher than with 2D. The security offered by this technology is far better, and it can even operate without the user's knowledge. Overall, a face recognition system is easy to handle, with a better security structure, saved time and a lower cost of software development.


Figure 2: Advantages of face recognition

(Source: Created by Author)

Chapter 3: System Requirements and Software

3.1 Hardware Requirements

Processor: Processor above 550 MHz.

RAM: Minimum of 6 GB

Hard disk: Minimum 8 GB

Input devices: Keyboard and mouse

Output devices: High-resolution monitor and VGA output

3.2 Software Specification

Operating System: Windows 10 or above, macOS, or Linux

Programming: Python 3.7.0 and related libraries

3.3 Software

Python

Python is an interactive, general-purpose, high-level, object-oriented programming language developed by Guido van Rossum and first released in 1991. The Python source code is available under an open-source licence. It is mainly used for web development, scripting, software development and mathematics.

Python Philosophies:

According to Tim Peters, Python's design philosophy is captured in a set of aphorisms documented as the Zen of Python. Some of them are:

  • Beautiful is better than ugly.
  • Explicit is better than implicit.
  • Simple is better than complex.
  • Complex is better than complicated.
  • Readability counts.

Why Python?

  • Python works on most platforms (Mac, Windows, Raspberry Pi,
    Linux, etc.).
  • Python's syntax is closer to English than that of most languages.
  • For similar tasks, Python uses fewer lines of code than comparable
    languages.
  • Python code can be executed as soon as it is written, because it uses
    an interpreter, so prototyping can be quick.
  • Python can be used in a functional, procedural or object-oriented
    style.

Python is designed to be highly extensible rather than having all its functionality in its core. This has made it popular for adding interfaces as the programmer deems necessary; different libraries and packages can be included and modified to achieve the desired result (geeksforgeeks.org, 2019).

OpenCV

OpenCV (Open Source Computer Vision Library) is a software library for computer vision and machine learning. It is used to promote machine and deep learning in commercial products and applications, and it contains a considerable number of optimised routines, including both state-of-the-art and classic machine learning algorithms. These algorithms can be used to identify faces, recognise objects, produce 3D point clouds from cameras, extract 3D models, find matching images in a database, stitch images to obtain a higher-resolution picture of an entire scene, remove flash red-eye from images, follow eye movements, recognise scenery, and so on.

Well-known tech giants such as Google, Microsoft and Yahoo make use of the platform, but it is most popular with startups; the library has enabled different minds in the industry to produce innovative approaches to different problems. It supports most major programming languages and open-source platforms and is inclined towards real-time applications. OpenCV is itself open source, allowing developers to modify its code and contribute changes to its architecture. It is written in C/C++, so the library can take advantage of multi-core processing, and it is optimised with OpenCL, which allows hardware acceleration on the underlying compute platform.

Chapter 4 Design and Implementation

4.1 Model

Face detection is a combination of computer intelligence with image and data processing, aimed at matching face information against stored data for security or recognition purposes. A complete face recognition pipeline uses mainly three phases working together to achieve the result:

  • Face detection and identification
  • Recognition algorithms
  • Face recognition

Figure 3: Steps to Recognition

(Source: https://medium.com/mjrobot-org/real-time-face-recognition-an-end-to-end-project-6a6d6173a6a3)

4.2 Face Detection and Identification

Face detection algorithms are used to locate a human face in a particular scene. The detection techniques in practice divide into two types of scene: controlled backgrounds and unconstrained scenes.

Finding faces in controlled backgrounds: this covers the detection of faces against single-colour backgrounds (Bourlai, 2016). The faces are separated from the background by virtue of their motion or colour and are then put through recognition algorithms.

  • Identification on the basis of colour: this approach uses colour as
    the indicator for detection. Colour is an efficient yet fruitful cue
    that remains robust under partial occlusion and changes in depth, and
    it can easily be combined with motion for detection. The source
    outlines four modelling steps (the accompanying equations did not
    survive extraction): a Gaussian distribution for calculating colour
    ranges in a picture; maximisation to update the mixture components; a
    model used to assign a probability to each pixel; and the size of the
    box, approximated by computing the standard deviation weighted by the
    probability of the pixels.
  • Identification on the basis of motion: here, capture of a face in
    motion is done by dividing the procedure into four parts: frame
    differencing, noise removal, thresholding and adding up pixels. These
    are carried out by calculating the time difference between the
    present and previous frames.
  • Combining the previous techniques.
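A simple colour-based detector of the kind described above can be sketched with a fixed threshold standing in for the Gaussian colour model (whose equations are missing from the source). The HSV ranges below are illustrative assumptions only; in practice OpenCV's `cv2.inRange` would replace the NumPy comparisons:

```python
import numpy as np

# Illustrative HSV skin-colour ranges (assumed values, not from the study)
H_RANGE = (0, 25)
S_RANGE = (40, 255)
V_RANGE = (60, 255)

def skin_mask(hsv):
    """Boolean mask of pixels whose HSV values fall inside the assumed
    skin-colour ranges (equivalent in spirit to cv2.inRange)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((H_RANGE[0] <= h) & (h <= H_RANGE[1]) &
            (S_RANGE[0] <= s) & (s <= S_RANGE[1]) &
            (V_RANGE[0] <= v) & (v <= V_RANGE[1]))

def bounding_box(mask):
    """Bounding box (x, y, w, h) of the mask, or None if the mask is empty;
    this plays the role of the box-size estimate in the source's steps."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
```

The fixed threshold is the crudest possible colour model; the Gaussian mixture approach the source describes replaces the hard ranges with per-pixel probabilities.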

Finding faces in unconstrained scenes: in this type of image the situation is more challenging, as images are often black and white; a human can identify the faces, but which algorithms will help in doing so (Ahmed et al. 2018)? This can be done by model-based tracking and by weak classifier cascades, which include the Haar algorithm and deep learning.

4.3 Face Recognition algorithms

Face recognition in the general case deals with identifying individual faces within a multitude of faces. This study will address how multiple faces can be identified using recognition techniques.

Face Recognition using Haar Cascade: a Haar cascade performs detection and recognition using combinations of positive and negative training images, where positive images contain faces and negative images do not. Haar-like features are extracted from these images, each acquiring a value from pixel-sum operations that, thanks to the integral image, need only about four array references. These techniques produce a great many feature values, most of which are irrelevant (Ding et al. 2016); this is limited by using AdaBoost. The feature values are compared against the image set by matching thresholds, different weights are assigned, and the features with the least error rates are kept. This reduces the feature list significantly, by almost 60%, but it is still not enough; a further improvement is to eliminate the non-face windows early and keep only the windows containing facial information.


Figure 4: Features Recognition

(Source: https://docs.opencv.org/3.4.3/d7/d8b/tutorial_py_face_detection.html)

LBPH Face Recognition: identifying faces with the LBPH algorithm follows a step-by-step procedure. It uses four parameters, namely radius, neighbours, grid X and grid Y; these are used to build a local circular binary pattern, and the grid values are set to 8 (Kątek et al. 2016). The algorithm also requires training, so it needs a dataset against which to match faces. The first step in LBPH is to obtain an intermediate image from the original, containing mostly the facial structure; this uses bilinear interpolation for the circular neighbourhood. After this step, histograms are extracted. The histograms contain spatial information about features such as the eyes, nose and ears, and the spatial encoding assigns weights to the histograms from each cell separately, giving more distinguishing power to the more specific features of the face.
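The per-cell histogram step described above can be sketched as follows. The 8x8 grid follows the text; the helper itself is an illustrative reimplementation for clarity, not the study's code, and in practice `cv2.face.LBPHFaceRecognizer_create` from opencv-contrib performs all of these steps (LBP conversion, grid histograms, matching) internally:

```python
import numpy as np

def grid_histograms(lbp, grid_x=8, grid_y=8):
    """Split an LBP image into grid_x * grid_y cells and concatenate one
    256-bin histogram per cell into a single feature vector."""
    h, w = lbp.shape
    cell_h, cell_w = h // grid_y, w // grid_x
    feats = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp[gy * cell_h:(gy + 1) * cell_h,
                       gx * cell_w:(gx + 1) * cell_w]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)

def chi2(a, b, eps=1e-10):
    """Chi-squared distance, one common way to compare LBPH histograms."""
    return float(((a - b) ** 2 / (a + b + eps)).sum())
```

At prediction time the query face's feature vector is compared against every enrolled vector and the identity with the smallest distance is returned.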


Figure 5: Bilinear Interpolation

(Source: https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b)


Figure 6: Histogram Mapping

(Source: https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b)

4.4 Identifying Multiple Faces

The main change when identifying multiple faces rather than a single face is the amount of data that must be taken into account while analysing the detection patterns. In particular, the detection algorithm has to be re-run every few frames to pick up any face that may have appeared in the meantime. While tracking a single face this is not a concern, as we only have to start tracking a different face after the current one has been lost.

An important point when approaching this outcome is that we should be able to determine which of the detected faces already match the correlation tracker for a face currently being tracked (Kutty & Mathai, 2017). A simplified way to solve this is to check whether the centre point of the detected face lies inside the region of an existing tracker, and whether the centre point of that tracker in turn lies inside the region of the detected face. The main loop for detecting all the faces in a frame therefore includes the following steps:

  • Update the correlation trackers and remove the trackers that are no
    longer considered reliable.
  • Every handful of frames, perform the following:
  • Run detection on the current frame and find all the faces. [Referred
    to Appendix 1]
  • For each face found, check whether there is an existing tracker whose
    region contains the centre of the detected face, and whose own centre
    lies within the region of the detected face. [Referred to Appendix 2]
  • If such a tracker exists, this face was already detected before;
    otherwise, set up a new tracker for the face. [Referred to Appendix 3]
  • Use the information from all the trackers to determine the bounding
    rectangles.

4.5 Recognition using dataset

To match faces to existing images, we need to provide a dataset against which the detected faces are analysed. Before training the algorithms, we first need to define the dataset itself and gather the faces. If a pre-structured dataset already exists, most of the work is done, but in this case the dataset has to be defined and updated continuously (Lee & Li, 2007). This requires gathering data and quantifying it in some particular manner. The dataset updated here will be matched by name against the attendance list to mark the presence of individuals. This is better known as enrolment, as faces are enrolled as a daily routine and the data is updated continuously. One of the most common ways to do this is with OpenCV.

  • Via OpenCV and webcam: this method is useful for on-site face
    recognition, where we have physical access to the persons; the
    language used will be Python. We may perform this process over
    multiple days under varying lighting conditions, times of day, moods
    and so on. We build a Python script to detect faces through the
    webcam and write the facial data to disk. Two main command-line
    arguments are used, namely --cascade and --output: --cascade is the
    path to the Haar cascade file, and --output is the directory the
    images are written to (Yi et al. 2017). OpenCV's detector does the
    main task, loading the video stream and capturing it frame by frame;
    each captured frame is transferred to the output directory. Face
    detection is then performed using the algorithms discussed before, or
    with the deep learning tools in OpenCV. OpenCV's deep learning face
    detector is based on the Single Shot Detector (SSD) framework with a
    ResNet base network; the Caffe prototxt and weight files for it are
    available for face detection in OpenCV. [Referred to Appendix 4]
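The enrolment script described above can be sketched roughly as follows. The naming scheme and directory layout are assumptions for this sketch; the cascade file named is the one OpenCV ships with, and the capture loop needs OpenCV and a physical webcam, so it is kept behind the main guard:

```python
import os

def frame_path(output_dir, person, index):
    """Build the output filename for one captured face image
    (the naming scheme is an assumption for this sketch)."""
    return os.path.join(output_dir, f"{person}_{index:04d}.png")

if __name__ == "__main__":
    # Webcam capture loop: detect faces frame by frame and write the
    # face crops to the output directory for later training.
    import cv2
    cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    count = 0
    while count < 50:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            cv2.imwrite(frame_path("dataset", "person", count),
                        frame[y:y + h, x:x + w])
            count += 1
    cap.release()
```

Running this over several sessions, as the text suggests, accumulates face crops under varying lighting and moods for each enrolled person.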

4.6 Attendance Management

We will use Excel to record each face that is recognised: when a face is identified, the attendance system marks as present the corresponding entry in the Excel sheet, which can then be used for validation.
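The marking step can be sketched as below. For simplicity the sketch uses a CSV roster in place of an Excel sheet (with a library such as openpyxl the same logic would write .xlsx directly), and the column names are assumptions for the illustration:

```python
import csv

def mark_attendance(path, recognised_names):
    """Read the roster, set Status to 'Present' for every recognised
    name, and write the file back (columns assumed: Name, Status)."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        if row["Name"] in recognised_names:
            row["Status"] = "Present"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Name", "Status"])
        writer.writeheader()
        writer.writerows(rows)
```

After each recognition pass, the set of names returned by the recogniser is handed to this function, and the updated sheet serves as the attendance record for validation.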

Chapter 5 Data Flow



Chapter 6 Conclusion and Future Work

Face detection in its present form is an active area of study; many researchers, scholars and academics are continually trying to improve on the existing methods. Accurate analysis in this field involves machine learning and artificial intelligence, which require improvements in hardware that are very expensive. Techniques such as Haar cascades and LBPH are efficient and inexpensive for facial recognition, but they are less accurate for random faces, and all of them require a prerequisite dataset for comparison. The equipment used in these cases has advanced over the last decade, for instance high-definition cameras that adjust lighting, exposure and autofocus, enabling the system to capture more accurate information about the face. This study discussed the background of the work and its limitations; it also dealt with the advantages and disadvantages of face recognition systems and the proposed changes that will help improve them. The data analysis section examined the Haar cascade and LBPH techniques for facial recognition and outlined how the system can be modified to detect multiple faces at the same time.

Future Recommendations

The technology around face recognition is progressing steadily, but a few problems need to be tackled soon to take it to the next step. First among many is the use of better face detectors and video streamers to capture images better. Another problem to be rectified is the assumptions and formulations of the different algorithms that are hypothesised. The third step is the inclusion of deep learning and artificial intelligence to achieve better recognition results over a comprehensive set of variables; advanced AI architectures can be used to gather datasets from open-source networks and online interfaces to increase the information pool. Last but not least, the ML architecture in OpenCV should be used more than at present to further perfect the face recognition system (sciencedirect.com, 2019).

Reference List

Book

Bourlai, T. (Ed.). (2016). Face recognition across the imaging spectrum. Springer. Retrieved from: https://www.researchgate.net/profile/Vitalii_Pertsevyi/post/best_algorithm_for_face_spoofing_detctions/attachment/59d6339279197b80779913f7/AS%3A375374950223874%401466507770369/download/1burlai_t_ed_face_recognition_across_the_imaging_spectrum.pdf

Samarasinghe, S. (2016). Neural networks for applied sciences and engineering: from fundamentals to complex pattern recognition. Auerbach Publications. Retrieved from: http://sutlib2.sut.ac.th/sut_contents/H106956.pdf

Journal

Ahmed, A., Guo, J., Ali, F., Deeba, F., & Ahmed, A. (2018, May). LBPH based improved face recognition at low resolution. In 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD) (pp. 144-147). IEEE. Retrieved from: https://www.researchgate.net/profile/Farah_Deeba23/publication/327980768_LBPH_Based_Improved_Face_Recognition_At_Low_Resolution/links/5bb1d028a6fdccd3cb80a7d2/LBPH-Based-Improved-Face-Recognition-At-Low-Resolution.pdf

Ding, C., Choi, J., Tao, D., & Davis, L. S. (2016). Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3), 518-531. Retrieved from: https://arxiv.org/pdf/1401.5311

Huang, X., Wang, S. J., Zhao, G., & Pietikäinen, M. (2015). Facial micro-expression recognition using spatiotemporal local binary pattern with integral projection. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 1-9). Retrieved from: https://www.cvfoundation.org/openaccess/content_iccv_2015_workshops/w2/papers/Huang_Facial_Micro-Expression_Recognition_ICCV_2015_paper.pdf

Kątek, G., Holik, A., Zabłocki, T., & Dobrzyńska, P. (2016). Face recognition using the Haar classifier cascade and face detection based on detection of skin color areas. Telekomunikacja i Elektronika, 29. Retrieved from: http://www.wu.utp.edu.pl/uploads/oferta/Telekomunikacja%20i%20Elektronika%2019%20(2016)%20-%20nowy.pdf#page=29

Kutty, N. M., & Mathai, S. (2017). Face recognition: a tool for automated attendance system. International Journal of Advanced Research in Computer Science and Software Engineering, 7(6), 334-336. Retrieved from: https://pdfs.semanticscholar.org/e6c9/fa7c43177ebbbc9f6a220d858dc0ec78b25b.pdf

Li, H., & Hua, G. (2015). Hierarchical-PEP model for real-world face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4055-4064). Retrieved from: http://openaccess.thecvf.com/content_cvpr_2015/papers/Li_HierarchicalPEP_Model_for_2015_CVPR_paper.pdf

Minaee, S., & Wang, Y. (2015, December). Fingerprint recognition using translation invariant scattering network. In 2015 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) (pp. 1-6). IEEE. Retrieved from: https://arxiv.org/pdf/1509.03542

Richardson, E., Sela, M., Or-El, R., & Kimmel, R. (2017). Learning detailed face reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1259-1268). Retrieved from: http://openaccess.thecvf.com/content_cvpr_2017/papers/Richardson_Learning_Detailed_Face_CVPR_2017_paper.pdf

Siddiqi, M. H., Ali, R., Khan, A. M., Park, Y. T., & Lee, S. (2015). Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Transactions on Image Processing, 24(4), 1386-1398. Retrieved from: https://www.researchgate.net/profile/Muhammad_Siddiqi2/publication/273003972_Human_Facial_Expression_Recognition_Using_Stepwise_Linear_Discriminant_Analysis_and_Hidden_Conditional_Random_Fields/links/54f54d120cf2ba6150657d84.pdf

Lee, W., & Li, S. Z. (Eds.). (2007). ICB 2007, LNCS 4642, pp. 809-818.

Xu, Y., Price, T., Frahm, J. M., & Monrose, F. (2016). Virtual U: defeating face liveness detection by building virtual models from your public photos. In 25th USENIX Security Symposium (USENIX Security 16) (pp. 497-512). Retrieved from: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_xu.pdf

Yi, S., Lai, Z., He, Z., Cheung, Y. M., & Liu, Y. (2017). Joint sparse principal component analysis. Pattern Recognition, 61, 524-536. Retrieved from: http://www.comp.hkbu.edu.hk/~ymc/papers/journal/PR-D-16-00081_publication_version.pdf

Online Article

ncbi.nlm.nih.gov, (2019), Face recognition principal. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5525078/ [Retrieved on 25th February]

sciencedirect.com, (2019), Theory of evidence for face detection and tracking. Retrieved from: https://www.sciencedirect.com/science/article/pii/S0888613X12000126 [Retrieved on 23.02.2019]

Website

geeksforgeeks.org, (2019). Retrieved from: https://www.geeksforgeeks.org/opencv-c-program-face-detection/ [Retrieved on 22.02.2019]

ieeexplore.ieee.org, (2019), ieeexplore. Retrieved from: https://ieeexplore.ieee.org/document/5432754 [Retrieved on 25th February]

Appendices

Appendix 1:

#Update all the trackers and remove the ones for which the update
#indicated the quality was not good enough
fidsToDelete = []
for fid in faceTrackers.keys():
    trackingQuality = faceTrackers[fid].update(baseImage)

    #If the tracking quality is not good enough, we must delete
    #this tracker
    if trackingQuality < 7:
        fidsToDelete.append(fid)

for fid in fidsToDelete:
    print("Removing tracker " + str(fid) + " from list of trackers")
    faceTrackers.pop(fid, None)

Appendix 2:

#Now use the haar cascade detector to find all faces
#in the image
faces = faceCascade.detectMultiScale(gray, 1.3, 5)
 
#Loop over all faces and check if the area for this
#face is the largest so far
#We need to convert it to int here because of the
#requirement of the dlib tracker. If we omit the cast to
#int here, you will get cast errors since the detector
#returns numpy.int32 and the tracker requires an int
for (_x,_y,_w,_h) in faces:
    x = int(_x)
    y = int(_y)
    w = int(_w)
    h = int(_h)
 
    #calculate the centerpoint
    x_bar = x + 0.5 * w
    y_bar = y + 0.5 * h
 
    #Variable holding information which faceid we 
    #matched with
    matchedFid = None
 
    #Now loop over all the trackers and check if the 
    #centerpoint of the face is within the box of a 
    #tracker
    for fid in faceTrackers.keys():
        tracked_position =  faceTrackers[fid].get_position()
 
        t_x = int(tracked_position.left())
        t_y = int(tracked_position.top())
        t_w = int(tracked_position.width())
        t_h = int(tracked_position.height())
 
        #calculate the centerpoint
        t_x_bar = t_x + 0.5 * t_w
        t_y_bar = t_y + 0.5 * t_h
 
        #check if the centerpoint of the face is within the 
        #rectangleof a tracker region. Also, the centerpoint
        #of the tracker region must be within the region 
        #detected as a face. If both of these conditions hold
        #we have a match
        if ( ( t_x <= x_bar   <= (t_x + t_w)) and
             ( t_y <= y_bar   <= (t_y + t_h)) and
             ( x   <= t_x_bar <= (x   + w  )) and
             ( y   <= t_y_bar <= (y   + h  ))):
            matchedFid = fid
 
    #If no matched fid, then we have to create a new tracker
    if matchedFid is None:
        print("Creating new tracker " + str(currentFaceID))
        #Create and store the tracker 
        tracker = dlib.correlation_tracker()
        tracker.start_track(baseImage,
                            dlib.rectangle( x-10,
                                            y-20,
                                            x+w+10,
                                            y+h+20))
 
        faceTrackers[ currentFaceID ] = tracker
 
        #Increase the currentFaceID counter
        currentFaceID += 1

Appendix 3:

#Now loop over all the trackers we have and draw the rectangle
#around the detected faces. If we 'know' the name for this person
#(i.e. the recognition thread is finished), we print the name
#of the person, otherwise the message indicating we are detecting
#the name of the person
for fid in faceTrackers.keys():
    tracked_position =  faceTrackers[fid].get_position()
 
    t_x = int(tracked_position.left())
    t_y = int(tracked_position.top())
    t_w = int(tracked_position.width())
    t_h = int(tracked_position.height())
 
    cv2.rectangle(resultImage, (t_x, t_y),
                            (t_x + t_w , t_y + t_h),
                            rectangleColor ,2)
 
    #If we do have a name for this faceID already, we print the name
    if fid in faceNames.keys():
        cv2.putText(resultImage, faceNames[fid] , 
                    (int(t_x + t_w/2), int(t_y)), 
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (255, 255, 255), 2)
    else:
        cv2.putText(resultImage, "Detecting..." , 
                    (int(t_x + t_w/2), int(t_y)), 
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (255, 255, 255), 2)


Appendix 4:

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
    (300, 300), (104.0, 177.0, 123.0))

# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in range(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # compute the (x, y)-coordinates of the bounding box for the
        # object
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # draw the bounding box of the face along with the associated
        # probability
        text = "{:.2f}%".format(confidence * 100)
        y = startY - 10 if startY - 10 > 10 else startY + 10
        cv2.rectangle(image, (startX, startY), (endX, endY),
            (0, 0, 255), 2)
        cv2.putText(image, text, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

 