Received: February 17, 2020. Revised: March 30, 2020. 19

 Automatic Face Recognition Using Enhanced Firefly Optimization Algorithm
 and Deep Belief Network

 Prakash Annamalai1*

 1 Tagore Engineering College, Department of Information Technology, India
 * Corresponding author's Email: prakash1712@yahoo.com

 Abstract: Face recognition is an emerging research area in biometric identification systems that has attracted many researchers in pattern recognition and computer vision. Since human facial images are high dimensional in nature, a new optimization algorithm is proposed in this paper to reduce the dimension of facial images. At first, the input facial images were collected from the ORL, YALE, and FAce Semantic SEGmentation (FASSEG) databases. Feature extraction was then performed using the Local Ternary Pattern (LTP) and Binary Robust Invariant Scalable Keypoints (BRISK) descriptors to extract features from the face images. In addition, an enhanced fire-fly optimizer was used to reject the irrelevant features and thereby reduce the dimension of the extracted features. In the enhanced fire-fly optimization algorithm, the Chi-square distance measure is used to find the distance between fire-flies; it uses only a limited number of feature values to represent the data, which effectively reduces the "curse of dimensionality" concern. Finally, a Deep Belief Network (DBN) was used to classify the individuals' facial images. The experimental outcome showed that the proposed work improved recognition accuracy by 7-20% compared to the existing work.

 Keywords: Binary robust invariant scalable key points, Deep belief network, Enhanced fire-fly optimizer, Face recognition, Local ternary pattern.

1. Introduction

 In recent years, face recognition has been extensively studied due to its applications in several fields such as human-computer interaction, internetwork communication, computer entertainment, artificial intelligence, e-commerce security, and security monitoring [1-3]. Although several researchers have achieved significant advances in face recognition systems, the application still faces many challenges, such as intra-class variations in illumination, expression, noise, pose, and occlusion [4, 5]. Robust and discriminant feature generation has therefore become a central problem in face recognition [6]. In addition, the dimensionality of collected human face images is often very high, which leads to high computational complexity and a curse-of-dimensionality issue [7, 8]. Currently, various dimensionality reduction techniques have been developed in semi-supervised, unsupervised, and supervised settings, such as Volterra kernels direct discriminant analysis [9], principal component analysis [10], and linear discriminant analysis [10]. Additionally, these prior approaches classify individuals on the basis of hand-designed features, which makes individual classification time-consuming [11, 12].
 To address these issues, a new system is proposed for achieving good performance in face recognition. Initially, the input facial images were collected from the ORL, YALE, and FASSEG databases, and histogram equalization was then applied for denoising the collected images. Histogram equalization computes the histogram of an image and redistributes the lightness pixel values, which significantly improves the contrast and edges of the image. Then, feature extraction was performed using BRISK and LTP to extract feature vectors from the denoised images. After attaining the hybrid feature values, an enhanced fire-fly
International Journal of Intelligent Engineering and Systems, Vol.13, No.5, 2020 DOI: 10.22266/ijies2020.1031.03

optimizer was applied to reduce the dimension of the extracted features. In the enhanced fire-fly optimizer, the Chi-square distance was utilized instead of the Cartesian distance to measure the distance between fire-flies. The Chi-square distance uses only a limited number of features to represent the data, which effectively reduces the "curse of dimensionality" concern. The output of feature selection was then given as input to the DBN classifier for classifying the facial images. Finally, the performance of the proposed work was compared with existing work in terms of accuracy, precision, and recall.
 This research paper is organized as follows. Section 2 reviews research papers on face recognition. Section 3 gives a detailed explanation of the proposed work. Section 4 presents the quantitative and comparative analysis of the proposed work. Section 5 concludes the paper.

2. Literature survey

 Several approaches have been developed by researchers for face recognition. In this section, a brief review of some important contributions to the existing literature is presented.
 L. Zhou, W. Li, Y. Du, B. Lei, and S. Liang [13] developed an illumination-invariant local binary descriptor learning methodology for face recognition. Usually, binary descriptors utilize a rigid sign function for binarization regardless of the data distribution. Instead, the developed method calculates dynamic thresholds, which include information about illumination variation, to extract nonlinear multi-layer contrast features. In this research, exponential discriminant analysis was used as a pre-processing method that significantly enhances the discriminative ability of the face image by enlarging the margin between dissimilar classes relative to the same class. In addition, an adaptive fuzzy fusion system was used to incorporate the recognition results over the multi-scale feature space. In the experimental simulation, the developed methodology attained better results than existing works in terms of recognition accuracy. However, its performance degraded heavily under varying illumination conditions and facial variations.
 A. Ouyang, Y. Liu, S. Pei, X. Peng, M. He, and Q. Wang [14] presented a hybrid approach for face recognition by combining Probabilistic Neural Networks (PNNs) and Improved Kernel Linear Discriminant Analysis (IKLDA). Initially, the dimension of the extracted features was reduced while retaining the relevant information. Then, the PNN methodology was adopted for solving the face recognition problem. The developed method (IKLDA+PNN) improves not only computing efficiency but also precision. The experimental investigation was conducted on the AR, ORL, and YALE databases, which comprise a wide variety of facial details and expressions. The experimental results showed that IKLDA+PNN attained better performance in terms of standard deviation and accuracy. However, when the number of collected images was low, the hybrid approach's recognition performance dropped significantly.
 Q. Liu, C. Wang, and X.Y. Jing [15] developed a new nonlinear feature extraction approach (dual multi-kernel discriminant analysis) for colour face recognition. At first, a kernel selection method was designed for selecting the optimum kernel mapping function for each colour component of the facial images. Then, a colour space selection model was developed for choosing the appropriate space and mapping the dissimilar colour components of the facial images into dissimilar high-dimensional kernel spaces. At last, discriminant analysis and multi-kernel learning were applied not only within each component but also between dissimilar components. The performance of the developed approach was tested on the Face Recognition Grand Challenge version 2 and Labelled Faces in the Wild (LFW) databases. The experimental outcome showed that the developed approach outperforms existing approaches in terms of recognition rate. However, the training and testing process requires manual intervention, which needs to be automated.
 C.H. Yoo, S.W. Kim, J.Y. Jung, and S.J. Ko [16] developed a new feature extraction approach that utilizes a high-dimensional feature space to enhance the discriminative ability for face recognition. At first, the local binary pattern was decomposed into several bit planes that contain scale-specific directional information of the facial images. Each bit plane includes the inherent local structure of the facial images as well as illumination-robust characteristics. In addition, a supervised dimension reduction approach (orthogonal linear discriminant analysis) was used to lessen the computational complexity while preserving the incorporated local structural information. Extensive experiments on the AR, LFW, and Extended Yale B datasets validated the efficiency of the developed approach. However, the methodology failed to achieve better recognition accuracy on large facial datasets.

 I.M. Revina and W.S. Emmanuel [17] developed a new system for facial expression recognition that combines a whale-grasshopper optimization approach and a multi-support-vector neural network. Initially, the scale-invariant feature transform and the scatter local directional pattern were used to extract features from the collected facial images. The extracted feature values were then classified by the developed classifier for recognizing facial expressions. In the experimental phase, the developed system was tested on the Japanese Female Facial Expression database and the Cohn-Kanade AU-Coded Expression dataset, where it outperformed existing works in terms of recognition accuracy. However, by utilizing low-level features in a high-level network, the semantic gap was widened, leading to a poor recognition rate.
 In order to address the above-mentioned concerns, a new facial recognition system is proposed in this paper.

 Figure. 1 Work flow of proposed system

3. Proposed system

 In recent years, facial recognition has become an emerging research area in the fields of artificial intelligence and pattern recognition. Face recognition remains a challenging topic, because real-world facial images are formed by the interaction of multiple factors such as facial rotation, illumination conditions, and background interference. In order to develop an effective facial recognition system, some of these problems need to be resolved, particularly in feature extraction and classification.
 When classifying, the individual facial images are converted into vectors that are very high dimensional in nature. In order to lessen the "curse of dimensionality" concern, a new facial recognition system is proposed in this research paper. The proposed facial recognition system comprises five steps: image collection, image denoising, feature extraction, optimization, and classification. The working process of the proposed facial recognition system is indicated in Fig. 1. A detailed explanation of the proposed system is given below.

3.1 Image pre-processing

 At first, the input face images are collected from the ORL, YALE, and FASSEG datasets. After collecting the facial images, histogram equalization is used for pre-processing. It is a technique that adjusts facial image intensities to enhance image contrast.
 Let f be a facial image represented by a matrix of integer pixel intensities ranging from 0 to L - 1. The histogram-equalized facial image g is defined in Eq. (1).

 g_{i,j} = floor((L - 1) * sum_{n=0}^{f_{i,j}} p_n)    (1)

 where L is the number of possible intensity values (typically 256), p_n is the normalized histogram of f with a bin for every possible intensity, and floor(.) rounds down to the nearest integer. Equalization is equivalent to transforming the pixel intensities k of f by the function T(k), expressed in Eq. (2).

 T(k) = floor((L - 1) * sum_{n=0}^{k} p_n)    (2)

 To motivate this transformation, the intensities are regarded as a continuous random variable X on [0, L - 1] transformed by Y = T(X), which is mathematically defined in Eq. (3).

 Y = T(X) = (L - 1) * integral_0^X p_X(x) dx    (3)

 where p_X is the probability density function of the intensities of f, T is the cumulative distribution function of X multiplied by (L - 1), and T is differentiable and invertible. It can be shown that Y, determined by T(X), is uniformly distributed on [0, L - 1], namely p_Y(y) = 1/(L - 1), as expressed in Eqs. (4), (5), and (6) [18]. The original and enhanced facial images are graphically denoted in Fig. 2.
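In practice, Eqs. (1)-(2) amount to building a lookup table from the cumulative normalized histogram and applying it pixel-wise. The following is a minimal NumPy sketch, not the paper's MATLAB implementation; the function name and the toy image are illustrative:

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Histogram equalization per Eqs. (1)-(2):
    g[i, j] = floor((L - 1) * sum_{n=0}^{f[i, j]} p_n)."""
    # Normalized histogram p_n, one bin per possible intensity.
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / image.size
    # Transformation T(k) = floor((L - 1) * cumulative sum of p up to k).
    T = np.floor((levels - 1) * np.cumsum(p)).astype(image.dtype)
    return T[image]  # apply T as a lookup table, pixel-wise

# Toy 2x4 "image" with L = 4 grey levels.
img = np.array([[0, 0, 0, 0], [1, 1, 2, 3]], dtype=np.uint8)
out = equalize_histogram(img, levels=4)  # [[1 1 1 1] [2 2 2 3]]
```

Because T is monotone non-decreasing, the pixel ordering is preserved while the dark values spread out, which is exactly the contrast enhancement described above.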


 Figure. 2 (a) original image and (b) denoised image

 integral_0^y p_Y(z) dz = y/(L - 1) = integral_0^{T^{-1}(y)} p_X(x) dx    (4)

 p_Y(y) = d/dy ( integral_0^{T^{-1}(y)} p_X(x) dx ) = p_X(T^{-1}(y)) * (d/dy) T^{-1}(y)    (5)

 (d/dy) T^{-1}(y) = 1 / T'(x)|_{x = T^{-1}(y)} = 1 / ((L - 1) p_X(T^{-1}(y))), so that (L - 1) p_X(T^{-1}(y)) * (d/dy) T^{-1}(y) = 1 and p_Y(y) = 1/(L - 1)    (6)

3.2 Feature extraction

 After denoising the facial images, feature extraction is performed to obtain the feature values. Here, hybrid feature descriptors are applied to extract feature information from the denoised facial images. The hybrid feature descriptors are BRISK and LTP; a brief description of each is given below.

3.2.1. Binary robust invariant scalable key points

 BRISK is a texture descriptor that attains a significant matching quality with limited computation time and also generates valuable key-points from a face image. Generally, the BRISK descriptor applies a symmetric sampling pattern over sample points of smoothed pixels. The image intensity is represented as I, and Gaussian smoothing with a standard deviation sigma_i proportional to the distance between the points on the respective circles is applied. A key-point k in a facial image is characterized by its scale and position, and the sampling-point pairs are denoted as (p_i, p_j). Correspondingly, the smoothed intensities at the points are represented as I(p_i, sigma_i) and I(p_j, sigma_j), which are used to find the local gradients [19]. The local gradient g(p_i, p_j) is mathematically denoted in Eq. (7).

 g(p_i, p_j) = (p_j - p_i) * ( I(p_j, sigma_j) - I(p_i, sigma_i) ) / ||p_j - p_i||^2    (7)

 Then, the set A of sampling-point pairs is defined in Eq. (8).

 A = {(p_i, p_j) in R^2 x R^2 | i < N and j < i and i, j in N}    (8)

 where N is the number of sampling points. The pixel pairs are then partitioned into two subsets, short-distance pairs S_1 and long-distance pairs S_2, as mathematically indicated in Eqs. (9) and (10).

 S_1 = {(p_i, p_j) in A | ||p_j - p_i|| < delta_max} subset of A    (9)

 S_2 = {(p_i, p_j) in A | ||p_j - p_i|| > delta_min} subset of A    (10)

 The threshold distances are set as delta_max = 9.75t and delta_min = 13.67t (where t is the scale of the key-point k). The long-distance point pairs in S_2 are iterated to find the overall pattern direction g of the key-point, as mathematically represented in Eq. (11).

 g = (g_x, g_y) = (1/|S_2|) * sum_{(p_i, p_j) in S_2} g(p_i, p_j)    (11)

 The sampling pattern is then rotated by the orientation alpha = arctan2(g_y, g_x) of the key-point. The binary descriptor is generated using the short-distance pairs, with each bit b calculated from a pair in S_1. In this scenario, the descriptor length is fixed at 512 bits, obtained by comparing the smoothed intensities of the rotated short-distance pairs, as stated in Eq. (12).

 b = { 1, if I(p_j^alpha, sigma_j) > I(p_i^alpha, sigma_i); 0, otherwise }, for all (p_i^alpha, p_j^alpha) in S_1    (12)

3.2.2. Local ternary pattern

 LTP is an extension of the Local Binary Pattern (LBP) that utilizes a thresholding constant to threshold the pixels into three values. Let us consider t as the thresholding constant, p as a neighbouring pixel value, and c as the centre pixel value. The result of the thresholding is determined by using Eq. (13).
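The three-way thresholding rule that LTP applies to each neighbourhood, and the subsequent split into two binary patterns, can be sketched as follows. This is a minimal NumPy sketch; the function names and the sample pixel values are illustrative, not taken from the paper:

```python
import numpy as np

def ltp_code(center, neighbors, t):
    """Ternary code per the LTP thresholding rule:
    +1 if p > c + t, -1 if p < c - t, 0 otherwise."""
    neighbors = np.asarray(neighbors, dtype=np.int32)
    code = np.zeros(neighbors.shape, dtype=np.int32)
    code[neighbors > center + t] = 1
    code[neighbors < center - t] = -1
    return code

def split_ternary(code):
    """Split the ternary pattern into an upper and a lower binary
    pattern; their histograms are concatenated into the descriptor."""
    return (code == 1).astype(np.uint8), (code == -1).astype(np.uint8)

c, t = 54, 5                        # centre pixel and threshold constant
nbrs = [60, 52, 40, 57, 54, 30, 70, 49]
tern = ltp_code(c, nbrs, t)         # [ 1  0 -1  0  0 -1  1  0]
upper, lower = split_ternary(tern)
```

The two binary patterns are then histogrammed per image block, exactly as LBP histograms are, which yields the descriptor of double the LBP size described above.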


 f(p, c, t) = { 1, if p > c + t; 0, if c - t <= p <= c + t; -1, if p < c - t }    (13)

 In this scenario, every thresholded pixel takes one of three values, and the neighbouring pixels are combined after thresholding into a ternary pattern. Computing a histogram of these ternary values would result in a large value range, so the ternary pattern is split into two binary patterns. Their histograms are concatenated to generate a descriptor (double the size of LBP), which is very successful in facial recognition applications. The basic idea of LTP is to transform the intensity space to an order space, where the order of neighbouring pixels is utilized to create a monotonic, illumination-invariant code for every image [20].

3.3 Feature optimization

 After feature extraction, optimization is performed to reduce the dimension of the extracted features. In recent periods, numerous optimization algorithms have been developed for feature optimization; among them, the fire-fly optimizer is superior in dealing with global optimization concerns. The fire-fly optimization algorithm was established by Xin-She Yang at Cambridge University based on the flashing behaviour of fire-flies. Generally, the fire-fly optimizer relies on three idealized rules concerning attractiveness, sex, and brightness. All fire-flies are unisex, so a fire-fly is attracted to other fire-flies regardless of their sex. The brightness of a fire-fly is evaluated by the landscape of the objective function value. In addition, attractiveness is directly proportional to brightness, so the less bright fire-flies move towards the brighter fire-flies.
 The attractiveness between two fire-flies automatically decreases as the distance between them increases. The two major quantities in the fire-fly optimization algorithm are the amount of light absorption and the variation of light intensity. For simplicity, the fire-fly attractiveness is assumed to depend on the objective function through the light intensity of adjacent fire-flies. The attractiveness beta of a fire-fly is mathematically defined in Eq. (14).

 beta(r) = beta_0 * e^{-gamma * r^2}    (14)

 where beta_0 is the attractiveness at r = 0 and gamma is the coefficient of light absorption at the source. The distance between fire-flies i and j at positions x_i and x_j is mathematically represented in Eqs. (15) and (16) on the basis of the Chi-square distance measure.

 r_ij = ||x_i - x_j||    (15)

 r_ij = sqrt( (1/2) * sum_{x=0}^{G-1} sum_{y=0}^{G-1} [H_i(x, y) - H_j(x, y)]^2 / (H_i(x, y) + H_j(x, y)) )    (16)

 where G is the total number of grey levels, x and y are the spatial coordinates of fire-flies i and j, and H_i(x, y) is the corresponding portion of the spatial coordination of fire-fly i in the two-dimensional state. Usually, the Cartesian distance is used to find the distance between fire-flies i and j. In the enhanced algorithm, the Chi-square distance is used instead, because it consumes only limited time for deciding the distances between fire-flies, which is enough to achieve accurate neighbourhood selection for better recognition. The movement of a fire-fly i attracted to a brighter fire-fly j is determined using Eq. (17).

 x_i = x_i + beta_0 * e^{-gamma * r_ij^2} * (x_j - x_i) + alpha * (rand - 1/2)    (17)

 where the second term beta_0 * e^{-gamma * r_ij^2} * (x_j - x_i) represents the attraction, the third term alpha * (rand - 1/2) represents randomization with a parameter alpha, and rand is a random number generator uniformly distributed on the interval [0, 1].
 In most cases, beta_0 is equal to one and alpha is in [0, 1]. The parameter gamma determines the variation in attractiveness, and its value is essential in evaluating the speed of convergence and the behaviour of the fire-flies. Theoretically, gamma ranges within [0, infinity). The pseudo code of the enhanced firefly optimization algorithm is detailed below.

3.3.1. Pseudo code of enhanced firefly optimization algorithm

 Initialize firefly optimization algorithm parameters;
 MaxGen -> maximum number of generations;
 t = 1;
 Objective function -> f(x), where x = (x_1, ..., x_d);

 Generate initial population of fireflies x_i (i = 1, 2, ..., n);
 Define light intensity I_i at x_i via f(x_i), and calculate the distance between fire-flies i and j using the Chi-square distance;
 While (t < MaxGen)
   For i = 1 to n
     For j = 1 to n
       If (I_j > I_i), move firefly i towards j; end if
       Determine new solutions and update light intensity;
     End for j;
   End for i;
   Rank the fire-flies and identify the current best;
 End while;
 Post-process: visualization and results;
 End procedure;

3.4 Classification

 After selecting the optimal features, classification is accomplished using a DBN to classify the facial images. The DBN is an important methodology among deep learning models, built from a stack of Restricted Boltzmann Machines (RBMs). The learned activations of one RBM are utilized as the "data" for the succeeding RBM in the stack. An RBM is an undirected graphical model in which visible variables v are linked to hidden units h by undirected weights. The connections are restricted, so there are no connections within the hidden variables or within the visible variables. Eq. (18) shows the probability distribution P(v, h) defined by the energy function E(v, h; theta) of a binary RBM.

 P(v, h; theta) is proportional to e^{-E(v, h; theta)}, with E(v, h; theta) = - sum_{i=1}^{|V|} sum_{j=1}^{|H|} w_ij v_i h_j - sum_{i=1}^{|V|} b_i v_i - sum_{j=1}^{|H|} a_j h_j    (18)

 where theta = (w, b, a) is the parameter set, w_ij is the symmetric weight between visible unit v_i and hidden unit h_j, and b_i and a_j are the biases. The numbers of visible and hidden units are denoted |V| and |H|. This configuration makes it easy to compute the conditional probability distributions when v or h is fixed, as mathematically described in Eqs. (19) and (20).

 P(h_j = 1 | v; theta) = sigma( sum_{i=1}^{|V|} w_ij v_i + a_j )    (19)

 P(v_i = 1 | h; theta) = sigma( sum_{j=1}^{|H|} w_ij h_j + b_i )    (20)

 where sigma(x) = 1/(1 + e^{-x}) is the sigmoid function. The parameters theta = (w, b, a) are learned using contrastive divergence. The reason for using the DBN classifier is that the parameters theta obtained from an RBM define both P(v | h, theta) and the prior distribution P(h | theta). Therefore, the probability of generating the visible variables is given in Eq. (21).

 P(v) = sum_h P(h | theta) P(v | h, theta)    (21)

 After theta is learned from an RBM, P(v | h, theta) is kept, and P(h | theta) is replaced by a consecutive RBM, which treats the hidden layer of the previous RBM as its visible layer.

4. Result and discussion

 In the experimental segment, the proposed work was simulated using MATLAB (version 2018a) on a 3.0 GHz Intel i3 processor with a 2 TB hard disc and 4 GB RAM. The performance of the proposed work was compared with the existing works [13, 14] to estimate its effectiveness and efficiency, and was validated in terms of accuracy, precision, and recall.

4.1 Performance metric

 Performance measurement is the procedure of collecting, reporting, and analysing information about the performance of a group or individual. The mathematical equations of precision, accuracy, and recall are denoted in Eqs. (22), (23), and (24).

 Precision = TP / (TP + FP) * 100    (22)

 Accuracy = (TP + TN) / (TP + TN + FP + FN) * 100    (23)

 Recall = TP / (TP + FN) * 100    (24)

 where TP denotes true positives, FP false positives, FN false negatives, and TN true negatives.

 Figure. 3 Sample images of ORL database
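Eqs. (22)-(24) translate directly into code. A short sketch follows; the confusion-matrix counts are made up for illustration and are not taken from the paper's experiments:

```python
def precision(tp, fp):
    return tp / (tp + fp) * 100                     # Eq. (22)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn) * 100    # Eq. (23)

def recall(tp, fn):
    return tp / (tp + fn) * 100                     # Eq. (24)

# Illustrative counts only:
tp, fp, fn, tn = 90, 10, 5, 95
p = precision(tp, fp)        # 90.0
a = accuracy(tp, tn, fp, fn) # 92.5
r = recall(tp, fn)           # about 94.74
```

Note that precision and recall share the same numerator TP but differ in the error type they penalize (FP versus FN), which is why both are reported alongside accuracy in the tables below.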


Table 1. Performance evaluation of proposed work with dissimilar performance measures on ORL database

 Classifier | Precision (%) | Recall (%) | Accuracy (%)
 SVM        | 61.2          | 67.09      | 70.98
 KNN        | 72.39         | 78.01      | 79
 LSTM       | 91.90         | 93.98      | 90.87
 DBN        | 96            | 97.96      | 98.92

4.2 Quantitative analysis on ORL dataset

 In this segment, the ORL dataset is used to evaluate the performance of the proposed work. The ORL facial dataset comprises 400 facial images of 40 individuals, with 10 face images per subject. For some individuals, the facial images were taken at different times and with varying lighting, facial expressions (smiling/not smiling, eyes open/closed), and facial details (glasses/no glasses). Sample images of the ORL database are represented in Fig. 3.
 In Table 1, the performance of the proposed work is evaluated by means of precision, accuracy, and recall on the ORL database. Here, the evaluation uses 70% of the data for training and 30% for testing. The validation outcome shows that the DBN classifier outperformed existing classification methodologies such as the Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Long Short-Term Memory (LSTM). The precision of the DBN classifier is 96%, while SVM, KNN, and LSTM deliver 61.2%, 72.39%, and 91.90% precision. Similarly, the recall of the DBN classifier is 97.96%, against 67.09%, 78.01%, and 93.98% for the comparative methods. Additionally, the accuracy of the DBN is 98.92%, while SVM, KNN, and LSTM attain 70.98%, 79%, and 90.87% accuracy. A graphical comparison in terms of precision, recall, and accuracy on the ORL dataset is indicated in Fig. 4.

 Figure. 4 Graphical comparison of proposed work with dissimilar classifiers on ORL dataset

 Figure. 5 Sample images of YALE database

4.3 Quantitative analysis on YALE dataset

 In this section, the YALE dataset is used to evaluate the performance of the proposed work. The YALE database comprises 165 grayscale images of 15 individuals, with 11 images per subject, one per facial expression or configuration: center-light, happy, sad, sleepy, with/without glasses, normal, surprised, and wink. Sample images of the YALE dataset are indicated in Fig. 5.
 In Table 2, the performance of the proposed work is evaluated on the YALE database. From the experimental investigation, the accuracy of the DBN is 97.92%, while the existing classification approaches (SVM, KNN, and LSTM) achieve 67.20%, 82.30%, and 90%. The precision and recall of the DBN classification approach are 98.90% and 97%, respectively. In contrast, the comparative classifiers achieve lower precision and recall than the DBN. A graphical comparison in terms of precision, recall, and accuracy on the YALE dataset is indicated in Fig. 6.

Table 2. Performance evaluation of proposed work with dissimilar performance measures on YALE database

 Classifier | Precision (%) | Recall (%) | Accuracy (%)
 SVM        | 68.27         | 70.92      | 67.20
 KNN        | 75.88         | 80.80      | 82.30
 LSTM       | 92.93         | 94.98      | 90
 DBN        | 98.90         | 97         | 97.92

 Figure. 6 Graphical comparison of proposed work with dissimilar classifiers on YALE dataset
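The 70%/30% protocol used in these tables can be reproduced with a simple random split. Below is a NumPy sketch using a synthetic stand-in for the ORL data (400 vectors of 10304 = 92 x 112 pixels; the array contents are random numbers, not real face images):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((400, 10304))        # stand-in for 400 flattened ORL faces
y = np.repeat(np.arange(40), 10)    # 40 subjects x 10 images each

# 70% training / 30% testing split, as in the evaluation above.
idx = rng.permutation(len(X))
cut = round(0.7 * len(X))           # 280 training samples
X_train, y_train = X[idx[:cut]], y[idx[:cut]]
X_test, y_test = X[idx[cut:]], y[idx[cut:]]
```

Shuffling before the cut matters here because the labels are stored in subject order; a contiguous split would leave entire subjects out of the training set.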

 Figure. 7 Sample facial images of FASSEG dataset

4.4 Quantitative analysis on FASSEG dataset

 In this section, the FASSEG dataset is used to evaluate the performance of the proposed work. The FASSEG dataset comprises four subsets (multipose01, frontal03, frontal02, and frontal01), which include several facial images; it is widely used for training and testing automatic face segmentation methods. The multipose01 subset consists of 200 facial images with ground-truth masks for six classes (background, mouth, eyes, nose, skin, and hair). The frontal01 and frontal02 subsets consist of 70 RGB images with labelled ground-truth masks. Finally, the frontal03 subset contains 150 face masks captured under different facial expressions, orientations, and illumination conditions. Sample facial images of the FASSEG dataset are denoted in Fig. 7.
 In Table 3, the performance of the proposed work is evaluated on the FASSEG database. From the experimental study, the accuracy of the DBN is 98.12%, while the existing classification approaches (SVM, KNN, and LSTM) achieve 68.18%, 84.98%, and 93.22%. Similarly, the precision and recall of the DBN classifier are 94.09% and 96.5%. In contrast, the comparative classifiers achieve lower precision and recall than the DBN. A graphical comparison in terms of precision, recall, and accuracy on the FASSEG database is specified in Fig. 8.

Table 3. Performance evaluation of proposed work with dissimilar performance measures on FASSEG database

 Classifier | Precision (%) | Recall (%) | Accuracy (%)
 SVM        | 80.83         | 87.93      | 68.18
 KNN        | 77.09         | 88         | 84.98
 LSTM       | 89            | 91.25      | 93.22
 DBN        | 94.09         | 96.5       | 98.12

 Figure. 8 Graphical comparison of proposed work with dissimilar classifiers on FASSEG dataset

4.5 Comparative analysis

 The comparative study of the proposed and existing work is represented in Table 4. L. Zhou, W. Li, Y. Du, B. Lei, and S. Liang [13] developed an Illumination Invariant Local Binary Descriptor Learning (IILBDL) method for face recognition. In that research, exponential discriminant analysis was utilized as a pre-processing method that enhances the discriminative ability of the image by enlarging the margin between dissimilar classes relative to the same class. In the experimental phase, the developed method attained 91.48% recognition accuracy on the YALE dataset. In addition, A. Ouyang, Y. Liu, S. Pei, X. Peng, M. He, and Q. Wang [14] developed a hybrid approach for face recognition by combining PNNs and IKLDA. The developed method (IKLDA+PNN) effectively enhances precision and computing efficiency. In that work, the experimental investigation was conducted on the AR, ORL, and YALE datasets, which comprise a wide variety of facial details and expressions.
 In the experimental consequence, the developed work achieved on average 91.31% accuracy on the ORL dataset and 77% accuracy on the YALE dataset. Compared to the existing work, the proposed work attained 98.92% accuracy on the ORL dataset and 97.92% on the YALE dataset, which is superior to the existing work. As discussed, feature optimization is a fundamental part of the facial recognition system proposed in this article.

Table 4. Comparative study of proposed and existing work

 Methodology     | Datasets | Accuracy (%)
 IILBDL [13]     | YALE     | 91.48
 IKLDA+PNN [14]  | ORL      | 91.31
 IKLDA+PNN [14]  | YALE     | 77
 Proposed work   | ORL      | 98.92
 Proposed work   | YALE     | 97.92
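The DBN classifier compared throughout Tables 1-4 is, structurally, a stack of RBMs trained greedily layer by layer (Eqs. (18)-(21)) with a classifier on top. That structure can be sketched with scikit-learn's `BernoulliRBM` (trained by persistent contrastive divergence) feeding a logistic output layer. The layer sizes and the synthetic data below are illustrative assumptions, not the paper's configuration, and this sketch omits the joint fine-tuning a full DBN would perform:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((60, 64))           # 60 synthetic feature vectors in [0, 1]
y = np.repeat(np.arange(6), 10)    # 6 "subjects", 10 samples each

# Greedy layer-wise stack: each RBM's hidden activations feed the next
# RBM as its "visible" data, per Eq. (21); logistic layer classifies.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),
])
dbn.fit(X, y)
pred = dbn.predict(X)              # class labels in {0, ..., 5}
```

In the paper's pipeline, the input to this classifier would be the feature vectors retained by the enhanced fire-fly optimizer rather than raw pixels.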


 Since several feature vectors are obtained during feature extraction with BRISK and LTP, the enhanced firefly optimizer is used to select the active features that are best suited for classification.

5. Conclusion

 The main aim of this experimental study is to propose a proper feature optimization and classification algorithm for recognising facial images from the ORL, YALE, and FASSEG datasets. In this research, BRISK and LTP features are utilized for extracting the features from the denoised facial images. Then, the enhanced firefly optimization algorithm is used for selecting the optimal feature vectors; by selecting the optimal subset of the extracted features, a set of the most dominant discriminative features is obtained. These optimal features are classified using the DBN classifier. Quantitative and comparative analysis shows that the proposed work delivered an effective performance compared to the existing work: it attained 98.92% accuracy on the ORL dataset, 97.92% on the YALE dataset, and 98.12% on the FASSEG dataset, whereas the existing work attained only limited recognition accuracy. In future work, a new hybrid optimization algorithm will be developed to further improve face recognition performance.

References
[1] M. Iqbal, M. S. I. Sameem, N. Naqvi, S. Kanwal, and Z. Ye, “A deep learning approach for face recognition based on angularly discriminative features”, Pattern Recognition Letters, Vol. 128, pp. 414-419, 2019.
[2] W. Yang, X. Zhang, and J. Li, “A Local Multiple Patterns Feature Descriptor for Face Recognition”, Neurocomputing, 2019.
[3] M. Kas, Y. Ruichek, and R. Messoussi, “Mixed neighborhood topology cross decoded patterns for image-based face recognition”, Expert Systems with Applications, Vol. 114, pp. 119-142, 2018.
[4] Y. Li, W. Zheng, Z. Cui, and T. Zhang, “Face recognition based on recurrent regression neural network”, Neurocomputing, Vol. 297, pp. 50-58, 2018.
[5] F. Cao, H. Hu, J. Lu, J. Zhao, Z. Zhou, and J. Wu, “Pose and illumination variable face recognition via sparse representation and illumination dictionary”, Knowledge-Based Systems, Vol. 107, pp. 117-128, 2016.
[6] B. Li and G. Huo, “Face recognition using locality sensitive histograms of oriented gradients”, Optik - International Journal for Light and Electron Optics, Vol. 127, No. 6, pp. 3489-3494, 2016.
[7] W. H. Lin, P. Wang, and C. F. Tsai, “Face recognition using support vector model classifier for user authentication”, Electronic Commerce Research and Applications, Vol. 18, pp. 71-82, 2016.
[8] X. Wu, Q. Li, L. Xu, K. Chen, and L. Yao, “Multi-feature kernel discriminant dictionary learning for face recognition”, Pattern Recognition, Vol. 66, pp. 404-411, 2017.
[9] G. Feng, H. Li, J. Dong, and J. Zhang, “Face recognition based on Volterra kernels direct discriminant analysis and effective feature classification”, Information Sciences, Vol. 441, pp. 187-197, 2018.
[10] H. Gan, “A noise-robust semi-supervised dimensionality reduction method for face recognition”, Optik, Vol. 157, pp. 858-865, 2018.
[11] A. Alelaiwi, W. Abdul, M. S. Dewan, M. Migdadi, and G. Muhammad, “Steerable pyramid transform and local binary pattern based robust face recognition for e-health secured login”, Computers & Electrical Engineering, Vol. 53, pp. 435-443, 2016.
[12] W. Yang, Z. Wang, and B. Zhang, “Face recognition using adaptive local ternary patterns method”, Neurocomputing, Vol. 213, pp. 183-190, 2016.
[13] L. Zhou, W. Li, Y. Du, B. Lei, and S. Liang, “Adaptive illumination-invariant face recognition via local nonlinear multi-layer contrast feature”, Journal of Visual Communication and Image Representation, Vol. 64, p. 102641, 2019.
[14] A. Ouyang, Y. Liu, S. Pei, X. Peng, M. He, and Q. Wang, “A hybrid improved kernel LDA and PNN algorithm for efficient face recognition”, Neurocomputing, 2019.
[15] Q. Liu, C. Wang, and X. Y. Jing, “Dual multi-kernel discriminant analysis for color face recognition”, Optik, Vol. 139, pp. 185-201, 2017.
[16] C. H. Yoo, S. W. Kim, J. Y. Jung, and S. J. Ko, “High-dimensional feature extraction using bit-plane decomposition of local binary patterns for robust face recognition”, Journal of Visual Communication and Image Representation, Vol. 45, pp. 11-19, 2017.
[17] I. M. Revina and W. S. Emmanuel, “Face Expression Recognition with the Optimization based Multi-SVNN Classifier and the Modified LDP Features”, Journal of Visual Communication and Image Representation, Vol. 62, pp. 43-55, 2019.
[18] H. S. Suresha and S. S. Parthasarathi, “ReliefF
 feature selection based Alzheimer disease
 classification using hybrid features and support
 vector machine in magnetic resonance imaging”,
 International Journal of Computer Engineering
 & Technology (IJCET), Vol. 10, No. 1, pp. 124-
 137, 2019.
[19] S. Leutenegger, M. Chli, and R. Siegwart,
 “BRISK: Binary robust invariant scalable key
 points”, In: Proc. of IEEE International
 Conference on Computer Vision (ICCV), pp.
 2548-2555, 2011.
[20] V. Naghashi, “Co-occurrence of adjacent sparse
 local ternary patterns: A feature descriptor for
 texture and face image retrieval”, Optik, Vol.
 157, pp. 877-889, 2018.
[21] ORL dataset: http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html
[22] YALE dataset: https://www.kaggle.com/olgabelitskaya/yale-face-database
[23] FASSEG dataset: http://massimomauro.github.io/FASSEG-repository/
