
Dr. Hammam Alshazly
University of Luebeck

Research Keywords & Expertise

Biometrics
Computer Vision and Image Processing
Computer Vision, Machine Learning, Artificial Intelligence, Security, Biometrics, Intelligent Transportation Systems
Computer Vision and Artificial Intelligence
Computer Vision and Deep Learning

Fingerprints

Computer Vision and Artificial Intelligence
Computer Vision and Image Processing
Computer Vision and deep learning
Biometrics




Feed

Journal article
Published: 29 July 2021 in PeerJ Computer Science

In this paper we propose two novel deep convolutional network architectures, CovidResNet and CovidDenseNet, to diagnose COVID-19 based on CT images. The models enable transfer learning between different architectures, which might significantly boost the diagnostic performance. Whereas novel architectures usually suffer from the lack of pretrained weights, our proposed models can be partly initialized with larger baseline models like ResNet50 and DenseNet121, which is attractive because of the abundance of public repositories. The architectures are utilized in a first experimental study on the SARS-CoV-2 CT-scan dataset, which contains 4173 CT images for 210 subjects structured in a subject-wise manner into three different classes. The models differentiate between COVID-19, non-COVID-19 viral pneumonia, and healthy samples. We also investigate their performance under three binary classification scenarios where we distinguish COVID-19 from healthy, COVID-19 from non-COVID-19 viral pneumonia, and non-COVID-19 from healthy, respectively. Our proposed models achieve up to 93.87% accuracy, 99.13% precision, 92.49% sensitivity, 97.73% specificity, 95.70% F1-score, and 96.80% AUC score for binary classification, and up to 83.89% accuracy, 80.36% precision, 82.04% sensitivity, 92.07% specificity, 81.05% F1-score, and 94.20% AUC score for the three-class classification tasks. We also validated our models on the COVID19-CT dataset to differentiate COVID-19 and other non-COVID-19 viral infections, and our CovidDenseNet model achieved the best performance with 81.77% accuracy, 79.05% precision, 84.69% sensitivity, 79.05% specificity, 81.77% F1-score, and 87.50% AUC score. The experimental results reveal the effectiveness of the proposed networks in automated COVID-19 detection where they outperform standard models on the considered datasets while being more efficient.

ACS Style

Hammam Alshazly; Christoph Linse; Mohamed Abdalla; Erhardt Barth; Thomas Martinetz. COVID-Nets: deep CNN architectures for detecting COVID-19 using chest CT scans. PeerJ Computer Science 2021, 7, e655.

Journal article
Published: 20 July 2021 in IEEE Access

The Seagull Optimization Algorithm (SOA) is a metaheuristic that mimics the migrating and hunting behaviour of seagulls. SOA can solve continuous real-life problems, but not discrete ones. Eight different binary versions of SOA are proposed in this paper. The proposed algorithm uses four transfer functions, of S-shaped and V-shaped types, to map the continuous search space into a discrete one. Twenty-five benchmark functions are used to validate the performance of the proposed algorithm, and its statistical significance is also analysed. Experimental results show that the proposed algorithm outperforms the competing algorithms. The proposed algorithm is also applied to data mining, where the results demonstrate the superiority of the binary seagull optimization algorithm.
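
The core mechanism described above, transfer functions that turn a continuous step value into bit updates, can be sketched in a few lines. The specific functions below are generic examples from the binary-metaheuristics literature, not necessarily the four used in the paper:

```python
import numpy as np

def s_shaped(x):
    # Sigmoid: a large positive step makes the bit likely to become 1.
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    # |tanh|: a large step magnitude makes the bit likely to flip.
    return np.abs(np.tanh(x))

def binarize_s(step, rng):
    # S-shaped rule: set each bit to 1 with probability s_shaped(step).
    return (rng.random(step.shape) < s_shaped(step)).astype(int)

def binarize_v(bits, step, rng):
    # V-shaped rule: flip each current bit with probability v_shaped(step).
    flip = rng.random(step.shape) < v_shaped(step)
    return np.where(flip, 1 - bits, bits)

rng = np.random.default_rng(0)
step = np.array([-4.0, 0.0, 4.0])
bits = binarize_s(step, rng)
print(bits)
```

With an S-shaped rule the bit is resampled from the step's sign, while a V-shaped rule preserves the current bit when the step is small, which is why both families are usually compared.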

ACS Style

Vijay Kumar; Dinesh Kumar; Manjit Kaur; Dilbag Singh; Sahar Ahmed Idris; Hammam Alshazly. A Novel Binary Seagull Optimizer and its Application to Feature Selection Problem. IEEE Access 2021, 9, 103481-103496.

Journal article
Published: 14 June 2021 in Sensors

The k-means algorithm is a clustering method that has gained wide acceptance. However, its performance depends strongly on the initial cluster centers, and due to its weak exploration capability it easily gets stuck in local optima. Recently, a metaheuristic called the Moth Flame Optimizer (MFO) was proposed to handle complex problems. MFO simulates the navigation mechanism moths use in nature, known as transverse orientation, and its performance has been found quite satisfactory in various studies. This paper suggests a novel heuristic approach based on MFO to solve data clustering problems. To validate the competitiveness of the proposed approach, experiments were conducted using the Shape and UCI benchmark datasets, comparing the approach with five state-of-the-art algorithms over twelve datasets. The mean performance of the proposed algorithm is superior on 10 datasets and comparable on the remaining two. The analysis of the experimental results confirms the efficacy of the suggested approach.
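
The transverse-orientation behaviour that MFO simulates is usually modelled in the MFO literature as a logarithmic spiral around a flame (a best solution found so far). A minimal sketch of that standard update rule, independent of the clustering specifics of this paper:

```python
import numpy as np

def mfo_spiral_update(moth, flame, t, b=1.0):
    # Logarithmic-spiral move from the MFO literature: the moth circles
    # its flame, with t in [-1, 1] controlling how close it lands.
    # new_position = D * exp(b*t) * cos(2*pi*t) + flame, D = |flame - moth|
    d = np.abs(flame - moth)
    return d * np.exp(b * t) * np.cos(2 * np.pi * t) + flame

moth = np.array([0.0, 10.0])
flame = np.array([5.0, 5.0])
print(mfo_spiral_update(moth, flame, t=-1.0))
```

In a clustering setting the position vectors would encode candidate cluster centers, with the objective being an intra-cluster distance measure; those details are the paper's and are not reproduced here.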

ACS Style

Tribhuvan Singh; Nitin Saxena; Manju Khurana; Dilbag Singh; Mohamed Abdalla; Hammam Alshazly. Data Clustering Using Moth-Flame Optimization Algorithm. Sensors 2021, 21, 4086.

Preprint content
Published: 27 April 2021

This paper introduces two novel deep convolutional neural network (CNN) architectures for automated detection of COVID-19. The first model, CovidResNet, is inspired by the deep residual network (ResNet) architecture. The second model, CovidDenseNet, exploits the power of densely connected convolutional networks (DenseNet). The proposed networks are designed to provide fast and accurate diagnosis of COVID-19 using computed tomography (CT) images for the multi-class and binary classification tasks. The architectures are utilized in a first experimental study on the SARS-CoV-2 CT-scan dataset, which contains 4173 CT images for 210 subjects structured in a subject-wise manner for three different classes. First, we train and test the networks to differentiate COVID-19, non-COVID-19 viral infections, and healthy samples. Second, we train and test the networks on binary classification with three different scenarios: COVID-19 vs. healthy, COVID-19 vs. other non-COVID-19 viral pneumonia, and non-COVID-19 viral pneumonia vs. healthy. Our proposed models achieve up to 93.96% accuracy, 99.13% precision, 94% sensitivity, 97.73% specificity, and a 95.80% F1-score for binary classification, and up to 83.89% accuracy, 80.36% precision, 82% sensitivity, 92% specificity, and an 81% F1-score for the three-class classification tasks. The experimental results reveal the validity and effectiveness of the proposed networks in automated COVID-19 detection. The proposed models also outperform the baseline ResNet and DenseNet architectures while being more efficient.

ACS Style

Hammam Alshazly; Christoph Linse; Mohamed Abdalla; Erhardt Barth; Thomas Martinetz. COVID-Nets: Deep CNN Architectures for Detecting COVID-19 Using Chest CT Scans. 2021, 1.

Journal article
Published: 11 January 2021 in Sensors

This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19 infected people in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized input tailored for each deep architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, the SARS-CoV-2 CT-scan and the COVID19-CT. The results show superior performances for our models compared with previous studies. Our best models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, 99.6%, and 99.4% on the SARS-CoV-2 dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models’ predictions. Feature visualizations of the learned features show well-separated clusters representing CT images of COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models are not only capable of identifying COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as indicated by well-trained radiologists.

ACS Style

Hammam Alshazly; Christoph Linse; Erhardt Barth; Thomas Martinetz. Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning. Sensors 2021, 21, 455.

Journal article
Published: 14 September 2020 in IEEE Access

This paper employs state-of-the-art Deep Convolutional Neural Networks (CNNs), namely AlexNet, VGGNet, Inception, ResNet and ResNeXt in a first experimental study of ear recognition on the unconstrained EarVN1.0 dataset. As the dataset size is still insufficient to train deep CNNs from scratch, we utilize transfer learning and propose different domain adaptation strategies. The experiments show that our networks, which are fine-tuned using custom-sized inputs determined specifically for each CNN architecture, obtain state-of-the-art recognition performance where a single ResNeXt101 model achieves a rank-1 recognition accuracy of 93.45%. Moreover, we achieve the best rank-1 recognition accuracy of 95.85% using an ensemble of fine-tuned ResNeXt101 models. In order to explain the performance differences between models and make our results more interpretable, we employ the t-SNE algorithm to explore and visualize the learned features. Feature visualizations show well-separated clusters representing ear images of the different subjects. This indicates that discriminative and ear-specific features are learned when applying our proposed learning strategies.

ACS Style

Hammam Alshazly; Christoph Linse; Erhardt Barth; Thomas Martinetz. Deep Convolutional Neural Networks for Unconstrained Ear Recognition. IEEE Access 2020, 8, 170295-170310.

Article
Published: 19 August 2020 in Multimedia Tools and Applications

Extraction and description of image features is an active research topic and important for several computer vision applications. This paper presents a new noise-tolerant and rotation-invariant local feature descriptor called robust local oriented patterns (RLOP). The proposed descriptor extracts local image structures using edge directional information to produce rotation-invariant patterns that remain effective under noise and changing illumination. This is achieved by a non-linear combination of two specific strategies in the same formula: binarizing the neighborhood pixels of a patch and assigning binomial weights. In the encoding methodology, the neighboring pixels are binarized with respect to the mean value of the pixels in a 3 × 3 image patch instead of the center pixel. Thus, the obtained codes are rotation-invariant and more robust against noise and other non-monotonic grayscale variations. Ear recognition is considered as a representative problem, since the ear involves localized patterns and textures. The proposed descriptor encodes all image pixels, and the resulting RLOP-encoded image is divided into several regions. Histograms of the regions are constructed to estimate the distribution of features, and all histograms are then concatenated to form the final descriptor. The robustness and effectiveness of the proposed descriptor are evaluated through several identification and verification experiments on four different ear databases: IIT Delhi-I, IIT Delhi-II, AMI, and AWE. The proposed descriptor outperforms state-of-the-art texture-based approaches, achieving an average recognition rate of 98% and providing the best performance among the tested descriptors.
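
The final step of the pipeline described above, dividing an encoded image into regions, histogramming each region, and concatenating the histograms, can be sketched as follows. The grid size and bin count are illustrative choices, and the input is assumed to be an already-encoded image (the RLOP encoding itself is not reproduced here):

```python
import numpy as np

def region_histogram_descriptor(coded, grid=(4, 4), n_codes=256):
    # Split an already-encoded image (one integer code per pixel) into a
    # grid of regions, histogram each region, and concatenate the
    # histograms into one feature vector, as in LBP/RLOP-style pipelines.
    h, w = coded.shape
    rows = np.array_split(np.arange(h), grid[0])
    cols = np.array_split(np.arange(w), grid[1])
    hists = []
    for r in rows:
        for c in cols:
            block = coded[np.ix_(r, c)]
            hist, _ = np.histogram(block, bins=n_codes, range=(0, n_codes))
            hists.append(hist / max(block.size, 1))  # normalise per region
    return np.concatenate(hists)

coded = np.random.default_rng(0).integers(0, 256, size=(64, 64))
desc = region_histogram_descriptor(coded)
print(desc.shape)
```

Keeping per-region histograms (rather than one global histogram) preserves coarse spatial layout, which is why this scheme is standard for texture descriptors.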

ACS Style

M. Hassaballah; H. A. Alshazly; Abdelmgeid A. Ali. Robust local oriented patterns for ear recognition. Multimedia Tools and Applications 2020, 79, 31183-31204.

Journal article
Published: 08 December 2019 in Symmetry

Ear recognition is an active research area in the biometrics community, with the ultimate goal of recognizing individuals effectively from ear images. Traditional ear recognition methods based on handcrafted features and conventional machine learning classifiers were the prominent techniques during the last two decades. Arguably, feature extraction is the crucial phase for the success of these methods, due to the difficulty of designing features robust to the variations in the given images. Currently, ear recognition research is shifting towards features extracted by Convolutional Neural Networks (CNNs), which can learn more specific features that are robust to wide image variations and achieve state-of-the-art recognition performance. This paper presents and compares ear recognition models built with handcrafted and CNN features. First, we experiment with seven top-performing handcrafted descriptors to extract the discriminating ear image features and then train Support Vector Machines (SVMs) on the extracted features to learn a suitable model. Second, we introduce four CNN-based models using a variant of the AlexNet architecture. The experimental results on three ear datasets show the superior performance of the CNN-based models by 22%. To further substantiate the comparison, we visualize the handcrafted and CNN features using the t-distributed Stochastic Neighbor Embedding (t-SNE) technique and discuss the characteristics of the features. Moreover, we conduct experiments to investigate the symmetry of the left and right ears; the results obtained on two datasets indicate a high degree of symmetry between the ears, while a fair degree of asymmetry also exists.

ACS Style

Hammam Alshazly; Christoph Linse; Erhardt Barth; Thomas Martinetz. Handcrafted versus CNN Features for Ear Recognition. Symmetry 2019, 11, 1493.

Journal article
Published: 24 September 2019 in Sensors

The recognition performance of visual recognition systems is highly dependent on extracting and representing the discriminative characteristics of image data. Convolutional neural networks (CNNs) have shown unprecedented success in a variety of visual recognition tasks due to their capability to provide in-depth representations exploiting visual image features of appearance, color, and texture. This paper presents a novel system for ear recognition based on ensembles of deep CNN-based models and more specifically the Visual Geometry Group (VGG)-like network architectures for extracting discriminative deep features from ear images. We began by training different networks of increasing depth on ear images with random weight initialization. Then, we examined pretrained models as feature extractors as well as fine-tuning them on ear images. After that, we built ensembles of the best models to further improve the recognition performance. We evaluated the proposed ensembles through identification experiments using ear images acquired under controlled and uncontrolled conditions from mathematical analysis of images (AMI), AMI cropped (AMIC) (introduced here), and West Pomeranian University of Technology (WPUT) ear datasets. The experimental results indicate that our ensembles of models yield the best performance with significant improvements over the recently published results. Moreover, we provide visual explanations of the learned features by highlighting the relevant image regions utilized by the models for making decisions or predictions.
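
A common way to build such ensembles is soft voting, averaging the per-class probabilities of the individual models and taking the argmax. The sketch below illustrates that idea with hypothetical model outputs; it is not necessarily the paper's exact combination scheme:

```python
import numpy as np

def ensemble_predict(prob_list):
    # Soft voting: average per-class probabilities over the models,
    # then pick the class with the highest mean probability.
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1), avg

# Three hypothetical models scoring two samples over three classes:
m1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
m2 = np.array([[0.4, 0.4, 0.2], [0.1, 0.3, 0.6]])
m3 = np.array([[0.5, 0.2, 0.3], [0.2, 0.2, 0.6]])
labels, avg = ensemble_predict([m1, m2, m3])
print(labels)
```

Soft voting tends to help when the individual models make uncorrelated errors, which is the usual motivation for ensembling networks of different depths.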

ACS Style

Hammam Alshazly; Christoph Linse; Erhardt Barth; Thomas Martinetz. Ensembles of Deep Learning Models and Transfer Learning for Ear Recognition. Sensors 2019, 19, 4139.

Chapter
Published: 15 December 2018 in Econometrics for Financial Applications

Feature keypoint descriptors have become indispensable tools and are widely utilized in a large number of computer vision applications. Many descriptors have been proposed in the literature to describe regions of interest around each keypoint, and each claims distinctiveness and robustness against certain types of image distortions. Among these are the conventional floating-point descriptors and their binary competitors, which require less storage capacity and match in a fraction of the time needed by floating-point descriptors. This chapter gives a brief description of the most frequently used keypoint descriptors from each category. It also provides a general framework to analyze and evaluate the performance of these feature keypoint descriptors, particularly when they are used for image matching under various imaging distortions such as blur, scale and illumination changes, and image rotations. Moreover, it presents a detailed explanation and analysis of the experimental results and findings, from which several important observations are derived.

ACS Style

M. Hassaballah; Hammam A. Alshazly; Abdelmgeid A. Ali. Analysis and Evaluation of Keypoint Descriptors for Image Matching. Econometrics for Financial Applications 2018, 113-140.

Journal article
Published: 04 October 2018 in Expert Systems with Applications

Identity recognition using local features extracted from ear images has recently attracted a great deal of attention in the intelligent biometric systems community. The rich and reliable information of the human ear and its stable structure over a long period of time make ear recognition technology an appealing choice for identifying individuals and verifying their identities. This paper considers the ear recognition problem using local binary patterns (LBP) features. The LBP-like features characterize the spatial structure of the image texture based on the assumption that this texture has a pattern and a strength (amplitude), two locally complementary aspects. Their high discriminative power, invariance to monotonic gray-scale changes, and computational efficiency make LBP-like features suitable for the ear recognition problem. Thus, the performance of several recent LBP variants introduced in the literature as feature extraction techniques is investigated to determine how they can best be utilized for ear recognition. To this end, we carry out a comprehensive comparative study on the identification and verification scenarios separately. Besides, a new variant of the traditional LBP operator named averaged local binary patterns (ALBP) is proposed, and its ability to represent the texture of ear images is compared with the other LBP variants. The ear identification and verification experiments are extensively conducted on five publicly available constrained and unconstrained benchmark ear datasets stressing various imaging conditions, namely IIT Delhi (I), IIT Delhi (II), AMI, WPUT, and AWE.
The results for both identification and verification indicate that current LBP texture descriptors are successful feature extraction candidates for ear recognition systems under constrained imaging conditions, achieving recognition rates of up to 99%, while their performance degrades as the level of distortion increases. Moreover, the tested LBP variants achieve almost identical performance on ear recognition, so further studies on other applications are needed to verify this observation. We believe that the presented study offers significant insights, can help researchers choose between LBP variants, and acts as a connection between previous studies and future work utilizing LBP-like features in ear recognition systems.
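
As the name suggests, ALBP can be read as a variant of LBP in which the eight neighbours are thresholded against the average of the 3 × 3 patch rather than against the centre pixel; this reading is an assumption based on the abstract, not the paper's exact definition. A minimal sketch contrasting it with classic LBP:

```python
import numpy as np

WEIGHTS = 2 ** np.arange(8)  # binomial weights for the 8 neighbours

def patch_codes(img, threshold_fn):
    # Generic 3x3 encoder: binarize the 8 neighbours of every interior
    # pixel against a per-patch threshold, then weight by powers of two.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # neighbour offsets within the patch, clockwise from top-left
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            thr = threshold_fn(patch)
            bits = [int(patch[a, b] >= thr) for a, b in offs]
            out[i, j] = int(np.dot(bits, WEIGHTS))
    return out

lbp = lambda p: p[1, 1]     # classic LBP: threshold is the centre pixel
albp = lambda p: p.mean()   # ALBP-style: threshold is the patch mean

img = np.random.default_rng(0).integers(0, 256, size=(5, 5)).astype(float)
print(patch_codes(img, lbp))
print(patch_codes(img, albp))
```

Averaging over the patch makes the threshold less sensitive to noise in the single centre pixel, which matches the robustness claim in the abstract.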

ACS Style

M. Hassaballah; Hammam A. Alshazly; Abdelmgeid A. Ali. Ear recognition using local binary patterns: A comparative experimental study. Expert Systems with Applications 2018, 118, 182-200.

Conference paper
Published: 29 August 2018 in Advances in Intelligent Systems and Computing

Recently, intensive research efforts have been devoted to the human ear as a promising biometric modality for identity recognition. However, one of the main challenges facing ear recognition systems is finding a robust representation of the image information that is invariant to different imaging variations. Recent studies indicate that the distribution of local intensity gradients or edge directions can better discriminate the shape and appearance of objects. Moreover, gradient-based features are robust to global and local intensity variations as well as to noise and geometric transformations of images. This paper presents an ear biometric recognition approach based on gradient-based features. To this end, four local feature extractors are investigated, namely: Histogram of Oriented Gradients (HOG), Weber Local Descriptor (WLD), Local Directional Patterns (LDP), and Local Optimal Oriented Patterns (LOOP). Extensive experiments are conducted for both identification and verification using the publicly available IIT Delhi-I, IIT Delhi-II, and AMI ear databases. The obtained results are encouraging, with the LOOP features excelling in all cases and achieving recognition rates of approximately 97%.
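
The common ingredient of these gradient-based descriptors is a histogram of edge orientations weighted by gradient magnitude. The sketch below shows that core step only; a full HOG additionally divides the image into cells and blocks and normalises the histograms:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    # Histogram of unsigned gradient orientations (0-180 degrees),
    # weighted by gradient magnitude: the core ingredient of HOG-style
    # descriptors (full HOG adds cells, blocks, and normalisation).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)

# A horizontal intensity ramp has all its gradient energy at 0 degrees:
ramp = np.tile(np.arange(8.0), (8, 1))
print(orientation_histogram(ramp))
```

Magnitude weighting ensures that strong edges dominate the histogram, which is what makes such features discriminative for shape.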

ACS Style

Hammam A. Alshazly; M. Hassaballah; Mourad Ahmed; Abdelmgeid A. Ali. Ear Biometric Recognition Using Gradient-Based Feature Descriptors. Advances in Intelligent Systems and Computing 2018, 435-445.

Conference paper
Published: 31 August 2017 in Advances in Intelligent Systems and Computing

Efficient and compact representation of local image patches in the form of feature descriptors that are distinctive and robust, as well as fast to compute and match, is an essential step for many computer vision applications. One category of these representations is binary descriptors, which have been shown to be successful alternatives providing performance similar to their floating-point counterparts while being efficient to compute and store. In this paper, a comprehensive performance evaluation of the current state-of-the-art binary descriptors, namely BRIEF, ORB, BRISK, FREAK, and LATCH, is presented in the context of image matching. The evaluation highlights several points regarding the performance characteristics of binary descriptors under various geometric and photometric transformations of images.
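
What makes binary descriptors fast to match is that their distance is a Hamming distance, computed with XOR and a popcount. A brute-force matcher with cross-check can be sketched on synthetic packed descriptors (the 32-byte length mirrors BRIEF/ORB defaults; the data here is random, for illustration only):

```python
import numpy as np

def hamming_matrix(desc_a, desc_b):
    # Pairwise Hamming distances between two sets of binary descriptors
    # stored as packed uint8 rows (32 bytes = 256 bits). XOR then
    # popcount: the cheap operation that makes binary matching fast.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    return np.unpackbits(xor, axis=2).sum(axis=2)

def match(desc_a, desc_b):
    # Nearest-neighbour matching with cross-check: keep pair (i, j) only
    # if i's best match is j AND j's best match is i.
    d = hamming_matrix(desc_a, desc_b)
    ab = d.argmin(axis=1)
    ba = d.argmin(axis=0)
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
b = a.copy()
b[0] ^= 1  # flip one bit in one descriptor; it should still match
print(match(a, b))
```

Real matchers add a distance threshold or ratio test on top of the cross-check to reject ambiguous correspondences.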

ACS Style

Hammam A. Alshazly; M. Hassaballah; Abdelmgeid A. Ali; G. Wang. An Experimental Evaluation of Binary Feature Descriptors. Advances in Intelligent Systems and Computing 2017, 639, 181-191.

Book chapter
Published: 23 February 2016 in Cooperative Robots and Sensor Networks 2015

Feature detection, description, and matching are essential components of various computer vision applications and have thus received considerable attention in the last decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions for what kinds of points in an image are potentially interesting (i.e., carry a distinctive attribute). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of ideal features and gives an overview of existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most widely used techniques for evaluating the performance of detection and description algorithms.

ACS Style

M. Hassaballah; Aly Amin Abdelmgeid; Hammam A. Alshazly. Image Features Detection, Description and Matching. Cooperative Robots and Sensor Networks 2015, 2016, 11-45.

Conference paper
Published: 01 December 2014 in 2014 9th International Conference on Computer Engineering & Systems (ICCES)

Face detection, one of the most challenging tasks in computer vision, has received a lot of attention in recent decades due to its wide range of uses in face-based image analysis. In this paper, we propose an efficient approach for face detection that combines the generalized Hough transform with the random decision forests framework. In this approach, we train random decision forests that directly map the image patch appearance to a probabilistic vote for the possible location of the face centroid; the detection hypotheses then correspond to the maxima of the Hough image. The construction and prediction abilities of random decision forests depend on several parameters, which in turn affect the performance of the method. Therefore, the impact of the parameters that most influence the behavior of the forest for detecting faces is studied through experiments on the widely used CMU+MIT database. Moreover, a comparison with some published methods is presented.
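
The Hough-voting step described above can be illustrated with a toy accumulator: each patch casts a weighted vote at its position plus the predicted offset to the face centre, and the accumulator maximum gives the detection hypothesis. The numbers below are made up for illustration; in the paper the offsets and weights come from the trained forest:

```python
import numpy as np

def hough_vote(patch_centers, predicted_offsets, weights, image_shape):
    # Accumulate weighted centroid votes: each patch votes at
    # (its position + predicted offset), and the accumulator maximum
    # is the detection hypothesis.
    acc = np.zeros(image_shape)
    for (py, px), (dy, dx), w in zip(patch_centers, predicted_offsets, weights):
        y, x = py + dy, px + dx
        if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
            acc[y, x] += w
    return np.unravel_index(acc.argmax(), acc.shape), acc

# Three patches all voting, with different confidence, for centre (10, 12):
centers = [(5, 5), (15, 20), (10, 4)]
offsets = [(5, 7), (-5, -8), (0, 8)]
weights = [0.9, 0.8, 0.7]
peak, _ = hough_vote(centers, offsets, weights, (32, 32))
print(peak)
```

In practice the accumulator is smoothed before taking maxima so that nearby votes reinforce each other.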

ACS Style

Mahmoud Hassaballah; Mourad Ahmed; H.A. Alshazly. Effect of Hough forests parameters on face detection performance: An empirical analysis. 2014 9th International Conference on Computer Engineering & Systems (ICCES) 2014, 35-40.