The computation of likelihood ratios (LRs) to measure the weight of forensic glass evidence with LA-ICP-MS data directly in the feature space, without computing any kind of score as an intermediate step, is a complex problem. A probabilistic two-level model of the within-source and between-source variability of the glass samples is needed in order to compare the elemental profiles of glass recovered from a suspect or a crime scene with those of glass samples of known origin. Calibration of the likelihood ratios generated using previously reported models is essential to the realistic reporting of the value of the glass evidence comparisons. We propose models that outperform previously proposed feature-based LR models, in particular by improving the calibration of the computed LRs. We assume that the within-source variability is heavy-tailed, in order to incorporate uncertainty when the available data are scarce, as typically happens in forensic glass comparison. Moreover, we address the complexity of the between-source variability by the use of probabilistic machine learning algorithms, namely a variational autoencoder and a warped Gaussian mixture. Our results show that the overall performance of the likelihood ratios generated by our model is superior to that of classical approaches, and that this improvement is due to a dramatic improvement in calibration despite some loss in discriminating power. Moreover, the robustness of the calibration of our proposal is remarkable.
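As background for the feature-based models discussed above, the classical two-level approach can be sketched in a few lines. This is a minimal illustration, not the paper's model: it assumes a single scalar elemental feature, Gaussian within-source (sigma) and between-source (tau) variability with known parameters, and one measurement per item.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def two_level_lr(y1, y2, mu, sigma, tau):
    """Feature-based LR under a two-level normal-normal model.

    Same-source hypothesis: y1, y2 share a latent source mean, so they
    are jointly Gaussian with covariance tau**2 between them.
    Different-source hypothesis: y1 and y2 are independent draws.
    """
    var = sigma ** 2 + tau ** 2
    num = multivariate_normal.pdf(
        [y1, y2], mean=[mu, mu],
        cov=[[var, tau ** 2], [tau ** 2, var]])
    den = norm.pdf(y1, mu, np.sqrt(var)) * norm.pdf(y2, mu, np.sqrt(var))
    return num / den

# Close measurements from an atypical source support the same-source hypothesis
lr_same = two_level_lr(0.50, 0.52, mu=0.0, sigma=0.1, tau=1.0)
# Well-separated measurements support the different-source hypothesis
lr_diff = two_level_lr(0.50, -0.50, mu=0.0, sigma=0.1, tau=1.0)
```

The heavy-tailed within-source assumption of the paper would replace the within-source Gaussian with, e.g., a Student-t, which this sketch does not attempt.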
Daniel Ramos; Juan Maroñas; Jose Almirall. Improving calibration of forensic glass comparisons by considering uncertainty in feature-based elemental data. Chemometrics and Intelligent Laboratory Systems 2021, 217, 104399.
Sound Event Detection is a task whose relevance has risen in recent years in the field of audio signal processing, due to the creation of specific datasets such as Google AudioSet or DESED (Domestic Environment Sound Event Detection) and the introduction of competitive evaluations like the DCASE Challenge (Detection and Classification of Acoustic Scenes and Events). The different categories of acoustic events can present diverse temporal and spectral characteristics; however, most approaches use a fixed time-frequency resolution to represent the audio segments. This work proposes a multi-resolution analysis for feature extraction in Sound Event Detection, hypothesizing that different resolutions can be more adequate for the detection of different sound event categories, and that combining the information provided by multiple resolutions could improve the performance of Sound Event Detection systems. Experiments are carried out over the DESED dataset in the context of the DCASE 2020 Challenge, concluding that the combination of up to five resolutions allows a neural network-based system to obtain better results than single-resolution models in terms of event-based F1-score in every event category and in terms of PSDS (Polyphonic Sound Detection Score). Furthermore, we analyze the impact of score thresholding on the computation of F1-score results, finding that the standard value of 0.5 is suboptimal and proposing an alternative strategy based on the use of a specific threshold for each event category, which obtains further improvements in performance.
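The per-category thresholding strategy described above can be sketched as a simple grid search. This is an illustrative reimplementation, not the paper's code; the scores, labels, and threshold grid below are invented.

```python
import numpy as np

def best_threshold(scores, labels, grid=None):
    """Return the decision threshold that maximises F1 for one event class."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1

# Hypothetical scores for one event class where the default 0.5 is suboptimal
scores = np.array([0.30, 0.35, 0.90, 0.10, 0.20])
labels = np.array([1, 1, 1, 0, 0])
t_star, f1_star = best_threshold(scores, labels)
```

Running this per event category, instead of sharing a single global threshold, is the essence of the class-wise strategy.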
Diego De Benito-Gorron; Daniel Ramos; Doroteo T. Toledano. A multi-resolution CRNN-based approach for semi-supervised Sound Event Detection in DCASE 2020 Challenge. IEEE Access 2021, 9, 1-1.
Radiation dose in nuclear power plant reactors is known to be dominated by the presence of radioisotopes in the primary loop of the reactor. In order to strictly control it in normal operation (e.g., cleaning and reloading of nuclear fuel), established chemical theories exist to explain the amount of radioisotopes present in the reactor water circuits with respect to known control variables in the plant (e.g., thermal power of the reactor, pH, hydrogen, etc.). However, the high complexity and uncertainty of the process make an accurate estimation of the measured values of radioisotopes difficult. In order to address this problem, this article introduces a dynamic Bayesian network (DBN) probabilistic model that makes it possible to experimentally demonstrate the capability of the control variables to give information about the value of the radioisotope concentrations, and to predict their values in a data-driven way. Our results in five different nuclear power plants show that the accuracy and reliability of these predictions are remarkable, enabling strategies for gathering reliable information about the chemical process in the primary loop, towards possible operational improvements.
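A minimal sketch of the data-driven idea behind a DBN: a two-slice model in which the level at time t depends on its previous value and on a control variable. This toy version reduces everything to a single linear-Gaussian conditional fitted by least squares; the data, coefficients, and variable names are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical data: a level x_t driven by its past and a control variable u_t
rng = np.random.default_rng(0)
T = 200
u = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + 0.5 * u[t] + 0.05 * rng.normal()

# Fit the two-slice conditional x_t | x_{t-1}, u_t by least squares
A = np.column_stack([x[:-1], u[1:]])
coef, *_ = np.linalg.lstsq(A, x[1:], rcond=None)

# One-step-ahead predictions from the learned transition
pred = A @ coef
```

A real DBN would use several parent variables per node and proper probabilistic inference, but the transition-conditional structure is the same.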
Daniel Ramos; Pablo Ramirez-Hereza; Doroteo T. Toledano; Joaquin Gonzalez-Rodriguez; Alicia Ariza-Velazquez; Daniel Solis-Tovar; Cristina Muñoz-Reja. Dynamic Bayesian networks for temporal prediction of chemical radioisotope levels in nuclear power plant reactors. Chemometrics and Intelligent Laboratory Systems 2021, 214, 104327.
This paper explores several strategies for Forensic Voice Comparison (FVC), aimed at improving the performance of likelihood ratios (LRs) when using generative Gaussian score-to-LR models. First, different anchoring strategies are proposed, with the objective of adapting the LR computation process to the case at hand, always respecting the propositions defined for the particular case. Second, a fully Bayesian Gaussian model is used to tackle the sparsity in the training scores that is often present when the proposed anchoring strategies are used. Experiments are performed using the 2014 i-Vector Challenge set-up, which presents high variability in a telephone speech context. The results show that the proposed fully Bayesian model clearly outperforms a more common maximum-likelihood approach, leading to high robustness when the scores to train the model become sparse.
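The maximum-likelihood Gaussian score-to-LR baseline the paper improves on can be sketched as follows. The training scores here are invented; in the fully Bayesian variant, the Gaussian predictive would be replaced by a Student-t predictive (e.g., via scipy.stats.t), which is more conservative when training scores are sparse.

```python
import numpy as np
from scipy.stats import norm

def gaussian_score_llr(score, same_scores, diff_scores):
    """ML Gaussian score-to-LR: fit one Gaussian per hypothesis and
    return the log-likelihood ratio at the observed score."""
    mu_s, sd_s = same_scores.mean(), same_scores.std(ddof=1)
    mu_d, sd_d = diff_scores.mean(), diff_scores.std(ddof=1)
    return norm.logpdf(score, mu_s, sd_s) - norm.logpdf(score, mu_d, sd_d)

# Invented training scores from same-speaker and different-speaker trials
same = np.array([1.5, 2.0, 2.5, 1.8, 2.2])
diff = np.array([-2.1, -1.9, -2.4, -1.6, -2.0])
llr_high = gaussian_score_llr(2.0, same, diff)   # score typical of same-speaker
llr_low = gaussian_score_llr(-2.0, same, diff)   # score typical of different-speaker
```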
Daniel Ramos; Juan Maroñas; Alicia Lozano-Diez. Bayesian Strategies for Likelihood Ratio Computation in Forensic Voice Comparison with Automatic Systems. 2019, 1.
The goal of this paper is to deal with a data scarcity scenario in which deep learning techniques tend to fail. We compare the use of two well-established techniques, Restricted Boltzmann Machines (RBMs) and Variational Auto-encoders (VAEs), as generative models in order to increase the training set in a classification framework. Essentially, we rely on Markov Chain Monte Carlo (MCMC) algorithms for generating new samples. We show that generalization can be improved with this methodology compared to other state-of-the-art techniques, e.g., semi-supervised learning with ladder networks. Furthermore, we show that the RBM is better than the VAE at generating new samples for training a classifier with good generalization capabilities.
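The RBM-plus-MCMC augmentation idea can be sketched with scikit-learn's BernoulliRBM, whose gibbs method performs one MCMC step over the visible units. This is only an illustration of the general recipe (fit a generative model, run Gibbs chains, append the samples to the training set); the toy binary data and hyperparameters are made up.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((100, 16)) < 0.2).astype(float)  # toy binary training data

# Fit a small RBM as the generative model
rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)

# MCMC: run Gibbs chains, seeded at training points, to synthesise new samples
v = X[:10].copy()
for _ in range(50):
    v = rbm.gibbs(v)
new_samples = (v > 0.5).astype(float)

# Augmented training set for the downstream classifier
X_augmented = np.vstack([X, new_samples])
```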
Juan Maroñas; Roberto Paredes; Daniel Ramos. Generative Models For Deep Learning with Very Scarce Data. 2019, 1.
Latent fingerprints are usually processed with Automated Fingerprint Identification Systems (AFIS) by law enforcement agencies to narrow down possible suspects from a criminal database. AFIS do not commonly use all discriminatory features available in fingerprints, but typically only some types of features automatically extracted by a feature extraction algorithm. In this work, we explore ways to improve the rank identification accuracies of AFIS when only a partial latent fingerprint is available. Towards solving this challenge, we propose a method that exploits extended fingerprint features (unusual/rare minutiae) not commonly considered in AFIS. This new method can be combined with any existing minutiae-based matcher. We first compute a similarity score based on least squares between latent and tenprint minutiae points, with rare minutiae features as reference points. Then the similarity score of the reference minutiae-based matcher at hand is modified based on the fitting error from the least-squares similarity stage. Our experiments use a realistic forensic fingerprint casework database containing rare minutiae features, obtained from Guardia Civil, the Spanish law enforcement agency. Experiments are conducted using three minutiae-based matchers as a reference, namely NIST-Bozorth3, VeriFinger-SDK and MCC-SDK. We report significant improvements in the rank identification accuracies when these minutiae matchers are augmented with our proposed algorithm based on rare minutiae features.
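The least-squares fitting error mentioned above can be illustrated with a generic rigid point-set alignment. This sketch is not the paper's algorithm: it omits the anchoring at rare minutiae reference points and simply returns the residual of the orthogonal-Procrustes fit between two already-matched minutiae sets, which serves as a dissimilarity measure.

```python
import numpy as np

def lsq_fit_error(latent, tenprint):
    """Residual of the least-squares rigid alignment between two matched
    minutiae point sets (rows are x, y coordinates)."""
    lc = latent - latent.mean(axis=0)
    tc = tenprint - tenprint.mean(axis=0)
    U, _, Vt = np.linalg.svd(lc.T @ tc)   # orthogonal Procrustes solution
    R = U @ Vt
    if np.linalg.det(R) < 0:              # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return float(np.linalg.norm(lc @ R - tc))

# Toy check: a rotated and translated copy should fit almost perfectly
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
moved = pts @ np.array([[c, -s], [s, c]]) + np.array([5.0, 3.0])
err_good = lsq_fit_error(pts, moved)
err_bad = lsq_fit_error(pts, moved + np.array([[0, 0], [1, -1], [0, 0], [0, 0]]))
```

A small residual suggests the latent configuration is geometrically consistent with the tenprint; a large residual is the signal used to modify the matcher's score.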
Ram P. Krish; Julian Fierrez; Daniel Ramos; Fernando Alonso-Fernandez; Josef Bigun. Improving Automated Latent Fingerprint Identification using Extended Minutia Types. 2018, 1.
Ram P. Krish; Julian Fierrez; Daniel Ramos; Fernando Alonso-Fernandez; Josef Bigun. Improving automated latent fingerprint identification using extended minutia types. Information Fusion 2018, 50, 9-19.
Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry (LA-ICP-MS) has been shown to be an excellent technique for the discrimination of glass originating from different sources and for the association of glass originating from the same source. Typically, a match criterion is used to compare the elemental profile of the known sample to that of a questioned sample and, if the glass samples are determined to “match”, this may be followed by the use of a verbal scale to report the forensic practitioner's conclusion. This approach has several disadvantages: a fixed match criterion suffers from the “fall-off-the-cliff effect”, the rarity of an elemental profile is not taken into account, and the use of a verbal scale to assign a weight of evidence may be considered subjective and can vary by examiner. An alternative approach is the use of a continuous likelihood ratio that provides a quantitative measure of the significance of the evidence and accounts for the rarity of an elemental profile through the use of a glass database. In the present study, two glass databases were used to evaluate the performance of the likelihood ratio; the first database includes 420 automotive windshield samples, while the second database includes 385 glass samples from casework. The multivariate kernel model was used for the calculation of the likelihood ratio. However, this model led to unreasonably large (or small) likelihood ratios. Thus, a calibration step, using the Pool Adjacent Violators (PAV) algorithm, was necessary in order to limit the likelihood ratio to reasonable values. The calibrated likelihood ratio led to improved false exclusion rates (< 1.5%) and comparable false inclusion rates (< 1.0%). In addition, the likelihood ratio limited the magnitude of the misleading evidence, providing only weak to moderate support for the incorrect hypothesis. Finally, most of the pairs found to be falsely included were explained by similarity of manufacturer of the glass source.
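The PAV calibration step can be sketched with scikit-learn's IsotonicRegression, which implements the Pool Adjacent Violators algorithm. This is an illustrative recipe, not the paper's implementation; the raw log-LRs and ground-truth labels below are invented, and the clipping bounds are arbitrary.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def pav_calibrate_llr(llr, labels):
    """PAV (isotonic) calibration of log-likelihood ratios.

    Fits a monotone map from raw log-LRs to posterior probabilities on
    labelled data, then converts back to log-LRs by removing the
    empirical prior odds; this bounds unreasonably large (or small) LRs.
    """
    iso = IsotonicRegression(y_min=1e-6, y_max=1 - 1e-6, out_of_bounds="clip")
    post = iso.fit_transform(llr, labels)
    prior_log_odds = np.log(labels.mean() / (1 - labels.mean()))
    return np.log(post / (1 - post)) - prior_log_odds

# Invented raw log-LRs with extreme magnitudes, balanced ground truth
raw = np.array([-30.0, -2.0, -1.0, 1.0, 2.0, 40.0])
truth = np.array([0, 0, 0, 1, 1, 1])
cal = pav_calibrate_llr(raw, truth)
```

Note how the calibrated values keep their sign and ordering but no longer reach the extreme magnitudes of the raw log-LRs.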
Ruthmara Corzo; Tricia Hoffman; Peter Weis; Javier Franco-Pedroso; Daniel Ramos; Jose Almirall. The use of LA-ICP-MS databases to calculate likelihood ratios for the forensic analysis of glass evidence. Talanta 2018, 186, 655-661.
The comparative analysis of chromatographic profiles of materials is of interest in many scientific fields, including forensic science. Plastic microtraces collected after hit-and-run accidents and examined with pyrolysis gas chromatography mass spectrometry (Py-GC-MS) may serve as an example. The aim of comparing the recovered and control samples is to help reconstruct the event by commenting on their common, or not, sources. The objective is to report the evidential value of the data in the context of two competing hypotheses: H1, both samples share a common origin (e.g., car), and H2, they do not. The likelihood ratio (LR) approach addresses this idea as an acknowledged method within the forensic community. However, conventional feature-based LR models (using, e.g., signal intensities of the chromatographically separated compounds) suffer from the curse of dimensionality. Their considerable complexity can be reduced in score-based LR models. In this concept, the evidence expressed by the score, computed as a distance between the characteristics of the recovered and control samples, is evaluated using an LR. A score solely based on a distance or a measure of similarity, without taking typicality into account, may not clearly reflect the differences between similar samples in a highly multidimensional space. Here we show that boosting the between-samples variance (B) whilst minimising the within-samples variance (W) helps distinguish between samples and improves the performance of score-based LR models. Instead of computing the distances in the feature space, the authors use the space defined by ANOVA simultaneous component analysis, regularised MANOVA and ANOVA target projection, which find directions with magnified differences between B and W. The concept was successfully illustrated for 22 plastic containers and automotive samples examined using Py-GC-MS. The research shows that this so-called hybrid approach, combining chemometric tools and the score-based LR framework, yields a well-performing solution to the comparison problem for Py-GC-MS chromatograms.
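The idea of projecting onto directions that magnify between-samples variance (B) relative to within-samples variance (W) can be illustrated with a plain Fisher-style direction. This is a simplified one-dimensional stand-in for the ASCA, regularised MANOVA and ANOVA target projections used in the paper; the two-group toy data are invented.

```python
import numpy as np

def fisher_direction(X, groups):
    """Direction maximising between-group (B) over within-group (W)
    scatter, found as the leading eigenvector of W^-1 B."""
    classes = np.unique(groups)
    overall = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[groups == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall, mc - overall)
    M = np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb)  # small ridge for stability
    vals, vecs = np.linalg.eig(M)
    return np.real(vecs[:, np.argmax(np.real(vals))])

# Invented two-group data: dimension 0 separates the groups, dimension 1 is noise
A = np.array([[0.0, 3.0], [0.1, -3.0], [-0.1, 2.0], [0.0, -2.0]])
B = A + np.array([5.0, 0.0])
w = fisher_direction(np.vstack([A, B]), np.array([0, 0, 0, 0, 1, 1, 1, 1]))
```

Distances computed along such a direction, rather than in the raw feature space, give scores that separate the samples more clearly.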
Agnieszka Martyna; Grzegorz Zadora; Daniel Ramos. Forensic comparison of pyrograms using score-based likelihood ratios. Journal of Analytical and Applied Pyrolysis 2018, 133, 198-215.
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge; and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way; and justifies previously reported strategies to obtain reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios, and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present.
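The decomposition of cross-entropy into discrimination and calibration components can be computed empirically via PAV recalibration: the loss of the monotonically recalibrated probabilities measures discrimination, and whatever the recalibration removed measures calibration. This sketch follows that recipe under simplifying assumptions (invented posteriors, clipping bounds chosen arbitrarily); it is not the paper's ECE-plot code.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def cross_entropy_bits(p, y):
    """Empirical cross-entropy (log loss) in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log2(p) + (1 - y) * np.log2(1 - p))

def decompose_ce(p, y):
    """Split cross-entropy into discrimination (loss of the PAV-recalibrated
    probabilities) and calibration (the part the monotone map removed)."""
    iso = IsotonicRegression(y_min=1e-12, y_max=1 - 1e-12, out_of_bounds="clip")
    p_cal = iso.fit_transform(p, y)
    discrimination = cross_entropy_bits(p_cal, y)
    calibration = cross_entropy_bits(p, y) - discrimination
    return discrimination, calibration

# Invented posteriors: perfectly ranked (good discrimination) but badly calibrated
probs = np.array([0.40, 0.45, 0.55, 0.60])
y = np.array([0, 0, 1, 1])
disc, calib = decompose_ce(probs, y)
```

Here almost all of the loss is a calibration loss, which is exactly the situation where recalibrating a discriminating classifier pays off.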
Daniel Ramos; Javier Franco-Pedroso; Alicia Lozano-Diez; Joaquin Gonzalez-Rodriguez. Deconstructing Cross-Entropy for Probabilistic Binary Classifiers. Entropy 2018, 20(3), 208.
Performance estimation is crucial to the assessment of novel algorithms and systems. In detection error tradeoff (DET) diagrams, discrimination performance is assessed targeting a single application, whereas cross-application performance considers the risks resulting from decisions under different application constraints. For the purpose of interchangeability of research results across different application constraints, we propose to augment DET curves by depicting systems with regard to the security and convenience levels they support. To this end, application policies are aggregated into levels based on verbal likelihood ratio scales, providing an easy-to-use concept for business-to-business communication to denote operating thresholds. We supply a reference implementation in Python, an exemplary performance assessment on synthetic score distributions, and a fine-tuning scheme for Bayes decision thresholds for when decision policies are bounded rather than fixed.
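The link between an application policy and an operating threshold is the standard Bayes decision threshold, which depends only on the target prior and the costs of the two errors. A minimal sketch (the policy values are invented examples):

```python
import numpy as np

def bayes_threshold(p_target, c_miss, c_fa):
    """Bayes decision threshold on the log-LR axis: decide 'target' when
    log LR >= log( C_fa * (1 - P_target) / (C_miss * P_target) )."""
    return float(np.log(c_fa * (1 - p_target) / (c_miss * p_target)))

# Equal priors and costs put the threshold at 0 ...
t_neutral = bayes_threshold(0.5, 1.0, 1.0)
# ... while a security-oriented policy (costly false accepts) raises it
t_secure = bayes_threshold(0.5, 1.0, 10.0)
```

Sweeping such policies and grouping them into verbal-scale levels is what yields the security/convenience bands the augmented DET curves depict.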
Andreas Nautsch; Didier Meuwly; Daniel Ramos; Jonas Lindh; Christoph Busch. Making Likelihood Ratios Digestible for Cross-Application Performance Assessment. IEEE Signal Processing Letters 2017, 24(10), 1552-1556.
In this chapter, we describe the issue of the interpretation of forensic evidence from scores computed by a biometric system. This is one of the most important topics in the so-called area of forensic biometrics. We show the importance of the topic by introducing some of the key concepts of forensic science with respect to the interpretation of results prior to their presentation in court, which is increasingly addressed by the computation of likelihood ratios (LRs). We describe the LR methodology and illustrate it with an example of the evaluation of fingerprint evidence in forensic conditions, by means of a fingerprint biometric system.
Daniel Ramos; Ram P. Krish; Julian Fierrez; Didier Meuwly. From Biometric Scores to Forensic Likelihood Ratios. Guide to 3D Vision Computation 2017, 305-327.
Automated Fingerprint Identification Systems (AFIS) are commonly used by law enforcement agencies to narrow down the possible suspects from a criminal database. AFIS do not use all discriminatory features available in fingerprints but typically only some types of features automatically extracted by a feature extraction algorithm. Latent fingerprints obtained from crime scenes are usually partial in nature, which results in only a small number of reliable minutiae. Comparing a partial minutiae pattern to a full minutiae pattern is a difficult problem. Towards solving this challenge, we propose a method that exploits extended fingerprint features (unusual/rare minutiae) not commonly considered in typical minutiae-based matchers. The method we propose in this work can be combined with any existing minutiae-based matcher. We first compute a quantitative measure based on least squares between latent and tenprint minutiae points, with a rare minutia feature as the reference point. Then the similarity score of the reference minutiae-based matcher is modified based on this least-squares quantitative measure. The modified similarity score thus obtained incorporates the contribution of rare minutia features. We use a realistic forensic fingerprint casework database in our experiments, which contains rare minutia features and was obtained from Guardia Civil, the Spanish law enforcement agency. Experiments are conducted using two reference minutiae-based matchers, namely NIST-Bozorth3 and VeriFinger. We report a significant improvement in the rank identification accuracies when the reference minutiae matchers are augmented with our proposed algorithm based on rare minutia features.
Ram P. Krish; Julian Fierrez; Daniel Ramos. Integrating rare minutiae in generic fingerprint matchers for forensics. 2015 IEEE International Workshop on Information Forensics and Security (WIFS) 2015, 1-6.
Synonyms: Speech parametrization. Definition: The analysis of speech signals can be defined as the process of extracting relevant information from the speech signal (i.e., from a recording). This process is mainly based on the speech production mechanism, whose study involves multiple disciplines, from linguistics and articulatory phonetics to signal processing and source coding. In this article, a short overview is given of how the speech signal is produced and of typical models of the speech production system, focusing on the different sources of individuality that will be present in the final uttered speech. In this way, the speaker who produced the speech with those individual features is then recognizable both for humans and for machines. Although speech production is felt by humans to be a very natural and simple mechanism, it is a very complex process that involves the coordinated participation of several physiological structures that evolution has developed over the years. For a deeper de ...
Doroteo T. Toledano; Daniel Ramos; Javier Gonzalez-Dominguez; Joaquín González-Rodríguez. Speech Analysis. Encyclopedia of Biometrics 2015, 1487-1493.
Synonyms: Observations from speech; Speaker parameters. Definition: Speaker features are measurements extracted from the speech signal with the objective of determining the identity of a given speaker. In voice biometrics, speaker features whose source is known are typically used to build speaker models. Then, speaker features of unknown source are compared with the enrolled models in order to obtain measures of similarity. The identity of the speaker influences the speech production process in many different ways, due to vocal tract configuration, language spoken, social context, education, etc. Thus, several levels of identity can be identified in the speech signal, e.g., spectral, phonetic, prosodic, etc. Speaker features can be extracted at any of these identity levels, and therefore the speaker recognition process follows in essence a multilevel approach. Identity Information in the Speech Signal: The identity levels in the speech signal are configured by the speech production process, whic ...
Daniel Ramos; Javier Gonzalez-Dominguez; Doroteo T. Toledano; Joaquín González-Rodríguez. Speaker Features. Encyclopedia of Biometrics 2015, 1455-1459.
In this study, the authors present a hierarchical algorithm to register a partial fingerprint against a full fingerprint using only the orientation fields. In the first level, they shortlist possible locations for registering the partial fingerprint in the full fingerprint using a normalised correlation measure, taking various rotations into account. In the second level, at those candidate locations, they calculate three other similarity measures. They then perform score fusion over all the estimated similarity scores to locate the final registration. By registering a partial fingerprint against a full fingerprint, they can reduce the search space of the minutiae set in the full fingerprint, thereby improving the result of partial fingerprint identification, particularly for poor-quality latent fingerprints. They report the rank identification improvements of two minutiae-based automated fingerprint identification systems on the National Institute of Standards and Technology (NIST) Special Database 27 when the authors' hierarchical registration is used as a pre-alignment.
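The first level, shortlisting placements of a partial orientation field by normalised correlation, can be sketched as follows. This is a simplified illustration with invented data: ridge orientations are angles modulo pi, so patches are compared through the cosine of doubled angle differences, and the rotation scan of the actual algorithm is omitted for brevity.

```python
import numpy as np

def orientation_similarity(patch, field):
    """Normalised similarity between two orientation patches (angles mod pi)."""
    return float(np.mean(np.cos(2 * (patch - field))))

def best_location(partial, full):
    """Exhaustive scan over positions; returns the best placement and score."""
    ph, pw = partial.shape
    fh, fw = full.shape
    best_score, best_ij = -np.inf, None
    for i in range(fh - ph + 1):
        for j in range(fw - pw + 1):
            s = orientation_similarity(partial, full[i:i + ph, j:j + pw])
            if s > best_score:
                best_score, best_ij = s, (i, j)
    return best_ij, best_score

# Toy orientation field; the partial patch is cut from a known position
rng = np.random.default_rng(1)
full = rng.uniform(0, np.pi, (12, 12))
partial = full[3:6, 4:8]
loc, score = best_location(partial, full)
```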
Ram Prasad Krish; Julian Fierrez; Daniel Ramos; Javier Ortega-Garcia; Josef Bigun. Pre-registration of latent fingerprints based on orientation field. IET Biometrics 2015, 4(2), 42-52.
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.
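A scalar summary commonly used in this literature for the overall (discrimination plus calibration) performance of LR methods is the log-likelihood-ratio cost, Cllr. The abstract does not name it explicitly, so this is offered as related background rather than the paper's exact metric:

```python
import numpy as np

def cllr(llr_same, llr_diff):
    """Log-LR cost: lower is better; 1.0 marks an uninformative system
    that always outputs LR = 1."""
    pen_same = np.mean(np.log2(1 + np.exp(-np.asarray(llr_same))))
    pen_diff = np.mean(np.log2(1 + np.exp(np.asarray(llr_diff))))
    return 0.5 * (pen_same + pen_diff)

neutral = cllr(np.zeros(5), np.zeros(5))            # LR = 1 everywhere
strong = cllr(np.full(5, 10.0), np.full(5, -10.0))  # well-separated log-LRs
```

Comparing Cllr with its PAV-calibrated minimum separates the discrimination and calibration characteristics discussed above.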
Rudolf Haraksim; Daniel Ramos; Didier Meuwly; Charles E.H. Berger. Measuring coherence of computer-assisted likelihood ratio methods. Forensic Science International 2015, 249, 123 -132.
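A standard primary performance characteristic for sets of LRs of the kind assessed in the study above is the log-likelihood-ratio cost (Cllr), which jointly penalises poor discrimination and poor calibration. A minimal sketch, with invented inputs (not the paper's code):

```python
import math

def cllr(lrs_same_source, lrs_diff_source):
    """Log-likelihood-ratio cost (Cllr) of a set of LRs: a proper scoring rule
    that penalises both poor discrimination and poor calibration.
    Lower is better; always answering LR = 1 scores exactly 1.0."""
    penalty_ss = sum(math.log2(1.0 + 1.0 / lr) for lr in lrs_same_source)
    penalty_ds = sum(math.log2(1.0 + lr) for lr in lrs_diff_source)
    return 0.5 * (penalty_ss / len(lrs_same_source)
                  + penalty_ds / len(lrs_diff_source))

print(cllr([1.0], [1.0]))  # -> 1.0 (uninformative system)
print(cllr([1000.0, 500.0], [0.001, 0.002]) < 0.01)  # strong, well-calibrated LRs -> True
```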
Comparing a latent fingerprint minutiae set against a ten-print fingerprint minutiae set using an automated fingerprint identification system is a challenging problem. This is mainly because latent fingerprints obtained from crime scenes are mostly partial fingerprints, and most automated systems expect approximately the same number of minutiae between the query and the reference fingerprint under comparison for good performance. In this work, we propose a methodology to reduce the ten-print minutiae set with respect to the query latent minutiae set by registering the orientation field of the latent fingerprint against the ten-print orientation field. By reducing the search space of minutiae from the ten print, we can improve the performance of automated identification systems for latent fingerprints. We report the performance of our registration algorithm on the NIST-SD27 database, as well as the improvement in the rank identification accuracy of a standard minutiae-based automated system.
Ram P. Krish; Julian Fierrez; Daniel Ramos; Javier Ortega-Garcia; Josef Bigun. Pre-registration for Improved Latent Fingerprint Identification. 2014 22nd International Conference on Pattern Recognition 2014, 696-701.
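The search-space reduction idea can be illustrated with a trivial filter: once the latent has been registered, only ten-print minutiae inside the registered region are passed to the matcher. A hypothetical sketch with made-up coordinates (the actual registration uses orientation fields and is far more involved):

```python
def reduce_minutiae(tenprint_minutiae, registered_bbox):
    """Keep only ten-print minutiae inside the bounding box where the latent
    was registered; the matcher then sees this reduced set."""
    x0, y0, x1, y1 = registered_bbox
    return [(x, y) for (x, y) in tenprint_minutiae
            if x0 <= x <= x1 and y0 <= y <= y1]

tenprint = [(10, 10), (50, 60), (200, 180), (55, 70)]  # made-up (x, y) positions
kept = reduce_minutiae(tenprint, registered_bbox=(40, 50, 120, 120))
print(kept)  # -> [(50, 60), (55, 70)]
```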
The spectral minutiae representation (SMC) has recently been proposed as a novel method for minutiae-based fingerprint recognition, which is invariant to minutiae translation and rotation and presents low computational complexity. As high-resolution palmprint recognition is also mainly based on minutiae sets, SMC has been applied to palmprints and used in full-to-full palmprint matching. However, the performance of that approach was still limited. As one of the main reasons for this is the much larger size of a palmprint compared with a fingerprint, the authors propose a division of the palmprint into smaller regions. Then, to further improve the performance of spectral minutiae-based palmprint matching, in this work the authors present anatomically inspired regional fusion while using SMC for palmprints. Firstly, the authors consider three regions of the palm, namely the interdigital, thenar and hypothenar regions, a division inspired by anatomical cues. Then, the authors apply SMC to region-to-region palmprint comparison and study regional discriminability when using the method. After that, the authors implement regional fusion at score level by combining the scores of different regional comparisons in the palm with two fusion methods, namely the sum rule and logistic regression. The authors evaluate region-to-region comparison and regional fusion based on spectral minutiae matching on a public high-resolution palmprint database, THUPALMLAB. Both manual and automatic segmentation are performed to obtain the three palm regions for each palm. Using the complex SMC, the authors obtain results on region-to-region comparison which show that the hypothenar and interdigital regions outperform the thenar region. More importantly, the authors achieve significant performance improvements by regional fusion using regions segmented both manually and automatically.
One main advantage of the authors' approach is that human examiners can segment the palm into the three regions without prior knowledge of the system, which makes the segmentation process easy to incorporate into protocols such as those used in forensic science.
Ruifang Wang; Daniel Ramos; Raymond Veldhuis; Julian Fierrez; Luuk Spreeuwers; Haiyun Xu. Regional fusion for high-resolution palmprint recognition using spectral minutiae representation. IET Biometrics 2014, 3, 94-100.
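Sum-rule fusion at score level, one of the two fusion methods mentioned above, can be sketched as a simple average of the normalised regional scores. The region scores below are invented; logistic-regression fusion would instead learn a weight per region from training data.

```python
def sum_rule_fusion(region_scores):
    """Sum-rule fusion: average the normalised comparison scores of the palm
    regions into a single fused score."""
    return sum(region_scores.values()) / len(region_scores)

# Hypothetical normalised scores for one palm comparison.
raw = {"interdigital": 0.72, "thenar": 0.35, "hypothenar": 0.64}
fused = sum_rule_fusion(raw)
print(round(fused, 2))  # -> 0.57
```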
This is the author's accepted manuscript; the definitive version was published in Food Chemistry, 150 (2014), DOI: 10.1016/j.foodchem.2013.10.111. The aim of the study was to investigate the applicability of the likelihood ratio (LR) approach for verifying the authenticity of 178 samples of three Italian wine brands, Barolo, Barbera, and Grignolino, described by 27 parameters characterising their chemical composition. Since the problem of product authenticity may be of forensic interest, the likelihood ratio approach, expressing the role of the forensic expert, was proposed for determining the true origin of the wines. It allows the evidence to be analysed in the context of two hypotheses: that the object belongs to one or to the other wine brand. Various LR models were the subject of the research and their accuracy was evaluated by the empirical cross-entropy (ECE) approach. The rates of correct classification for the proposed models were higher than 90% and their performance as evaluated by ECE was satisfactory.
Agnieszka Martyna; Grzegorz Zadora; Ivana Stanimirova; Daniel Ramos. Wine authenticity verification as a forensic problem: An application of likelihood ratio test to label verification. Food Chemistry 2014, 150, 287-295.
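The empirical cross-entropy used above to evaluate the LR models can be sketched as follows: at a given prior probability for the first hypothesis, ECE is the average information-theoretic cost of the posteriors implied by the computed LRs. A minimal sketch with invented inputs, not the study's code:

```python
import math

def empirical_cross_entropy(lrs_h1, lrs_h2, prior_h1):
    """Empirical cross-entropy (ECE) of a set of LRs at a given prior P(H1).
    Plotting it over a range of priors yields the ECE curves used to assess
    calibration; a neutral system (always LR = 1) reproduces the prior entropy."""
    odds = prior_h1 / (1.0 - prior_h1)
    term_h1 = sum(math.log2(1.0 + 1.0 / (lr * odds)) for lr in lrs_h1) / len(lrs_h1)
    term_h2 = sum(math.log2(1.0 + lr * odds) for lr in lrs_h2) / len(lrs_h2)
    return prior_h1 * term_h1 + (1.0 - prior_h1) * term_h2

print(empirical_cross_entropy([1.0], [1.0], prior_h1=0.5))  # -> 1.0 (one bit)
```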