
Dr. Marcos Nieto
VICOMTECH

Research Keywords & Expertise

Camera Calibration
Computer Vision
Probability Theory
Optimization methods
Autonomous Driving


Feed

Journal article
Published: 24 August 2021 in Applied Sciences

Modern Artificial Intelligence (AI) methods can produce a large quantity of accurate and richly described data, in domains such as surveillance or automation. As a result, the need to organize data at a large scale in a semantic structure has arisen for long-term data maintenance and consumption. Ontologies and graph databases have gained popularity as mechanisms to satisfy this need. Ontologies provide the means to formally structure descriptive and semantic relations of a domain. Graph databases allow efficient and well-adapted storage, manipulation, and consumption of these linked data resources. However, at present, there is no universally defined strategy for building AI-oriented ontologies for the automotive sector. One of the key challenges is the lack of a global standardized vocabulary. Most private initiatives and large open datasets for Advanced Driver Assistance Systems (ADASs) and Autonomous Driving (AD) development include their own definitions of terms, with incompatible taxonomies and structures, thus resulting in a well-known lack of interoperability. This paper presents the Automotive Global Ontology (AGO) as a Knowledge Organization System (KOS) using a graph database (Neo4j). Two different use cases for the AGO domain ontology are presented to showcase its capabilities in terms of semantic labeling and scenario-based testing. The ontology and related material have been made public for their subsequent use by the industry and academic communities.
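
As a rough illustration of how an ontology like this maps onto a graph database, the sketch below emits Cypher `MERGE` statements for a tiny class hierarchy and one semantic-labeling link. All class names, relation types, and property keys here are invented for illustration; they are not taken from the actual AGO vocabulary.

```python
# Hypothetical sketch: expressing an ontology class hierarchy and a
# semantic-labeling link as Cypher statements for Neo4j. Names are
# illustrative assumptions, not the real AGO schema.

def class_node(name, parent=None):
    """Cypher that creates an ontology class node and, optionally,
    links it to its parent class with a SUBCLASS_OF relation."""
    stmt = f"MERGE (c:OntologyClass {{name: '{name}'}})"
    if parent:
        stmt += (f"\nMERGE (p:OntologyClass {{name: '{parent}'}})"
                 f"\nMERGE (c)-[:SUBCLASS_OF]->(p)")
    return stmt

def label_instance(instance_id, class_name):
    """Cypher that attaches a labeled scene element to its ontology class."""
    return (f"MERGE (i:Instance {{id: '{instance_id}'}})"
            f"\nMERGE (c:OntologyClass {{name: '{class_name}'}})"
            f"\nMERGE (i)-[:INSTANCE_OF]->(c)")

statements = [
    class_node("RoadUser"),
    class_node("Vehicle", parent="RoadUser"),
    class_node("Pedestrian", parent="RoadUser"),
    label_instance("obj-017", "Vehicle"),
]
print("\n".join(statements))
```

Using `MERGE` rather than `CREATE` keeps the statements idempotent, so re-running an import does not duplicate class nodes.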

ACS Style

Itziar Urbieta; Marcos Nieto; Mikel García; Oihana Otaegui. Design and Implementation of an Ontology for Semantic Labeling and Testing: Automotive Global Ontology (AGO). Applied Sciences 2021, 11, 7782.

AMA Style

Itziar Urbieta, Marcos Nieto, Mikel García, Oihana Otaegui. Design and Implementation of an Ontology for Semantic Labeling and Testing: Automotive Global Ontology (AGO). Applied Sciences. 2021; 11 (17):7782.

Chicago/Turabian Style

Itziar Urbieta; Marcos Nieto; Mikel García; Oihana Otaegui. 2021. "Design and Implementation of an Ontology for Semantic Labeling and Testing: Automotive Global Ontology (AGO)." Applied Sciences 11, no. 17: 7782.

Original software publication
Published: 30 December 2020 in SoftwareX

Data labeling has become a major problem in industries aiming to create and use ground-truth labels from massive multi-sensor archives to feed Artificial Intelligence (AI) applications. Annotation of multi-sensor set-ups with multiple cameras and LIDAR is now particularly relevant for the automotive industry in building Autonomous Driving (AD) functions. In this paper, we present the Video Content Description (VCD), the first open-source metadata structure and set of tools able to structure annotations for such complex scenes, with unprecedented flexibility to label 2D and 3D objects, pixel-wise labels, actions, events, contexts, semantic relations, odometry, and calibration. Several example cases are reported to demonstrate the flexibility of the VCD.
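
The core idea of such a format, a single container holding object definitions and per-frame data, can be sketched as below. The real VCD schema is far richer; the field names here (`objects`, `frames`, `object_data`) follow its general spirit but are simplified assumptions for illustration.

```python
import json

# Minimal sketch of a VCD-like annotation container: objects declared once,
# their per-frame data (here, a 2D bounding box) attached frame by frame.
# Field names are simplified assumptions, not the actual VCD schema.

def make_vcd():
    return {"vcd": {"schema_version": "sketch",
                    "objects": {}, "frames": {}}}

def add_object(vcd, uid, name, obj_type):
    vcd["vcd"]["objects"][uid] = {"name": name, "type": obj_type}

def add_bbox(vcd, frame, uid, bbox):
    """Attach a 2D bounding box (x, y, w, h) to an object at a frame."""
    fr = vcd["vcd"]["frames"].setdefault(str(frame), {"objects": {}})
    obj = fr["objects"].setdefault(uid, {"object_data": {"bbox": []}})
    obj["object_data"]["bbox"].append({"val": list(bbox)})

doc = make_vcd()
add_object(doc, "0", "ego-car", "Car")
add_bbox(doc, 0, "0", (100, 200, 50, 30))
print(json.dumps(doc, indent=2))
```

Separating static object identity from per-frame data is what lets one container hold 2D boxes, 3D cuboids, actions, and relations for the same object without duplication.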

ACS Style

Marcos Nieto; Orti Senderos; Oihana Otaegui. Boosting AI applications: Labeling format for complex datasets. SoftwareX 2020, 13, 100653.

AMA Style

Marcos Nieto, Orti Senderos, Oihana Otaegui. Boosting AI applications: Labeling format for complex datasets. SoftwareX. 2020; 13:100653.

Chicago/Turabian Style

Marcos Nieto; Orti Senderos; Oihana Otaegui. 2020. "Boosting AI applications: Labeling format for complex datasets." SoftwareX 13: 100653.

Journal article
Published: 23 October 2020 in IEEE Transactions on Intelligent Transportation Systems

Lane markings are a key element for Autonomous Driving. The generation of high-definition maps and ground-truth data requires extensive manual labor. In this paper, we present an efficient and robust method for the offline annotation of lane markings, using low-density LIDAR point clouds and odometry information. The odometry is used to accumulate the scans and to process them in blocks following the trajectory of the vehicle. At each block, candidate lane marking points are detected by generating virtual scan-lines and applying a dynamically optimized filter function to the LIDAR intensity values. The lane markings are tracked block-wise, and their width is estimated and classified as either solid or dashed. The results are lists of connected 3D points that represent the different lane markings. The accuracy of the proposed method was tested against manually labeled recordings. A novel evaluation methodology focused on the lateral precision of detections is presented. Moreover, a web user interface was used to load the produced annotations, achieving a 60% reduction in annotation time compared to a fully manual baseline.
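
The scan-line step exploits the fact that lane paint returns much higher LIDAR intensity than asphalt. The paper optimizes the filter function dynamically per block; the sketch below uses a fixed local-mean-plus-margin rule as a stand-in to show the shape of the idea.

```python
# Sketch: along one virtual scan-line of accumulated LIDAR points, mark as
# lane-marking candidates the samples whose intensity exceeds a local
# threshold. The fixed window and margin are assumptions standing in for
# the paper's dynamically optimized filter function.

def lane_candidates(intensities, window=5, margin=20):
    candidates = []
    for i, v in enumerate(intensities):
        lo = max(0, i - window)
        hi = min(len(intensities), i + window + 1)
        local_mean = sum(intensities[lo:hi]) / (hi - lo)
        if v > local_mean + margin:       # bright paint vs. darker asphalt
            candidates.append(i)
    return candidates

# Synthetic scan-line: asphalt around 30, two painted stretches around 120.
scan = [30, 32, 28, 120, 125, 118, 31, 29, 30, 122, 119, 30]
print(lane_candidates(scan))
```

In the full method these per-scan-line candidates are then tracked across blocks and fitted into connected 3D polylines.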

ACS Style

Javier Barandiaran Martirena; Marcos Nieto Doncel; Andoni Cortes Vidal; Oihana Otaegui Madurga; Julian Florez Esnal; Manuel Grana Romay. Automated Annotation of Lane Markings Using LIDAR and Odometry. IEEE Transactions on Intelligent Transportation Systems 2020, PP, 1-11.

AMA Style

Javier Barandiaran Martirena, Marcos Nieto Doncel, Andoni Cortes Vidal, Oihana Otaegui Madurga, Julian Florez Esnal, Manuel Grana Romay. Automated Annotation of Lane Markings Using LIDAR and Odometry. IEEE Transactions on Intelligent Transportation Systems. 2020; PP (99):1-11.

Chicago/Turabian Style

Javier Barandiaran Martirena; Marcos Nieto Doncel; Andoni Cortes Vidal; Oihana Otaegui Madurga; Julian Florez Esnal; Manuel Grana Romay. 2020. "Automated Annotation of Lane Markings Using LIDAR and Odometry." IEEE Transactions on Intelligent Transportation Systems PP, no. 99: 1-11.

Journal article
Published: 28 July 2020 in IEEE Transactions on Intelligent Transportation Systems

A major challenge of deep learning (DL) is the need to collect huge amounts of training data. Often, the lack of a sufficiently large dataset discourages the use of DL in certain applications, and acquiring the required amounts of data typically costs considerable time, material, and effort. To mitigate this problem, the use of synthetic images combined with real data is a popular approach, widely adopted in the scientific community to effectively train various detectors. In this study, we examined the potential of synthetic data-based training in the field of intelligent transportation systems. Our focus is on camera-based traffic sign recognition applications for advanced driver assistance systems and autonomous driving. The proposed augmentation pipeline for synthetic datasets includes novel augmentation processes such as structured shadows and Gaussian specular highlights. A well-known DL model was trained with different datasets to compare the performance of synthetic and real image-based trained models. Additionally, a new, detailed method to objectively compare these models is proposed. Synthetic images are generated using a semi-supervised, error-guided method, which is also described. Our experiments showed that a synthetic image-based approach outperforms real image-based training in most cases when applied to cross-domain test datasets (+10% precision for the GTSRB dataset), improving the generalization of the model while decreasing the cost of acquiring images.
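
One of the named augmentations, a structured shadow, can be sketched as darkening every pixel on one side of a straight edge. The linear edge parameterization and the fixed blend factor below are assumptions for illustration; the paper's pipeline is more elaborate, and the Gaussian specular highlight is not shown.

```python
# Sketch of a structured-shadow augmentation on a grayscale image given as
# a list of rows: pixels below the line y = slope*x + intercept are darkened
# by a fixed factor (an assumption standing in for the full pipeline).

def add_structured_shadow(img, slope, intercept, factor=0.5):
    out = []
    for y, row in enumerate(img):
        out.append([int(v * factor) if y > slope * x + intercept else v
                    for x, v in enumerate(row)])
    return out

# Uniform bright 4x4 image; a diagonal shadow edge through the origin.
img = [[200] * 4 for _ in range(4)]
shadowed = add_structured_shadow(img, slope=1.0, intercept=0.0)
for row in shadowed:
    print(row)
```

Structured (straight-edged) shadows mimic the hard shadows cast by buildings and poles that real traffic-sign images contain, which plain brightness jitter does not reproduce.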

ACS Style

Andoni Cortes; Clemente Rodriguez; Gorka Velez; Javier Barandiaran; Marcos Nieto. Analysis of Classifier Training on Synthetic Data for Cross-Domain Datasets. IEEE Transactions on Intelligent Transportation Systems 2020, 1-10.

AMA Style

Andoni Cortes, Clemente Rodriguez, Gorka Velez, Javier Barandiaran, Marcos Nieto. Analysis of Classifier Training on Synthetic Data for Cross-Domain Datasets. IEEE Transactions on Intelligent Transportation Systems. 2020; (99):1-10.

Chicago/Turabian Style

Andoni Cortes; Clemente Rodriguez; Gorka Velez; Javier Barandiaran; Marcos Nieto. 2020. "Analysis of Classifier Training on Synthetic Data for Cross-Domain Datasets." IEEE Transactions on Intelligent Transportation Systems, no. 99: 1-10.

Journal article
Published: 23 June 2020 in Applied Sciences

An innovative solution named Annotation as a Service (AaaS) has been specifically designed to integrate heterogeneous video annotation workflows into containers and take advantage of a cloud-native, highly scalable, and reliable design based on Kubernetes workloads. Using the AaaS as a foundation, the execution of automatic video annotation workflows is addressed in the broader context of a semi-automatic video annotation business logic for ground-truth generation for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). The paper presents design decisions, innovative developments, and tests conducted to provide scalability to this cloud-native ecosystem for semi-automatic annotation. The solution has proven efficient and resilient at AD/ADAS scale, specifically in an experiment with 25 TB of input data to annotate, 4000 concurrent annotation jobs, and 32 worker nodes forming a high-performance computing cluster with a total of 512 cores and 2048 GB of RAM. Automatic pre-annotations with the proposed strategy reduce human annotation time by up to 80%, and by 60% on average.

ACS Style

Sergio Sánchez-Carballido; Orti Senderos; Marcos Nieto; Oihana Otaegui. Semi-Automatic Cloud-Native Video Annotation for Autonomous Driving. Applied Sciences 2020, 10, 4301.

AMA Style

Sergio Sánchez-Carballido, Orti Senderos, Marcos Nieto, Oihana Otaegui. Semi-Automatic Cloud-Native Video Annotation for Autonomous Driving. Applied Sciences. 2020; 10 (12):4301.

Chicago/Turabian Style

Sergio Sánchez-Carballido; Orti Senderos; Marcos Nieto; Oihana Otaegui. 2020. "Semi-Automatic Cloud-Native Video Annotation for Autonomous Driving." Applied Sciences 10, no. 12: 4301.

Conference paper
Published: 01 November 2018 in 2018 21st International Conference on Intelligent Transportation Systems (ITSC)

For accurate vehicle self-localization, many approaches rely on the match between sophisticated 3D map data and sensor information obtained from laser scanners or camera images. However, when depending on highly accurate map data, every small change in the environment has to be detected and the corresponding map section needs to be updated. As an alternative, we propose an approach which is able to provide map-relative lane-level localization without requiring extensive sensor equipment, neither for generating the maps nor for aligning map and sensor data. It uses freely available crowdsourced map data, which is enhanced and stored in a graph-based relational local dynamic map (R-LDM). Based on a rough position estimate, provided by Global Navigation Satellite Systems (GNSS) such as GPS or Galileo, we align visual information with map data that is dynamically queried from the R-LDM. This is done by comparing virtual 3D views (so-called candidates), created from projected map data, with lane geometry data extracted from the image of a front-facing camera. More specifically, we extract explicit lane marking information from the real-world view using a lane-detection algorithm that fits lane markings to a curvilinear model. The position correction relative to the initial guess is determined by a best-match search for the virtual view that best fits the processed real-world view. Evaluations performed on data recorded in The Netherlands show that our algorithm presents a promising approach to lane-level localization using state-of-the-art equipment and freely available map data.
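
The best-match search can be sketched as scoring each candidate view against the detected lane geometry and keeping the lowest-cost one. Candidate generation, the curvilinear lane model, and the actual projection are out of scope here; the mean nearest-point distance below is an assumed cost, not necessarily the paper's.

```python
# Sketch of the best-match search over candidate virtual views: each
# candidate is a set of projected lane-marking points, scored against the
# detected lane points by mean nearest-point distance (an assumed cost).

def view_cost(candidate, detected):
    total = 0.0
    for (cx, cy) in candidate:
        total += min(((cx - dx) ** 2 + (cy - dy) ** 2) ** 0.5
                     for (dx, dy) in detected)
    return total / len(candidate)

def best_candidate(candidates, detected):
    return min(candidates, key=lambda name: view_cost(candidates[name], detected))

detected = [(10, 0), (10, 5), (10, 10)]          # lane points from the camera
candidates = {
    "offset_left":  [(8, 0), (8, 5), (8, 10)],   # hypothesized lateral offsets
    "centered":     [(10, 0), (10, 5), (10, 10)],
    "offset_right": [(13, 0), (13, 5), (13, 10)],
}
print(best_candidate(candidates, detected))
```

The winning candidate's offset relative to the GNSS-based initial guess is precisely the position correction the method reports.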

ACS Style

Benedict Flade; Marcos Nieto; Gorka Velez; Julian Eggert. Lane Detection Based Camera to Map Alignment Using Open-Source Map Data. 2018 21st International Conference on Intelligent Transportation Systems (ITSC) 2018, 890-897.

AMA Style

Benedict Flade, Marcos Nieto, Gorka Velez, Julian Eggert. Lane Detection Based Camera to Map Alignment Using Open-Source Map Data. 2018 21st International Conference on Intelligent Transportation Systems (ITSC). 2018; 890-897.

Chicago/Turabian Style

Benedict Flade; Marcos Nieto; Gorka Velez; Julian Eggert. 2018. "Lane Detection Based Camera to Map Alignment Using Open-Source Map Data." 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 890-897.

Conference paper
Published: 01 September 2018 in 2018 26th European Signal Processing Conference (EUSIPCO)
ACS Style

Marcos Nieto; Lorena Garcia; Orti Senderos; Oihana Otaegui. Fast Multi-Lane Detection and Modeling for Embedded Platforms. 2018 26th European Signal Processing Conference (EUSIPCO) 2018, 1.

AMA Style

Marcos Nieto, Lorena Garcia, Orti Senderos, Oihana Otaegui. Fast Multi-Lane Detection and Modeling for Embedded Platforms. 2018 26th European Signal Processing Conference (EUSIPCO). 2018; 1.

Chicago/Turabian Style

Marcos Nieto; Lorena Garcia; Orti Senderos; Oihana Otaegui. 2018. "Fast Multi-Lane Detection and Modeling for Embedded Platforms." 2018 26th European Signal Processing Conference (EUSIPCO), 1.

Theoretical advances
Published: 31 May 2017 in Pattern Analysis and Applications

This paper presents a structured approach for efficiently exploiting the perspective information of a scene to enhance the detection of objects in monocular systems. It defines a finite grid of 3D positions on the dominant ground plane and computes occupancy maps from which object location estimates are extracted. The method works on top of any detection technique, whether pixel-wise (e.g., background subtraction) or region-wise (e.g., detection-by-classification), which can be linked to the proposed scheme with minimal fine-tuning. Its flexibility thus allows this approach to be applied in a wide variety of applications and sectors, such as surveillance (e.g., person detection) or driver assistance systems (e.g., vehicle or pedestrian detection). Extensive results provide evidence of its excellent performance and ease of use in combination with different image processing techniques.
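
The grid-and-occupancy idea can be sketched as detections voting into ground-plane cells, with objects reported where the accumulated score passes a threshold. The image-to-ground mapping is abstracted into `to_cell` (an assumption; the paper uses the scene's perspective geometry for this step).

```python
# Sketch: detections (e.g. foreground pixels or classifier hits mapped to
# the ground plane) vote into a finite grid of cells; object location
# estimates are the cells whose accumulated votes exceed a threshold.

def to_cell(x, y, cell_size=1.0):
    return (int(x // cell_size), int(y // cell_size))

def occupancy_map(ground_points):
    occ = {}
    for (x, y) in ground_points:
        c = to_cell(x, y)
        occ[c] = occ.get(c, 0) + 1
    return occ

def detections(occ, min_votes=3):
    return sorted(c for c, v in occ.items() if v >= min_votes)

# Three evidence points cluster in one cell; one isolated point elsewhere.
pts = [(2.1, 3.2), (2.4, 3.9), (2.8, 3.5), (7.0, 1.0)]
occ = occupancy_map(pts)
print(detections(occ))
```

Because the grid is finite and fixed, the per-frame cost of this accumulation is constant regardless of scene content, which is the property the title refers to.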

ACS Style

Marcos Nieto; Juan Diego Ortega; Peter Leskovsky; Orti Senderos. Constant-time monocular object detection using scene geometry. Pattern Analysis and Applications 2017, 21, 1053-1066.

AMA Style

Marcos Nieto, Juan Diego Ortega, Peter Leskovsky, Orti Senderos. Constant-time monocular object detection using scene geometry. Pattern Analysis and Applications. 2017; 21 (4):1053-1066.

Chicago/Turabian Style

Marcos Nieto; Juan Diego Ortega; Peter Leskovsky; Orti Senderos. 2017. "Constant-time monocular object detection using scene geometry." Pattern Analysis and Applications 21, no. 4: 1053-1066.

Article
Published: 15 April 2016 in IET Computer Vision
ACS Style

Hui Wang; Marcos Nieto; Zhen Lei; Suzanne Lyttle. Guest Editorial. IET Computer Vision 2016, 10, 235-236.

AMA Style

Hui Wang, Marcos Nieto, Zhen Lei, Suzanne Lyttle. Guest Editorial. IET Computer Vision. 2016; 10 (4):235-236.

Chicago/Turabian Style

Hui Wang; Marcos Nieto; Zhen Lei; Suzanne Lyttle. 2016. "Guest Editorial." IET Computer Vision 10, no. 4: 235-236.

Research article
Published: 01 April 2016 in IET Intelligent Transport Systems

Computer vision methods for advanced driver assistance systems (ADAS) must be developed considering the strong requirements imposed by the industry, including real-time performance on low-cost, low-consumption hardware (HW) and rapid time to market. These two apparently contradictory requirements create the need for careful development methodologies. In this study the authors review existing approaches and describe a methodology to optimise computer vision applications without incurring costly code optimisation or migration to special HW. The approach is exemplified by the improvements achieved over successive re-designs of vehicle detection algorithms for monocular systems. In the experiments the authors observed a ×15 speed-up between the first and fourth prototypes, progressively optimised using the proposed methodology from the very first naive approach to a fine-tuned algorithm.

ACS Style

Marcos Nieto; Gorka Vélez; Oihana Otaegui; Seán Gaines; Geoffroy Van Cutsem. Optimising computer vision based ADAS: vehicle detection case study. IET Intelligent Transport Systems 2016, 10, 157-164.

AMA Style

Marcos Nieto, Gorka Vélez, Oihana Otaegui, Seán Gaines, Geoffroy Van Cutsem. Optimising computer vision based ADAS: vehicle detection case study. IET Intelligent Transport Systems. 2016; 10 (3):157-164.

Chicago/Turabian Style

Marcos Nieto; Gorka Vélez; Oihana Otaegui; Seán Gaines; Geoffroy Van Cutsem. 2016. "Optimising computer vision based ADAS: vehicle detection case study." IET Intelligent Transport Systems 10, no. 3: 157-164.

Regular paper
Published: 01 February 2015 in IET Intelligent Transport Systems

In this study, the authors analyse the exponential growth of advanced driver assistance systems based on video processing over the past decade. Specifically, they focus on how research and innovative ideas can finally reach the market as cost-effective solutions. They explore well-known computer vision methods for services such as lane departure warning and collision avoidance systems, and point out potential future trends based on a review of the state of the art. Throughout the study, the authors' own contributions are described as examples of such systems designed to be real-time by design, pursuing a trade-off between the accuracy and reliability of the algorithms and the restrictive computational, economic, and design requisites of embedded platforms.

ACS Style

Marcos Nieto; Oihana Otaegui; Gorka Vélez; Juan Diego Ortega; Andoni Cortés. On creating vision‐based advanced driver assistance systems. IET Intelligent Transport Systems 2015, 9, 59-66.

AMA Style

Marcos Nieto, Oihana Otaegui, Gorka Vélez, Juan Diego Ortega, Andoni Cortés. On creating vision‐based advanced driver assistance systems. IET Intelligent Transport Systems. 2015; 9 (1):59-66.

Chicago/Turabian Style

Marcos Nieto; Oihana Otaegui; Gorka Vélez; Juan Diego Ortega; Andoni Cortés. 2015. "On creating vision‐based advanced driver assistance systems." IET Intelligent Transport Systems 9, no. 1: 59-66.

Journal article
Published: 01 May 2014 in Journal of Real-Time Image Processing

Most recent visual odometry algorithms based on sparse feature matching are computationally efficient methods that can be executed in real time on desktop computers. However, further efforts are required to reduce computational complexity in order to integrate these solutions in embedded platforms with low power consumption. This paper presents a spacetime framework that can be applied to most stereo visual odometry algorithms greatly reducing their computational complexity. Moreover, it enables exploiting multi-core architectures available in most modern computing platforms. According to the tests performed on publicly available datasets and an experimental driverless car, the proposed framework significantly reduces the computational complexity of a visual odometry algorithm while improving the accuracy of the results.
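
The multi-core aspect can be sketched as bucketing features into grid cells so each cell is matched independently across frames. The toy 1-D descriptors and nearest-descriptor matching below are assumptions standing in for real feature descriptors and matching; only the grid-parallel structure reflects the abstract.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the grid idea: features are bucketed into grid cells so each
# cell can be matched independently, mapping naturally onto multiple cores.
# Descriptors here are toy 1-D values (an assumption for illustration).

def bucket(features, cell=100):
    grid = {}
    for (x, y, desc) in features:
        grid.setdefault((x // cell, y // cell), []).append(desc)
    return grid

def match_cell(pair):
    """For each query descriptor, pick the nearest descriptor in the cell."""
    left, right = pair
    return [min(right, key=lambda d: abs(d - q)) for q in left]

prev = [(10, 10, 0.1), (120, 30, 0.9), (130, 40, 0.8)]   # previous frame
curr = [(12, 11, 0.15), (121, 33, 0.85)]                 # current frame
g_prev, g_curr = bucket(prev), bucket(curr)
pairs = [(g_prev[c], g_curr[c]) for c in g_prev if c in g_curr]
with ThreadPoolExecutor() as pool:
    matches = list(pool.map(match_cell, pairs))
print(matches)
```

Restricting matching to the same cell also enforces a spatial consistency prior: features are only matched against nearby candidates, which both cuts cost and removes many outlier matches.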

ACS Style

Leonardo De-Maeztu; Unai Elordi; Marcos Nieto; Javier Barandiaran; Oihana Otaegui. A temporally consistent grid-based visual odometry framework for multi-core architectures. Journal of Real-Time Image Processing 2014, 10, 759-769.

AMA Style

Leonardo De-Maeztu, Unai Elordi, Marcos Nieto, Javier Barandiaran, Oihana Otaegui. A temporally consistent grid-based visual odometry framework for multi-core architectures. Journal of Real-Time Image Processing. 2014; 10 (4):759-769.

Chicago/Turabian Style

Leonardo De-Maeztu; Unai Elordi; Marcos Nieto; Javier Barandiaran; Oihana Otaegui. 2014. "A temporally consistent grid-based visual odometry framework for multi-core architectures." Journal of Real-Time Image Processing 10, no. 4: 759-769.

Journal article
Published: 21 March 2014 in Journal of Real-Time Image Processing

Computer vision technologies can contribute in many ways to the development of smart cities. In the case of vision applications for advanced driver assistance systems (ADAS), they can help to increase road traffic safety, which is a major concern nowadays. The design of an embedded vision system for driver assistance is not straightforward; several requirements must be addressed, such as computational performance, cost, size, power consumption, or time-to-market. This paper presents a novel reconfigurable embedded vision system that meets the requirements of ADAS applications. The developed PCB board contains a System on Chip composed of programmable logic, which supports the parallel processing necessary for fast pixel-level analysis, and a microprocessor suited for serial decision making. A lane departure warning system was implemented as a case study, obtaining better computational performance than other works found in the literature. Moreover, thanks to the reconfiguration capability of the proposed system, a more flexible and extensible solution is obtained.

ACS Style

Gorka Velez; Ainhoa Cortés; Marcos Nieto; Igone Vélez; Oihana Otaegui. A reconfigurable embedded vision system for advanced driver assistance. Journal of Real-Time Image Processing 2014, 10, 725-739.

AMA Style

Gorka Velez, Ainhoa Cortés, Marcos Nieto, Igone Vélez, Oihana Otaegui. A reconfigurable embedded vision system for advanced driver assistance. Journal of Real-Time Image Processing. 2014; 10 (4):725-739.

Chicago/Turabian Style

Gorka Velez; Ainhoa Cortés; Marcos Nieto; Igone Vélez; Oihana Otaegui. 2014. "A reconfigurable embedded vision system for advanced driver assistance." Journal of Real-Time Image Processing 10, no. 4: 725-739.

Conference paper
Published: 01 January 2014 in Transactions on Petri Nets and Other Models of Concurrency XV
ACS Style

Marcos Nieto; Peter Leskovsky; Juan Diego Ortega. Person Detection, Tracking and Masking for Automated Annotation of Large CCTV Datasets. Transactions on Petri Nets and Other Models of Concurrency XV 2014, 519-522.

AMA Style

Marcos Nieto, Peter Leskovsky, Juan Diego Ortega. Person Detection, Tracking and Masking for Automated Annotation of Large CCTV Datasets. Transactions on Petri Nets and Other Models of Concurrency XV. 2014; 519-522.

Chicago/Turabian Style

Marcos Nieto; Peter Leskovsky; Juan Diego Ortega. 2014. "Person Detection, Tracking and Masking for Automated Annotation of Large CCTV Datasets." Transactions on Petri Nets and Other Models of Concurrency XV, 519-522.

Conference paper
Published: 01 January 2014 in Transactions on Petri Nets and Other Models of Concurrency XV

The efficient detection and tracking of persons in videos has widespread applications, especially in CCTV systems for surveillance or forensics. In this paper we present a new method for people detection and tracking based on knowledge of the perspective information of the scene. It alleviates two main drawbacks of existing methods: (i) the high, or even excessive, computational cost of multiscale detection-by-classification methods; and (ii) the inherent difficulty of CCTV footage, in which partial and full occlusions predominate, along with very high intra-class variability. During the detection stage, we propose to use the homography of the dominant plane to compute the expected sizes of persons at different positions of the image, dramatically reducing the number of evaluations of the multiscale sliding-window detection scheme. To achieve robustness against false positives and negatives, we use a combination of full-body and upper-body detectors, as well as a Data Association Filter (DAF) inspired by the well-known Rao-Blackwellization-based particle filters (RBPF). Our experiments demonstrate the benefit of the proposed perspective multiscale approach compared to conventional sliding-window approaches, and also that this perspective information can lead to useful mixes of full-body and upper-body detectors.
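
The perspective prior can be sketched with a toy size model: under a camera viewing a dominant ground plane, the expected pixel height of a person shrinks toward the horizon row. The linear model below is an assumption standing in for the full homography-based computation in the paper.

```python
# Sketch of the perspective prior: expected pixel height of a person whose
# feet rest at image row `row`, interpolated linearly between the horizon
# (height 0) and a calibrated reference row. Linearity is an assumption.

def expected_height(row, horizon_row=100, ref_row=400, ref_height=180):
    if row <= horizon_row:
        return 0.0                        # at or above the horizon
    scale = (row - horizon_row) / (ref_row - horizon_row)
    return ref_height * scale

# One window size per foot row instead of a full multiscale sweep.
for r in (150, 250, 400):
    print(r, round(expected_height(r)))
```

This is where the speed-up comes from: instead of evaluating every window scale at every position, the detector evaluates roughly one scale per image row.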

ACS Style

Marcos Nieto; Juan Diego Ortega; Andoni Cortés; Sean Gaines. Perspective Multiscale Detection and Tracking of Persons. Transactions on Petri Nets and Other Models of Concurrency XV 2014, 8326, 92-103.

AMA Style

Marcos Nieto, Juan Diego Ortega, Andoni Cortés, Sean Gaines. Perspective Multiscale Detection and Tracking of Persons. Transactions on Petri Nets and Other Models of Concurrency XV. 2014; 8326:92-103.

Chicago/Turabian Style

Marcos Nieto; Juan Diego Ortega; Andoni Cortés; Sean Gaines. 2014. "Perspective Multiscale Detection and Tracking of Persons." Transactions on Petri Nets and Other Models of Concurrency XV 8326: 92-103.

Proceedings article
Published: 01 December 2013 in 2013 IEEE International Symposium on Multimedia

The process of transcoding videos, apart from being computationally intensive, can also be a rather complex procedure. The complexity refers to the choice of appropriate parameters for the transcoding engine, with the aim of decreasing video sizes, transcoding times, and network bandwidth without degrading video quality beyond the threshold at which event detectors lose their accuracy. This paper explains the need for transcoding and then studies different video quality metrics. Commonly used algorithms for motion and person detection are briefly described, with emphasis on investigating the optimum transcoding configuration parameters. The analysis of the experimental results reveals that the existing video quality metrics are not suitable for automated systems, and that the detection of persons is affected by the reduction of bit rate and resolution, while motion detection is more sensitive to frame rate.

ACS Style

Emmanouil Kafetzakis; Christos Xilouris; Michail Alexandros Kourtis; Marcos Nieto; Iveel Jargalsaikhan; Suzanne Little. The Impact of Video Transcoding Parameters on Event Detection for Surveillance Systems. 2013 IEEE International Symposium on Multimedia 2013, 333-338.

AMA Style

Emmanouil Kafetzakis, Christos Xilouris, Michail Alexandros Kourtis, Marcos Nieto, Iveel Jargalsaikhan, Suzanne Little. The Impact of Video Transcoding Parameters on Event Detection for Surveillance Systems. 2013 IEEE International Symposium on Multimedia. 2013; 333-338.

Chicago/Turabian Style

Emmanouil Kafetzakis; Christos Xilouris; Michail Alexandros Kourtis; Marcos Nieto; Iveel Jargalsaikhan; Suzanne Little. 2013. "The Impact of Video Transcoding Parameters on Event Detection for Surveillance Systems." 2013 IEEE International Symposium on Multimedia, 333-338.

Original articles
Published: 05 March 2013 in Cybernetics and Systems

Image interest point extraction and matching across images is a commonplace task in computer vision-based applications across widely diverse domains, such as 3D reconstruction, augmented reality, or tracking. We present an empirical evaluation of state-of-the-art interest point detection algorithms measuring several parameters, such as efficiency, robustness to geometric transformations of the image domain (similarity, affine, or projective), and invariance to photometric transformations such as changes in light intensity or image noise.
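
A standard robustness measure for such evaluations is repeatability: the fraction of keypoints from one image that reappear, within a tolerance, after a known transformation of the other. The sketch below uses a pure translation as the ground-truth transform; whether this exact score was used in the paper is an assumption.

```python
# Sketch of a repeatability score for interest point detectors: map each
# keypoint of image A through the known ground-truth transform and count
# how many land within `tol` pixels of some keypoint detected in image B.

def repeatability(kps_a, kps_b, transform, tol=2.0):
    hits = 0
    for (x, y) in kps_a:
        tx, ty = transform(x, y)
        if any((tx - bx) ** 2 + (ty - by) ** 2 <= tol ** 2
               for (bx, by) in kps_b):
            hits += 1
    return hits / len(kps_a)

shift = lambda x, y: (x + 10, y)          # ground-truth translation
kps_a = [(0, 0), (5, 5), (9, 1)]
kps_b = [(10, 0), (15, 5), (40, 40)]      # third A keypoint not re-detected
print(repeatability(kps_a, kps_b, shift))
```

Sweeping the transform (rotation, scale, projective warp) while plotting this score is the usual way such robustness comparisons are presented.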

ACS Style

Iñigo Barandiaran; Manuel Graña; Marcos Nieto. AN EMPIRICAL EVALUATION OF INTEREST POINT DETECTORS. Cybernetics and Systems 2013, 44, 98-117.

AMA Style

Iñigo Barandiaran, Manuel Graña, Marcos Nieto. AN EMPIRICAL EVALUATION OF INTEREST POINT DETECTORS. Cybernetics and Systems. 2013; 44 (2-3):98-117.

Chicago/Turabian Style

Iñigo Barandiaran; Manuel Graña; Marcos Nieto. 2013. "AN EMPIRICAL EVALUATION OF INTEREST POINT DETECTORS." Cybernetics and Systems 44, no. 2-3: 98-117.

Conference paper
Published: 01 January 2013 in Proceedings of the 3rd ACM SIGSIM Conference on Principles of Advanced Discrete Simulation
ACS Style

Suzanne Little; Bryan Scotney; Hui Wang; Sean Gaines; Aitor Rodriguez; Pedro Sanchez; Ana Martínez Llorens; Karina Villarroel Peniza; Roberto Gimenez; Raúl Santos De La Cámara; Anna Mereu; Iveel Jargalsaikhan; Celso Prados; Emmanouil Kafetzakis; Kathy Clawson; Marcos Nieto; Hao Li; Cem Direkoğlu; Noel E. O'connor; Alan F. Smeaton; Jun Liu. Interactive surveillance event detection at TRECVid2012. Proceedings of the 3rd ACM SIGSIM Conference on Principles of Advanced Discrete Simulation 2013, 301-302.

AMA Style

Suzanne Little, Bryan Scotney, Hui Wang, Sean Gaines, Aitor Rodriguez, Pedro Sanchez, Ana Martínez Llorens, Karina Villarroel Peniza, Roberto Gimenez, Raúl Santos De La Cámara, Anna Mereu, Iveel Jargalsaikhan, Celso Prados, Emmanouil Kafetzakis, Kathy Clawson, Marcos Nieto, Hao Li, Cem Direkoğlu, Noel E. O'connor, Alan F. Smeaton, Jun Liu. Interactive surveillance event detection at TRECVid2012. Proceedings of the 3rd ACM SIGSIM Conference on Principles of Advanced Discrete Simulation. 2013; 301-302.

Chicago/Turabian Style

Suzanne Little; Bryan Scotney; Hui Wang; Sean Gaines; Aitor Rodriguez; Pedro Sanchez; Ana Martínez Llorens; Karina Villarroel Peniza; Roberto Gimenez; Raúl Santos De La Cámara; Anna Mereu; Iveel Jargalsaikhan; Celso Prados; Emmanouil Kafetzakis; Kathy Clawson; Marcos Nieto; Hao Li; Cem Direkoğlu; Noel E. O'connor; Alan F. Smeaton; Jun Liu. 2013. "Interactive surveillance event detection at TRECVid2012." Proceedings of the 3rd ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, 301-302.

Book chapter
Published: 01 January 2013 in Communications in Computer and Information Science

An automatic method for rail inspection is introduced in this paper. The method detects rail flaws using computer vision algorithms. Unlike other methods designed for the same goal, we propose a method that automatically fits a 3D rail model to the observations. The proposed strategy is based on the novel combination of a simple but effective laser-camera calibration procedure with the application of an MCMC (Markov Chain Monte Carlo) framework. The proposed particle filter uses the efficient overrelaxed slice sampling method, which allows us to exploit the temporal coherence of observations and to obtain more accurate estimates than with other sampling techniques. The results show that the system is able to robustly measure the wear of the rail. The two other contributions of the paper are the successful introduction of the slice sampling technique into MCMC particle filters and the proposed online and flexible method for camera-laser calibration.
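
The core sampling primitive can be sketched in one dimension. Standard slice sampling draws from an unnormalized density by alternating a vertical "slice" draw with a horizontal draw inside the slice, expanding and shrinking a bracket; the paper's overrelaxed variant and the 3D rail model are beyond this illustration.

```python
import random, math

# Sketch of 1-D slice sampling from an unnormalized log-density `logp`:
# pick a slice level under the density at the current point, step the
# bracket out until it covers the slice, then shrink it until a draw lands
# inside the slice.

def slice_sample(logp, x0, n, w=1.0):
    random.seed(0)                        # deterministic for the demo
    xs, x = [], x0
    for _ in range(n):
        log_y = logp(x) + math.log(random.random())   # slice level
        lo, hi = x - w, x + w
        while logp(lo) > log_y:           # step out left
            lo -= w
        while logp(hi) > log_y:           # step out right
            hi += w
        while True:                       # shrink until a draw is accepted
            x1 = random.uniform(lo, hi)
            if logp(x1) > log_y:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        xs.append(x)
    return xs

# Unnormalized standard normal: samples should center near 0 even when the
# chain starts far away at x0 = 3.
samples = slice_sample(lambda x: -0.5 * x * x, x0=3.0, n=500)
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

Unlike Metropolis-Hastings, slice sampling has no step-size parameter to tune and never rejects a sample outright, which is part of its appeal inside a particle filter.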

ACS Style

Marcos Nieto; Andoni Cortés; Javier Barandiaran; Oihana Otaegui; Iñigo Etxabe. Single Camera Railways Track Profile Inspection Using an Slice Sampling-Based Particle Filter. Communications in Computer and Information Science 2013, 359, 326-339.

AMA Style

Marcos Nieto, Andoni Cortés, Javier Barandiaran, Oihana Otaegui, Iñigo Etxabe. Single Camera Railways Track Profile Inspection Using an Slice Sampling-Based Particle Filter. Communications in Computer and Information Science. 2013; 359:326-339.

Chicago/Turabian Style

Marcos Nieto; Andoni Cortés; Javier Barandiaran; Oihana Otaegui; Iñigo Etxabe. 2013. "Single Camera Railways Track Profile Inspection Using an Slice Sampling-Based Particle Filter." Communications in Computer and Information Science 359: 326-339.

Conference paper
Published: 01 January 2013 in Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
ACS Style

Iñigo Barandiaran; Camilo Cortes; Marcos Nieto; Manuel Grana; Oscar E. Ruiz. A New Evaluation Framework and Image Dataset for Keypoint Extraction and Feature Descriptor Matching. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications 2013, 252-257.

AMA Style

Iñigo Barandiaran, Camilo Cortes, Marcos Nieto, Manuel Grana, Oscar E. Ruiz. A New Evaluation Framework and Image Dataset for Keypoint Extraction and Feature Descriptor Matching. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. 2013; 252-257.

Chicago/Turabian Style

Iñigo Barandiaran; Camilo Cortes; Marcos Nieto; Manuel Grana; Oscar E. Ruiz. 2013. "A New Evaluation Framework and Image Dataset for Keypoint Extraction and Feature Descriptor Matching." Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 252-257.