
Jesús Peral
Lucentia Research Group, Department of Software and Computing Systems, University of Alicante, Alicante, Spain

Honors and Awards

The user has no records in this section.


Career Timeline

The user has no records in this section.


Short Biography

The user biography is not available.
Following
Followers
Co-Authors

Following: 0 users

Feed

Regular paper
Published: 13 July 2021 in Knowledge and Information Systems

NoSQL technologies have become a common component in many information systems and software applications. These technologies are focused on performance, enabling scalable processing of large volumes of structured and unstructured data. Unfortunately, most developments over NoSQL technologies consider security as an afterthought, putting at risk the personal data of individuals and potentially causing severe economic losses as well as reputational crises. To avoid these situations, companies require an approach that introduces security mechanisms into their systems without scrapping already in-place solutions and restarting the design process from scratch. Therefore, in this paper we propose the first modernization approach for introducing security in NoSQL databases, focusing on access control and thereby improving the security of their associated information systems and applications. Our approach analyzes the existing NoSQL solution of the organization, using a domain ontology to detect sensitive information and creating a conceptual model of the database. Together with this model, a series of security issues related to access control is listed, allowing database designers to identify the security mechanisms that must be incorporated into their existing solution. For each security issue, our approach automatically generates a proposed solution, consisting of a combination of privilege modifications, new roles and views to improve access control. To test our approach, we apply our process to a medical database implemented using the popular document-oriented NoSQL database MongoDB.
The great advantages of our approach are that: (1) it takes into account the context of the system thanks to the introduction of domain ontologies, (2) it helps avoid missing critical access control issues since the analysis is performed automatically, (3) it reduces the effort and costs of the modernization process thanks to the automated steps in the process, (4) it can be successfully used with different document-based NoSQL technologies by adjusting the metamodel, and (5) it is aligned with known standards, hence allowing the application of guidelines and best practices.
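As an illustrative sketch of the kind of access-control artefact such a modernization step could emit (this is not the authors' tool; all database, collection, field and role names are hypothetical), one can build the MongoDB `createRole` and `createView` command documents that hide a detected sensitive field behind a find-only role:

```python
# Sketch: given a sensitive field detected in a collection, emit the MongoDB
# command documents for a redacting view plus a role restricted to that view.
# Names (hospital, patients, ssn, nurse_role) are hypothetical examples.

def redacting_view(collection, sensitive_fields):
    """View definition that projects away each detected sensitive field."""
    projection = {field: 0 for field in sensitive_fields}
    return {
        "create": collection + "_redacted",
        "viewOn": collection,
        "pipeline": [{"$project": projection}],
    }

def restricted_role(db, collection, role_name):
    """Role granting find-only access to the redacted view of `collection`."""
    return {
        "createRole": role_name,
        "privileges": [{
            "resource": {"db": db, "collection": collection + "_redacted"},
            "actions": ["find"],
        }],
        "roles": [],
    }

view = redacting_view("patients", ["ssn", "diagnosis_notes"])
role = restricted_role("hospital", "patients", "nurse_role")
```

In a live deployment these documents would be passed to `db.command(...)`; building them separately keeps the generation step testable without a running server.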

ACS Style

Alejandro Maté; Jesús Peral; Juan Trujillo; Carlos Blanco; Diego García-Saiz; Eduardo Fernández-Medina. Improving security in NoSQL document databases through model-driven modernization. Knowledge and Information Systems 2021, 63, 2209-2230.

AMA Style

Alejandro Maté, Jesús Peral, Juan Trujillo, Carlos Blanco, Diego García-Saiz, Eduardo Fernández-Medina. Improving security in NoSQL document databases through model-driven modernization. Knowledge and Information Systems. 2021; 63 (8):2209-2230.

Chicago/Turabian Style

Alejandro Maté; Jesús Peral; Juan Trujillo; Carlos Blanco; Diego García-Saiz; Eduardo Fernández-Medina. 2021. "Improving security in NoSQL document databases through model-driven modernization." Knowledge and Information Systems 63, no. 8: 2209-2230.

Journal article
Published: 20 August 2020 in Sustainability

With patients demanding services to control their own health conditions, hospitals are looking to build agility in delivering care by extending their reach into patient and partner ecosystems and sharing relevant patient data to support care continuity. However, sharing patient data with several external stakeholders outside a hospital network calls for the development of a digital platform that is trusted by both hospitals and stakeholders, given that there is often no single entity supporting such coordination. In this paper, we propose a methodology that uses a blockchain architecture to address the technical challenge of linking disparate systems used by multiple stakeholders and the social challenge of engendering trust, using visualization to bring about transparency in the way in which data are shared. We illustrate this methodology using a pilot implementation. The paper concludes with a discussion and directions for future research.

ACS Style

Jesús Peral; Eduardo Gallego; David Gil; Mohan Tanniru; Prashant Khambekar. Using Visualization to Build Transparency in a Healthcare Blockchain Application. Sustainability 2020, 12, 6768.

AMA Style

Jesús Peral, Eduardo Gallego, David Gil, Mohan Tanniru, Prashant Khambekar. Using Visualization to Build Transparency in a Healthcare Blockchain Application. Sustainability. 2020; 12 (17):6768.

Chicago/Turabian Style

Jesús Peral; Eduardo Gallego; David Gil; Mohan Tanniru; Prashant Khambekar. 2020. "Using Visualization to Build Transparency in a Healthcare Blockchain Application." Sustainability 12, no. 17: 6768.

Journal article
Published: 13 August 2020 in Electronics

The study of phonological proximity makes it possible to establish a basis for future decision-making in the treatment of sign languages. Knowing how close a set of signs is makes it easier both to study the signs by clustering and to teach the language to third parties on the basis of similarities. In addition, it lays the foundation for strengthening disambiguation modules in automatic recognition systems. To the best of our knowledge, this is the first study of its kind for Costa Rican Sign Language (LESCO, for its Spanish acronym), and it forms the basis for one of the modules of the already operational system of sign and speech editing called the International Platform for Sign Language Edition (PIELS). A database of 2665 signs, grouped into eight contexts, is used, and a comparison of similarity measures is made, using standard statistical formulas to measure their degree of correlation. This corpus will be especially useful in machine learning approaches. In this work, we have proposed an analysis of different similarity measures between signs in order to find out the phonological proximity between them. After analyzing the results obtained, we can conclude that LESCO is a sign language with high levels of phonological proximity, particularly in the orientation and location components, while levels in the form component are noticeably lower. A further contribution of our research is the conclusion that automatic recognition systems can base their first prototypes on the contexts or sign domains that map to clusters with lower levels of similarity. As mentioned, the results obtained have multiple applications, such as in the teaching area or in the Natural Language Processing area for automatic recognition tasks.
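A minimal sketch of what a per-component proximity measure between two signs could look like (this is illustrative, not the paper's actual metric; the component names and feature values are hypothetical, not LESCO's real coding scheme):

```python
# Sketch: each sign is described by sets of phonological feature values per
# component (orientation, location, form). Proximity is the mean Jaccard
# similarity over shared components. All values below are made-up examples.

def jaccard(a, b):
    """Jaccard similarity of two feature sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def sign_proximity(sign_a, sign_b):
    """Average Jaccard similarity over the components both signs define."""
    components = sign_a.keys() & sign_b.keys()
    return sum(jaccard(sign_a[c], sign_b[c]) for c in components) / len(components)

house = {"orientation": {"palm_down"}, "location": {"chest"}, "form": {"flat_hand"}}
home  = {"orientation": {"palm_down"}, "location": {"chest"}, "form": {"curved_hand"}}
proximity = sign_proximity(house, home)  # high on orientation/location, low on form
```

Aggregating per component, rather than over one flat feature set, is what lets the analysis report proximity separately for orientation, location and form, as the abstract describes.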

ACS Style

Luis Naranjo-Zeledón; Mario Chacón-Rivas; Jesús Peral; Antonio Ferrández. Phonological Proximity in Costa Rican Sign Language. Electronics 2020, 9, 1302.

AMA Style

Luis Naranjo-Zeledón, Mario Chacón-Rivas, Jesús Peral, Antonio Ferrández. Phonological Proximity in Costa Rican Sign Language. Electronics. 2020; 9 (8):1302.

Chicago/Turabian Style

Luis Naranjo-Zeledón; Mario Chacón-Rivas; Jesús Peral; Antonio Ferrández. 2020. "Phonological Proximity in Costa Rican Sign Language." Electronics 9, no. 8: 1302.

Journal article
Published: 21 March 2020 in Electronics

About 15% of the world’s population suffers from some form of disability. In developed countries, about 1.5% of children are diagnosed with autism. Autism is a developmental disorder distinguished mainly by impairments in social interaction and communication and by restricted and repetitive behavior. Since the cause of autism is still unknown, there have been many studies focused on screening for autism based on behavioral features. Thus, the main purpose of this paper is to present an architecture focused on data integration and analytics, allowing the distributed processing of input data. Furthermore, the proposed architecture allows the identification of relevant features as well as of hidden correlations among parameters. To this end, we propose a methodology able to integrate diverse data sources, even data that are collected separately. This methodology increases the data variety which can lead to the identification of more correlations between diverse parameters. We conclude the paper with a case study that used autism data in order to validate our proposed architecture, which showed very promising results.

ACS Style

Jesús Peral; David Gil; Sayna Rotbei; Sandra Amador; Marga Guerrero; Hadi Moradi. A Machine Learning and Integration Based Architecture for Cognitive Disorder Detection Used for Early Autism Screening. Electronics 2020, 9, 516.

AMA Style

Jesús Peral, David Gil, Sayna Rotbei, Sandra Amador, Marga Guerrero, Hadi Moradi. A Machine Learning and Integration Based Architecture for Cognitive Disorder Detection Used for Early Autism Screening. Electronics. 2020; 9 (3):516.

Chicago/Turabian Style

Jesús Peral; David Gil; Sayna Rotbei; Sandra Amador; Marga Guerrero; Hadi Moradi. 2020. "A Machine Learning and Integration Based Architecture for Cognitive Disorder Detection Used for Early Autism Screening." Electronics 9, no. 3: 516.

Journal article
Published: 22 January 2020 in Applied Sciences

Nowadays, the increasing demand for water for electricity production and for agricultural and industrial uses is directly reducing the quality water available for human consumption worldwide. Efficient and sustainable maintenance of water reservoirs and supply networks implies a holistic strategy that takes into account, as much as possible, information from the stages of water usage. Next-generation decision-making software tools for supporting water management require the integration of multiple and heterogeneous data sources from different knowledge domains. In this regard, Linked Data and Semantic Web technologies enable the harmonization of different data sources, as well as efficient querying for feeding upper-level Business Intelligence processes. This work investigates the design, implementation and usage of an ontology-driven semantic approach to capture, store, integrate and exploit real-world data concerning water supply network management. As a main contribution, the proposal helps obtain semantically enriched linked data, enhancing the analysis of water network performance. For validation purposes, the use case considers a series of data sources with different measures, in the scope of an actual water management system in the Mediterranean region of Valencia (Spain), spanning several years of activity. The experience obtained shows the benefits of using the proposed approach to identify possible correlations between measures such as the supplied water, the water leaks or the population.
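The core idea of the linked-data layer can be sketched with a toy in-memory triple store: measurements from heterogeneous sources are harmonized as subject-predicate-object triples and then retrieved by pattern matching, the way a SPARQL endpoint would serve them. The predicate and entity names below are illustrative inventions, not the paper's actual ontology:

```python
# Sketch: heterogeneous water-network measurements harmonized as RDF-style
# triples and queried by pattern. All identifiers are hypothetical examples.

triples = {
    ("sector:valencia_01", "water:supplied_m3", 1200),
    ("sector:valencia_01", "water:leaked_m3", 150),
    ("sector:valencia_01", "water:population", 3400),
    ("sector:valencia_02", "water:supplied_m3", 800),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# All measurements for one supply sector, regardless of which source produced them:
sector_data = query(subject="sector:valencia_01")
```

Because every source lands in the same triple shape, a correlation analysis (e.g. supplied water vs. leaks vs. population) only ever needs one query interface.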

ACS Style

Pilar Escobar; María Del Mar Roldán-García; Jesús Peral; Gustavo Candela; José García-Nieto. An Ontology-Based Framework for Publishing and Exploiting Linked Open Data: A Use Case on Water Resources Management. Applied Sciences 2020, 10, 779.

AMA Style

Pilar Escobar, María Del Mar Roldán-García, Jesús Peral, Gustavo Candela, José García-Nieto. An Ontology-Based Framework for Publishing and Exploiting Linked Open Data: A Use Case on Water Resources Management. Applied Sciences. 2020; 10 (3):779.

Chicago/Turabian Style

Pilar Escobar; María Del Mar Roldán-García; Jesús Peral; Gustavo Candela; José García-Nieto. 2020. "An Ontology-Based Framework for Publishing and Exploiting Linked Open Data: A Use Case on Water Resources Management." Applied Sciences 10, no. 3: 779.

Conference paper
Published: 29 October 2019 in First Complex Systems Digital Campus World E-Conference 2015

Automatic word sense disambiguation (WSD) from text is a task of great importance in various applications of natural language processing, for example, in machine translation, question answering, automatic summarization or sentiment analysis. There are different approaches to finding the meaning of a word within a context, whether using supervised, unsupervised, semi-supervised or knowledge-based methods. Several studies have been conducted to automatically translate from text to sign language, reproducing the result of the translation with a signing avatar, so that deaf users, whose mother tongue is sign language, have access to informative content that would otherwise be highly inaccessible. The many proposals that have been made aim to minimize these informative and communicative barriers. Sign languages, however, do not have as many words as spoken languages, so an automatic translation must be as accurate and free of ambiguities as possible. In this paper, we propose to evaluate the use of public-access big data resources, as well as appropriate techniques to access this type of resource for WSD tasks, illustrating their effects in a translation system from text in Spanish to Costa Rican Sign Language (LESCO). The architecture of the actual system incorporates the use of a folksonomy, from which the disambiguation process will benefit. When an exact word is not found for a given detected sense in the source text, the ontology will be fed back with a new relationship of hyperonymy, to alert the curator to the need to propose a new sign in that category, thus promoting the enrichment of a key component of the architecture. As a result of the evaluation, the most appropriate big data public resources and techniques for WSD for sign language will be elucidated.
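To make the knowledge-based WSD idea concrete, here is a simplified Lesk-style overlap sketch: pick the sense whose gloss shares the most words with the sentence context. This is a generic textbook method, not the paper's exact approach, and the glosses are toy examples:

```python
# Sketch of knowledge-based WSD via gloss overlap (simplified Lesk): the
# chosen sense is the one whose definition shares the most words with the
# surrounding context. Sense ids and glosses below are made-up examples.

def disambiguate(context, glosses):
    """Return the sense id whose gloss overlaps most with the context words."""
    ctx = set(context.lower().split())
    return max(glosses, key=lambda sense: len(ctx & set(glosses[sense].split())))

senses = {
    "bank#finance": "institution that accepts deposits and lends money",
    "bank#river": "sloping land beside a body of water",
}
sense = disambiguate("she deposited money at the bank", senses)
```

In a text-to-sign-language pipeline, the selected sense id would then be looked up in the sign lexicon; when no sign exists for it, the abstract's feedback step (adding a hyperonymy relation and alerting the curator) would take over.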

ACS Style

Luis Naranjo-Zeledón; Antonio Ferrández; Jesús Peral; Mario Chacón-Rivas. Big Data-Assisted Word Sense Disambiguation for Sign Language. First Complex Systems Digital Campus World E-Conference 2015 2019, 441-448.

AMA Style

Luis Naranjo-Zeledón, Antonio Ferrández, Jesús Peral, Mario Chacón-Rivas. Big Data-Assisted Word Sense Disambiguation for Sign Language. First Complex Systems Digital Campus World E-Conference 2015. 2019:441-448.

Chicago/Turabian Style

Luis Naranjo-Zeledón; Antonio Ferrández; Jesús Peral; Mario Chacón-Rivas. 2019. "Big Data-Assisted Word Sense Disambiguation for Sign Language." First Complex Systems Digital Campus World E-Conference 2015: 441-448.

Conference paper
Published: 29 October 2019 in First Complex Systems Digital Campus World E-Conference 2015

Nowadays, firms have realized the importance of Big Data, highlighting the need for understanding the current state of marketing practice with respect to Big Data analytics. Among the different sources of Big Data, User-Generated Content (UGC) is one of the most important ones. From blogs to social media and online reviews, consumers generate huge amounts of brand-related information that has a decisive potential business value in targeted advertising, customer engagement or brand communication, among others. In the same line, previous empirical findings show that UGC has significant effects on brand images, purchase intentions, and sales. It plays an important role in customers' potential buying decisions. Thus, mining and analysing UGC data such as comments and sentiments might be useful for firms. Particularly, brand management can be one area of interest, as online reviews might have an influence on brand image and brand positioning. Within this context, in addition to the quantitative star score usual in this UGC, in which buyers rate the product, a recent stream of research employs Sentiment Analysis (SA) tools with the aim of examining the textual content of the review and categorizing buyers' opinions. While certain SA tools split comments into two classes (negative or positive), others incorporate more sentiment classes. However, a review can have phrases with different polarities because the user can have different experiences and sentiments about each feature of the product. Finding the polarity of each feature can be interesting for the decision makers of a product. In this paper, we consider that although these two scores (star and sentiment) are related, the sentiment score highlights extra information not detailed in the star score, which is crucial to extract in order to have better criteria of comparison between products. Moreover, we mine the positive and negative features of the products by analysing the sentiment.

ACS Style

Erick Kauffmann; David Gil; Jesús Peral; Antonio Ferrández; Ricardo Sellers. A Step Further in Sentiment Analysis Application in Marketing Decision-Making. First Complex Systems Digital Campus World E-Conference 2015 2019, 211-221.

AMA Style

Erick Kauffmann, David Gil, Jesús Peral, Antonio Ferrández, Ricardo Sellers. A Step Further in Sentiment Analysis Application in Marketing Decision-Making. First Complex Systems Digital Campus World E-Conference 2015. 2019:211-221.

Chicago/Turabian Style

Erick Kauffmann; David Gil; Jesús Peral; Antonio Ferrández; Ricardo Sellers. 2019. "A Step Further in Sentiment Analysis Application in Marketing Decision-Making." First Complex Systems Digital Campus World E-Conference 2015: 211-221.

Review
Published: 18 September 2019 in Electronics

Sign languages (SL) are the first language for most deaf people. Consequently, bidirectional communication among deaf and non-deaf people has always been a challenging issue. Sign language usage has increased due to inclusion policies and general public agreement, which must then become evident in information technologies, in the many facets that comprise sign language understanding and its computational treatment. In this study, we conduct a thorough systematic mapping of translation-enabling technologies for sign languages. This mapping has considered the most recommended guidelines for systematic reviews, i.e., those pertaining to software engineering, since there is a need to account for the interdisciplinary areas of accessibility, human computer interaction, natural language processing, and education, all of them part of the ACM (Association for Computing Machinery) computing classification system and directly related to software engineering. An ongoing development of a software tool called SYMPLE (SYstematic Mapping and Parallel Loading Engine) facilitated the querying and construction of a base set of candidate studies. A great diversity of topics has been studied over the last 25 years or so, but this systematic mapping allows for comfortable visualization of predominant areas, venues, top authors, and different measures of concentration and dispersion. The systematic review clearly shows a large number of classifications and subclassifications interspersed over time. This is an area of study in which there is much interest, with a basically steady level of scientific publications over the last decade, concentrated mainly in the European continent. The publications by country, nevertheless, usually favor their local sign language.

ACS Style

Luis Naranjo-Zeledón; Jesús Peral; Antonio Ferrández; Mario Chacón-Rivas. A Systematic Mapping of Translation-Enabling Technologies for Sign Languages. Electronics 2019, 8, 1047.

AMA Style

Luis Naranjo-Zeledón, Jesús Peral, Antonio Ferrández, Mario Chacón-Rivas. A Systematic Mapping of Translation-Enabling Technologies for Sign Languages. Electronics. 2019; 8 (9):1047.

Chicago/Turabian Style

Luis Naranjo-Zeledón; Jesús Peral; Antonio Ferrández; Mario Chacón-Rivas. 2019. "A Systematic Mapping of Translation-Enabling Technologies for Sign Languages." Electronics 8, no. 9: 1047.

Journal article
Published: 05 August 2019 in Sustainability

Companies have realized the importance of “big data” in creating a sustainable competitive advantage, and user-generated content (UGC) represents one of big data’s most important sources. From blogs to social media and online reviews, consumers generate a huge amount of brand-related information that has a decisive potential business value for marketing purposes. Particularly, we focus on online reviews that could have an influence on brand image and positioning. Within this context, and using the usual quantitative star score ratings, a recent stream of research has employed sentiment analysis (SA) tools to examine the textual content of reviews and categorize buyer opinions. Although many SA tools split comments into negative or positive, a review can contain phrases with different polarities because the user can have different sentiments about each feature of the product. Finding the polarity of each feature can be interesting for product managers and brand management. In this paper, we present a general framework that uses natural language processing (NLP) techniques, including sentiment analysis, text data mining, and clustering techniques, to obtain new scores based on consumer sentiments for different product features. The main contribution of our proposal is the combination of price and the aforementioned scores to define a new global score for the product, which allows us to obtain a ranking according to product features. Furthermore, the products can be classified according to their positive, neutral, or negative features (visualized on dashboards), helping consumers with their sustainable purchasing behavior. We proved the validity of our approach in a case study using big data extracted from Amazon online reviews (specifically cell phones), obtaining satisfactory and promising results. 
After the experimentation, we could conclude that our work is able to improve recommender systems by using positive, neutral, and negative customer opinions and by classifying customers based on their comments.
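A sketch of the kind of score combination the abstract describes (the formula, weights and example values are hypothetical, not the authors' actual model): per-feature sentiment scores in [-1, 1] are averaged and blended with a normalized price to yield a single comparable score per product:

```python
# Sketch: combine feature-level sentiment with price into one global score.
# The blending weight and the feature names are illustrative assumptions.

def global_score(feature_sentiments, price, max_price, price_weight=0.3):
    """Blend mean feature sentiment (mapped to [0,1]) with price affordability."""
    mean_sent = sum(feature_sentiments.values()) / len(feature_sentiments)
    sentiment_part = (mean_sent + 1) / 2      # map [-1, 1] -> [0, 1]
    affordability = 1 - price / max_price     # cheaper -> closer to 1
    return (1 - price_weight) * sentiment_part + price_weight * affordability

phone = {"battery": 0.8, "camera": 0.4, "screen": -0.2}
score = global_score(phone, price=250, max_price=1000)
```

Because the score is computed per feature before aggregation, the same inputs also support the dashboard view described above: each product can be tagged by which features carry positive, neutral or negative sentiment.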

ACS Style

Erick Kauffmann; Jesús Peral; David Gil; Antonio Ferrández; Ricardo Sellers; Higinio Mora. Managing Marketing Decision-Making with Sentiment Analysis: An Evaluation of the Main Product Features Using Text Data Mining. Sustainability 2019, 11, 4235.

AMA Style

Erick Kauffmann, Jesús Peral, David Gil, Antonio Ferrández, Ricardo Sellers, Higinio Mora. Managing Marketing Decision-Making with Sentiment Analysis: An Evaluation of the Main Product Features Using Text Data Mining. Sustainability. 2019; 11 (15):4235.

Chicago/Turabian Style

Erick Kauffmann; Jesús Peral; David Gil; Antonio Ferrández; Ricardo Sellers; Higinio Mora. 2019. "Managing Marketing Decision-Making with Sentiment Analysis: An Evaluation of the Main Product Features Using Text Data Mining." Sustainability 11, no. 15: 4235.

Journal article
Published: 03 May 2019 in IEEE Access

New information and communication technologies have contributed to the development of the smart city concept. On a physical level, this paradigm is characterised by the deployment of a substantial number of different devices that can sense their surroundings and generate a large amount of data. The most typical case is image and video acquisition sensors. These types of sensors are now found in abundance in urban spaces and are responsible for producing a large volume of multimedia data. The advanced computer vision methods for this type of multimedia information mean that many aspects can be dynamically monitored, which can help implement value-added applications in the city. However, obtaining more elaborate semantic information from these data poses significant challenges related to the large amount of data generated and the processing capabilities required. This work aims to address these issues by using a combination of cloud computing technologies and mobile computing techniques to design a three-layer distributed architecture for intensive urban computing. The approach consists of distributing the processing tasks among a city’s multimedia acquisition devices, a middle computing layer, known as a cloudlet, and a cloud-computing infrastructure. As a result, each part of the architecture can focus on a small number of tasks for which it is especially designed, and data transmission needs are significantly reduced. To this end, the cloud server can hold and centralise the multimedia analysis of processed results from the lower layers. Finally, a case study on smart lighting is described to illustrate the benefits of using the proposed model in smart city environments.

ACS Style

Higinio Mora; Jesus Peral; Antonio Ferrandez; David Gil; Julian Szymanski. Distributed Architectures for Intensive Urban Computing: A Case Study on Smart Lighting for Sustainable Cities. IEEE Access 2019, 7, 58449-58465.

AMA Style

Higinio Mora, Jesus Peral, Antonio Ferrandez, David Gil, Julian Szymanski. Distributed Architectures for Intensive Urban Computing: A Case Study on Smart Lighting for Sustainable Cities. IEEE Access. 2019; 7 (99):58449-58465.

Chicago/Turabian Style

Higinio Mora; Jesus Peral; Antonio Ferrandez; David Gil; Julian Szymanski. 2019. "Distributed Architectures for Intensive Urban Computing: A Case Study on Smart Lighting for Sustainable Cities." IEEE Access 7, no. 99: 58449-58465.

Evaluation study
Published: 23 April 2019 in PLOS ONE

The accessing and processing of textual information (i.e. the storing and querying of a set of strings) is especially important for many current applications (e.g. information retrieval and social networks), especially when working in the fields of Big Data or IoT, which require the handling of very large string dictionaries. Typical data structures for textual indexing are Hash Tables and some variants of Tries such as the Double Trie (DT). In this paper, we propose an extension of the DT that we have called MergedTrie. It improves the DT compression by merging both Tries into a single one and by segmenting the indexed term into two fixed-length parts in order to balance the new Trie. Thus, a higher overlapping of both prefixes and suffixes is obtained. Moreover, we propose a new implementation of Tries that achieves better compression rates than the Double-Array representation usually chosen for implementing Tries. Our proposal also overcomes the limitation of static implementations that do not allow insertions and updates in their compact representations. Finally, our MergedTrie implementation experimentally improves the efficiency of the Hash Table, DT, Double-Array, Crit-bit, Directed Acyclic Word Graph (DAWG), and Acyclic Deterministic Finite Automaton (ADFA) data structures, requiring less space than the original text to be indexed.
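The splitting idea behind the proposal can be sketched in a few lines (heavily simplified: the actual MergedTrie links prefix and suffix paths inside one merged, compressed Trie, whereas this toy version just concatenates the two halves into one key): each term is cut into two halves and the second half is reversed, so both common prefixes and common suffixes become shared paths.

```python
# Sketch of the term-splitting idea: index each term as
# (first half) + separator + (reversed second half) in a plain dict-based trie.

class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, key):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-key marker

    def contains(self, key):
        node = self.root
        for ch in key:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

class SplitIndex:
    """Index each term as (first half, reversed second half) in one trie."""
    def __init__(self):
        self.trie = Trie()

    def _split(self, term):
        mid = (len(term) + 1) // 2
        return term[:mid] + "|" + term[mid:][::-1]

    def insert(self, term):
        self.trie.insert(self._split(term))

    def contains(self, term):
        return self.trie.contains(self._split(term))

idx = SplitIndex()
for word in ["indexing", "indexed", "boxing"]:
    idx.insert(word)
```

Note how "indexing" and "boxing" store their endings as the same reversed string "gni…", which is the overlap the real structure exploits for compression.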

ACS Style

Antonio Ferrández; Jesús Peral. MergedTrie: Efficient textual indexing. PLOS ONE 2019, 14, e0215288.

AMA Style

Antonio Ferrández, Jesús Peral. MergedTrie: Efficient textual indexing. PLOS ONE. 2019; 14 (4):e0215288.

Chicago/Turabian Style

Antonio Ferrández; Jesús Peral. 2019. "MergedTrie: Efficient textual indexing." PLOS ONE 14, no. 4: e0215288.

Review
Published: 15 January 2019 in IEEE Access
ACS Style

Jesus Peral; Antonio Ferrandez; Higinio Mora; David Gil; Erick Kauffmann. A Review of the Analytics Techniques for an Efficient Management of Online Forums: An Architecture Proposal. IEEE Access 2019, 7, 12220-12240.

AMA Style

Jesus Peral, Antonio Ferrandez, Higinio Mora, David Gil, Erick Kauffmann. A Review of the Analytics Techniques for an Efficient Management of Online Forums: An Architecture Proposal. IEEE Access. 2019;7:12220-12240.

Chicago/Turabian Style

Jesus Peral; Antonio Ferrandez; Higinio Mora; David Gil; Erick Kauffmann. 2019. "A Review of the Analytics Techniques for an Efficient Management of Online Forums: An Architecture Proposal." IEEE Access 7: 12220-12240.

Conference paper
Published: 26 September 2018 in Transactions on Petri Nets and Other Models of Concurrency XV

Big Data is becoming a prominent trend in our society. Ever larger amounts of data, including sensitive and personal information, are being loaded into NoSQL and other Big Data technologies for analysis and processing. However, current security approaches do not take into account the special characteristics of these technologies, leaving sensitive and personal data unprotected, thereby risking severe monetary losses and brand damage. In this paper, we focus on securing document databases, proposing a framework that considers three stages: (1) The source data set is analysed by using Natural Language Processing techniques and ontological resources, in order to detect sensitive data. (2) We define a metamodel for document databases that allows designers to specify both structural and security aspects. (3) This model is automatically implemented in a specific document database tool, MongoDB. Finally, we apply the proposed framework to a case study with a data set from the medical domain. The great advantages of our framework are that: (1) the effort required to secure the data is reduced, as part of the process is automated, (2) it can be easily applied to other NoSQL technologies by adapting the metamodel and transformations, and (3) it is aligned with existing standards, thus facilitating the application of recommendations and best practices.

ACS Style

Carlos Blanco; Diego Garcia-Saiz; Jesús Peral; Alejandro Maté; Alejandro Oliver; Eduardo Fernandez-Medina. How the Conceptual Modelling Improves the Security on Document Databases. Transactions on Petri Nets and Other Models of Concurrency XV 2018, 497-504.

AMA Style

Carlos Blanco, Diego Garcia-Saiz, Jesús Peral, Alejandro Maté, Alejandro Oliver, Eduardo Fernandez-Medina. How the Conceptual Modelling Improves the Security on Document Databases. Transactions on Petri Nets and Other Models of Concurrency XV. 2018:497-504.

Chicago/Turabian Style

Carlos Blanco; Diego Garcia-Saiz; Jesús Peral; Alejandro Maté; Alejandro Oliver; Eduardo Fernandez-Medina. 2018. "How the Conceptual Modelling Improves the Security on Document Databases." Transactions on Petri Nets and Other Models of Concurrency XV: 497-504.

Journal article
Published: 19 July 2018 in IEEE Access

Current trends in medicine regarding the accessibility, quantity and quality of information and QoS (Quality of Service) are very different from those of former decades. The current state requires new methods for addressing the challenge of dealing with the enormous amounts of data present, and growing, on the Web and in other heterogeneous data sources such as sensors, social networks and unstructured data, normally referred to as Big Data. Traditional approaches are not enough, at least on their own, although they were frequently used in hybrid architectures in the past. In this paper, we propose an architecture to process big data including heterogeneous sources of information. We have defined an ontology-oriented architecture where a core ontology is used as a knowledge base (KB) and allows the data integration of different heterogeneous sources. We have used Natural Language Processing and Artificial Intelligence methods to process and mine data in the health sector to uncover knowledge hidden in diverse data sources. Our approach has been applied to the field of personalized medicine (the study, diagnosis, and treatment of diseases customized for each patient) and it has been used in a telemedicine system. A case study focused on diabetes is presented to prove the validity of the proposed model.

ACS Style

Jesus Peral; Antonio Ferrandez; David Gil; Rafael Munoz-Terol; Higinio Mora. An Ontology-Oriented Architecture for Dealing With Heterogeneous Data Applied to Telemedicine Systems. IEEE Access 2018, 6, 41118-41138.

AMA Style

Jesus Peral, Antonio Ferrandez, David Gil, Rafael Munoz-Terol, Higinio Mora. An Ontology-Oriented Architecture for Dealing With Heterogeneous Data Applied to Telemedicine Systems. IEEE Access. 2018; 6(99):41118-41138.

Chicago/Turabian Style

Jesus Peral; Antonio Ferrandez; David Gil; Rafael Munoz-Terol; Higinio Mora. 2018. "An Ontology-Oriented Architecture for Dealing With Heterogeneous Data Applied to Telemedicine Systems." IEEE Access 6, no. 99: 41118-41138.

Proceedings
Published: 01 January 2018 in Proceedings

E-Learning is a response to the new educational needs of society and an important development in Information and Communication Technologies. However, this trend presents many challenges, such as the lack of an architecture allowing unified management of the heterogeneous string dictionaries required by all users of e-learning environments, which we address in this paper. Specifically, we refer to the string dictionaries needed for information retrieval, content development, key performance indicator generation, and course-management applications. As an example, our approach can handle the different indexation dictionaries required by course contents and by the various online forums, which generate a huge number of messages with an unordered structure and a great variety of topics. Our architecture generates a single dictionary that is shared by all the stakeholders involved in the e-learning process.
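The idea of one dictionary shared by every component can be sketched as follows: a single structure maps each distinct string to a stable integer ID, so an indexer and a forum miner reuse the same IDs instead of maintaining separate dictionaries. This is a minimal illustration under assumed names, not the architecture proposed in the paper.

```python
class SharedStringDictionary:
    """Minimal string-to-ID dictionary shared by several consumers
    (e.g. a course-content indexer and a forum miner). Illustrative only."""

    def __init__(self):
        self._ids = {}      # string -> integer ID
        self._strings = []  # ID -> string (IDs are list positions)

    def add(self, s):
        """Return the ID for s, inserting it on first sight."""
        if s not in self._ids:
            self._ids[s] = len(self._strings)
            self._strings.append(s)
        return self._ids[s]

    def lookup(self, s):
        return self._ids.get(s)

    def extract(self, i):
        return self._strings[i]

# Two components share one dictionary instead of keeping their own:
d = SharedStringDictionary()
course_tokens = [d.add(t) for t in "neural networks introduction".split()]
forum_tokens = [d.add(t) for t in "question about neural networks".split()]
# "neural" and "networks" map to the same IDs in both token streams.
```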

ACS Style

Antonio Ferrández; Jesús Peral; Higinio Mora; David Gil. Architecture for Efficient String Dictionaries in E-Learning. Proceedings 2018, 2, 1251.

AMA Style

Antonio Ferrández, Jesús Peral, Higinio Mora, David Gil. Architecture for Efficient String Dictionaries in E-Learning. Proceedings. 2018; 2(19):1251.

Chicago/Turabian Style

Antonio Ferrández; Jesús Peral; Higinio Mora; David Gil. 2018. "Architecture for Efficient String Dictionaries in E-Learning." Proceedings 2, no. 19: 1251.

Journal article
Published: 01 November 2017 in Computer Standards & Interfaces

Currently dashboards are the preferred tool across organizations to monitor business performance. Dashboards are often composed of different data visualization techniques, amongst which are Key Performance Indicators (KPIs), which play a crucial role in quickly providing accurate information by comparing current performance against a target required to fulfil business objectives. However, KPIs are not always well known and sometimes it is difficult to find an appropriate KPI to associate with each business objective. In addition, Data Mining techniques are often used when forecasting trends and visualizing data correlations. In this paper we present a new approach to combining these two aspects in order to drive Data Mining techniques to obtain specific KPIs for business objectives in a semi-automated way. The main benefit of our approach is that organizations do not need to rely on existing KPI lists or test KPIs over a cycle as they can analyze their behavior using existing data. In order to show the applicability of our approach, we apply our proposal to the fields of Massive Open Online Courses (MOOCs) and Open Data extracted from the University of Alicante in order to identify the KPIs.

Highlights:
• Extraction of Key Performance Indicators (KPIs).
• Application of Data Mining techniques to discover relevant KPIs.
• A new methodology for extracting the relevant KPIs based on Data Mining.
• Case study with MOOCs and Open Data from the University of Alicante.
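One simple way to read the idea of driving Data Mining toward KPI discovery: correlate each candidate metric with a business objective over historical data and rank the candidates. The sketch below uses plain Pearson correlation and invented MOOC-style data; it is an illustrative stand-in, not the methodology from the paper.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def rank_kpi_candidates(metrics, objective):
    """Rank candidate metrics by |correlation| with a business objective."""
    scored = [(name, pearson(vals, objective)) for name, vals in metrics.items()]
    return sorted(scored, key=lambda t: abs(t[1]), reverse=True)

# Hypothetical MOOC data: weekly metrics vs. course-completion rate.
metrics = {
    "forum_posts":   [120, 90, 60, 30, 20],
    "video_views":   [500, 510, 470, 505, 460],
    "quiz_attempts": [300, 240, 180, 90, 60],
}
completion_rate = [0.9, 0.7, 0.5, 0.3, 0.2]
ranking = rank_kpi_candidates(metrics, completion_rate)
# Metrics that track the objective closely rank as stronger KPI candidates.
```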

ACS Style

Jesús Peral; Alejandro Maté; Manuel Marco. Application of Data Mining techniques to identify relevant Key Performance Indicators. Computer Standards & Interfaces 2017, 54, 76-85.

AMA Style

Jesús Peral, Alejandro Maté, Manuel Marco. Application of Data Mining techniques to identify relevant Key Performance Indicators. Computer Standards & Interfaces. 2017; 54: 76-85.

Chicago/Turabian Style

Jesús Peral; Alejandro Maté; Manuel Marco. 2017. "Application of Data Mining techniques to identify relevant Key Performance Indicators." Computer Standards & Interfaces 54: 76-85.

Journal article
Published: 01 October 2016 in Future Generation Computer Systems

Highlights:
• Energy consumption predictions based on data mining and supported by external data.
• Heterogeneous data are combined through Data Warehousing (DW) and Information Extraction (IE).
• A multidimensional model integrates information extracted from social networks and IE.
• The scenario: consumption prediction is modified with external unstructured data.

Abstract: Irresponsible and negligent use of natural resources in the last five decades has made it an important priority to adopt more intelligent ways of managing existing resources, especially those related to energy. The main objective of this paper is to explore the opportunities of integrating internal data already stored in Data Warehouses with external Big Data to improve energy consumption predictions. This paper presents a study in which we propose an architecture that makes use of already stored energy data and external unstructured information to improve knowledge acquisition and allow managers to make better decisions. This external knowledge is represented by a torrent of information that, in many cases, is hidden across heterogeneous and unstructured data sources, which are recovered by an Information Extraction system. Alternatively, it is present in social networks in the form of user opinions. Furthermore, our approach applies data mining techniques to exploit the already integrated data. Our approach has been applied to a real case study and shows promising results. The experiments carried out in this work are twofold: (i) using and comparing diverse Artificial Intelligence methods, and (ii) validating our approach through data source integration.
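The integration of internal Data Warehouse records with external signals can be pictured as a join keyed on a shared dimension such as the day. The sketch below is a hypothetical illustration (all field names are assumed), not the multidimensional model from the paper.

```python
def integrate_features(dw_rows, external_signals):
    """Merge internal Data Warehouse consumption records with external
    per-day signals (e.g. temperature, social-network sentiment).
    Days with no external data get None / neutral defaults."""
    merged = []
    for row in dw_rows:
        ext = external_signals.get(row["day"], {})
        merged.append({
            **row,
            "temp": ext.get("temp"),          # None if unavailable
            "sentiment": ext.get("sentiment", 0.0),  # neutral default
        })
    return merged

# Internal DW records (structured) plus external signals (unstructured origin):
dw_rows = [
    {"day": "2016-01-01", "kwh": 310.5},
    {"day": "2016-01-02", "kwh": 295.2},
]
external = {"2016-01-01": {"temp": 4.5, "sentiment": -0.2}}
table = integrate_features(dw_rows, external)
# The merged table can then feed the data mining / prediction step.
```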

ACS Style

Alejandro Maté; Jesús Peral; Antonio Ferrández; David Gil; Juan Trujillo. A hybrid integrated architecture for energy consumption prediction. Future Generation Computer Systems 2016, 63, 131-147.

AMA Style

Alejandro Maté, Jesús Peral, Antonio Ferrández, David Gil, Juan Trujillo. A hybrid integrated architecture for energy consumption prediction. Future Generation Computer Systems. 2016; 63: 131-147.

Chicago/Turabian Style

Alejandro Maté; Jesús Peral; Antonio Ferrández; David Gil; Juan Trujillo. 2016. "A hybrid integrated architecture for energy consumption prediction." Future Generation Computer Systems 63: 131-147.

Journal article
Published: 23 September 2016 in Computer Standards & Interfaces

Currently dashboards are the preferred tool across organizations to monitor business performance. Dashboards are often composed of different data visualization techniques, amongst which are Key Performance Indicators (KPIs) which play a crucial role in quickly providing accurate information by comparing current performance against a target required to fulfill business objectives. However, KPIs are not always well known and sometimes it is difficult to find an appropriate KPI to associate with each business objective. In addition, Data Mining techniques are often used when forecasting trends and visualizing data correlations. In this paper we present a new approach to combining these two aspects in order to drive Data Mining techniques to obtain specific KPIs for business objectives in a semi-automated way. The main benefit of our approach is that organizations do not need to rely on existing KPI lists or test KPIs over a cycle as they can analyze their behavior using existing data. In order to show the applicability of our approach, we apply our proposal to the fields of Massive Open Online Courses (MOOCs) and Open Data extracted from the University of Alicante in order to identify the KPIs.

ACS Style

Jesús Peral; Alejandro Maté; Manuel Marco. Application of Data Mining techniques to identify relevant Key Performance Indicators. Computer Standards & Interfaces 2016, 50, 55-64.

AMA Style

Jesús Peral, Alejandro Maté, Manuel Marco. Application of Data Mining techniques to identify relevant Key Performance Indicators. Computer Standards & Interfaces. 2016; 50: 55-64.

Chicago/Turabian Style

Jesús Peral; Alejandro Maté; Manuel Marco. 2016. "Application of Data Mining techniques to identify relevant Key Performance Indicators." Computer Standards & Interfaces 50: 55-64.

Review
Published: 11 July 2016 in Sensors

The Internet of Things (IoT) has made it possible for devices around the world to acquire information and store it for use at a later stage. However, this potential opportunity is often not exploited because of the excessively long interval between data collection and the capability to process and analyse it. In this paper, we review current IoT technologies, approaches, and models in order to discover what challenges need to be met to make more sense of data. The main goal of this paper is to review the surveys related to IoT in order to provide well-integrated and context-aware intelligent services for IoT. Moreover, we present a state of the art of IoT from the context-aware perspective that allows the integration of IoT and social networks in the emerging Social Internet of Things (SIoT) paradigm.

ACS Style

David Gil; Antonio Ferrández; Higinio Mora-Mora; Jesús Peral. Internet of Things: A Review of Surveys Based on Context Aware Intelligent Services. Sensors 2016, 16, 1069.

AMA Style

David Gil, Antonio Ferrández, Higinio Mora-Mora, Jesús Peral. Internet of Things: A Review of Surveys Based on Context Aware Intelligent Services. Sensors. 2016; 16(7):1069.

Chicago/Turabian Style

David Gil; Antonio Ferrández; Higinio Mora-Mora; Jesús Peral. 2016. "Internet of Things: A Review of Surveys Based on Context Aware Intelligent Services." Sensors 16, no. 7: 1069.

Conference paper
Published: 15 December 2015 in Transactions on Petri Nets and Other Models of Concurrency XV

Currently, dashboards are the preferred tool across organizations for monitoring business performance. Dashboards are often composed of different data visualization techniques, amongst which Key Performance Indicators (KPIs) play a crucial role in providing quick and precise information by comparing current performance against a target required to fulfill business objectives. However, KPIs are not always well known, and it is sometimes difficult to find an adequate KPI to associate with each business objective. On the other hand, data mining techniques are often used for forecasting trends and visualizing data correlations. In this paper, we present a novel approach that combines these two aspects in order to drive data mining techniques into obtaining specific KPIs for business objectives in a semi-automatic way. The main benefit of our approach is that organizations do not need to rely on existing KPI lists, such as APQC, nor test KPIs over a cycle, as they can analyze their behaviour using existing data. In order to show the applicability of our approach, we apply our proposal to the novel field of MOOC courses in order to identify additional KPIs to those currently being used.

ACS Style

Roberto Tardío; Jesús Peral. Obtaining Key Performance Indicators by Using Data Mining Techniques. Transactions on Petri Nets and Other Models of Concurrency XV 2015, 144-153.

AMA Style

Roberto Tardío, Jesús Peral. Obtaining Key Performance Indicators by Using Data Mining Techniques. Transactions on Petri Nets and Other Models of Concurrency XV. 2015: 144-153.

Chicago/Turabian Style

Roberto Tardío; Jesús Peral. 2015. "Obtaining Key Performance Indicators by Using Data Mining Techniques." Transactions on Petri Nets and Other Models of Concurrency XV: 144-153.