Jose E. Lozano-Rizk received his Bachelor's degree in Computer Engineering from the Faculty of Engineering, Architecture, and Design (FIAD) of the Autonomous University of Baja California (UABC, Ensenada, Mexico) in 2003. In 2007 he received a Master's degree in Computer Engineering and, in 2019, a Doctor of Science degree, both from UABC. He currently works in the Telematics Division of the Computer Department at the CICESE Research Center and is a member of the academic staff of the Faculty of Sciences at UABC Ensenada. His research interests include high-performance computing (HPC), big data analytics, distributed systems, the Internet of Things (IoT), and software-defined networks.
Scientific applications need to process, analyze, and transfer large volumes of data from distributed data sources in the shortest possible time. Improving their performance requires providing them with specific quality of service (QoS) parameters. Software-defined networking (SDN), meanwhile, is a new paradigm for communication networks that simplifies management of the communication infrastructure and consequently makes it possible to dynamically provide QoS parameters to the applications running on such networks. With both paradigms in mind, we conducted this research to answer the following questions: do scientific applications running across SDN-enabled distributed data centers improve their performance, and do they consider network QoS parameters for job scheduling? Our methodology was to search specialized databases for articles containing the keyword SDN together with the scientific-application keywords HPC and big data. We then analyzed the articles where these keywords intersect with parameters related to QoS in communication networks, and we reviewed QoS proposals in SDN to identify the advances in this research area. The results of this paper are: (i) incorporating QoS into scientific applications running on an SDN remains an open issue; (ii) we identify the challenges of joining these two paradigms; and (iii) we present a strategy to provide QoS to scientific applications executed across SDN-enabled distributed data centers.
Jose Eleno Lozano-Rizk; Raul Rivera-Rodriguez; Juan Iván Nieto-Hipólito; Salvador Villarreal-Reyes; Alejandro Galaviz-Mosqueda; Mabel Vazquez-Briseno. Quality of Service in Software Defined Networks for Scientific Applications: Opportunities and Challenges. Proceedings of the Institute for System Programming of the RAS 2021, 33(1), 111-122.
When Internet of Things (IoT) big data analytics (BDA) applications need to transfer data streams among software-defined network (SDN)-based distributed data centers, data flow forwarding in the communication network is typically done by an SDN controller using a traditional shortest-path algorithm, or at best considering only the applications' bandwidth requirements. For BDA, this scheme can hurt performance and lengthen job completion time because additional metrics along the data transfer path, such as end-to-end delay, jitter, and packet loss rate, are not considered. These metrics are quality of service (QoS) parameters of the communication network. This research proposes a solution called QoSComm, an SDN strategy that allocates QoS-based data flows for BDA running across distributed data centers to minimize their job completion time. QoSComm operates in two phases: (i) based on current communication network conditions, it calculates the feasible paths for each data center using a multi-objective optimization method; (ii) it distributes the resulting paths among the data centers by configuring their OpenFlow switches (OFS) dynamically. Simulation results show that QoSComm can improve BDA job completion time by an average of 18%.
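As a minimal sketch of the kind of QoS-aware path selection described above (not QoSComm's actual multi-objective method), the snippet below scores candidate inter-data-center paths by a weighted sum of delay, jitter, and packet loss rate; the link metrics, weights, and node names are all hypothetical:

```python
# Hypothetical illustration of QoS-aware path selection among data centers.
# Link metrics and scoring weights are invented; QoSComm itself uses a
# multi-objective optimization method, not this simple weighted sum.

# Each directed link (u, v) -> {delay in ms, jitter in ms, loss rate}.
LINKS = {
    ("dc1", "r1"): {"delay": 5, "jitter": 1, "loss": 0.001},
    ("r1", "dc2"): {"delay": 7, "jitter": 2, "loss": 0.002},
    ("dc1", "r2"): {"delay": 3, "jitter": 4, "loss": 0.010},
    ("r2", "dc2"): {"delay": 4, "jitter": 5, "loss": 0.012},
}

CANDIDATE_PATHS = [
    ["dc1", "r1", "dc2"],
    ["dc1", "r2", "dc2"],
]

def path_cost(path, w_delay=1.0, w_jitter=0.5, w_loss=1000.0):
    """Aggregate the QoS metrics along a path into a single scalar cost."""
    cost = 0.0
    for u, v in zip(path, path[1:]):
        m = LINKS[(u, v)]
        cost += w_delay * m["delay"] + w_jitter * m["jitter"] + w_loss * m["loss"]
    return cost

def best_path(paths):
    """Pick the candidate path with the lowest aggregate QoS cost."""
    return min(paths, key=path_cost)

if __name__ == "__main__":
    # The low-delay, low-loss route through r1 wins despite more hops' worth
    # of raw delay than the r2 route's first leg.
    print(best_path(CANDIDATE_PATHS))
```

In a real deployment the chosen path would then be installed as flow rules on the OpenFlow switches along the route, which is the role of QoSComm's second phase.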
Jose E. Lozano-Rizk; Juan I. Nieto-Hipolito; Raul Rivera-Rodriguez; Maria A. Cosio-Leon; Mabel Vazquez-Briseño; Juan C. Chimal-Eguia. QoSComm: A Data Flow Allocation Strategy among SDN-Based Data Centers for IoT Big Data Analytics. Applied Sciences 2020, 10(21), 7586.
In recent years, traditional data centers have hosted high-performance computing infrastructure, such as HPC clusters, to address the specific requirements of different research and scientific projects. Compute, storage, network, and security infrastructure usually comes from heterogeneous manufacturers and has multiple management interfaces. This places a higher demand on data center administrators, who must manually attend to the problems and particular configurations that each HPC application requires, which can degrade application performance and lead to poor resource allocation in the data center. Software-defined data centers (SDDC) have emerged as a solution for automating the management and self-provisioning of computing, storage, network, and security resources dynamically, according to software-defined policies for each of the applications running in the SDDC. With these paradigms in mind, this work aims to answer whether HPC applications can benefit in their performance from the advantages that the SDDC already offers to business and enterprise applications. The results of this article are: (i) we identify the main SDDC components; and (ii) we present an experimental approach to using the SDDC network component for high-performance MPI-based applications.
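The per-application, software-defined policies mentioned above can be sketched as follows; the policy fields, values, and application names are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of per-application policies in an SDDC. An SDDC
# controller would consult declarative policies like these to provision
# compute, storage, and network resources automatically, instead of an
# administrator configuring each application by hand. All names and
# values here are illustrative only.

POLICIES = {
    "mpi_cfd_solver": {"vcpus": 64, "storage_gb": 500, "net_class": "low_latency"},
    "web_portal":     {"vcpus": 4,  "storage_gb": 50,  "net_class": "best_effort"},
}

DEFAULT_POLICY = {"vcpus": 2, "storage_gb": 20, "net_class": "best_effort"}

def provision(app_name):
    """Return the resource allocation a controller would apply for an app."""
    policy = POLICIES.get(app_name, DEFAULT_POLICY)
    return {"app": app_name, **policy}

if __name__ == "__main__":
    print(provision("mpi_cfd_solver"))
```

The point of the sketch is the design choice: resource requirements live in policy, so an MPI application can, for example, request a low-latency network class without manual switch configuration.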
J. E. Lozano-Rizk; Juan Ivan Nieto Hipolito; R. Rivera-Rodriguez; María De Los Ángeles Cosío León; M. Vazquez-Briseno; J. C. Chimal-Eguia; V. Rico-Rodriguez; E. Martinez-Martinez. Software Defined Data Center for High Performance Computing Applications. Communications in Computer and Information Science 2019, 27-41.
Over the past decade, virtual machines emerged to solve many infrastructure problems and to make more practical use of computing resources. The limitation of this type of technology is resource overhead: each virtual machine carries a complete copy of an operating system plus the different libraries needed to run an application. Container technology reduces this load by eliminating the hypervisor and the virtual machine; each application runs with only the most elementary parts of a server plus a shared instance of the host operating system. Containers are already an essential part of the IT industry as a simpler and more efficient way to virtualize microservices, with support for workflow creation in development and operations (DevOps). Unlike virtual machines, this solution generates much less overhead in the host kernel and the application, improving performance. In high-performance computing (HPC) there is a willingness to adopt this solution for scientific computing purposes. The most important and standard technology in the industry is Docker; however, adopting this standard for the requirements of scientific computing in an HPC environment is neither trivial nor direct. In the present study, we review research on the use of containers for HPC, with the objective of familiarizing HPC users and system administrators with this technology and showing how scientific research projects can benefit from it in terms of compute mobility and reproducibility of workflows.
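As a minimal illustration of the compute-mobility idea above, the sketch below composes the command line for launching an MPI job whose ranks run inside a container. The image name, application path, and rank count are hypothetical, and real Singularity/Docker invocations vary by site configuration; this only shows the common pattern of an MPI launcher wrapping a container runtime:

```python
# Hypothetical: compose an MPI-under-container launch command.
# Image name, application path, and flags are illustrative; exact syntax
# differs across container runtimes and cluster setups.

def mpi_container_cmd(ranks, image, app, runtime="singularity"):
    """Build an mpirun command that starts each rank inside a container."""
    if runtime == "singularity":
        container = ["singularity", "exec", image, app]
    elif runtime == "docker":
        container = ["docker", "run", "--rm", image, app]
    else:
        raise ValueError(f"unsupported runtime: {runtime}")
    # The MPI launcher spawns N copies of the container command, one per rank.
    return ["mpirun", "-np", str(ranks)] + container

if __name__ == "__main__":
    print(" ".join(mpi_container_cmd(4, "hpc_app.sif", "/opt/app/solver")))
    # prints: mpirun -np 4 singularity exec hpc_app.sif /opt/app/solver
```

Because the container image bundles the application and its libraries, the same image can move between clusters, which is what makes workflows reproducible across sites.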
F. Medrano-Jaimes; Jose E. Lozano-Rizk; S. Castañeda-Avila; R. Rivera-Rodriguez. Use of Containers for High-Performance Computing. Communications in Computer and Information Science 2018, 24-32.