Iyad Katib
Computer Science Department, King Abdulaziz University, Jeddah, Saudi Arabia

Short Biography

Iyad Katib is a Professor in the Computer Science Department and the current Dean of the Faculty of Computing and Information Technology (FCIT) at King Abdulaziz University (KAU). Iyad received his Ph.D. and M.S. degrees in Computer Science from the University of Missouri-Kansas City in 2011 and 2004, respectively. He received his B.S. degree in Statistics/Computer Science from King Abdulaziz University in 1999. His current research interests are in computer networking and high-performance computing.


Feed

Journal article
Published: 27 July 2021 in Applied Mathematics and Nonlinear Sciences

This article first introduces neural networks and their characteristics. After comparing the structure and function of biological and artificial neurons, it focuses on the structure, classification, activation rules, and learning rules of neural network models. Building on the existing literature, which considers only discrete time lags, this article adds a distributed time-lag term to the neural network system: in practical problems, the rate of change of the current state is influenced by the system's history over an interval, not only by its state at specific past instants, so neural network systems with distributed time lags better reflect real-world problems. In this paper, we use three different inequality scaling methods to study the existence, uniqueness, and global asymptotic stability of a class of neural network systems with mixed delays and uncertain parameters. First, using the principle of homeomorphism, a new upper-bound norm is introduced for the correlation matrix of the neural network, and sufficient conditions for the existence of a unique equilibrium point in several neural network systems are given. Under these conditions, an appropriate Lyapunov-Krasovskii functional is constructed to prove that the equilibrium point of the neural network system is globally robustly stable. Numerical experiments show that the stability conditions we obtain are feasible and less conservative than existing ones. Finally, some applications and problems of neural network models in psychology are briefly discussed.
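A generic form of such a system can be sketched as follows (an illustrative Hopfield-type model with standard notation; the paper's exact formulation may differ):

```latex
% Neural network with mixed delays (illustrative form):
%   -c_i x_i(t)                   : self-decay of neuron i
%   a_{ij} f_j(x_j(t))            : instantaneous interconnections
%   b_{ij} f_j(x_j(t - \tau_{ij})): discrete time-lag terms
%   d_{ij} \int ...               : the distributed time-lag term added here
\dot{x}_i(t) = -c_i x_i(t)
             + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
             + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_{ij})\bigr)
             + \sum_{j=1}^{n} d_{ij} \int_{t-\sigma}^{t} f_j\bigl(x_j(s)\bigr)\,ds
             + u_i, \qquad i = 1,\dots,n.
```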

ACS Style

Hong Zhang; Iyad Katib; Hafnida Hasan. Research on the Psychological Distribution Delay of Artificial Neural Network Based on the Analysis of Differential Equation by Inequality Expansion and Contraction Method. Applied Mathematics and Nonlinear Sciences 2021, 1.

AMA Style

Hong Zhang, Iyad Katib, Hafnida Hasan. Research on the Psychological Distribution Delay of Artificial Neural Network Based on the Analysis of Differential Equation by Inequality Expansion and Contraction Method. Applied Mathematics and Nonlinear Sciences. 2021:1.

Chicago/Turabian Style

Hong Zhang; Iyad Katib; Hafnida Hasan. 2021. "Research on the Psychological Distribution Delay of Artificial Neural Network Based on the Analysis of Differential Equation by Inequality Expansion and Contraction Method." Applied Mathematics and Nonlinear Sciences: 1.

Journal article
Published: 24 April 2021 in Sensors

Digital societies can be characterized by their increasing desire to express themselves and interact with others. This is realized through digital platforms such as social media, which have increasingly become convenient and inexpensive sensors compared to physical sensors in many sectors of smart societies. One such major sector is road transportation, the backbone of modern economies, which costs 1.25 million deaths and 50 million human injuries globally every year. The state of the art in big data-enabled social media analytics for transportation-related studies is limited. This paper brings a range of technologies together to detect road traffic-related events using big data and distributed machine learning. The most specific contribution of this research is an automatic labelling method for machine learning-based traffic-related event detection from Twitter data in the Arabic language. The proposed method has been implemented in a software tool called Iktishaf+ (an Arabic word meaning discovery) that detects traffic events automatically from Arabic-language tweets using distributed machine learning over Apache Spark. The tool is built from nine components and a range of technologies including Apache Spark, Parquet, and MongoDB. Iktishaf+ uses a light stemmer for the Arabic language that we developed, along with a location extractor of our own that extracts and visualizes spatio-temporal information about the detected events. The data used in this work comprise 33.5 million tweets collected from Saudi Arabia using the Twitter API. Using support vector machine, naïve Bayes, and logistic regression-based classifiers, we detect and validate several real events in Saudi Arabia without prior knowledge, including a fire in Jeddah, rains in Makkah, and an accident in Riyadh. The findings show the effectiveness of Twitter media in detecting important events with no prior knowledge about them.
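The classification step can be sketched as follows (a minimal scikit-learn sketch on toy English tweets; the actual tool runs its classifiers distributed over Apache Spark on Arabic text after light stemming, and all tweets and labels below are hypothetical):

```python
# Minimal sketch of ML-based traffic-event detection from tweets.
# Toy data; Iktishaf+ itself trains SVM, naive Bayes, and logistic
# regression classifiers over Apache Spark on 33.5M Arabic tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = [
    "huge fire near the market, smoke everywhere",
    "fire brigade heading to the warehouse fire",
    "heavy rains flooding the main road",
    "rains expected to continue all evening",
    "car accident on the highway, two lanes blocked",
    "minor accident at the intersection, expect delays",
]
labels = ["fire", "fire", "rain", "rain", "accident", "accident"]

# Bag-of-words features (the real tool applies Arabic light stemming first).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)

# Train a naive Bayes classifier on the labelled tweets.
model = MultinomialNB()
model.fit(X, labels)

# Classify a new, unseen tweet.
new = vectorizer.transform(["accident reported on the ring road"])
prediction = model.predict(new)[0]
print(prediction)  # -> accident
```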

ACS Style

Ebtesam Alomari; Iyad Katib; Aiiad Albeshri; Tan Yigitcanlar; Rashid Mehmood. Iktishaf+: A Big Data Tool with Automatic Labeling for Road Traffic Social Sensing and Event Detection Using Distributed Machine Learning. Sensors 2021, 21, 2993.

AMA Style

Ebtesam Alomari, Iyad Katib, Aiiad Albeshri, Tan Yigitcanlar, Rashid Mehmood. Iktishaf+: A Big Data Tool with Automatic Labeling for Road Traffic Social Sensing and Event Detection Using Distributed Machine Learning. Sensors. 2021; 21 (9):2993.

Chicago/Turabian Style

Ebtesam Alomari; Iyad Katib; Aiiad Albeshri; Tan Yigitcanlar; Rashid Mehmood. 2021. "Iktishaf+: A Big Data Tool with Automatic Labeling for Road Traffic Social Sensing and Event Detection Using Distributed Machine Learning." Sensors 21, no. 9: 2993.

Journal article
Published: 30 March 2021 in Sustainability

SARS-CoV-2, a tiny virus, is severely affecting the social, economic, and environmental sustainability of our planet, causing infections and deaths (2,674,151 deaths as of 17 March 2021), relationship breakdowns, depression, economic downturn, riots, and much more. The lessons learned from good practices by various countries include containing the virus rapidly; enforcing containment measures; growing COVID-19 testing capability; discovering cures; providing stimulus packages to the affected; easing monetary policies; developing new pandemic-related industries; providing support plans for controlling unemployment; and overcoming inequalities. Coordination and multi-term planning have been found to be key among the successful national and global endeavors to fight the pandemic. Current research and practice have mainly focused on specific aspects of the COVID-19 response. There is a need to automate the learning process so that we can learn from good and bad practices during pandemics and normal times. To this end, this paper proposes a technology-driven framework, iResponse, for coordinated and autonomous pandemic management, allowing pandemic-related monitoring and policy enforcement, resource planning and provisioning, and data-driven planning and decision-making. The framework consists of five modules: Monitoring and Break-the-Chain, Cure Development and Treatment, Resource Planner, Data Analytics and Decision Making, and Data Storage and Management. All modules collaborate dynamically to make coordinated and informed decisions. We provide the technical system architecture of a system based on the proposed iResponse framework, along with the design details of each of its five components. The challenges related to the design of the individual modules and of the whole system are discussed. We provide six case studies in the paper to elaborate on the different functionalities of the iResponse framework and how the framework can be implemented.
These include a sentiment analysis case study, a case study on the recognition of human activities, and four case studies using deep learning and other data-driven methods to show how to develop sustainability-related optimal strategies for pandemic management using seven real-world datasets. A number of important findings are extracted from these case studies.

ACS Style

Furqan Alam; Ahmed Almaghthawi; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. iResponse: An AI and IoT-Enabled Framework for Autonomous COVID-19 Pandemic Management. Sustainability 2021, 13, 3797.

AMA Style

Furqan Alam, Ahmed Almaghthawi, Iyad Katib, Aiiad Albeshri, Rashid Mehmood. iResponse: An AI and IoT-Enabled Framework for Autonomous COVID-19 Pandemic Management. Sustainability. 2021; 13 (7):3797.

Chicago/Turabian Style

Furqan Alam; Ahmed Almaghthawi; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. 2021. "iResponse: An AI and IoT-Enabled Framework for Autonomous COVID-19 Pandemic Management." Sustainability 13, no. 7: 3797.

Journal article
Published: 28 January 2021 in Computer Communications

The Internet of Drones (IoD) is attracting growing interest from researchers due to its applicability in a wide range of applications, including transportation, weather monitoring, emergency monitoring for floods and earthquakes, healthcare, and road hazards. Real-time data sharing is mandatory to keep information about emergency situations up to date. However, regular message transmission by various drones may not only overwhelm a central server but also cause congestion on the network, so it is essential to reduce messaging cost and congestion. This paper presents a fog-assisted congestion avoidance approach for Smooth Message Dissemination (SMD). We present a message-forwarding algorithm for congestion avoidance that selects the appropriate next-hop node using a layered model, in which drones are organized into layers. In the first phase, the algorithm looks for an appropriate drone in a layer near the fog server for message forwarding. In the next step, a drone is identified in nearby layers to forward the emergency message to the next hop and to locate the group head according to priority; this is a drone with a smaller distance to the fog server, which informs its one-hop circle and stops forwarding the message once it has been delivered to the fog server. Finally, the fog server disseminates information to the upper layers in a timely manner so that the necessary actions can be taken in emergency situations. The performance of the proposed approach is validated through extensive simulations using NS-2.35. The results prove the dominance of SMD over its counterparts in terms of messaging overhead, packet delivery ratio (PDR), throughput, energy consumption, and average delay. The proposed SMD improves the PDR by 85% and the message overhead cost by 91% compared to its counterparts.
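The layered next-hop selection can be sketched as follows (a simplified model with hypothetical drones and distances; the actual SMD algorithm also handles priorities, group heads, and one-hop notification beyond this sketch):

```python
# Simplified sketch of layered next-hop selection toward a fog server.
# Drones are organized into layers (layer 1 is closest to the fog server);
# a forwarding drone hands the message to the drone in the next lower
# layer that is nearest to the fog server.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Drone:
    name: str
    layer: int              # 1 = adjacent to the fog server
    dist_to_fog: float      # distance to the fog server

def next_hop(current: Drone, drones: List[Drone]) -> Optional[Drone]:
    """Pick the nearest-to-fog drone in the layer below the current one."""
    candidates = [d for d in drones if d.layer == current.layer - 1]
    if not candidates:
        return None  # no lower layer: deliver directly to the fog server
    return min(candidates, key=lambda d: d.dist_to_fog)

drones = [
    Drone("d1", 1, 10.0), Drone("d2", 1, 14.0),
    Drone("d3", 2, 25.0), Drone("d4", 2, 22.0),
    Drone("d5", 3, 40.0),
]

# Route an emergency message from the outermost drone toward the fog server.
hop1 = next_hop(drones[4], drones)   # layer 3 -> layer 2: picks d4 (22.0)
hop2 = next_hop(hop1, drones)        # layer 2 -> layer 1: picks d1 (10.0)
print(hop1.name, hop2.name)
```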

ACS Style

Shumayla Yaqoob; Ata Ullah; Muhammad Awais; Iyad Katib; Aiiad Albeshri; Rashid Mehmood; Mohsin Raza; Saif Ul Islam; Joel J.P.C. Rodrigues. Novel congestion avoidance scheme for Internet of Drones. Computer Communications 2021, 169, 202-210.

AMA Style

Shumayla Yaqoob, Ata Ullah, Muhammad Awais, Iyad Katib, Aiiad Albeshri, Rashid Mehmood, Mohsin Raza, Saif Ul Islam, Joel J.P.C. Rodrigues. Novel congestion avoidance scheme for Internet of Drones. Computer Communications. 2021; 169:202-210.

Chicago/Turabian Style

Shumayla Yaqoob; Ata Ullah; Muhammad Awais; Iyad Katib; Aiiad Albeshri; Rashid Mehmood; Mohsin Raza; Saif Ul Islam; Joel J.P.C. Rodrigues. 2021. "Novel congestion avoidance scheme for Internet of Drones." Computer Communications 169: 202-210.

Journal article
Published: 01 January 2021 in International Journal of Environmental Research and Public Health

Today’s societies are connected to a level that has never been seen before. The COVID-19 pandemic has exposed the vulnerabilities of such an unprecedentedly connected world. As of 19 November 2020, over 56 million people have been infected, with nearly 1.35 million deaths, and the numbers are growing. State-of-the-art social media analytics for COVID-19-related studies, aimed at understanding the various phenomena happening in our environment, are limited, and many more studies are required. This paper proposes a software tool comprising a collection of unsupervised Latent Dirichlet Allocation (LDA) machine learning and other methods for the analysis of Twitter data in Arabic with the aim to detect government pandemic measures and public concerns during the COVID-19 pandemic. The tool is described in detail, including its architecture, five software components, and algorithms. Using the tool, we collect a dataset comprising 14 million tweets from the Kingdom of Saudi Arabia (KSA) for the period 1 February 2020 to 1 June 2020. We detect 15 government pandemic measures and public concerns and six macro-concerns (economic sustainability, social sustainability, etc.), and formulate their information-structural, temporal, and spatio-temporal relationships. For example, we are able to detect the timewise progression of events from the public discussions on COVID-19 cases in mid-March to the first curfew on 22 March, financial loan incentives on 22 March, the increased quarantine discussions during March-April, the discussions on the reduced mobility levels from 24 March onwards, the blood donation shortfall late March onwards, the government’s 9 billion SAR (Saudi Riyal) salary incentives on 3 April, lifting the ban on five daily prayers in mosques on 26 May, and finally the return to normal government measures on 29 May 2020.
These findings show the effectiveness of the Twitter media in detecting important events, government measures, public concerns, and other information in both time and space with no earlier knowledge about them.

ACS Style

Ebtesam AlOmari; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. COVID-19: Detecting Government Pandemic Measures and Public Concerns from Twitter Arabic Data Using Distributed Machine Learning. International Journal of Environmental Research and Public Health 2021, 18, 282.

AMA Style

Ebtesam AlOmari, Iyad Katib, Aiiad Albeshri, Rashid Mehmood. COVID-19: Detecting Government Pandemic Measures and Public Concerns from Twitter Arabic Data Using Distributed Machine Learning. International Journal of Environmental Research and Public Health. 2021; 18 (1):282.

Chicago/Turabian Style

Ebtesam AlOmari; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. 2021. "COVID-19: Detecting Government Pandemic Measures and Public Concerns from Twitter Arabic Data Using Distributed Machine Learning." International Journal of Environmental Research and Public Health 18, no. 1: 282.

Article
Published: 30 November 2020 in The Journal of Supercomputing

Sparse linear algebra is central to many areas of engineering, science, and business. The community has done considerable work on proposing new methods for sparse matrix-vector multiplication (SpMV) computations and iterative sparse solvers on graphics processing units (GPUs). Due to vast variations in matrix features, no single method performs well across all sparse matrices. A few tools for the automatic prediction of best-performing SpMV kernels have emerged recently, and many more efforts are needed to fully utilize their potential. The utilization of a GPU by the existing SpMV kernels is far from its full capacity. Moreover, the development and performance analysis of SpMV techniques on GPUs have not been studied in sufficient depth. This paper proposes DIESEL, a deep learning-based tool that predicts and executes the best-performing SpMV kernel for a given matrix using a feature set carefully devised by us through rigorous empirical and mathematical instruments. The dataset comprises 1056 matrices from 26 different real-life application domains including computational fluid dynamics, materials, electromagnetics, economics, and more. We propose a range of new metrics and methods for performance analysis, visualization, and comparison of SpMV tools. DIESEL provides better performance, with an accuracy of 88.2%, workload accuracy of 91.96%, and average relative loss of 4.4%, compared to 85.9%, 85.31%, and 7.65% for the next best performing artificial intelligence (AI)-based SpMV tool. The extensive results and analyses presented in this paper provide several key insights into the performance of the SpMV tools and how these relate to the matrix datasets and the performance metrics, allowing the community to further improve and compare basic and AI-based SpMV tools in the future.
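The kind of sparsity features such a predictor consumes can be sketched as follows (feature names follow common SpMV-literature usage; DIESEL's carefully devised feature set is richer than this toy example):

```python
# Sketch of sparsity-feature extraction for an SpMV kernel predictor.
# nnz, anpr (average nonzeros per row), npr variance, and max npr are
# typical features fed to such a model.
import numpy as np

A = np.array([
    [4.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
    [2.0, 0.0, 5.0, 1.0],
    [0.0, 0.0, 0.0, 7.0],
])

npr = np.count_nonzero(A, axis=1)      # nonzeros per row: [2, 1, 3, 1]
features = {
    "nnz": int(npr.sum()),             # total nonzeros
    "anpr": float(npr.mean()),         # average nonzeros per row
    "nprvariance": float(npr.var()),   # variance of nonzeros per row
    "maxnpr": int(npr.max()),          # maximum nonzeros per row
}
print(features)
```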

ACS Style

Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. DIESEL: A novel deep learning-based tool for SpMV computations and solving sparse linear equation systems. The Journal of Supercomputing 2020, 77, 6313-6355.

AMA Style

Thaha Mohammed, Aiiad Albeshri, Iyad Katib, Rashid Mehmood. DIESEL: A novel deep learning-based tool for SpMV computations and solving sparse linear equation systems. The Journal of Supercomputing. 2020; 77 (6):6313-6355.

Chicago/Turabian Style

Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. 2020. "DIESEL: A novel deep learning-based tool for SpMV computations and solving sparse linear equation systems." The Journal of Supercomputing 77, no. 6: 6313-6355.

Journal article
Published: 13 October 2020 in Sensors

Artificial intelligence (AI) has taken us by storm, helping us to make decisions in everything we do, even in finding our “true love” and the “significant other”. While 5G promises us high-speed mobile internet, 6G pledges to support ubiquitous AI services through next-generation softwarization, heterogeneity, and configurability of networks. The work on 6G is in its infancy and requires the community to conceptualize and develop its design, implementation, deployment, and use cases. Towards this end, this paper proposes a framework for Distributed AI as a Service (DAIaaS) provisioning for Internet of Everything (IoE) and 6G environments. The AI service is “distributed” because the actual training and inference computations are divided into smaller, concurrent, computations suited to the level and capacity of resources available with cloud, fog, and edge layers. Multiple DAIaaS provisioning configurations for distributed training and inference are proposed to investigate the design choices and performance bottlenecks of DAIaaS. Specifically, we have developed three case studies (e.g., smart airport) with eight scenarios (e.g., federated learning) comprising nine applications and AI delivery models (smart surveillance, etc.) and 50 distinct sensor and software modules (e.g., object tracker). The evaluation of the case studies and the DAIaaS framework is reported in terms of end-to-end delay, network usage, energy consumption, and financial savings with recommendations to achieve higher performance. DAIaaS will facilitate standardization of distributed AI provisioning, allow developers to focus on the domain-specific details without worrying about distributed training and inference, and help systemize the mass-production of technologies for smarter environments.
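One of the scenarios named above, federated learning, distributes training in exactly this spirit: edge nodes train locally and only model updates are aggregated at the cloud or fog layer. A minimal sketch of the aggregation step (federated averaging; node names, weights, and sample counts are hypothetical):

```python
# Minimal sketch of federated averaging: the cloud/fog layer aggregates
# edge-trained model weights, weighted by each node's local sample count,
# so raw data never leaves the edge devices.
import numpy as np

def federated_average(updates):
    """updates: list of (weights, n_samples) pairs from edge nodes."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Hypothetical local model weights from three edge nodes.
edge_updates = [
    (np.array([1.0, 2.0]), 100),   # node A, 100 local samples
    (np.array([3.0, 4.0]), 100),   # node B, 100 local samples
    (np.array([5.0, 6.0]), 200),   # node C carries twice the data
]

global_weights = federated_average(edge_updates)
print(global_weights)  # -> [3.5 4.5]
```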

ACS Style

Nourah Janbi; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments. Sensors 2020, 20, 5796.

AMA Style

Nourah Janbi, Iyad Katib, Aiiad Albeshri, Rashid Mehmood. Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments. Sensors. 2020; 20 (20):5796.

Chicago/Turabian Style

Nourah Janbi; Iyad Katib; Aiiad Albeshri; Rashid Mehmood. 2020. "Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments." Sensors 20, no. 20: 5796.

Journal article
Published: 13 October 2020 in Electronics

Graphics processing units (GPUs) have delivered remarkable performance for a variety of high performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector (SpMV) computation, which is central to many scientific, engineering, and other applications, including machine learning. No single SpMV storage or computation scheme provides consistent and sufficiently high performance for all matrices due to their varying sparsity patterns. An extensive literature review reveals that the performance of SpMV techniques on GPUs has not been studied in sufficient detail. In this paper, we provide a detailed performance analysis of SpMV on GPUs using four notable sparse matrix storage schemes (compressed sparse row (CSR), ELLPACK (ELL), hybrid ELL/COO (HYB), and compressed sparse row 5 (CSR5)), five performance metrics (execution time, giga floating point operations per second (GFLOPS), achieved occupancy, instructions per warp, and warp execution efficiency), five matrix sparsity features (nnz, anpr, nprvariance, maxnpr, and distavg), and 17 sparse matrices from 10 application domains (chemical simulations, computational fluid dynamics (CFD), electromagnetics, linear programming, economics, etc.). Subsequently, based on the deeper insights gained through the detailed performance analysis, we propose a technique called the heterogeneous CPU-GPU Hybrid (HCGHYB) scheme. It utilizes both the CPU and GPU in parallel and provides better performance over the HYB format by an average speedup of 1.7x. Heterogeneous computing is an important direction for SpMV and other application areas. Moreover, to the best of our knowledge, this is the first work where the SpMV performance on GPUs has been discussed in such depth. We believe that this work on SpMV performance analysis and the heterogeneous scheme will open up many new directions and improvements for the SpMV computing field in the future.
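For reference, the CSR scheme analyzed above stores a matrix as three arrays, and its SpMV kernel is a simple row loop (a sequential sketch; the GPU kernels studied parallelize this loop across threads and warps):

```python
# CSR (compressed sparse row) SpMV sketch: y = A @ x.
#   values  : nonzero entries, stored row by row
#   col_idx : column index of each nonzero
#   row_ptr : index into values/col_idx where each row starts
values  = [4.0, 1.0, 3.0, 2.0, 5.0, 1.0, 7.0]
col_idx = [0,   3,   1,   0,   2,   3,   3  ]
row_ptr = [0, 2, 3, 6, 7]   # 4 rows

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0] * (len(row_ptr) - 1)

for i in range(len(y)):                       # one output entry per row
    for k in range(row_ptr[i], row_ptr[i + 1]):
        y[i] += values[k] * x[col_idx[k]]

print(y)  # -> [8.0, 6.0, 21.0, 28.0]
```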

ACS Style

Sarah Alahmadi; Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs). Electronics 2020, 9, 1675.

AMA Style

Sarah Alahmadi, Thaha Mohammed, Aiiad Albeshri, Iyad Katib, Rashid Mehmood. Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs). Electronics. 2020; 9 (10):1675.

Chicago/Turabian Style

Sarah Alahmadi; Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. 2020. "Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs)." Electronics 9, no. 10: 1675.

Journal article
Published: 13 October 2020 in Applied Sciences

5G networks and the Internet of Things (IoT) offer a powerful platform for smart environments with their ubiquitous sensing, high speeds, and other benefits. The data, analytics, and other computations need to be moved and placed optimally and dynamically in these environments, such that energy-efficiency and QoS demands are best satisfied. A particular challenge in this context is to preserve privacy and security while delivering quality of service (QoS) and energy-efficiency. Many works have tried to address these challenges, but without a focus on optimizing all of them, and assuming fixed models of environments and security threats. This paper proposes the UbiPriSEQ framework, which uses Deep Reinforcement Learning (DRL) to adaptively, dynamically, and holistically optimize QoS, energy-efficiency, security, and privacy. UbiPriSEQ is built on a three-layered model and comprises two modules. UbiPriSEQ devises policies and makes decisions related to important parameters including local processing and offloading rates for data and computations, radio channel states, transmit power, task priority, and selection of fog nodes for offloading, data migration, and so forth. UbiPriSEQ is implemented in Python over the TensorFlow platform and is evaluated using a real-life application in terms of SINR, privacy metric, latency, and utility function, showing great promise.

ACS Style

Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. UbiPriSEQ—Deep Reinforcement Learning to Manage Privacy, Security, Energy, and QoS in 5G IoT HetNets. Applied Sciences 2020, 10, 7120.

AMA Style

Thaha Mohammed, Aiiad Albeshri, Iyad Katib, Rashid Mehmood. UbiPriSEQ—Deep Reinforcement Learning to Manage Privacy, Security, Energy, and QoS in 5G IoT HetNets. Applied Sciences. 2020; 10 (20):7120.

Chicago/Turabian Style

Thaha Mohammed; Aiiad Albeshri; Iyad Katib; Rashid Mehmood. 2020. "UbiPriSEQ—Deep Reinforcement Learning to Manage Privacy, Security, Energy, and QoS in 5G IoT HetNets." Applied Sciences 10, no. 20: 7120.

Article
Published: 22 August 2020 in Mobile Networks and Applications

Road transportation is the backbone of modern economies despite costing millions of human deaths and injuries and trillions of dollars annually. Twitter is a powerful information source for transportation, but major challenges in big data management and Twitter analytics need addressing. We propose Iktishaf, a big data tool developed over Apache Spark for traffic-related event detection from Twitter data in Saudi Arabia. It uses three machine learning (ML) algorithms to build multiple classifiers to detect eight event types. The classifiers are validated using widely used criteria and against external sources. The Iktishaf Stemmer improves text preprocessing, event detection, and the feature space. Using 2.5 million tweets, we detect events without prior knowledge, including the KSA national day, a fire in Riyadh, rains in Makkah and Taif, and the inauguration of the Al-Haramain train. We are not aware of any work, apart from ours, that uses big data technologies for the detection of road traffic events from tweets in Arabic. Iktishaf provides hybrid human-ML methods and is a prime example of bringing together AI theory, big data processing, and human cognition applied to a practical problem.

ACS Style

Ebtesam AlOmari; Iyad Katib; Rashid Mehmood. Iktishaf: a Big Data Road-Traffic Event Detection Tool Using Twitter and Spark Machine Learning. Mobile Networks and Applications 2020, 1-16.

AMA Style

Ebtesam AlOmari, Iyad Katib, Rashid Mehmood. Iktishaf: a Big Data Road-Traffic Event Detection Tool Using Twitter and Spark Machine Learning. Mobile Networks and Applications. 2020:1-16.

Chicago/Turabian Style

Ebtesam AlOmari; Iyad Katib; Rashid Mehmood. 2020. "Iktishaf: a Big Data Road-Traffic Event Detection Tool Using Twitter and Spark Machine Learning." Mobile Networks and Applications: 1-16.

Journal article
Published: 19 February 2020 in Applied Sciences

Smartness, which underpins smart cities and societies, is defined by our ability to engage with our environments, analyze them, and make decisions, all in a timely manner. Healthcare is the prime candidate needing the transformative capability of this smartness. Social media could enable a ubiquitous and continuous engagement between healthcare stakeholders, leading to better public health. Current works are limited in their scope, functionality, and scalability. This paper proposes Sehaa, a big data analytics tool for healthcare in the Kingdom of Saudi Arabia (KSA) using Twitter data in Arabic. Sehaa uses Naive Bayes, Logistic Regression, and multiple feature extraction methods to detect various diseases in the KSA. Sehaa found that the top five diseases in Saudi Arabia in terms of the actual afflicted cases are dermal diseases, heart diseases, hypertension, cancer, and diabetes. Riyadh and Jeddah need to do more in creating awareness about the top diseases. Taif is the healthiest city in the KSA in terms of the detected diseases and awareness activities. Sehaa is developed over Apache Spark, allowing true scalability. The dataset used comprises 18.9 million tweets collected from November 2018 to September 2019. The results are evaluated using well-known numerical criteria (Accuracy and F1-Score) and are validated against externally available statistics.

ACS Style

Shoayee Alotaibi; Rashid Mehmood; Iyad Katib; Omer Rana; Aiiad Albeshri. Sehaa: A Big Data Analytics Tool for Healthcare Symptoms and Diseases Detection Using Twitter, Apache Spark, and Machine Learning. Applied Sciences 2020, 10, 1398.

AMA Style

Shoayee Alotaibi, Rashid Mehmood, Iyad Katib, Omer Rana, Aiiad Albeshri. Sehaa: A Big Data Analytics Tool for Healthcare Symptoms and Diseases Detection Using Twitter, Apache Spark, and Machine Learning. Applied Sciences. 2020; 10 (4):1398.

Chicago/Turabian Style

Shoayee Alotaibi; Rashid Mehmood; Iyad Katib; Omer Rana; Aiiad Albeshri. 2020. "Sehaa: A Big Data Analytics Tool for Healthcare Symptoms and Diseases Detection Using Twitter, Apache Spark, and Machine Learning." Applied Sciences 10, no. 4: 1398.

Article
Published: 01 August 2019 in Mobile Networks and Applications

Road transportation is among the global grand challenges affecting human lives, health, society, and the economy, owing to road accidents, traffic congestion, and other transportation deficiencies. Autonomous vehicles (AVs) are set to address major transportation challenges including safety, efficiency, reliability, sustainability, and personalization. The foremost challenge for AVs is to perceive their environments in real-time with the highest possible certainty. Relatedly, connected vehicles (CVs) have been another major driver of innovation in transportation. In this paper, we bring autonomous and connected vehicles together and propose TAAWUN, a novel approach based on the fusion of data from multiple vehicles. The aim herein is to share information between multiple vehicles about their environments, enhance the information available to the vehicles, and make better decisions regarding the perception of their environments. TAAWUN shares, among the vehicles, visual data acquired from cameras installed on individual vehicles, as well as the perceived information about the driving environments. The environment is perceived using deep learning, random forest (RF), and C5.0 classifiers. A key aspect of the TAAWUN approach is that it uses problem-specific feature sets to enhance prediction accuracy in challenging environments such as problematic shadows, extreme sunlight, and mirage. TAAWUN has been evaluated using multiple metrics: accuracy, sensitivity, specificity, and area under the curve (AUC). It performs consistently better than the base schemes. Directions for future work to extend the tool are provided. This is the first work where visual information and decision fusion are used in CAVs to enhance environment perception for autonomous driving.
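The decision-fusion idea can be sketched as a majority vote over per-vehicle classifier outputs (a toy illustration with hypothetical labels; TAAWUN's actual fusion and problem-specific feature selection are more involved):

```python
# Toy sketch of decision fusion: each connected vehicle classifies the same
# road regions (e.g. "road" vs "not_road"), and the fused decision per
# region is a majority vote across the vehicles.
from collections import Counter

def fuse(decisions):
    """Majority vote over one region's labels from several vehicles."""
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical per-region labels from three vehicles' classifiers; the
# second vehicle is confused by a problematic shadow on region 2.
vehicle_a = ["road", "road", "not_road"]
vehicle_b = ["road", "not_road", "not_road"]
vehicle_c = ["road", "road", "not_road"]

fused = [fuse(region) for region in zip(vehicle_a, vehicle_b, vehicle_c)]
print(fused)  # -> ['road', 'road', 'not_road']
```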

ACS Style

Furqan Alam; Rashid Mehmood; Iyad Katib; Saleh M. Altowaijri; Aiiad Albeshri. TAAWUN: a Decision Fusion and Feature Specific Road Detection Approach for Connected Autonomous Vehicles. Mobile Networks and Applications 2019, 1-17.

AMA Style

Furqan Alam, Rashid Mehmood, Iyad Katib, Saleh M. Altowaijri, Aiiad Albeshri. TAAWUN: a Decision Fusion and Feature Specific Road Detection Approach for Connected Autonomous Vehicles. Mobile Networks and Applications. 2019:1-17.

Chicago/Turabian Style

Furqan Alam; Rashid Mehmood; Iyad Katib; Saleh M. Altowaijri; Aiiad Albeshri. 2019. "TAAWUN: a Decision Fusion and Feature Specific Road Detection Approach for Connected Autonomous Vehicles." Mobile Networks and Applications: 1-17.

Article
Published: 31 July 2019 in Mobile Networks and Applications

SpMV is a vital computing operation in many scientific, engineering, economic, and social applications, increasingly being used to develop timely intelligence for the design and management of smart societies. Several factors affect the performance of SpMV computations, such as matrix characteristics, storage formats, and software and hardware platforms. The complexity of computer systems is on the rise with the increasing number of cores per processor, different levels of caches, processors per node, and high-speed interconnects. There is an ever-growing need for new optimization techniques and efficient ways of exploiting parallelism. In this paper, we propose ZAKI, a data-driven, machine-learning approach and tool to predict the optimal number of processes for SpMV computations of an arbitrary sparse matrix on a distributed memory machine. The aim is to allow application scientists to automatically obtain the best configuration, and hence the best performance, for the execution of SpMV computations. We train and test the tool using nearly 2000 real-world matrices obtained from 45 application domains including computational fluid dynamics (CFD), computer vision, and robotics. The tool uses three machine learning methods (decision trees, random forest, and gradient boosting) and is evaluated in depth. A discussion of the applicability of our proposed tool to energy-efficiency optimization of SpMV computations is given. This is the first work in which the sparsity structure of matrices has been exploited to predict the optimal number of processes for a given matrix in distributed memory environments using different base and ensemble machine learning methods.
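As a rough illustration of the idea, and not of ZAKI's actual models, a predictor can map simple sparsity features of a matrix to the best-known process count of the most similar benchmarked matrix. The feature set, training pairs, and 1-nearest-neighbour stand-in below are all invented for the sketch; the paper uses decision trees, random forest, and gradient boosting trained on real benchmark data.

```python
import math

def matrix_features(nnz, rows, npr_variance):
    """Toy feature vector for a sparse matrix (illustrative only)."""
    return (math.log10(nnz), math.log10(rows), npr_variance)

def predict_processes(train, query):
    """1-nearest-neighbour stand-in for ZAKI's ensemble learners:
    return the best-known process count of the most similar matrix."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: dist(t[0], query))[1]

# (features, best process count) pairs from hypothetical benchmark runs.
train = [(matrix_features(1e4, 1e3, 0.5), 4),
         (matrix_features(1e7, 1e6, 3.0), 64)]
q = matrix_features(2e7, 1e6, 2.8)
print(predict_processes(train, q))  # 64
```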

ACS Style

Sardar Usman; Rashid Mehmood; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. ZAKI: A Smart Method and Tool for Automatic Performance Optimization of Parallel SpMV Computations on Distributed Memory Machines. Mobile Networks and Applications 2019, 1-20.

AMA Style

Sardar Usman, Rashid Mehmood, Iyad Katib, Aiiad Albeshri, Saleh M. Altowaijri. ZAKI: A Smart Method and Tool for Automatic Performance Optimization of Parallel SpMV Computations on Distributed Memory Machines. Mobile Networks and Applications. 2019:1-20.

Chicago/Turabian Style

Sardar Usman; Rashid Mehmood; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. 2019. "ZAKI: A Smart Method and Tool for Automatic Performance Optimization of Parallel SpMV Computations on Distributed Memory Machines." Mobile Networks and Applications: 1-20.

Journal article
Published: 14 May 2019 in Sustainability

Rapid transit systems, or metros, are a popular choice for high-capacity public transport in urban areas due to several advantages including safety, dependability, speed, cost, and a lower risk of accidents. Existing studies on metros have not considered appropriate holistic urban transport models or the integrated use of cutting-edge technologies. This paper proposes a comprehensive approach toward large-scale and faster prediction of metro system characteristics by integrating four leading-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). Using the London Metro as a case study, and the real-world Rolling Origin and Destination Survey (RODS) dataset, we predict the number of passengers for six time intervals (a) using various access transport modes to reach the train stations (buses, walking, etc.); (b) using various egress modes to travel from the metro station to their next points of interest (PoIs); (c) traveling between different origin-destination (OD) pairs of stations; and (d) against the distance between the OD stations. The prediction allows better spatiotemporal planning of the whole urban transport system, including the metro subsystem and its various access and egress modes. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for the analysis of metro systems.
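The prediction targets can be pictured with a small sketch: aggregating RODS-style trip records into passenger totals per origin-destination pair and time interval. The station names, interval labels, and counts below are hypothetical; the paper's deep models learn to predict such totals rather than merely tabulate them.

```python
from collections import defaultdict

# Hypothetical RODS-style records: (origin, destination, time interval, passengers).
records = [
    ("Baker Street", "Waterloo", "AM-peak", 120),
    ("Baker Street", "Waterloo", "AM-peak", 80),
    ("Baker Street", "Waterloo", "off-peak", 30),
    ("Oxford Circus", "Bank", "AM-peak", 200),
]

def od_interval_totals(records):
    """Aggregate passenger counts per (origin, destination, interval) —
    the kind of target variable the deep models are trained on."""
    totals = defaultdict(int)
    for origin, dest, interval, n in records:
        totals[(origin, dest, interval)] += n
    return dict(totals)

totals = od_interval_totals(records)
print(totals[("Baker Street", "Waterloo", "AM-peak")])  # 200
```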

ACS Style

Muhammad Aqib; Rashid Mehmood; Ahmed Alzahrani; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. Rapid Transit Systems: Smarter Urban Planning Using Big Data, In-Memory Computing, Deep Learning, and GPUs. Sustainability 2019, 11, 2736.

AMA Style

Muhammad Aqib, Rashid Mehmood, Ahmed Alzahrani, Iyad Katib, Aiiad Albeshri, Saleh M. Altowaijri. Rapid Transit Systems: Smarter Urban Planning Using Big Data, In-Memory Computing, Deep Learning, and GPUs. Sustainability. 2019; 11(10):2736.

Chicago/Turabian Style

Muhammad Aqib; Rashid Mehmood; Ahmed Alzahrani; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. 2019. "Rapid Transit Systems: Smarter Urban Planning Using Big Data, In-Memory Computing, Deep Learning, and GPUs." Sustainability 11, no. 10: 2736.

Journal article
Published: 13 May 2019 in Sensors

Road transportation is the backbone of modern economies, yet it annually costs 1.25 million deaths and trillions of dollars to the global economy, and damages public health and the environment. Deep learning is among the leading-edge methods used for transportation-related predictions; however, the existing works are in their infancy and fall short in multiple respects, including the use of datasets with limited sizes and scopes, and insufficient depth of the deep learning studies. This paper provides a novel and comprehensive approach toward large-scale, faster, and real-time traffic prediction by bringing four complementary cutting-edge technologies together: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). We trained deep networks using over 11 years of data provided by the California Department of Transportation (Caltrans), the largest dataset that has been used in deep learning studies. Several combinations of the input attributes of the data, along with various network configurations of the deep learning models, were investigated for training and prediction purposes. The use of the pre-trained model for real-time prediction was explored. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for smart cities, big data, high performance computing, and their convergence.
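The supervised framing commonly used for such traffic-prediction models can be sketched as a sliding window over a flow time series: each window of past readings becomes an input, and the next reading becomes the target. The window width and flow values below are illustrative, not taken from the Caltrans dataset.

```python
def make_windows(series, width):
    """Turn a flow time series into (input window, next value) training
    pairs — the usual supervised framing for sequence prediction models."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

flows = [100, 110, 130, 125, 140, 160]  # hypothetical 5-minute flow counts
pairs = make_windows(flows, 3)
print(pairs[0])  # ([100, 110, 130], 125)
```

Varying the window width and which attributes enter each window is, in spirit, what the paper explores as "combinations of the input attributes" and network configurations.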

ACS Style

Muhammad Aqib; Rashid Mehmood; Ahmed Alzahrani; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs. Sensors 2019, 19, 2206.

AMA Style

Muhammad Aqib, Rashid Mehmood, Ahmed Alzahrani, Iyad Katib, Aiiad Albeshri, Saleh M. Altowaijri. Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs. Sensors. 2019; 19(9):2206.

Chicago/Turabian Style

Muhammad Aqib; Rashid Mehmood; Ahmed Alzahrani; Iyad Katib; Aiiad Albeshri; Saleh M. Altowaijri. 2019. "Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs." Sensors 19, no. 9: 2206.

Journal article
Published: 06 March 2019 in Applied Sciences

Sparse matrix-vector (SpMV) multiplication is a vital building block for numerous scientific and engineering applications. This paper proposes SURAA (which translates to speed in Arabic), a novel method for SpMV computations on graphics processing units (GPUs). The novelty lies in the way we group matrix rows into different segments and adaptively schedule the segments to different types of kernels. The sparse matrix data structure is created by sorting the rows of the matrix on the basis of the number of nonzero elements per row (npr) and forming segments of equal size (containing rows with approximately equal npr) using the Freedman–Diaconis rule. The segments are assembled into three groups based on the mean npr of the segments. For each group, we use multiple kernels to execute the group segments on different streams; hence, the number of threads used to execute each segment is chosen adaptively. Dynamic Parallelism available in Nvidia GPUs is utilized to execute the group containing segments with the largest mean npr, providing improved load balancing and coalesced memory access, and hence more efficient SpMV computations on GPUs. SURAA therefore minimizes the adverse effects of npr variance by uniformly distributing the load using equal-sized segments. We implement the SURAA method as a tool and compare its performance with the de facto best commercial (cuSPARSE) and open-source (CUSP, MAGMA) tools using widely used benchmarks comprising 26 high-npr-variance matrices from 13 diverse domains. SURAA outperforms the other tools, delivering a 13.99x speedup on average. We believe that our approach provides a fundamental shift in addressing SpMV-related challenges on GPUs, including coalesced memory access, thread divergence, and load balancing, and is set to open new avenues for further improving SpMV performance in the future.
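The Freedman–Diaconis rule mentioned above has a simple closed form: bin width 2·IQR·n^(-1/3) over the data, here the per-row nonzero counts. The sketch below is a generic illustration of the rule, not SURAA's actual implementation, and the npr values are made up.

```python
import statistics

def freedman_diaconis_width(values):
    """Freedman–Diaconis bin width, 2 * IQR * n^(-1/3) — the rule SURAA
    applies to per-row nonzero counts (npr) to pick a segment size."""
    values = sorted(values)
    q1, _, q3 = statistics.quantiles(values, n=4)  # Q1, median, Q3
    iqr = q3 - q1
    return 2 * iqr * len(values) ** (-1 / 3)

npr = [1, 2, 2, 3, 50, 51, 52, 400]  # hypothetical, highly skewed npr values
print(round(freedman_diaconis_width(npr), 2))  # 49.75
```

Because the width depends on the interquartile range rather than the extremes, a few very dense rows (like the 400 above) do not blow up the segment size — which suits the high-npr-variance matrices the paper targets.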

ACS Style

Thaha Muhammed; Rashid Mehmood; Aiiad Albeshri; Iyad Katib. SURAA: A Novel Method and Tool for Loadbalanced and Coalesced SpMV Computations on GPUs. Applied Sciences 2019, 9, 947.

AMA Style

Thaha Muhammed, Rashid Mehmood, Aiiad Albeshri, Iyad Katib. SURAA: A Novel Method and Tool for Loadbalanced and Coalesced SpMV Computations on GPUs. Applied Sciences. 2019; 9(5):947.

Chicago/Turabian Style

Thaha Muhammed; Rashid Mehmood; Aiiad Albeshri; Iyad Katib. 2019. "SURAA: A Novel Method and Tool for Loadbalanced and Coalesced SpMV Computations on GPUs." Applied Sciences 9, no. 5: 947.

Journal article
Published: 08 February 2018 in Sustainability

Viewing a computationally-intensive problem as a self-contained challenge with its own hardware, software, and scheduling strategies is an approach worth investigating. Heterogeneous hardware architectures may be assigned to solve a problem, parallel computing paradigms may play an important role in writing efficient code for it, and scheduling strategies may be examined as part of the solution. Depending on the problem's complexity, finding the best possible solution using an integrated infrastructure of hardware, software, and scheduling strategy can be a complex task. Developing and using ontologies and reasoning techniques plays a significant role in reducing the complexity of identifying the components of such integrated infrastructures. Reasoning and inference over the domain concepts can help find the best possible solution through a combination of hardware, software, and scheduling strategies. In this paper, we present an ontology and show how it can be used to solve computationally-intensive problems from various domains. As a potential use of the idea, we present examples from the bioinformatics domain. Validation using problems from the Elastic Optical Network domain has demonstrated the flexibility of the suggested ontology and its suitability for any other computationally-intensive problem domain.
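The kind of reasoning an ontology enables can be sketched, in its simplest form, as a transitive closure over "is-a" relations: knowing a GPU is an accelerator, and an accelerator is heterogeneous hardware, an inference engine can match a problem to any platform in the concept's ancestry. The concept names below are hypothetical and far simpler than the paper's actual ontology.

```python
def infer_ancestors(is_a, concept):
    """Transitive closure over 'is-a' edges — a toy stand-in for the
    reasoning an ontology enables (not the paper's actual OWL model)."""
    seen = set()
    stack = [concept]
    while stack:
        c = stack.pop()
        for parent in is_a.get(c, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Hypothetical fragment: concepts for matching problems to hardware platforms.
is_a = {
    "GPU": ["Accelerator"],
    "Accelerator": ["HeterogeneousHardware"],
    "SequenceAlignment": ["BioinformaticsProblem"],
}
print(sorted(infer_ancestors(is_a, "GPU")))  # ['Accelerator', 'HeterogeneousHardware']
```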

ACS Style

Hossam Faheem; Birgitta König-Ries; Muhammad Ahtisham Aslam; Naif Radi Aljohani; Iyad Katib. Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures. Sustainability 2018, 10, 441.

AMA Style

Hossam Faheem, Birgitta König-Ries, Muhammad Ahtisham Aslam, Naif Radi Aljohani, Iyad Katib. Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures. Sustainability. 2018; 10(2):441.

Chicago/Turabian Style

Hossam Faheem; Birgitta König-Ries; Muhammad Ahtisham Aslam; Naif Radi Aljohani; Iyad Katib. 2018. "Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures." Sustainability 10, no. 2: 441.

Conference paper
Published: 18 November 2015 in Communications in Computer and Information Science

Speech difficulties can be a sign of speech disorders or speech sound disorders. Causes include hearing loss, neurological disorders, brain injury, and intellectual disabilities, among others. It is therefore very important to include speech therapy in the rehabilitation process for affected patients' phonation. This chapter presents a study of the way sound production develops, aiming to create an application that aids the therapy session. The presented solution is used to improve speech problems through playing games.

ACS Style

Habib M. Fardoun; Iyad A. Katib; Antonio Paules Cipres. Games-Based Therapy to Stimulate Speech in Children. Communications in Computer and Information Science 2015, 68-77.

AMA Style

Habib M. Fardoun, Iyad A. Katib, Antonio Paules Cipres. Games-Based Therapy to Stimulate Speech in Children. Communications in Computer and Information Science. 2015:68-77.

Chicago/Turabian Style

Habib M. Fardoun; Iyad A. Katib; Antonio Paules Cipres. 2015. "Games-Based Therapy to Stimulate Speech in Children." Communications in Computer and Information Science: 68-77.

Journal article
Published: 15 March 2013 in Computer Communications

Multilayer network design has received significant attention in the current literature. However, the explicit modeling of IP/MPLS over OTN over DWDM, in which the OTN layer's technological constraints are specifically considered, has not been investigated before. In this paper, we present an optimization design model for protecting an IP/MPLS over OTN over DWDM three-layer network. While considering the technological constraints of each layer, we provide a protection mechanism at each layer that guarantees the multilayer network's survivability when three links fail simultaneously, one in each layer. We present a heuristic approach to reduce the complexity of the problem, and a study based on varying several network parameters to understand their impact on the protection capacity and the overall network cost. In addition, we present and solve three variations of our original model, each excluding the protection of one layer, to compare the cost performance of all the models. We observe that the DWDM layer protection is generally the most expensive capacity component; the IP/MPLS layer protection becomes more expensive only when the IP/MPLS unit cost is high.
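The per-layer protection ingredient, a working path plus a link-disjoint backup, can be sketched on a toy single-layer topology. The actual model is an optimization over three interdependent layers; this sketch only shows the survivability idea with BFS on a hypothetical four-node graph.

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path avoiding the 'banned' undirected links."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:  # reconstruct the path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None

# Toy topology: a working path, then a backup avoiding the working links,
# survives any single-link failure on the working path.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
work = shortest_path(adj, "A", "D")
banned = {frozenset(e) for e in zip(work, work[1:])}
backup = shortest_path(adj, "A", "D", banned)
print(work, backup)  # ['A', 'B', 'D'] ['A', 'C', 'D']
```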

ACS Style

Iyad Katib; Deep Medhi. Network protection design models, a heuristic, and a study for concurrent single-link per layer failures in three-layer networks. Computer Communications 2013, 36, 678-688.

AMA Style

Iyad Katib, Deep Medhi. Network protection design models, a heuristic, and a study for concurrent single-link per layer failures in three-layer networks. Computer Communications. 2013; 36(6):678-688.

Chicago/Turabian Style

Iyad Katib; Deep Medhi. 2013. "Network protection design models, a heuristic, and a study for concurrent single-link per layer failures in three-layer networks." Computer Communications 36, no. 6: 678-688.

Journal article
Published: 30 April 2012 in IEEE Transactions on Network and Service Management

Multilayer network design has received significant attention in the current literature. Despite this, the explicit modeling of IP/MPLS over OTN over DWDM, in which the OTN layer is specifically considered, has not been addressed before. This architecture has been identified as a promising one that bridges integration and interaction between the IP and optical layers. In this paper, we present an integrated capacity optimization model for the planning of such multilayer networks that considers the OTN layer as a distinct layer with its unique technological sublayer constraints. We develop a heuristic algorithm to solve this model for large networks. Finally, we provide a detailed numerical study that considers various cost parameter values for each layer in the network, and analyze the impact of each layer's cost parameter values on the neighboring layers and the overall network cost.
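A layer-by-layer capacity dimensioning step, a much-simplified stand-in for the paper's integrated model and heuristic, can be sketched as follows: each layer rounds the carried load up to whole capacity modules, and the resulting modular capacity becomes the load on the layer below. The module sizes and unit costs below are hypothetical.

```python
import math

def layered_capacity_cost(demand_gbps, module_sizes, unit_costs):
    """Sequential capacity dimensioning across layers: round the load up
    to whole modules per layer and feed the result to the layer below."""
    total = 0.0
    load = demand_gbps
    for size, cost in zip(module_sizes, unit_costs):
        modules = math.ceil(load / size)
        total += modules * cost
        load = modules * size  # the lower layer carries the full modular capacity
    return total

# Hypothetical layers: IP/MPLS 10G ports, OTN 100G ODUs, DWDM 100G wavelengths.
print(layered_capacity_cost(250, [10, 100, 100], [1.0, 3.0, 5.0]))  # 49.0
```

The sketch shows why per-layer unit costs interact: rounding at one layer inflates the load, and hence the cost, at every layer beneath it, which is the effect the paper's numerical study quantifies.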

ACS Style

Iyad Katib; Deep Medhi. IP/MPLS-over-OTN-over-DWDM Multilayer Networks: An Integrated Three-Layer Capacity Optimization Model, a Heuristic, and a Study. IEEE Transactions on Network and Service Management 2012, 9, 240-253.

AMA Style

Iyad Katib, Deep Medhi. IP/MPLS-over-OTN-over-DWDM Multilayer Networks: An Integrated Three-Layer Capacity Optimization Model, a Heuristic, and a Study. IEEE Transactions on Network and Service Management. 2012; 9(3):240-253.

Chicago/Turabian Style

Iyad Katib; Deep Medhi. 2012. "IP/MPLS-over-OTN-over-DWDM Multilayer Networks: An Integrated Three-Layer Capacity Optimization Model, a Heuristic, and a Study." IEEE Transactions on Network and Service Management 9, no. 3: 240-253.