Cloud computing is a rapidly growing technology that has been adopted in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique, and comparative results obtained on benchmark datasets are presented. The results show the effectiveness of the proposed algorithm over the algorithms to which it was compared in terms of energy consumption, makespan, and load balancing.
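The first phase described above (queueing tasks by dependency count and execution time) can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the task fields and the threshold values are assumptions for demonstration.

```python
# Illustrative sketch: pre-processing workflow tasks into queues using
# thresholds on dependency count and execution time. Field names and
# threshold values are assumed, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    exec_time: float      # estimated execution time
    dependents: int       # number of tasks that depend on this one

def classify_tasks(tasks, dep_threshold=3, time_threshold=50.0):
    """Place potential bottleneck tasks (many dependents, long runtime)
    in a high-priority queue; the rest go in a normal queue."""
    bottleneck_queue, normal_queue = [], []
    for t in tasks:
        if t.dependents >= dep_threshold and t.exec_time >= time_threshold:
            bottleneck_queue.append(t)
        else:
            normal_queue.append(t)
    # Schedule longest bottleneck tasks first to avoid stalling dependents.
    bottleneck_queue.sort(key=lambda t: t.exec_time, reverse=True)
    return bottleneck_queue, normal_queue

tasks = [Task("t1", 80.0, 5), Task("t2", 10.0, 0),
         Task("t3", 60.0, 4), Task("t4", 90.0, 1)]
hot, cold = classify_tasks(tasks)
```

A scheduler (here, the PSO phase) would then drain the high-priority queue before the normal one.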
Nimra Malik, Muhammad Sardaraz, Muhammad Tahir, Babar Shah, Gohar Ali, Fernando Moreira. Energy-Efficient Load Balancing Algorithm for Workflow Scheduling in Cloud Data Centers Using Queuing and Thresholds. Applied Sciences. 2021; 11(13): 5849.
A wireless sensor network is a temporary network of sensor nodes equipped with limited resources and working in an ad hoc environment. Routing protocol design is one of the key challenges in a wireless sensor network, as it requires optimum use of a sensor node's limited resources, such as power. Similarly, data security and integrity is another open issue that has emerged as a flash point in the research community over the last decade. This article proposes a secure model for routing data from source to destination, named secure and energy-efficient routing. The proposed scheme is inherited from the authentication and voice encryption scheme developed for the Global System for Mobile Communications (GSM). Necessary modifications have been made to fit GSM technology into a wireless sensor network ad hoc environment. Due to its low complexity, secure and energy-efficient routing consumes less battery power, both during encryption/decryption and for routing. This is due to the XOR operation used in the proposed scheme, which is among the least expensive operations with respect to time and space complexity. Simulations show that secure and energy-efficient routing can work effectively even at critical power levels in a sensor network. The article also presents a simulation-based comparative analysis of the proposed scheme against two notable existing secure routing protocols. We show that the proposed secure and energy-efficient routing achieves the desired performance under dynamically changing network conditions with various numbers of malicious nodes. Moreover, in GSM, three linear feedback shift registers are generally used to fragment the key in the data encryption mechanism. In this article, a mathematical model is proposed to increase the number of possible combinations of the shift registers in order to make the data encryption mechanism more secure, which has not been done before. Due to its linear complexity, lower power consumption, and more dynamic route updating, secure and energy-efficient routing can easily find its use in emerging Internet-of-Things systems.
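Why XOR-based encryption is so cheap can be illustrated with a toy LFSR keystream cipher. This is a hedged sketch only: GSM's actual A5 registers, widths, and tap positions differ, and the taps and seed below are invented for demonstration, not taken from the article.

```python
# Hedged sketch: a Fibonacci LFSR keystream XORed with the payload,
# illustrating the low time/space cost of XOR-based encryption.
# Register width, taps, and seed are illustrative assumptions.

def lfsr_stream(seed, taps, nbits, count):
    """Yield `count` keystream bits from an LFSR of width `nbits`."""
    state = seed
    for _ in range(count):
        yield state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))

def xor_crypt(data: bytes, seed: int) -> bytes:
    bits = lfsr_stream(seed, taps=(0, 2, 3, 5), nbits=16, count=8 * len(data))
    out = bytearray()
    for byte in data:
        k = 0
        for _ in range(8):
            k = (k << 1) | next(bits)
        out.append(byte ^ k)
    return bytes(out)

msg = b"sensor reading: 23.5C"
ct = xor_crypt(msg, seed=0xACE1)
pt = xor_crypt(ct, seed=0xACE1)   # XOR is its own inverse
```

The same function both encrypts and decrypts, since XOR is its own inverse; only shifts and XORs are needed, which is what makes the approach attractive on battery-constrained nodes.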
M Saud Khan, Noor M Khan, Ahmad Khan, Farhan Aadil, M Tahir, M Sardaraz. A low-complexity, energy-efficient data securing model for wireless sensor network based on linearly complex voice encryption mechanism of GSM technology. International Journal of Distributed Sensor Networks. 2021; 17(5): 1.
Aims: To assess the error profile in NGS data generated by high-throughput sequencing machines. Background: Short-read sequencing data from Next Generation Sequencing (NGS) are currently being generated by a number of research projects. Characterizing the errors produced by NGS platforms and calling accurate genetic variation from reads are two inter-dependent phases. This has high significance in various analyses, such as genome sequence assembly, SNP calling, evolutionary studies, and haplotype inference. Systematic and random errors show a distinct incidence profile for each sequencing platform, i.e., Illumina sequencing, Pacific Biosciences, 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Ion Torrent sequencing, and Oxford Nanopore sequencing. Advances in NGS deliver enormous volumes of data along with errors. A fraction of these errors may mimic genuine biological signals, i.e., mutations, and may consequently distort the results. Various independent applications have been proposed to correct sequencing errors; systematic analysis of these algorithms shows that state-of-the-art models are missing. Objective: In this paper, an efficient error estimation computational model called ESREEM is proposed to assess the error rates in NGS data. Methods: The proposed model builds on the observation that there exists a linear regression association between the number of reads containing errors and the number of reads sequenced. The model is based on a probabilistic error model integrated with a Hidden Markov Model (HMM). Results: The proposed model is evaluated on several benchmark datasets, and the results obtained are compared with state-of-the-art algorithms. Conclusions: Experimental results show that the proposed model estimates errors efficiently and runs in less time than others.
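The linear association the model builds on can be sketched with an ordinary least-squares fit of (reads sequenced, reads containing errors). The data points below are synthetic, and the paper's probabilistic error model and HMM are not reproduced here; this only illustrates the regression step.

```python
# Illustrative sketch: least-squares fit of error-read counts against
# total reads sequenced. Data points are synthetic assumptions.

def fit_line(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic counts: error reads grow roughly linearly with reads sequenced.
reads  = [1e5, 2e5, 4e5, 8e5]
errors = [2100, 4050, 8200, 15900]
slope, intercept = fit_line(reads, errors)
est_error_rate = slope   # expected error reads per sequenced read
```

The fitted slope plays the role of a platform-level error-rate estimate; the full model conditions this on hidden states via the HMM.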
Muhammad Tahir, Muhammad Sardaraz, Zahid Mehmood, Muhammad Saud Khan. ESREEM: Efficient Short Reads Error Estimation Computational Model for Next-generation Genome Sequencing. Current Bioinformatics. 2021; 16(2): 339-349.
Blockchain and IoT are being deployed at a large scale in various fields, including healthcare, for applications such as secure storage, transactions, and process automation. IoT devices are resource-constrained, lack the capability for security and self-protection, and can easily be hacked or compromised. Blockchain is an emerging technology whose immutability features provide secure management, authentication, and guaranteed access control for IoT devices. IoT is a cloud-based internet service in which the processing and collection of users' data are accomplished remotely. Smart healthcare also requires the facility to diagnose patients located remotely. The smart health framework faces critical issues such as data security, cost, memory, scalability, trust, and transparency between different platforms. Therefore, it is important to handle data integrity and privacy, as the user's authenticity is in question in an open internet environment. Several techniques are available that primarily focus on resolving security issues, e.g., forgery, timing, denial-of-service, and stolen-smartcard attacks. Blockchain technology follows the rules of absolute privacy to identify the users associated with transactions. The motivation behind the use of Blockchain in health informatics is the removal of the centralized third party, immutability, improved data sharing, enhanced security, and reduced overhead costs in distributed applications. Healthcare informatics has specific security and privacy requirements, along with additional legal requirements. This paper presents a novel authentication and authorization framework for Blockchain-enabled IoT networks using a probabilistic model. The proposed framework makes use of random numbers in the authentication process, which are further connected through joint conditional probability. It thus establishes a secure connection among IoT devices for further data acquisition. The proposed model is validated and evaluated through extensive simulations using the AVISPA tool and the Cooja simulator, respectively. Experimental results show that the proposed framework provides robust mutual authenticity and enhanced access control, and lowers both the communication and computational overhead costs compared to others.
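The role random numbers play in such an authentication step can be illustrated with a generic challenge-response handshake over a pre-shared key. This is a hedged stand-in only: the paper's joint-conditional-probability construction is not reproduced, and the key-provisioning assumption is invented for the demo.

```python
# Hedged illustration: random-nonce challenge-response with HMAC over a
# pre-shared key. Not the paper's probabilistic scheme; the shared key
# and provisioning step are assumptions.

import hmac, hashlib, secrets

SHARED_KEY = b"pre-shared-device-key"   # assumed provisioning step

def respond(key: bytes, nonce: bytes) -> bytes:
    """Device proves key possession without revealing the key."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Gateway challenges an IoT device with a fresh random number.
nonce = secrets.token_bytes(16)
resp = respond(SHARED_KEY, nonce)
ok = verify(SHARED_KEY, nonce, resp)
bad = verify(b"wrong-key", nonce, resp)
```

A fresh nonce per session prevents replay; only after verification would the device be admitted for data acquisition.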
Muhammad Tahir, Muhammad Sardaraz, Shakoor Muhammad, Muhammad Saud Khan. A Lightweight Authentication and Authorization Framework for Blockchain-Enabled IoT Network in Health-Informatics. Sustainability. 2020; 12(17): 6960.
Recent developments in cloud computing have made it a powerful solution for executing large-scale scientific problems. The complexity of scientific workflows demands efficient utilization of cloud resources to satisfy user requirements. Scheduling scientific workflows in a cloud environment is a challenge for researchers; the problem is considered NP-hard. Constraints such as a heterogeneous environment, dependencies between tasks, quality of service, and user deadlines make it difficult for the scheduler to fully utilize available resources. The problem has been extensively studied in the literature, with different researchers targeting different parameters. This article presents a multi-objective scheduling algorithm for scheduling scientific workflows in cloud computing. The solution is based on a genetic algorithm that targets makespan, monetary cost, and load balance. The proposed algorithm first finds the best solution for each parameter; based on these solutions, it then finds the "superbest" solution across all parameters. The proposed algorithm is evaluated with benchmark datasets, and comparative results with the standard genetic algorithm, particle swarm optimization, and a specialized scheduler are presented. The results show that the proposed algorithm improves makespan and reduces cost while keeping the system well load-balanced.
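The genetic-algorithm idea can be sketched minimally: chromosomes map tasks to VMs, and a weighted fitness combines makespan and cost. This is an assumed toy instance, not the parallel multi-objective algorithm of the paper; the VM speeds, prices, and GA settings are invented.

```python
# Minimal elitist GA sketch for task-to-VM assignment (all parameters
# assumed). Fitness blends makespan and monetary cost.

import random

random.seed(7)
TASK_LEN = [40, 25, 60, 10, 35, 50]          # task lengths (assumed)
VM_SPEED = [10, 20, 40]                      # VM speeds (assumed)
VM_PRICE = [1.0, 2.5, 6.0]                   # cost per second (assumed)

def makespan(chrom):
    load = [0.0] * len(VM_SPEED)
    for task, vm in enumerate(chrom):
        load[vm] += TASK_LEN[task] / VM_SPEED[vm]
    return max(load)

def cost(chrom):
    return sum(TASK_LEN[t] / VM_SPEED[v] * VM_PRICE[v]
               for t, v in enumerate(chrom))

def fitness(chrom, w=0.5):                   # weighted multi-objective
    return w * makespan(chrom) + (1 - w) * cost(chrom)

def evolve(pop_size=30, gens=60, mut=0.1):
    pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                        # elitism: keep two best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)        # mate among the fittest
            cut = random.randrange(1, len(TASK_LEN)) # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:
                child[random.randrange(len(child))] = random.randrange(len(VM_SPEED))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
```

Running `fitness` with different weights would give the per-parameter bests the abstract mentions, from which a combined solution is then derived.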
Muhammad Sardaraz, Muhammad Tahir. A parallel multi-objective genetic algorithm for scheduling scientific workflows in cloud computing. International Journal of Distributed Sensor Networks. 2020; 16(8): 1.
The core of a content-based image retrieval (CBIR) system is an effective understanding of the visual contents of images, on which its accuracy depends. One of the most prominent issues affecting the performance of a CBIR system is the semantic gap: the variance between the low-level patterns of an image and the high-level abstractions perceived by humans. A robust visual representation of the image, together with relevance feedback (RF), can bridge this gap by extracting distinctive local and global features from the image and by incorporating valuable information stored as feedback. To handle this issue, this article presents a novel adaptive complementary visual word integration method for a robust representation of the salient objects of the image, using local and global features based on the bag-of-visual-words (BoVW) model. To analyze the performance of the proposed method, three integration methods based on the BoVW model are examined in this article: (a) integration of complementary features before clustering (called non-adaptive complementary feature integration), (b) integration of non-adaptive complementary features after clustering (called non-adaptive complementary visual words integration), and (c) integration of adaptive complementary feature weighting after clustering based on self-paced learning (the proposed method, called adaptive complementary visual words integration). The performance of the proposed method is further enhanced by incorporating a log-based RF (LRF) method into the proposed model. Qualitative and quantitative analysis on four image datasets shows that the proposed adaptive complementary visual words integration method outperforms the non-adaptive complementary feature integration, non-adaptive complementary visual words integration, and state-of-the-art CBIR methods in terms of performance evaluation metrics.
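The BoVW representation at the heart of this work can be sketched on toy data: local descriptors are assigned to their nearest visual word from a codebook, producing a histogram signature. The descriptors and codebooks below are invented, the paper's adaptive weighting and self-paced learning are not reproduced, and complementary (local + global) integration is shown simply as histogram concatenation.

```python
# Minimal BoVW sketch on assumed toy data: nearest-word assignment and
# L1-normalised histograms, with visual-words-level integration shown
# as concatenation of local and global signatures.

def nearest_word(desc, codebook):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist2(desc, codebook[k]))

def bovw_histogram(descriptors, codebook):
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]        # L1-normalised signature

# Toy 2-D "descriptors" and small codebooks.
local_codebook  = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
local_descs     = [(0.1, 0.0), (0.9, 1.1), (1.9, 0.1), (0.0, 0.2)]
global_codebook = [(0.0,), (1.0,)]
global_descs    = [(0.2,), (0.8,), (0.9,)]

local_hist  = bovw_histogram(local_descs, local_codebook)
global_hist = bovw_histogram(global_descs, global_codebook)
signature   = local_hist + global_hist      # visual-words-level integration
```

Retrieval then compares query and database signatures with a histogram distance; the adaptive variant would additionally weight words before this comparison.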
Ruqia Bibi, Zahid Mehmood, Rehan Mehmood Yousaf, Muhammad Tahir, Amjad Rehman, Muhammad Sardaraz, Muhammad Rashid. BoVW model based on adaptive local and global visual words modeling and log-based relevance feedback for semantic retrieval of the images. EURASIP Journal on Image and Video Processing. 2020; 2020(1): 1-30.
Next generation sequencing (NGS) technologies produce a huge amount of biological data, which poses issues such as high processing time and large memory requirements. This research focuses on the detection of single nucleotide polymorphisms (SNPs) in genome sequences. Current SNP detection algorithms face several issues, e.g., computational overhead, accuracy, and memory requirements. In this research, we propose a fast and scalable workflow that integrates the Bowtie aligner with the Hadoop-based Heap SNP caller to improve SNP detection in genome sequences. The proposed workflow is validated with benchmark datasets obtained from publicly available web portals, e.g., NCBI and DDBJ DRA. Extensive experiments have been performed; the results are compared with the Bowtie and BWA aligners in the alignment phase, and with GATK, FaSD, SparkGA, Halvade, and Heap in the SNP calling phase. Analysis of the experimental results shows that the proposed workflow outperforms existing frameworks, e.g., GATK, FaSD, Heap integrated with the BWA and Bowtie aligners, SparkGA, and Halvade. The proposed framework achieved a 22.46% better F-score and a consistent 99.80% accuracy on average, with a 0.21% higher mean accuracy than the compared frameworks. Moreover, SNP mining has also been performed to identify specific regions in genome sequences. All frameworks were implemented with the default memory-management configuration, and the observations show that all workflows have approximately the same memory requirements. In the future, it is intended to display the mined SNPs graphically for user-friendly interaction, and to analyze and optimize the memory requirements as well.
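The map-reduce structure of SNP calling can be sketched conceptually: mappers emit (position, base) pairs from aligned reads, and a reducer builds a pileup per position and calls a SNP where the dominant non-reference allele crosses a threshold. The reference, reads, depth, and threshold below are invented; this is not the Hadoop/Heap pipeline itself.

```python
# Conceptual MapReduce-style sketch of pileup-based SNP calling.
# Reference string, reads, and calling thresholds are assumptions.

from collections import defaultdict

REFERENCE = "ACGTACGT"

def mapper(read, start):
    """Emit (position, base) for every base in an aligned read."""
    for offset, base in enumerate(read):
        yield start + offset, base

def reducer(pairs, min_depth=3, min_fraction=0.8):
    """Group by position, then call SNPs from the pileup."""
    pileup = defaultdict(list)
    for pos, base in pairs:
        pileup[pos].append(base)
    snps = {}
    for pos, bases in pileup.items():
        top = max(set(bases), key=bases.count)
        frac = bases.count(top) / len(bases)
        if (len(bases) >= min_depth and top != REFERENCE[pos]
                and frac >= min_fraction):
            snps[pos] = top
    return snps

# Aligned reads as (sequence, start-position) pairs.
reads = [("ACGA", 0), ("CGAA", 1), ("GAAC", 2), ("TACG", 4)]
pairs = [kv for read, s in reads for kv in mapper(read, s)]
snps = reducer(pairs)
```

In a real Hadoop job the shuffle phase performs the grouping-by-position that `defaultdict` simulates here, which is what makes the workflow scale across nodes.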
Muhammad Tahir, Muhammad Sardaraz. A Fast and Scalable Workflow for SNPs Detection in Genome Sequences Using Hadoop Map-Reduce. Genes. 2020; 11(2): 166.
Cloud computing has become the main platform for executing scientific experiments. It is an effective technique for distributing and processing tasks on virtual machines. Scientific workflows are complex and demand efficient utilization of cloud resources; scheduling scientific workflows is considered NP-complete. The problem is constrained by parameters such as Quality of Service (QoS), dependencies between tasks, and users' deadlines. There is a strong literature on scheduling scientific workflows in cloud environments; solutions include standard schedulers, evolutionary optimization techniques, etc. This article presents a hybrid algorithm for scheduling scientific workflows in cloud environments. In the first phase, the algorithm prepares task lists for the PSO algorithm. Bottleneck tasks are processed at high priority to reduce execution time. In the next phase, tasks are scheduled with the PSO algorithm to reduce both execution time and monetary cost. The algorithm also monitors the load balance to utilize cloud resources efficiently. Benchmark scientific workflows are used to evaluate the proposed algorithm, and it is compared with standard PSO and specialized schedulers to validate its performance. The results show improvements in execution time and monetary cost without affecting load balance, compared to other techniques.
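The PSO phase can be sketched as follows: particles hold continuous positions that are rounded to VM indices, and fitness blends execution time and monetary cost. This is an illustrative sketch under assumed parameters (workload, VM speeds/prices, PSO coefficients), not the article's hybrid algorithm.

```python
# Hedged PSO sketch for task-to-VM scheduling. All numeric parameters
# are assumptions for demonstration.

import random

random.seed(3)
TASK_LEN = [30, 50, 20, 60, 40]      # task sizes (assumed)
VM_SPEED = [10, 25, 50]              # VM speeds (assumed)
VM_PRICE = [1.0, 3.0, 7.0]           # cost per second (assumed)
NVM = len(VM_SPEED)

def decode(pos):                      # continuous position -> VM indices
    return [min(NVM - 1, max(0, int(round(p)))) for p in pos]

def fitness(pos, w=0.5):
    sched = decode(pos)
    load = [0.0] * NVM
    money = 0.0
    for t, v in enumerate(sched):
        rt = TASK_LEN[t] / VM_SPEED[v]
        load[v] += rt
        money += rt * VM_PRICE[v]
    return w * max(load) + (1 - w) * money   # makespan + cost blend

def pso(n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5):
    dim = len(TASK_LEN)
    X = [[random.uniform(0, NVM - 1) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fitness(X[i]) < fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=fitness)
    return decode(gbest), fitness(gbest)

schedule, score = pso()
```

In the full algorithm, the pre-built task lists (bottleneck tasks first) would seed or constrain the particles rather than starting from purely random positions.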
Muhammad Sardaraz, Muhammad Tahir. A Hybrid Algorithm for Scheduling Scientific Workflows in Cloud Computing. IEEE Access. 2019; 7: 186137-186146.
Malaria is a serious worldwide disease caused by the bite of a female Anopheles mosquito. The parasite goes through a complex life cycle in which it grows and reproduces in the human body. The detection and recognition of Plasmodium species are possible and efficient through a staining process (Giemsa). Staining slightly colorizes the red blood cells (RBCs) but highlights Plasmodium parasites, white blood cells, and artifacts; Giemsa stains nuclei and chromatin in blue tones and RBCs in pink. It has been reported in numerous studies that manual microscopy is not a trustworthy screening technique when performed by non-experts. Malaria parasites host in RBCs when they enter the bloodstream. This paper presents segmentation of the Plasmodium parasite from thin blood smears based on region growing and a dynamic convolution-based filtering algorithm. After segmentation, malaria parasites are classified into four Plasmodium species: Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, and Plasmodium malariae. Random forest and K-nearest neighbor classifiers are used, based on local binary pattern and hue-saturation-value features. In training and testing, the proposed approach achieves a sensitivity of 96.75% for malaria parasitemia (MP) and a specificity of 94.59%. In addition, a comparison of the two features is included: classification with the random forest classifier based on the local binary pattern feature alone yields a sensitivity of 83.60% and a specificity of 94.90%.
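The region-growing step can be illustrated on a toy grayscale grid: starting from a seed pixel, 4-connected neighbours are added while their intensity stays within a tolerance of the seed. The image values and tolerance are invented, and the paper's dynamic convolution-based filtering is not reproduced.

```python
# Toy region-growing sketch (illustrative only; image and threshold
# are assumptions). Grows a 4-connected region of similar intensity.

from collections import deque

def region_grow(img, seed, tol=30):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `tol` of the seed intensity."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Dark blob (parasite-like) on a bright background.
img = [[200, 200, 200, 200],
       [200,  40,  50, 200],
       [200,  45, 200, 200],
       [200, 200, 200, 200]]
blob = region_grow(img, seed=(1, 1))
```

The segmented blobs would then be described with LBP and HSV features and passed to the classifiers.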
Naveed Abbas, Tanzila Saba, Amjad Rehman, Zahid Mehmood, Nadeem Javaid, Muhammad Tahir, Naseer Ullah Khan, Khawaja Tehseen Ahmed, Roaider Shah. Plasmodium species aware based quantification of malaria parasitemia in light microscopy thin blood smear. Microscopy Research and Technique. 2019; 82(7): 1198-1214.
Cloud computing provides utility-based IT services, available on a pay-per-use basis. The cloud gives organizations an advantage in meeting fundamental hardware and software requirements: instead of purchasing hardware or software, cloud services can be used. The availability of cloud services anytime and anywhere makes them a feasible solution for many applications. Cloud services are constrained by parameters such as Quality of Service (QoS), efficient utilization of cloud resources, user budgets, user deadlines, and energy consumption. In this article, we present a comprehensive review of techniques and algorithms designed to reduce energy consumption in cloud data centers. The review covers Evolutionary Algorithms (EAs) such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Genetic Algorithms (GAs). We discuss the strengths and weaknesses of each technique and compare the target objectives of each algorithm. The article concludes with future research directions.
Khola Maryam, Muhammad Sardaraz, Muhammad Tahir. Evolutionary Algorithms in Cloud Computing from the Perspective of Energy Consumption: A Review. 2018 14th International Conference on Emerging Technologies (ICET). 2018: 1-6.
Highlights: a brief introduction to the applications of pattern matching; a novel pattern matching algorithm for DNA sequences; multithreading in pattern matching; use of a Turing machine for pattern matching; comparative results with significant improvements. Bioinformatics is the use of computer technology to solve, manage, and analyze biological problems. With the rapid evolution of computing, the volume of biological data has increased significantly, raising the need to analyze it in reasonable space and time. DNA sequences contain the basic information of species, and pattern matching between different species is an important and challenging issue. Generalized string matching algorithms and some specialized DNA pattern matching algorithms exist in the literature, but there is still a need for fast and space-efficient pattern matching algorithms that take new hardware developments into account. In this paper, we present a novel DNA sequence pattern matching algorithm called EPMA. The proposed algorithm utilizes fixed-length 2-bit binary encoding, segmentation, and multithreading; the idea is to search for the pattern with multiple searcher agents concurrently. The proposed algorithm is validated with comparative experimental results, which show that the new algorithm is a good candidate for DNA sequence pattern matching applications. The algorithm effectively utilizes modern hardware and will help researchers in sequence alignment, short-read error correction, phylogenetic inference, etc. Furthermore, the proposed method can be extended to generalized string matching and its applications.
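Two of the ideas the abstract highlights, 2-bit base encoding and concurrent search over overlapping text segments, can be sketched as follows. This is not the EPMA algorithm itself, only an illustration of the approach; the segmentation scheme and rolling-window comparison are assumptions.

```python
# Sketch of 2-bit DNA encoding plus multithreaded segment search
# (illustrative; not the EPMA algorithm).

from concurrent.futures import ThreadPoolExecutor

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode(seq):
    """Pack a DNA string into an integer, 2 bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | CODE[base]
    return value

def search_segment(text, pattern, start, end):
    """Find pattern occurrences in text[start:end] via encoded compare."""
    p, m = encode(pattern), len(pattern)
    mask = (1 << (2 * m)) - 1
    hits = []
    window = encode(text[start:start + m])
    for i in range(start, end - m + 1):
        if window == p:
            hits.append(i)
        if i + m < len(text):          # slide window by one base
            window = ((window << 2) | CODE[text[i + m]]) & mask
    return hits

def parallel_search(text, pattern, workers=2):
    m, n = len(pattern), len(text)
    step = n // workers
    jobs = []
    with ThreadPoolExecutor(workers) as ex:
        for w in range(workers):
            start = w * step
            # Overlap segments by m-1 so boundary matches are not missed.
            end = n if w == workers - 1 else (w + 1) * step + m - 1
            jobs.append(ex.submit(search_segment, text, pattern, start, end))
    return sorted(h for j in jobs for h in j.result())

hits = parallel_search("ACGTACGTTACG", "ACG")
```

The 2-bit packing makes each window comparison a single integer equality test, which is what lets multiple searcher agents scan segments cheaply in parallel.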
Muhammad Tahir, Muhammad Sardaraz, Ataul Aziz Ikram. EPMA: Efficient pattern matching algorithm for DNA sequences. Expert Systems with Applications. 2017; 80: 162-170.
Advances in high throughput sequencing technologies and reductions in the cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as the storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges, and various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genomes and sequence reads. Algorithms are categorized as referential or reference-free. Experimental results and a comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
Muhammad Sardaraz, Muhammad Tahir, Ataul Aziz Ikram. Advances in high throughput DNA sequence data compression. Journal of Bioinformatics and Computational Biology. 2016; 14(3): 1-18.
Muhammad Tahir, Muhammad Sardaraz, Ataul Ikram, Hassan Bajwa. HaShRECA: Hadoop Based Short Read Error Correction Algorithm for Genome Assembly. Current Bioinformatics. 2015; 10(4): 469-475.
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the rate of increase in DNA sequencing is significantly outstripping the rate of increase in disk storage capacity, and the data may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that addresses the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model together with arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms.
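The two ingredients named in the abstract, a statistical model plus arithmetic coding, can be sketched with exact fractions for clarity. A static base-frequency model stands in for the paper's model here; real compressors use adaptive models and fixed-precision integer arithmetic rather than `Fraction`.

```python
# Illustrative arithmetic coder over DNA bases with a static
# frequency model (exact fractions for readability; not SeqCompress).

from fractions import Fraction
from collections import Counter

def build_model(seq):
    """Assign each base a sub-interval of [0, 1) sized by its frequency."""
    counts = Counter(seq)
    total = len(seq)
    model, low = {}, Fraction(0)
    for base in sorted(counts):
        p = Fraction(counts[base], total)
        model[base] = (low, low + p)
        low += p
    return model

def ac_encode(seq, model):
    """Narrow [low, high) by each symbol's interval; emit a point inside."""
    low, high = Fraction(0), Fraction(1)
    for base in seq:
        lo, hi = model[base]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2          # any number inside the final interval

def ac_decode(code, model, length):
    out = []
    for _ in range(length):
        for base, (lo, hi) in model.items():
            if lo <= code < hi:
                out.append(base)
                code = (code - lo) / (hi - lo)   # rescale and continue
                break
    return "".join(out)

seq = "ACGTACGGACGT"
model = build_model(seq)
decoded = ac_decode(ac_encode(seq, model), model, len(seq))
```

Frequent bases get wider intervals and therefore cost fewer bits, which is where the compression gain over naive 2-bit packing comes from on skewed sequences.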
Muhammad Sardaraz, Muhammad Tahir, Ataul Aziz Ikram, Hassan Bajwa. SeqCompress: An algorithm for biological sequence compression. Genomics. 2014; 104(4): 225-228.
Advances in genomics, proteomics, and bioinformatics have revolutionized drug discovery and drug development. Computational systems biology, computational bioinformatics, and many biomedical applications are also growing at a rapid pace, with an increasing demand for processing power. Hardware clusters and grid computing solutions have been adopted to meet this demand. The grid cluster approach proved successful but introduced the need for frameworks that hide the complexity of parallel programming and let the programmer focus on the application logic. In this paper we present a novel cloud computing based neural network framework, and we further present results of implementing Multiple Sequence Alignment (MSA) algorithms on a cloud architecture. The experiments show optimal results in terms of computational complexity while preserving accuracy.
Ataul Aziz Ikram, Salma Ibrahim, Muhammad Sardaraz, Muhammad Tahir, Hassan Bajwa, Christian Bach. Neural network based cloud computing platform for bioinformatics. 2013 IEEE Long Island Systems, Applications and Technology Conference (LISAT). 2013: 1-6.
Wireless sensor networks (WSNs) are constrained in terms of memory, computation, communication, and energy. To reduce communication overhead and energy expenditure in WSNs, data aggregation is used. Data aggregation is a very important technique, but it gives the adversary extra opportunity to attack the network, inject false messages, and trick the base station into accepting false aggregation results. This paper presents a secure data aggregation framework (SDAF) for WSNs, whose goal is to ensure data integrity and confidentiality. SDAF uses two types of keys: the base station shares a unique key with each sensor node, used for integrity, and the aggregator shares a unique key with each sensor node within its cluster, used for data confidentiality. Each sensor node calculates a message authentication code (MAC) of the sensed data using the key shared with the base station, which verifies the MAC to check message integrity. Sensor nodes encrypt the sensed data using the key shared with the aggregator, which ensures data confidentiality. The proposed framework has low communication overhead, as redundant packets are dropped at the aggregators.
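The two-key idea can be sketched as follows, with details assumed: the node MACs the reading under the base-station key and encrypts it under the aggregator key, so the aggregator can decrypt (and drop duplicates) without being able to forge MACs. The stream-cipher construction here is a demo stand-in, not a recommendation for real deployments.

```python
# Illustrative sketch of SDAF's two-key separation (demo construction;
# key values, nonce format, and the keystream derivation are assumptions).

import hmac, hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key + nonce (demo only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def node_send(reading: bytes, bs_key: bytes, agg_key: bytes, nonce: bytes):
    # Integrity: MAC under the key shared with the base station.
    mac = hmac.new(bs_key, reading, hashlib.sha256).digest()
    # Confidentiality: encrypt under the key shared with the aggregator.
    ks = keystream(agg_key, nonce, len(reading))
    cipher = bytes(a ^ b for a, b in zip(reading, ks))
    return cipher, mac

def aggregator_recv(cipher: bytes, agg_key: bytes, nonce: bytes) -> bytes:
    ks = keystream(agg_key, nonce, len(cipher))
    return bytes(a ^ b for a, b in zip(cipher, ks))   # decrypt only

def base_station_verify(reading: bytes, mac: bytes, bs_key: bytes) -> bool:
    expected = hmac.new(bs_key, reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

bs_key, agg_key, nonce = b"bs-key", b"agg-key", b"pkt-001"
cipher, mac = node_send(b"temp=21.4", bs_key, agg_key, nonce)
plain = aggregator_recv(cipher, agg_key, nonce)
ok = base_station_verify(plain, mac, bs_key)
```

Because the aggregator never holds the base-station key, it can deduplicate and aggregate readings but cannot fabricate data that the base station would accept.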
M. Sardaraz, Muhammad Tahir, Ataul Aziz Ikram. SDAF: A Secure Data Aggregation Framework for Wireless Sensor Networks. International Journal of Computer and Electrical Engineering. 2013: 447-450.
Phylogenetics enables us to use various techniques to extract evolutionary relationships from sequence analysis. Most phylogenetic analysis techniques produce phylogenetic trees that represent the relationships between a set of species or their evolutionary history. This article presents a comprehensive survey of the applications of, and algorithms for, inference of huge phylogenetic trees, giving the reader an overview of the methods currently employed together with a comprehensive comparison of the methods and algorithms.
Muhammad Sardaraz, Muhammad Tahir, Tahir Aziz Ikram, Hassan Bajwa. Applications and Algorithms for Inference of Huge Phylogenetic Trees: a Review. American Journal of Bioinformatics Research. 2012; 2(1): 21-26.