Congfeng Jiang
School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China


Feed

Journal article
Published: 06 July 2021 in Energies

In a modern server system, DRAM plays as important a role in power and energy consumption as the processor. Power-aware scheduling typically assumes that energy use is proportionally divided between DRAM and other components, but when memory-intensive applications run, the non-energy proportionality of DRAM significantly affects the energy consumption of the whole server system. Furthermore, modern servers usually adopt the NUMA architecture in place of the original SMP architecture to increase memory bandwidth, so studying the energy efficiency of these two memory architectures is of great significance. To explore the power consumption characteristics of servers under memory-intensive workloads, this paper evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Through analysis, we find that: (1) Workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy-efficiency indicators. (2) Even when the memory system is not fully utilized, the memory capacity per processor core has a significant impact on application performance and server power consumption. (3) When running memory-intensive applications, memory utilization is not always a good indicator of server power consumption. (4) Reasonable use of the NUMA architecture significantly improves memory energy efficiency. The experimental results show that reasonable use of the NUMA architecture can improve memory efficiency by 16% compared with the SMP architecture, while unreasonable use reduces memory efficiency by 13%. The findings presented in this paper provide useful insights and guidance that can help system designers and data center operators with energy-efficiency-aware job scheduling and energy conservation.
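The memory energy-efficiency comparison above can be made concrete with a small sketch. This is an illustrative assumption, not the paper's evaluation code: efficiency is taken as achieved memory bandwidth per watt of memory power, and the bandwidth/power numbers below are placeholders chosen only to reproduce a 16%-style relative change.

```python
# Hypothetical illustration of a memory energy-efficiency metric for
# NUMA vs. SMP comparisons: efficiency = achieved bandwidth / memory power.
# The numbers below are placeholders, not measurements from the paper.

def memory_efficiency(bandwidth_gbs: float, power_w: float) -> float:
    """Memory energy efficiency in GB/s per watt."""
    return bandwidth_gbs / power_w

def relative_change(candidate: float, baseline: float) -> float:
    """Signed relative change of candidate vs. baseline (0.16 == +16%)."""
    return (candidate - baseline) / baseline

# Placeholder measurements: SMP baseline vs. NUMA with local-node binding.
smp = memory_efficiency(bandwidth_gbs=50.0, power_w=20.0)         # 2.5 GB/s/W
numa_local = memory_efficiency(bandwidth_gbs=58.0, power_w=20.0)  # 2.9 GB/s/W

print(f"relative improvement: {relative_change(numa_local, smp):+.1%}")  # +16.0%
```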

ACS Style

Kaiqiang Zhang; Dongyang Ou; Congfeng Jiang; Yeliang Qiu; Longchuan Yan. Power and Performance Evaluation of Memory-Intensive Applications. Energies 2021, 14, 4089.

AMA Style

Kaiqiang Zhang, Dongyang Ou, Congfeng Jiang, Yeliang Qiu, Longchuan Yan. Power and Performance Evaluation of Memory-Intensive Applications. Energies. 2021; 14 (14):4089.

Chicago/Turabian Style

Kaiqiang Zhang; Dongyang Ou; Congfeng Jiang; Yeliang Qiu; Longchuan Yan. 2021. "Power and Performance Evaluation of Memory-Intensive Applications." Energies 14, no. 14: 4089.

Journal article
Published: 28 October 2020 in IEEE Transactions on Cloud Computing

Workload characteristics are vital for both data center operation and job scheduling in co-located data centers, where online services and batch jobs are deployed on the same production cluster. In this paper, a comprehensive analysis is conducted on Alibaba's cluster-trace-v2018, taken from a production cluster of 4034 machines. The findings and insights are as follows: (1) The workload on the production cluster shows a daily cyclical fluctuation in CPU and disk I/O utilization, and the memory system has become the performance bottleneck of the co-located cluster. (2) Batch jobs, including their tasks and derived instances, can be approximated by a Zipf distribution. However, batch jobs with directed acyclic graph dependencies suffer from co-location with online services, since the online services are highly prioritized. (3) The resource usage of containers follows a cyclical fluctuation consistent with the whole cluster, while their memory usage remains approximately constant. (4) The number of batch jobs co-located with online services depends on the mispredictions per kilo instructions (MPKI) of the online services. To guarantee the QoS of online services, when the MPKI of online services rises, the number of batch jobs co-located on the same machine should decrease.
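Finding (4) suggests a simple admission rule for co-location. The sketch below is a hypothetical illustration of that idea; the MPKI thresholds and slot counts are invented for the example and are not taken from the trace analysis.

```python
# Hypothetical sketch of the co-location rule the trace analysis suggests:
# as the MPKI of online services rises, fewer batch jobs should share the
# machine to protect QoS. Thresholds below are illustrative assumptions.

def batch_job_slots(online_mpki: float, max_slots: int = 8) -> int:
    """Map online-service MPKI to an allowed number of co-located batch jobs."""
    if online_mpki < 1.0:    # cache-friendly: plenty of headroom
        return max_slots
    if online_mpki < 5.0:    # moderate interference
        return max_slots // 2
    if online_mpki < 10.0:   # heavy interference
        return max_slots // 4
    return 0                 # protect latency-critical online services

for mpki in (0.5, 3.0, 8.0, 12.0):
    print(mpki, "->", batch_job_slots(mpki))  # 8, 4, 2, 0 respectively
```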

ACS Style

Congfeng Jiang; Yitao Qiu; Weisong Shi; Zhefeng Ge; Jiwei Wang; Shenglei Chen; Christophe Cerin; Zujie Ren; Guoyao Xu; Jiangbin Lin. Characterizing Co-located Workloads in Alibaba Cloud Datacenters. IEEE Transactions on Cloud Computing 2020, PP, 1-1.

AMA Style

Congfeng Jiang, Yitao Qiu, Weisong Shi, Zhefeng Ge, Jiwei Wang, Shenglei Chen, Christophe Cerin, Zujie Ren, Guoyao Xu, Jiangbin Lin. Characterizing Co-located Workloads in Alibaba Cloud Datacenters. IEEE Transactions on Cloud Computing. 2020; PP (99):1-1.

Chicago/Turabian Style

Congfeng Jiang; Yitao Qiu; Weisong Shi; Zhefeng Ge; Jiwei Wang; Shenglei Chen; Christophe Cerin; Zujie Ren; Guoyao Xu; Jiangbin Lin. 2020. "Characterizing Co-located Workloads in Alibaba Cloud Datacenters." IEEE Transactions on Cloud Computing PP, no. 99: 1-1.

Review
Published: 06 August 2020 in IEEE Internet of Things Journal

In this article, we provide a concise but systematic review on blockchain-enabled cyber-physical systems (CPS). We dissect various blockchain-enabled CPS as reported in the literature in terms of their operations and the features of blockchain that have been used. We identify key common CPS operations that can be enabled by blockchain, and classify them in terms of their time sensitivity and throughput requirements. We also elaborate and classify features of blockchain in terms of different levels of benefits to CPS, including security, privacy, immutability, fault tolerance, interoperability, data provenance, atomicity, automation, data/service sharing, and trust. Finally, we point out two primary open research issues for developing blockchain-enabled CPS, namely, excessive delay in reaching consensus and limited throughput, and outline future research directions.

ACS Style

Wenbing Zhao; Congfeng Jiang; Honghao Gao; Shunkun Yang; Xiong Luo. Blockchain-Enabled Cyber–Physical Systems: A Review. IEEE Internet of Things Journal 2020, 8, 4023-4034.

AMA Style

Wenbing Zhao, Congfeng Jiang, Honghao Gao, Shunkun Yang, Xiong Luo. Blockchain-Enabled Cyber–Physical Systems: A Review. IEEE Internet of Things Journal. 2020; 8 (6):4023-4034.

Chicago/Turabian Style

Wenbing Zhao; Congfeng Jiang; Honghao Gao; Shunkun Yang; Xiong Luo. 2020. "Blockchain-Enabled Cyber–Physical Systems: A Review." IEEE Internet of Things Journal 8, no. 6: 4023-4034.

Conference paper
Published: 08 October 2019 in Medical Image Computing and Computer Assisted Intervention − MICCAI 2017

Currently, many cloud providers deploy their big data processing systems as cloud services, which helps users conveniently manage and process their data in the cloud. Among the big data processing services of different providers, how to evaluate and compare scalability is an interesting and challenging task. Most traditional benchmark tools focus on performance evaluation of big data processing systems, such as aggregated throughput and IOPS, but fail to provide a quantitative analysis of scalability. In this paper, we propose a measurement methodology to quantify the scalability of big data processing services, which makes the scalability of cloud services comparable. We conduct a group of comparative experiments on AliCloud E-MapReduce and Baidu MRS and collect their respective scalability characteristics under Hadoop and Spark workloads. The scalability characteristics observed in our work can help cloud users choose the best cloud service platform and set up an optimized big data processing system to achieve their specific goals.
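The paper's exact scalability metric is not reproduced here, but one common way to quantify scalability from completion times can be sketched as follows: speedup S(n) = T(1)/T(n) on n nodes, and scaling efficiency E(n) = S(n)/n. The completion times below are placeholders.

```python
# Sketch of a standard scalability quantification (assumed formulation,
# not necessarily the paper's exact metric).

def speedup(t1: float, tn: float) -> float:
    """Speedup of an n-node run relative to a single-node run."""
    return t1 / tn

def scaling_efficiency(t1: float, tn: float, n: int) -> float:
    """Fraction of ideal linear scaling achieved (1.0 == perfect)."""
    return speedup(t1, tn) / n

# Placeholder job completion times (seconds) on 1, 2, 4, and 8 nodes.
times = {1: 1000.0, 2: 540.0, 4: 300.0, 8: 180.0}
for n, tn in times.items():
    print(f"n={n}: speedup={speedup(times[1], tn):.2f}, "
          f"efficiency={scaling_efficiency(times[1], tn, n):.2f}")
```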

ACS Style

Xin Zhou; Congfeng Jiang; Yeliang Qiu; Tiantian Fan; Yumei Wang; Liangbin Zhang; Jian Wan; Weisong Shi. Scalability Evaluation of Big Data Processing Services in Clouds. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017 2019, 78-90.

AMA Style

Xin Zhou, Congfeng Jiang, Yeliang Qiu, Tiantian Fan, Yumei Wang, Liangbin Zhang, Jian Wan, Weisong Shi. Scalability Evaluation of Big Data Processing Services in Clouds. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. 2019: 78-90.

Chicago/Turabian Style

Xin Zhou; Congfeng Jiang; Yeliang Qiu; Tiantian Fan; Yumei Wang; Liangbin Zhang; Jian Wan; Weisong Shi. 2019. "Scalability Evaluation of Big Data Processing Services in Clouds." Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 78-90.

Conference paper
Published: 08 October 2019 in Medical Image Computing and Computer Assisted Intervention − MICCAI 2017

DRAM is a significant source of server power consumption, especially when the server runs memory-intensive applications. Current power-aware scheduling assumes that DRAM is as energy proportional as other components. However, the non-energy proportionality of DRAM significantly affects the power and energy consumption of the whole server system when running memory-intensive applications. Good knowledge of server power characteristics under memory-intensive workloads can therefore enable better workload placement with reduced power. In this paper, we investigate the power characteristics of memory-intensive applications on real rack servers of different generations. Through comprehensive analysis we find that: (1) Server power consumption changes with workload intensity and the number of concurrently executing threads; however, fully utilized memory systems are not the most energy efficient. (2) The installed memory capacity per processor core has a significant impact on application performance and server power consumption, even when the memory system is not fully utilized. (3) Memory utilization is not always a good indicator of server power consumption when the server is running memory-intensive applications. Our experiments show that hardware configuration, workload type, and the number of concurrently running threads all have a significant impact on a server's energy efficiency when running memory-intensive applications. The findings presented in this paper provide useful insights and guidance to system designers and data center operators for energy-efficiency-aware job scheduling and power reduction.

ACS Style

Yeliang Qiu; Congfeng Jiang; Tiantian Fan; Yumei Wang; Liangbin Zhang; Jian Wan; Weisong Shi. Power Characterization of Memory Intensive Applications: Analysis and Implications. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017 2019, 189-201.

AMA Style

Yeliang Qiu, Congfeng Jiang, Tiantian Fan, Yumei Wang, Liangbin Zhang, Jian Wan, Weisong Shi. Power Characterization of Memory Intensive Applications: Analysis and Implications. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. 2019: 189-201.

Chicago/Turabian Style

Yeliang Qiu; Congfeng Jiang; Tiantian Fan; Yumei Wang; Liangbin Zhang; Jian Wan; Weisong Shi. 2019. "Power Characterization of Memory Intensive Applications: Analysis and Implications." Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 189-201.

Journal article
Published: 05 September 2019 in IEEE Access

The increasing demand for cloud-based services, such as big data analytics and online e-commerce, has led to the rapid growth of large-scale Internet data centers. To provide highly reliable, cost-effective, and high-quality cloud services, data centers are equipped with sensors that monitor the operational state of infrastructure hardware such as servers, storage arrays, networking devices, and computer room air conditioning systems. However, such coarse-grained monitoring cannot provide the fine-grained, real-time information needed for resource multiplexing and job scheduling. Moreover, monitoring node-level power consumption plays an important role in optimizing workload placement and energy efficiency in data centers. In this paper, we propose an edge computing platform for intelligent operational monitoring in data centers. The platform integrates wireless sensors and on-board built-in sensors to collect data during data center operation and maintenance. We logically divide the data center clusters into grids and deploy wireless sensors and edge servers in each grid. Processing data on the edge servers reduces the latency of data transmission to central clouds and thereby speeds up real-time resource mapping decisions in data centers. In addition, the proposed platform provides predictions of resource utilization, workload characteristics, and hardware health trends in data centers.

ACS Style

Congfeng Jiang; Yeliang Qiu; Honghao Gao; Tiantian Fan; Kangkang Li; Jian Wan. An Edge Computing Platform for Intelligent Operational Monitoring in Internet Data Centers. IEEE Access 2019, 7, 133375-133387.

AMA Style

Congfeng Jiang, Yeliang Qiu, Honghao Gao, Tiantian Fan, Kangkang Li, Jian Wan. An Edge Computing Platform for Intelligent Operational Monitoring in Internet Data Centers. IEEE Access. 2019; 7 (99):133375-133387.

Chicago/Turabian Style

Congfeng Jiang; Yeliang Qiu; Honghao Gao; Tiantian Fan; Kangkang Li; Jian Wan. 2019. "An Edge Computing Platform for Intelligent Operational Monitoring in Internet Data Centers." IEEE Access 7, no. 99: 133375-133387.

Journal article
Published: 30 August 2019 in IEEE Access

The explosive growth of data generated by the Internet of Things in industrial, agricultural, and scientific communities has led to a rapid increase in data analytics in cloud data centers. The ubiquitous and pervasive demand for near-data processing has driven the edge computing paradigm in recent years. Edge computing promises lower network backbone bandwidth usage, and thus less processing pressure on the data center side, as well as improved service responsiveness and data privacy protection. Computation offloading plays a crucial role in edge computing, affecting network packet transmission and system responsiveness through dynamic task partitioning between cloud data centers, edge servers, and edge devices. In this paper, a thorough literature review is conducted to reveal the state of the art of computation offloading in edge computing. Various aspects of computation offloading are surveyed, including energy consumption minimization, Quality of Service guarantees, and Quality of Experience enhancement. Moreover, resource scheduling approaches, as well as game-theoretic methods and tradeoffs between system performance and overheads in offloading decision making, are also reviewed.

ACS Style

Congfeng Jiang; Xiaolan Cheng; Honghao Gao; Xin Zhou; Jian Wan. Toward Computation Offloading in Edge Computing: A Survey. IEEE Access 2019, 7, 131543-131558.

AMA Style

Congfeng Jiang, Xiaolan Cheng, Honghao Gao, Xin Zhou, Jian Wan. Toward Computation Offloading in Edge Computing: A Survey. IEEE Access. 2019; 7 (99):131543-131558.

Chicago/Turabian Style

Congfeng Jiang; Xiaolan Cheng; Honghao Gao; Xin Zhou; Jian Wan. 2019. "Toward Computation Offloading in Edge Computing: A Survey." IEEE Access 7, no. 99: 131543-131558.

Conference paper
Published: 29 August 2019 in Communications in Computer and Information Science

The explosive growth in cloud-based services, big data analytics, and artificial-intelligence-related service provisioning has led to the rapid construction of large-scale Internet data centers (IDCs). Modern IDCs are equipped with various sensors to monitor their operation and maintenance state, such as temperature, thermal distribution, and air flow. However, fine-grained monitoring at the single-node and even mainboard level, including server resource consumption and power consumption, is still needed for more aggressive resource multiplexing and job scheduling in IDCs. In this paper, we propose an edge computing platform for intelligent Internet data center operational monitoring, which integrates wireless sensors and on-board built-in sensors to sense and collect data center operation and maintenance data. The edge-computing-based solution reduces the latency of data transmission to central clouds, reduces the amount of transferred data, and improves real-time resource capping decisions in data centers.

ACS Style

Yeliang Qiu; Congfeng Jiang; Tiantian Fan; Jian Wan. An Edge Computing Platform for Intelligent Internet Data Center Operational Monitoring. Communications in Computer and Information Science 2019, 16-28.

AMA Style

Yeliang Qiu, Congfeng Jiang, Tiantian Fan, Jian Wan. An Edge Computing Platform for Intelligent Internet Data Center Operational Monitoring. Communications in Computer and Information Science. 2019: 16-28.

Chicago/Turabian Style

Yeliang Qiu; Congfeng Jiang; Tiantian Fan; Jian Wan. 2019. "An Edge Computing Platform for Intelligent Internet Data Center Operational Monitoring." Communications in Computer and Information Science, 16-28.

Conference paper
Published: 29 August 2019 in Communications in Computer and Information Science

Edge computing is an emerging paradigm that meets the ever-increasing computation demands of pervasive devices such as sensors, actuators, and smart things. Although edge devices can execute complex applications, some applications still need to migrate to centralized servers. By offloading computation from edge nodes to edge servers or cloud servers, the quality of the computation experience can be greatly improved. However, offloading may cause delay and eventually increase network overheads and energy consumption. Therefore, an optimal offloading strategy should take into account which task to offload, when to offload, and where to offload in order to avoid these overheads. It is thus important to trade off energy consumption, computation delay, and throughput when the system makes offloading decisions, so as to achieve high energy efficiency. In this paper, we conduct a survey of energy-aware edge computing, covering existing work on computation offloading frameworks and strategies. Specifically, we describe the strategies from the perspectives of energy-aware offloading, energy-optimizing offloading, and offloading algorithms.
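The energy/delay tradeoff described above can be sketched with a common textbook-style cost model (assumed here for illustration, not taken from the survey): offload when the weighted cost of transmitting the input and executing remotely beats executing locally. All parameter values below are placeholders.

```python
# Illustrative offloading decision under an energy/delay tradeoff.
# alpha weights delay against device energy; this simple model ignores
# downlink results and queueing, which real systems must account for.

def local_cost(cycles: float, cpu_hz: float, power_w: float,
               alpha: float) -> float:
    """Weighted cost of executing the task on the edge device itself."""
    delay_s = cycles / cpu_hz
    energy_j = power_w * delay_s
    return alpha * delay_s + (1.0 - alpha) * energy_j

def remote_cost(data_bits: float, uplink_bps: float, tx_power_w: float,
                cycles: float, server_hz: float, alpha: float) -> float:
    """Weighted cost of offloading: transmit the input, execute remotely."""
    tx_delay_s = data_bits / uplink_bps
    exec_delay_s = cycles / server_hz
    energy_j = tx_power_w * tx_delay_s  # device spends energy only on TX
    return alpha * (tx_delay_s + exec_delay_s) + (1.0 - alpha) * energy_j

def should_offload(**kw) -> bool:
    lc = local_cost(kw["cycles"], kw["cpu_hz"], kw["power_w"], kw["alpha"])
    rc = remote_cost(kw["data_bits"], kw["uplink_bps"], kw["tx_power_w"],
                     kw["cycles"], kw["server_hz"], kw["alpha"])
    return rc < lc

# A compute-heavy task with a small input tends to favor offloading.
print(should_offload(cycles=2e9, cpu_hz=1e9, power_w=2.0, alpha=0.5,
                     data_bits=8e6, uplink_bps=1e7, tx_power_w=1.0,
                     server_hz=8e9))  # True
```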

ACS Style

Tiantian Fan; Yeliang Qiu; Congfeng Jiang; Jian Wan. Energy Aware Edge Computing: A Survey. Communications in Computer and Information Science 2019, 79-91.

AMA Style

Tiantian Fan, Yeliang Qiu, Congfeng Jiang, Jian Wan. Energy Aware Edge Computing: A Survey. Communications in Computer and Information Science. 2019: 79-91.

Chicago/Turabian Style

Tiantian Fan; Yeliang Qiu; Congfeng Jiang; Jian Wan. 2019. "Energy Aware Edge Computing: A Survey." Communications in Computer and Information Science, 79-91.

Conference paper
Published: 29 August 2019 in Communications in Computer and Information Science

The explosive growth of data generated by the Internet of Things in industrial, agricultural, and scientific communities has led to a rapid increase in data analytics in cloud data centers. The ubiquitous and pervasive demand for near-data processing has driven the edge computing paradigm in recent years. Edge computing promises lower network backbone bandwidth usage, and thus less processing on the data center side, as well as improved service responsiveness and data privacy protection. Computation offloading plays a crucial role in network packet transmission and system responsiveness through dynamic task partitioning between cloud data centers, edge servers, and edge devices. In this paper a thorough literature review is conducted to reveal the state of the art of computation offloading in edge computing. Various aspects of computation offloading are surveyed, including energy consumption minimization, Quality of Service (QoS), and Quality of Experience (QoE). Resource scheduling approaches, as well as game-theoretic methods and tradeoffs between system performance and system overheads in offloading decision making, are also reviewed.

ACS Style

Xiaolan Cheng; Xin Zhou; Congfeng Jiang; Jian Wan. Towards Computation Offloading in Edge Computing: A Survey. Communications in Computer and Information Science 2019, 3-15.

AMA Style

Xiaolan Cheng, Xin Zhou, Congfeng Jiang, Jian Wan. Towards Computation Offloading in Edge Computing: A Survey. Communications in Computer and Information Science. 2019: 3-15.

Chicago/Turabian Style

Xiaolan Cheng; Xin Zhou; Congfeng Jiang; Jian Wan. 2019. "Towards Computation Offloading in Edge Computing: A Survey." Communications in Computer and Information Science, 3-15.

Journal article
Published: 01 June 2019 in Sustainable Computing: Informatics and Systems
ACS Style

Congfeng Jiang; Yumei Wang; Dongyang Ou; Youhuizi Li; Jilin Zhang; Jian Wan; Bing Luo; Weisong Shi. Energy efficiency comparison of hypervisors. Sustainable Computing: Informatics and Systems 2019, 22, 311-321.

AMA Style

Congfeng Jiang, Yumei Wang, Dongyang Ou, Youhuizi Li, Jilin Zhang, Jian Wan, Bing Luo, Weisong Shi. Energy efficiency comparison of hypervisors. Sustainable Computing: Informatics and Systems. 2019; 22: 311-321.

Chicago/Turabian Style

Congfeng Jiang; Yumei Wang; Dongyang Ou; Youhuizi Li; Jilin Zhang; Jian Wan; Bing Luo; Weisong Shi. 2019. "Energy efficiency comparison of hypervisors." Sustainable Computing: Informatics and Systems 22: 311-321.

Journal article
Published: 17 February 2019 in Energies

Power consumption is a primary concern in modern servers and data centers. Because workload types and intensities vary, different servers may have different energy efficiency (EE) and energy proportionality (EP) even with the same hardware configuration (i.e., the same central processing unit (CPU) generation and memory installation). For example, CPU frequency scaling and memory module voltage scaling can significantly affect a server's energy efficiency. In conventional virtualized data centers, the virtual machine (VM) scheduler packs VMs onto servers until they saturate, without considering differences in EE and EP. In this paper we propose EASE, an Energy efficiency and proportionality Aware VM SchEduling framework containing data collection and scheduling algorithms. In the EASE framework, each server's EE and EP characteristics are first identified by executing customized computing-intensive, memory-intensive, and hybrid benchmarks. Servers are then labelled and categorized by their affinity for different incoming requests according to their EP and EE characteristics. For each VM, EASE performs a workload characterization procedure by tracing and monitoring resource usage, including CPU, memory, disk, and network, and determines whether the workload is computing intensive, memory intensive, or hybrid. Finally, EASE schedules VMs onto servers by matching the VM's workload type with the server's EP and EE preference. The rationale of EASE is to schedule VMs so that servers keep working around their peak energy efficiency point, i.e., their near-optimal working range. When the workload fluctuates, EASE re-schedules or migrates VMs to other servers to keep all servers running as close to their optimal working range as possible.
The experimental results on real clusters show that EASE can reduce server power consumption by as much as 37.07%–49.98% in both homogeneous and heterogeneous clusters, while the average completion time of computing-intensive VMs increases by only 0.31%–8.49%. On heterogeneous nodes, the power consumption of computing-intensive VMs can be reduced by 44.22%, and job completion time can be reduced by 53.80%.
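The matching step described above can be sketched in a few lines. This is a minimal illustration of the idea of matching a VM's workload type to a server's EE/EP preference; the `classify_vm` thresholds, server labels, and fallback policy are assumptions for the example, not the paper's algorithm.

```python
# Minimal sketch of EE/EP-aware VM placement: classify each VM by its
# dominant resource usage, then pick a server whose labelled preference
# matches. Thresholds and labels below are illustrative assumptions.

def classify_vm(cpu_util: float, mem_bw_util: float) -> str:
    """Label a VM as computing, memory, or hybrid by utilization (0..1)."""
    if cpu_util > 0.6 and mem_bw_util > 0.6:
        return "hybrid"
    return "computing" if cpu_util >= mem_bw_util else "memory"

def pick_server(vm_type: str, servers: dict) -> str:
    """servers maps server name -> preferred workload label."""
    for name, preference in servers.items():
        if preference == vm_type:
            return name
    return next(iter(servers))  # no matching preference: fall back to any

servers = {"s1": "computing", "s2": "memory", "s3": "hybrid"}
print(pick_server(classify_vm(0.9, 0.2), servers))  # s1
```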

ACS Style

Yeliang Qiu; Congfeng Jiang; Yumei Wang; Dongyang Ou; Youhuizi Li; Jian Wan. Energy Aware Virtual Machine Scheduling in Data Centers. Energies 2019, 12, 646.

AMA Style

Yeliang Qiu, Congfeng Jiang, Yumei Wang, Dongyang Ou, Youhuizi Li, Jian Wan. Energy Aware Virtual Machine Scheduling in Data Centers. Energies. 2019; 12 (4):646.

Chicago/Turabian Style

Yeliang Qiu; Congfeng Jiang; Yumei Wang; Dongyang Ou; Youhuizi Li; Jian Wan. 2019. "Energy Aware Virtual Machine Scheduling in Data Centers." Energies 12, no. 4: 646.

Conference paper
Published: 07 February 2019 in Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

Metadata extraction from scholarly PDF documents is fundamental to publishing, archiving, digital library construction, bibliometrics, and the analysis and evaluation of scientific competitiveness. However, different scholarly PDF documents have different layouts and document elements, which makes it difficult to compare extraction approaches, since testers use different sets of test documents even when the documents come from the same journal or conference. Performance evaluation of extraction approaches on standard datasets can therefore establish a fair and reproducible comparison. In this paper we present PARDA (PDF Analysis and Recognition DAtaset), a dataset for the performance evaluation and analysis of scholarly documents, especially metadata extraction of title, authors, affiliation, author-affiliation-email matching, year, date, etc. The dataset covers computer science, physics, life science, management, mathematics, and the humanities, with documents from various publishers including ACM, IEEE, Springer, Elsevier, and arXiv. Each document has a distinct layout and appearance in terms of metadata formatting. We also provide ground-truth metadata for the dataset in Dublin Core XML format and as BibTeX files.

ACS Style

Tiantian Fan; Junming Liu; Yeliang Qiu; Congfeng Jiang; Jilin Zhang; Wei Zhang; Jian Wan. PARDA: A Dataset for Scholarly PDF Document Metadata Extraction Evaluation. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019, 417-431.

AMA Style

Tiantian Fan, Junming Liu, Yeliang Qiu, Congfeng Jiang, Jilin Zhang, Wei Zhang, Jian Wan. PARDA: A Dataset for Scholarly PDF Document Metadata Extraction Evaluation. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. 2019: 417-431.

Chicago/Turabian Style

Tiantian Fan; Junming Liu; Yeliang Qiu; Congfeng Jiang; Jilin Zhang; Wei Zhang; Jian Wan. 2019. "PARDA: A Dataset for Scholarly PDF Document Metadata Extraction Evaluation." Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 417-431.

Journal article
Published: 06 February 2019 in IEEE Access

In order to reduce power and energy costs, major cloud providers now mix online and batch jobs on the same cluster. Although co-allocating such jobs improves machine utilization, it challenges the data center scheduler and workload assignment in terms of quality of service, fault tolerance, and failure recovery, especially for latency-critical online services. In this paper, we explore various characteristics of co-allocated online services and batch jobs from a production cluster of 1.3k servers in Alibaba Cloud. From the trace data we find the following: (1) For batch jobs with multiple tasks and instances, 50.8% of failed tasks halt only after a very long interval once their first and only instance fails. This wastes considerable time and resources, since the remaining instances keep running toward a termination that can never succeed. (2) Online service jobs cluster into 25 categories according to their requested CPU, memory, and disk resources. Such clustering can help co-allocate online service jobs with batch jobs. (3) Servers cluster into 7 groups by CPU utilization, memory utilization, and the correlation between them. Machines with a strong correlation between CPU and memory utilization provide opportunities for job co-allocation and resource utilization estimation. (4) The MTBF (mean time between failures) of instances lies in the interval [400, 800] seconds, while the 99th-percentile completion time is 1003 seconds. We also compare the cumulative distribution functions of jobs and servers and explain the differences and the opportunities for workload assignment between them. The findings and insights presented in this paper can help the community and data center operators better understand workload characteristics, improve resource utilization, and design better failure recovery.

ACS Style

Congfeng Jiang; Guangjie Han; Jiangbin Lin; Gangyong Jia; Weisong Shi; Jian Wan. Characteristics of Co-Allocated Online Services and Batch Jobs in Internet Data Centers: A Case Study From Alibaba Cloud. IEEE Access 2019, 7, 22495-22508.

AMA Style

Congfeng Jiang, Guangjie Han, Jiangbin Lin, Gangyong Jia, Weisong Shi, Jian Wan. Characteristics of Co-Allocated Online Services and Batch Jobs in Internet Data Centers: A Case Study From Alibaba Cloud. IEEE Access. 2019; 7 (99):22495-22508.

Chicago/Turabian Style

Congfeng Jiang; Guangjie Han; Jiangbin Lin; Gangyong Jia; Weisong Shi; Jian Wan. 2019. "Characteristics of Co-Allocated Online Services and Batch Jobs in Internet Data Centers: A Case Study From Alibaba Cloud." IEEE Access 7, no. 99: 22495-22508.

Journal article
Published: 12 December 2018 in Sensors

In virtualized sensor networks, virtual machines (VMs) share the same hardware for sensing-service consolidation and power saving. VMs that reside on the same hardware invoke frequent interdomain data transfers for data analytics, sensor collaboration, and actuation. Traditional interdomain communication relies on the virtual network interfaces of the two VMs for data sending and receiving. Because these communications go through TCP/IP (Transmission Control Protocol/Internet Protocol) stacks, they result in lengthy communication paths and frequent kernel interactions, which degrade the I/O (Input/Output) performance of the VMs involved. In this paper, we propose an optimized interdomain communication approach based on shared memory to improve the interdomain communication performance of multiple VMs residing on the same sensor hardware. In our approach, the data to be sent are shared in memory pages maintained by the hypervisor rather than transferred through the virtual network interface via a TCP/IP stack. To avoid security trapping, the shared data are mapped into the user space of each VM involved in the communication, reducing tedious system calls and frequent kernel context switches. In our implementation, the shared memory is created by a customized shared-device kernel module that maintains bidirectional event channels between the communicating VMs. For performance optimization, we use state flags in a circular buffer to reduce wait-and-notify operations and system calls during communication. Experimental results show that our proposed approach provides five times higher throughput and 2.5 times lower latency than traditional TCP/IP communication via a virtual network interface.
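The circular-buffer-with-state-flags idea can be sketched in miniature. This is a simplified in-process illustration of the mechanism, not the paper's kernel-module implementation: per-slot EMPTY/FULL flags let producer and consumer make progress without a wait-and-notify call per message (the real design places the buffer in hypervisor-shared pages mapped into both VMs).

```python
# Simplified sketch of a circular buffer whose per-slot state flags stand
# in for explicit wait-and-notify operations between producer and consumer.

EMPTY, FULL = 0, 1

class RingBuffer:
    def __init__(self, size: int = 8):
        self.slots = [None] * size
        self.state = [EMPTY] * size
        self.head = 0  # next write position (producer)
        self.tail = 0  # next read position (consumer)

    def put(self, item) -> bool:
        if self.state[self.head] == FULL:
            return False              # buffer full; caller may retry
        self.slots[self.head] = item
        self.state[self.head] = FULL  # flag flip replaces an explicit notify
        self.head = (self.head + 1) % len(self.slots)
        return True

    def get(self):
        if self.state[self.tail] == EMPTY:
            return None               # nothing to read yet
        item = self.slots[self.tail]
        self.state[self.tail] = EMPTY  # free the slot for the producer
        self.tail = (self.tail + 1) % len(self.slots)
        return item

rb = RingBuffer(4)
rb.put("reading-1")
print(rb.get())  # reading-1
```

In a cross-VM setting, `slots` and `state` would live in shared memory and the flag updates would need memory barriers; event channels would be used only when a side blocks, which is what reduces the system-call count.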

ACS Style

Congfeng Jiang; Tiantian Fan; Yeliang Qiu; Hongyuan Wu; Jilin Zhang; Neal N. Xiong; Jian Wan. Interdomain I/O Optimization in Virtualized Sensor Networks. Sensors 2018, 18, 4395.

AMA Style

Congfeng Jiang, Tiantian Fan, Yeliang Qiu, Hongyuan Wu, Jilin Zhang, Neal N. Xiong, Jian Wan. Interdomain I/O Optimization in Virtualized Sensor Networks. Sensors. 2018; 18 (12):4395.

Chicago/Turabian Style

Congfeng Jiang; Tiantian Fan; Yeliang Qiu; Hongyuan Wu; Jilin Zhang; Neal N. Xiong; Jian Wan. 2018. "Interdomain I/O Optimization in Virtualized Sensor Networks." Sensors 18, no. 12: 4395.

Conference paper
Published: 01 July 2018 in 2018 IEEE 11th International Conference on Cloud Computing (CLOUD)

With the development of big data, big data processing systems such as Hadoop and Spark are widely used to handle large-scale data. To avoid the complexity and expense of building a self-owned big data processing system, cloud providers tend to deploy big data processing tools as cloud services; typical examples include Amazon EMR, Azure HDInsight, and AliCloud E-MapReduce. However, building a cost-efficient system and scaling it remain challenging. In this paper, we conduct a case study on AliCloud E-MapReduce and analyze system performance on local and remote file systems. We compare the scalability of Hadoop and Spark using scale-out and scale-up strategies, respectively. Based on the analysis results, we derive several observations and implications that can help guide performance optimization.

ACS Style

Congfeng Jiang; Wei Huang; Zujie Ren; Youhuizi Li; Jian Wan; Feng Cao; Jiangbin Lin. Towards Building a Scalable Data Analytics System on Clouds: An Early Experience on AliCloud. 2018 IEEE 11th International Conference on Cloud Computing (CLOUD) 2018, 891-895.

AMA Style

Congfeng Jiang, Wei Huang, Zujie Ren, Youhuizi Li, Jian Wan, Feng Cao, Jiangbin Lin. Towards Building a Scalable Data Analytics System on Clouds: An Early Experience on AliCloud. 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). 2018:891-895.

Chicago/Turabian Style

Congfeng Jiang; Wei Huang; Zujie Ren; Youhuizi Li; Jian Wan; Feng Cao; Jiangbin Lin. 2018. "Towards Building a Scalable Data Analytics System on Clouds: An Early Experience on AliCloud." 2018 IEEE 11th International Conference on Cloud Computing (CLOUD): 891-895.

Journal article
Published: 30 November 2016 in International Journal of Grid and Distributed Computing
ACS Style

Congfeng Jiang; Jingling Mao; Dongyang Ou; Yumei Wang; Xindong You; Jilin Zhang; Jian Wan. Power and QoS Aware Multi-level Resource Coordination and Scheduling in Virtualized Servers. International Journal of Grid and Distributed Computing 2016, 9, 323-336.

AMA Style

Congfeng Jiang, Jingling Mao, Dongyang Ou, Yumei Wang, Xindong You, Jilin Zhang, Jian Wan. Power and QoS Aware Multi-level Resource Coordination and Scheduling in Virtualized Servers. International Journal of Grid and Distributed Computing. 2016;9(11):323-336.

Chicago/Turabian Style

Congfeng Jiang; Jingling Mao; Dongyang Ou; Yumei Wang; Xindong You; Jilin Zhang; Jian Wan. 2016. "Power and QoS Aware Multi-level Resource Coordination and Scheduling in Virtualized Servers." International Journal of Grid and Distributed Computing 9, no. 11: 323-336.

Conference paper
Published: 01 January 2016 in 2016 Seventh International Green and Sustainable Computing Conference (IGSC)

Heterogeneous multi-core platforms, e.g., ARM's big.LITTLE, are a promising way to improve the performance and energy efficiency of future mobile systems. However, the immediate benefits of this heterogeneity, and the challenges in exploiting it, are still not clear. In this paper, we present our early experiences with the energy efficiency of two big.LITTLE heterogeneous platforms, ODROID XU+E and ODROID XU3. We quantitatively compared them with homogeneous platforms using multiple benchmarks, including popular mobile applications and high-performance parallel benchmarks. In addition, we analyzed the impact of scheduling on the energy consumption of the heterogeneous platforms and discussed the migration cost. Based on the results, we derive several insights related to hardware, application, and system design, such as fine-granularity power control and thread-level parallelism.

ACS Style

Youhuizi Li; Weisong Shi; Congfeng Jiang; Jilin Zhang; Jian Wan. Energy efficiency analysis of heterogeneous platforms: Early experiences. 2016 Seventh International Green and Sustainable Computing Conference (IGSC) 2016, 1-6.

AMA Style

Youhuizi Li, Weisong Shi, Congfeng Jiang, Jilin Zhang, Jian Wan. Energy efficiency analysis of heterogeneous platforms: Early experiences. 2016 Seventh International Green and Sustainable Computing Conference (IGSC). 2016:1-6.

Chicago/Turabian Style

Youhuizi Li; Weisong Shi; Congfeng Jiang; Jilin Zhang; Jian Wan. 2016. "Energy efficiency analysis of heterogeneous platforms: Early experiences." 2016 Seventh International Green and Sustainable Computing Conference (IGSC): 1-6.

Conference paper
Published: 01 January 2016 in 2016 Seventh International Green and Sustainable Computing Conference (IGSC)

Current cloud data centers are fully virtualized for service consolidation and power/energy reduction. Although virtualization can reduce real-time power and overall energy consumption, the energy characteristics of hypervisors hosting different workloads are not well profiled or understood. In this paper, we investigate the power and energy characteristics of mainstream hypervisors and a container engine, i.e., VMware ESXi, Microsoft Hyper-V, KVM, XenServer, and Docker, on five different platforms (two mainstream 2U rack servers, one emerging ARM64 server, one desktop server, and one laptop) with hundreds of hours of power measurements. We use both compute-intensive and mixed web-server/database workloads to explore the power and energy characteristics of the different hypervisors. Extensive experimental results at four workload levels (very light, light, fair, and very heavy) demonstrate that hypervisors exhibit different power and energy characteristics. We find that: (1) Hypervisors exhibit different power and energy consumption on the same hardware running the same workloads. (2) Although mainstream hypervisors have different energy efficiencies for different workload types and workload levels, no single hypervisor outperforms all others on all platforms in terms of power or energy consumption. (3) Although container virtualization is considered lightweight in terms of implementation and maintenance, it is not necessarily more power efficient than traditional virtualization technology. (4) ARM64 servers do have lower power consumption, but they finish computing jobs with longer execution times and sometimes consume more energy; they have medium energy consumption per database operation for mixed workloads. The results presented in this paper provide useful insights to system designers as well as data center operators for power-aware workload placement and virtual machine scheduling.

ACS Style

Congfeng Jiang; Dongyang Ou; Yumei Wang; Xindong You; Jilin Zhang; Jian Wan; Bing Luo; Weisong Shi. Energy efficiency comparison of hypervisors. 2016 Seventh International Green and Sustainable Computing Conference (IGSC) 2016, 1-8.

AMA Style

Congfeng Jiang, Dongyang Ou, Yumei Wang, Xindong You, Jilin Zhang, Jian Wan, Bing Luo, Weisong Shi. Energy efficiency comparison of hypervisors. 2016 Seventh International Green and Sustainable Computing Conference (IGSC). 2016:1-8.

Chicago/Turabian Style

Congfeng Jiang; Dongyang Ou; Yumei Wang; Xindong You; Jilin Zhang; Jian Wan; Bing Luo; Weisong Shi. 2016. "Energy efficiency comparison of hypervisors." 2016 Seventh International Green and Sustainable Computing Conference (IGSC): 1-8.

Journal article
Published: 31 October 2015 in International Journal of Grid and Distributed Computing
ACS Style

Li Zhou; Chi Dong; Xindong You; Jie Huang; Congfeng Jiang. High Availability Green Gear-shifting Mechanism in Cloud Storage System. International Journal of Grid and Distributed Computing 2015, 8, 303-314.

AMA Style

Li Zhou, Chi Dong, Xindong You, Jie Huang, Congfeng Jiang. High Availability Green Gear-shifting Mechanism in Cloud Storage System. International Journal of Grid and Distributed Computing. 2015;8(5):303-314.

Chicago/Turabian Style

Li Zhou; Chi Dong; Xindong You; Jie Huang; Congfeng Jiang. 2015. "High Availability Green Gear-shifting Mechanism in Cloud Storage System." International Journal of Grid and Distributed Computing 8, no. 5: 303-314.