This article argues that low latency, high bandwidth, device proliferation, sustainable digital infrastructure, and data privacy and sovereignty continue to motivate the need for edge computing research even though its initial concepts were formulated more than a decade ago.
Blesson Varghese; Eyal De Lara; Aaron Yi Ding; Cheol-Ho Hong; Flavio Bonomi; Schahram Dustdar; Paul Harvey; Peter Hewkin; Weisong Shi; Mark Thiele; Peter Willis. Revisiting the Arguments for Edge Computing Research. IEEE Internet Computing 2021, PP(99), 1-1.
In cloud systems, computing resources such as the CPU, memory, network, and storage devices are virtualized and shared by multiple users. In recent decades, methods to virtualize these resources efficiently have been studied intensively. Nevertheless, current virtualization techniques cannot achieve effective I/O virtualization when packets are transferred between a virtual machine and the host system. For example, VirtIO, a network device driver for KVM-based virtualization, adopts an interrupt-based packet-delivery mechanism and incurs frequent switching overhead between the virtual machine and the host system. VirtIO therefore wastes valuable CPU resources and decreases network performance. To address this limitation, this paper proposes an adaptive polling-based network I/O processing technique, called NetAP, for virtualized environments. NetAP processes network requests via a periodic polling-based mechanism. For this purpose, NetAP adopts the golden-section search algorithm to determine a near-optimal polling interval for workloads with different characteristics. We implemented NetAP in the Linux kernel and evaluated it with up to six virtual machines. The evaluation results show that NetAP can improve the network performance of virtual machines by up to 31.16%, while using only 32.92% of the host CPU time that VirtIO uses for packet processing.
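The golden-section search used here can be illustrated with a minimal sketch. The cost function below is a hypothetical stand-in for a measured metric such as CPU time spent per delivered packet; NetAP itself measures the running workload rather than evaluating a closed-form function:

```python
import math

# Golden-section search over a unimodal cost function: each iteration
# shrinks the bracketing interval by a factor of 1/phi ~ 0.618.
INV_PHI = (math.sqrt(5) - 1) / 2

def golden_section_search(cost, lo, hi, tol=1e-3):
    """Return a value of the polling interval that minimizes `cost` on [lo, hi]."""
    a, b = lo, hi
    while abs(b - a) > tol:
        c = b - (b - a) * INV_PHI  # inner probe nearer to a
        d = a + (b - a) * INV_PHI  # inner probe nearer to b
        if cost(c) < cost(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

# Hypothetical trade-off: very short intervals burn CPU on empty polls,
# very long ones delay packets. Minimum of 1/t + 0.5*t is at t = sqrt(2).
best = golden_section_search(lambda t: 1.0 / t + 0.5 * t, 0.1, 10.0)
print(best)  # near 1.414
```

Because each evaluation of the cost function corresponds to one measurement interval in the real system, the roughly logarithmic number of probes keeps the calibration overhead low.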
Hyunchan Park; Juyong Seong; Munkyu Lee; Kyungwoon Lee; Cheol-Ho Hong. NetAP: Adaptive Polling Technique for Network Packet Processing in Virtualized Environments. Applied Sciences 2020, 10(15), 5219.
In cloud computing, a shared storage server, which provides a network-attached storage device, is usually used for centralized data management. However, when multiple virtual machines (VMs) concurrently access the storage server through the network, the performance of each VM may decrease due to limited bandwidth. To address this issue, a flash-based storage device such as a solid-state drive (SSD) is often employed as a cache in the host server. This host-side flash cache locally stores remote data that are frequently accessed by the VM. However, frequent VM migration in the data center can weaken the effectiveness of a host-side flash cache, as the migrated VM needs to warm up its flash cache again on the destination machine. This study proposes Cachemior, Firepan, and FirepanIF for rapid flash-cache migration in cloud computing. Cachemior warms up the flash cache by preloading data from the shared storage server after VM migration, but it does not achieve a satisfactory level of performance. Firepan and FirepanIF instead use the source node's flash cache as the data source for flash-cache warm-up; they can migrate the flash cache more quickly than conventional methods because they avoid storage and network congestion on the shared storage server. Firepan incurs VM downtime during flash-cache migration to preserve data consistency. FirepanIF minimizes this downtime with an invalidation filter, which traces the I/O activity of the migrated VM during flash-cache migration in order to invalidate inconsistent cache blocks. We implemented and evaluated the three flash-cache migration techniques in a realistic virtualized environment. FirepanIF improves the performance of the I/O workload by up to 21.87% compared to conventional methods.
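The invalidation-filter idea can be sketched as follows: while the cache contents are in flight from the source node, writes issued by the already-running VM are recorded, and the corresponding blocks are dropped once the copy arrives. Class and method names are illustrative, not FirepanIF's actual interface; the sketch assumes the warmed-up cache arrives as a block-id-to-data mapping:

```python
# Minimal sketch of an invalidation filter for flash-cache migration.
class InvalidationFilter:
    def __init__(self):
        self.dirty_blocks = set()
        self.migrating = False

    def start_migration(self):
        self.migrating = True
        self.dirty_blocks.clear()

    def on_vm_write(self, block_id):
        # Trace the I/O activity of the migrated VM while the cache copy
        # is still in progress.
        if self.migrating:
            self.dirty_blocks.add(block_id)

    def finish_migration(self, cache):
        # Invalidate any block that became stale while it was in flight,
        # so the VM never reads inconsistent cached data.
        for block_id in self.dirty_blocks:
            cache.pop(block_id, None)
        self.migrating = False
        return cache

f = InvalidationFilter()
f.start_migration()
f.on_vm_write(7)                      # VM overwrites block 7 mid-migration
warmed = {3: b"data", 7: b"stale"}    # cache image arriving from the source
print(sorted(f.finish_migration(warmed)))  # [3]
```

Tracking only block identifiers keeps the filter cheap relative to pausing the VM for the whole transfer, which is the downtime Firepan pays for consistency.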
Hyunchan Park; Munkyu Lee; Cheol-Ho Hong. FirepanIF: High Performance Host-Side Flash Cache Warm-Up Method in Cloud Computing. Applied Sciences 2020, 10(3), 1014.
Network scheduling is important for satisfying the bandwidth requirements of virtual networks, which consist of virtual machines in the end-hosts and the virtual routers connecting them. However, existing studies have focused on developing bandwidth allocation techniques for end-host virtual machines and do not consider the network performance of virtual routers. In this article, we propose CreditBank, a new network scheduling framework for virtual routers. CreditBank dynamically allocates network resources to virtual routers according to their bandwidth requirements, and it adapts to changing network environments without adding significant overhead. CreditBank offers three scheduling policies: minimum bandwidth reservation, weight-based proportional sharing, and hybrid scheduling. In addition, CreditBank supports an efficient work-conserving method to maximize network utilization. We implemented CreditBank on the Xen and Kernel-based Virtual Machine (KVM) hypervisors and evaluated its performance. The evaluation results indicate that CreditBank satisfies the bandwidth requirements of the virtual routers while utilizing up to 99% of network resources.
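As a rough model of how a minimum bandwidth reservation can be combined with weight-based proportional sharing in a work-conserving way, consider the following sketch. The allocation rule, names, and numbers are illustrative assumptions, not CreditBank's credit mechanism:

```python
# Toy bandwidth allocator: each virtual router first receives its reserved
# minimum, then spare link capacity is split by weight so the link stays
# fully utilized (the work-conserving step).
def allocate(link_capacity, routers):
    """routers: {name: {"min": reserved_bw, "weight": w}} -> {name: bw}"""
    alloc = {name: r["min"] for name, r in routers.items()}
    leftover = link_capacity - sum(alloc.values())
    assert leftover >= 0, "reservations exceed link capacity"
    total_weight = sum(r["weight"] for r in routers.values())
    for name, r in routers.items():
        alloc[name] += leftover * r["weight"] / total_weight
    return alloc

# Hypothetical example: 1000 Mbps link, two virtual routers.
routers = {"vr1": {"min": 200, "weight": 1}, "vr2": {"min": 100, "weight": 3}}
print(allocate(1000, routers))  # {'vr1': 375.0, 'vr2': 625.0}
```

A hybrid policy in this spirit guarantees each router its floor while still rewarding higher weights with a larger share of whatever capacity remains.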
Kyungwoon Lee; Cheol-Ho Hong; Jaehyun Hwang; Chuck Yoo. Dynamic Network Scheduling for Virtual Routers. IEEE Systems Journal 2019, 14(3), 3618-3629.
The cloud has become integral to most Internet-based applications and user gadgets. This article provides a brief history of the cloud and presents a researcher's view of the prospects for innovating at the infrastructure, middleware, and applications and delivery levels of the already crowded cloud computing stack.
Blesson Varghese; Philipp Leitner; Suprio Ray; Kyle Chard; Adam Barker; Yehia Elkhatib; Herry Herry; Cheol-Ho Hong; Jeremy Singer; Fung Po Tso; Eiko Yoneki; Mohamed-Faten Zhani. Cloud Futurology. Computer 2019, 52(9), 68-77.
It is widely believed that software routers based on commodity operating systems cannot deliver high-speed packet processing, and a number of alternative approaches (including user-space network stacks) have been proposed. This paper revisits the inefficiency of kernel-level packet processing inside modern OS-based software routers and explores whether a redesign of the kernel network stack can remedy this inefficiency. Through such a redesign, we present a case contrary to this belief: Kafe, a kernel-based advanced forwarding engine that can process packets as fast as user-space network stacks. Kafe neither adds a new API nor depends on proprietary hardware features, yet it outperforms Linux by seven times and RouteBricks by three times. The current implementation of Kafe can forward 64-byte IPv4 packets at 28.2 Gbps using eight cores running at 2.6 GHz. Our evaluation results show that Kafe achieves packet-forwarding performance similar to Intel DPDK while consuming far fewer CPU and memory resources.
Cheol-Ho Hong; Kyungwoon Lee; Jaehyun Hwang; Hyunchan Park; Chuck Yoo. Kafe: Can OS Kernels Forward Packets Fast Enough for Software Routers? IEEE/ACM Transactions on Networking 2018, 26(6), 2734-2747.
Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which act as small data centers. In fog computing, lightweight virtualization (e.g., containers) has been widely used to achieve low overhead on performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers are weak at controlling network bandwidth for outbound traffic, which poses a challenge to fog computing. Existing solutions for containers fail to achieve desirable network bandwidth control, causing bandwidth-sensitive applications to suffer unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework for containers that limits the rate of outbound traffic in fog computing. qCon aims to provide both proportional-share scheduling and bandwidth shaping to satisfy various performance demands from containers while remaining a lightweight framework. For this purpose, qCon supports the following three scheduling policies, which can be applied to containers simultaneously: proportional-share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing qCon's scheduling interface on the bridge's frame-processing function. To show qCon's effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device, a Raspberry Pi 3 Model B board.
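A maximum-bandwidth-limitation policy of this kind is, in spirit, a rate limiter applied where the bridge processes each frame. A token bucket is one conventional way to model such a limit; this sketch is illustrative only and does not reflect qCon's kernel implementation:

```python
# Token-bucket model of a per-container outbound bandwidth cap: tokens
# accumulate at the configured rate, and a frame is forwarded only if
# enough tokens are available to "pay" for its length in bytes.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, frame_len, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True
        return False  # frame exceeds the container's bandwidth limit

tb = TokenBucket(rate_bps=8_000, burst_bytes=1500)  # 1 KB/s, one MTU of burst
print(tb.allow(1500, now=0.0))  # True: consumes the burst allowance
print(tb.allow(1500, now=0.1))  # False: only ~100 bytes refilled so far
```

Interposing such a check on the frame-processing path keeps enforcement per-container and avoids a heavier queuing-discipline stack, which matters on a device like a Raspberry Pi.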
Cheol-Ho Hong; Kyungwoon Lee; Minkoo Kang; Chuck Yoo. qCon: QoS-Aware Network Resource Management for Fog Computing. Sensors 2018, 18(10), 3444.
Fog computing, which places computing resources close to IoT devices, can offer low-latency data processing for IoT applications. With software-defined networking (SDN), fog computing can make network control logic programmable and run it on a decoupled control plane rather than on a physical switch; network switches are then controlled via the control plane. However, existing control planes have limitations in providing isolation and high performance, which are crucial for supporting multi-tenancy and scalability in fog computing. In this paper, we present optimization techniques for Linux that provide isolation and high performance for the SDN control plane: (1) separate execution environment (SE2), which separates the execution environments of multiple control planes, and (2) separate packet processing (SP2), which reduces the complexity of the existing Linux network stack. We evaluate the proposed techniques on commodity hardware and show that the maximum performance of a control plane increases by four times compared to native Linux while providing strong isolation.
Kyungwoon Lee; Chiyoung Lee; Cheol-Ho Hong; Chuck Yoo. Enhancing the Isolation and Performance of Control Planes for Fog Computing. Sensors 2018, 18(10), 3267.
Solid-state drives (SSDs) have become popular as main storage devices. However, the reliability of an SSD degrades over time due to bit errors, which poses a serious issue. Periodic remapping (PR) has been suggested to overcome this issue, but it has a critical weakness: PR increases lifetime loss. Therefore, we propose the conditional remapping invocation method (CRIM) to sustain reliability without lifetime loss. CRIM uses a probability-based threshold to determine when to invoke the remapping operation. We evaluate the effectiveness of CRIM using real workload trace data. In our experiments, we show that CRIM can extend the lifetime of an SSD beyond PR by 12.6% to 17.9% of the 5-year warranty time. In addition, we show that CRIM can reduce the bit-error probability of an SSD by up to 73 times, in terms of the typical bit error rate, in comparison with PR.
Youngpil Kim; Hyunchan Park; Cheol-Ho Hong; Chuck Yoo. CRIM: Conditional Remapping to Improve the Reliability of Solid-State Drives with Minimizing Lifetime Loss. Scientific Programming 2018, 2018, 1-10.
Increasingly, high-performance computing (HPC) application developers are opting to use cloud resources due to their higher availability. Virtualized GPUs would be an obvious and attractive option for HPC application developers using cloud hosting services. Unfortunately, existing GPU virtualization software is not ready to address the fairness, utilization, and performance limitations associated with consolidating mixed HPC workloads. This paper presents FairGV, a radically redesigned GPU virtualization system that achieves system-wide weighted fair sharing and strong performance isolation for mixed workloads that use GPUs with variable degrees of intensity. To achieve its objectives, FairGV introduces a trap-less GPU processing architecture, a new fair queuing method integrated with work-conserving and GPU-centric coscheduling policies, and a collaborative scheduling method for non-preemptive GPUs. Our prototype implementation achieves near-ideal fairness (≥ 0.97 Min-Max Ratio) with little performance degradation (≤ 1.02 aggregated overhead) across a range of mixed HPC workloads that leverage GPUs.
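The weighted-fair-sharing goal can be illustrated with a generic weighted-fair-queuing sketch, in which each VM's GPU requests are ordered by virtual finish time so that a VM with weight 2 receives roughly twice the service of a VM with weight 1. This is a textbook WFQ model under assumed names, not FairGV's trap-less architecture or its actual fair queuing method:

```python
import heapq

# Weighted fair queuing by virtual finish time: a request of cost c from a
# VM with weight w advances that VM's virtual clock by c/w, and the request
# with the smallest finish tag is dispatched next.
class FairQueue:
    def __init__(self):
        self.heap = []
        self.vtime = {}  # per-VM virtual time

    def submit(self, vm, weight, cost, seq=0):
        start = self.vtime.get(vm, 0.0)
        finish = start + cost / weight
        self.vtime[vm] = finish
        heapq.heappush(self.heap, (finish, seq, vm))  # seq breaks ties

    def dispatch(self):
        _, _, vm = heapq.heappop(self.heap)
        return vm

q = FairQueue()
for i in range(4):
    q.submit("vm_heavy", weight=2, cost=10, seq=2 * i)
    q.submit("vm_light", weight=1, cost=10, seq=2 * i + 1)
order = [q.dispatch() for _ in range(8)]
print(order)  # vm_heavy is dispatched about twice as often early on
```

In the example, the first six dispatches go to vm_heavy four times and vm_light twice, matching the 2:1 weight ratio; a real GPU scheduler must additionally cope with non-preemptive kernels, which is what FairGV's collaborative scheduling addresses.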
Cheol-Ho Hong; Ivor Spence; Dimitrios S. Nikolopoulos. FairGV: Fair and Fast GPU Virtualization. IEEE Transactions on Parallel and Distributed Systems 2017, 28(12), 3472-3485.
Jaehyun Hwang; Cheol-Ho Hong; Hyo-Joong Suh. Dynamic Inbound Rate Adjustment Scheme for Virtualized Cloud Data Centers. IEICE Transactions on Information and Systems 2016, E99.D(3), 760-762.