ARMS-CC-2016 Accepted Papers with Abstracts

  • Ta-Yuan Hsu and Ajay Kshemkalyani. Performance of Approximate Causal Consistency for Partially-Replicated Systems

Abstract: Causal consistency is one of the most widely used consistency models in geo-replicated systems due to its highly scalable semantics. Partial replication is a replication mechanism that emphasizes better utilization of network capacity. However, it incurs higher meta-data overhead and processing complexity in communication. The Approx-Opt-Track algorithm has been proposed to reduce meta-data size, at the risk that causal consistency might occasionally be violated. In an effort to bridge this gap and reconcile the trade-off between them, we present analytic data showing the performance of Approx-Opt-Track. We also give simulation results showing the potential benefits of Approx-Opt-Track, under the same guarantees as causal consistency, at a smaller cost.
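
The abstract does not spell out how Approx-Opt-Track trims its meta-data, so the following Python sketch only illustrates the general trade-off it names: capping the causal dependency information attached to each update shrinks messages, at the risk of missing a dependency. The class, field names, and the pruning rule are all illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of bounded causal meta-data; the real
# Approx-Opt-Track algorithm differs.

class Replica:
    def __init__(self, replica_id, max_deps):
        self.id = replica_id
        self.clock = 0
        self.deps = {}            # replica_id -> last applied timestamp
        self.max_deps = max_deps  # cap on tracked dependencies

    def local_write(self):
        self.clock += 1
        # Meta-data attached to the update: a bounded dependency set.
        return {"origin": self.id, "ts": self.clock, "deps": dict(self.deps)}

    def on_receive(self, update):
        # Apply only if all *tracked* dependencies are satisfied;
        # dependencies lost to pruning are the approximation risk.
        for rid, ts in update["deps"].items():
            if self.deps.get(rid, 0) < ts:
                return False  # a real system would buffer and retry
        self.deps[update["origin"]] = update["ts"]
        self._prune()
        return True

    def _prune(self):
        # Keep only the max_deps most recent entries: the lossy step
        # that shrinks meta-data but may drop a causal dependency.
        excess = len(self.deps) - self.max_deps
        if excess > 0:
            for rid in sorted(self.deps, key=self.deps.get)[:excess]:
                del self.deps[rid]
```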


  • Ansuman Banerjee, Himadri Sekhar Paul and Arijit Mukherjee. I^2oT: Inexactness in IoT

Abstract: Recent research on inexact computing shows promising results for improved energy utilization in resource-hungry applications across different layers of the execution stack. The general philosophy of inexact computing is to trade off correctness, within acceptable limits, for improved energy utilization. In this paper, we explore this philosophy in the context of a heterogeneous Internet-of-Things (IoT) architecture for application execution. We consider an application workflow comprising a set of methods with their possible inexact lightweight variants, a deadline for completion, and a multi-tiered IoT compute architecture (e.g. mobile device, gateway, cloud, server, etc.). Our methodology produces a time-optimized execution solution that assigns each method, in an appropriate variant (the exact one or any of its inexact realizations), to an appropriate computing layer such that the deadline is met with the best possible quality. We present experimental results demonstrating the efficacy of our proposal on two real-life case studies.
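
To make the assignment problem concrete, here is a minimal Python sketch of choosing one variant and one tier per method so that a deadline is met with the highest total quality. The method names, timings, quality scores, and the assumption that methods run sequentially are illustrative; the paper's actual methodology is not given in the abstract.

```python
# Exhaustive search over (variant, tier) assignments; illustrative only.
from itertools import product

# method -> list of (variant_name, quality, {tier: exec_time_seconds})
methods = {
    "detect":   [("exact",   1.0, {"device": 8.0, "cloud": 2.0}),
                 ("inexact", 0.7, {"device": 3.0, "cloud": 1.0})],
    "classify": [("exact",   1.0, {"device": 6.0, "cloud": 1.5}),
                 ("inexact", 0.8, {"device": 2.0, "cloud": 0.8})],
}
DEADLINE = 6.0  # seconds; methods assumed to run sequentially

best = None
names = list(methods)
for choice in product(*(methods[m] for m in names)):
    for tiers in product(*(list(v[2]) for v in choice)):
        total = sum(v[2][t] for v, t in zip(choice, tiers))
        quality = sum(v[1] for v in choice)
        if total <= DEADLINE and (best is None or quality > best[0]):
            best = (quality, total,
                    [(m, v[0], t) for m, v, t in zip(names, choice, tiers)])

# Here both methods keep their exact variants by running on the cloud tier.
print(best)
```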


  • Arani Bhattacharya and Pradipta De. Computation Offloading from Mobile Devices: Can Edge Devices Perform Better Than the Cloud?

Abstract: Mobile devices like smartphones can augment their low-power processors by offloading portions of mobile applications to cloud servers. However, offloading to cloud data centers incurs high network latency. To mitigate this problem, offloading to computing resources lying within the user’s premises, such as network routers, tablets or laptops, has recently been proposed. In this paper, we determine which devices have processors powerful enough to act as servers for computation offloading. We perform trace-driven simulation of SPECjvm2008 benchmarks to study the performance using different hardware. Our simulation shows that offloading to current state-of-the-art processors of user devices can improve the performance of mobile applications. We find that offloading to the user’s own laptop reduces the finish time of benchmark applications by 10% compared to offloading to a commercial cloud server.
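
The core comparison behind the paper's question can be written as a back-of-the-envelope test: offloading wins when transfer time plus remote compute time beats local compute time. The speeds, payload sizes, and latencies below are illustrative assumptions, not the paper's measurements.

```python
# Simple offloading decision: is remote (transfer + compute) faster
# than computing locally? All numbers are illustrative.

def offload_wins(cycles, local_hz, remote_hz, data_bytes,
                 bandwidth_bps, rtt_s):
    """True if remote execution beats local execution."""
    t_local = cycles / local_hz
    t_remote = rtt_s + data_bytes * 8 / bandwidth_bps + cycles / remote_hz
    return t_remote < t_local

# Nearby laptop: modest speedup, but tiny latency and a fast local link.
print(offload_wins(5e9, 1e9, 4e9, 1e6, 100e6, 0.002))
# Distant cloud: bigger speedup, but ~50 ms RTT and a slower uplink.
print(offload_wins(5e9, 1e9, 8e9, 1e6, 10e6, 0.050))
```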


  • Cristian Chilipirea, Alexandru Constantin, Dan Popa, Octavian Crintea and Ciprian Dobre. Cloud Elasticity: going beyond demand as user load

Abstract: Cloud computing systems have become not only popular but extensively used. They are supported and exploited by both industry and academia. Cloud providers have diversified, and so has the software offered by their systems. Infrastructure as a Service (IaaS) clouds are now available for everything from single-virtual-machine use cases, such as a personal server, to specialized high-performance or machine learning engines. This popularity has been driven by the low cost and low risk of renting computing resources instead of buying them in a large, one-time investment. Furthermore, clouds offer their clients elasticity, the most relevant feature of cloud computing. It refers to the clients’ ability to easily change the number of rented resources in a live environment, which allows the entire system to handle differences in load. Most cloud clients serve web applications or services to third parties. In these cases, load differences can be correlated with the number of users of the service, and elasticity is used to handle differences in what is called “user load”. Of course, load is not determined solely by the number of users; other factors, such as resource-expensive database queries, also affect it. Most of the scientific literature approaches elasticity by looking solely at “user load”. In this paper we present a novel way of utilizing elasticity. We propose that the number of resources can be varied to better fit each individual step of a workflow or algorithm whose execution does not depend on a “user load”. We show that this can be achieved for several workflows from different fields and that it can bring significant benefits in execution time and cost.
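
As a concrete illustration of this use of elasticity, the sketch below scales a resource pool per workflow step rather than per user load. The provision() function stands in for a real IaaS API call; the step names and instance counts are assumptions.

```python
# Per-step elasticity: request a different pool size for each stage
# of a workflow instead of reacting to user load.

def provision(n):
    # Stand-in for a cloud SDK call that resizes the instance pool.
    print(f"scaled pool to {n} instances")

workflow = [
    ("ingest",    2),   # I/O-bound: a few instances suffice
    ("transform", 16),  # embarrassingly parallel: scale out
    ("reduce",    1),   # sequential merge: scale back in
]

for step, instances in workflow:
    provision(instances)
    print(f"running step '{step}' on {instances} instances")

provision(0)  # release everything once the workflow finishes
```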


  • Bogdan-Costel Mocanu, Sever-Vlad Mureșean, Mariana Mocanu and Valentin Cristea. Cloud Live Streaming System Based on Auto-Adaptive Overlay for Cyber Physical Infrastructure

Abstract: According to Mark Zuckerberg’s speech at the Samsung S7 launch in February 2016, the age of massive data streaming, especially massive video streaming, is here. At the beginning of the 2000s, most people shared and searched almost exclusively text. Then they became interested in sharing and searching images and, in later years, videos in different formats and resolutions. One of the emerging technologies of this decade is Virtual Reality (VR), through which people can share resources in a more exciting and enjoyable manner. If a decade ago people could only read about how to do things, now they can watch it being done in videos. Expectations are that, in the not so distant future, they will be able to visualize and experience it through the power of VR. But new technologies come with new challenges. Video streaming, especially for VR, generates a great amount of data, and the techniques used until now need to evolve or make way for better ones. Centralized approaches to big data live streaming are no longer appropriate; Peer-to-Peer networks are more suitable due to their decentralized nature and auto-adaptive properties. The aim of this paper is to analyze and evaluate the opportunity of using the SPIDER Peer-to-Peer overlay for two Cloud use cases. The first scenario, entitled CyberWater, aims to create an e-platform for sustainable water resources with a strong focus on pollution phenomena. The second scenario, called ClueFarm, is a Cloud service-based system for quality business development in the farming area.


  • Stelios Sotiriadis. Data management of sensor signals for high bandwidth data streaming to the Cloud

Abstract: The use of wearable sensors and their connectivity to the Internet delivers significant benefits for storing sensing data that can be utilized intelligently for monitoring purposes (e.g. in the healthcare domain). This work proposes a gateway service that takes advantage of modern mobile devices and their capabilities to communicate with wearable Bluetooth Low Energy (BLE) sensors, such that data is forwarded to the cloud on the fly and in real time. The service transforms a mobile platform (such as a smartphone) into a gateway allowing continuous and fast communication of data that is streamed from the device to the cloud on demand, or automatically, for automated decision making. The service (a) makes use of an internal processing mechanism for the BLE sensor signals and defines the way in which data is sent to the cloud, (b) is dynamic, as it can recognize new BLE sensor properties by easily adapting the information according to a generic data schema, and (c) is universal, as BLE devices are registered automatically and monitored on the fly, while the service keeps historical data that can be integrated into meaningful business intelligence. Building upon principles of service-oriented design, the service takes full advantage of cloud services for processing the potentially high-bandwidth big data streams produced by an ever increasing number of users and sensors, while being scalable by offering robust support and NoSQL data storage. The experiments are promising, showing an average data transmission time of 128 milliseconds, which is considered significantly low for real-time data.
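
One plausible reading of the "generic data schema" mentioned in point (b) is a fixed envelope into which every BLE reading is normalized before streaming. The field names below are assumptions for illustration, not the paper's actual schema.

```python
# Hypothetical generic envelope for BLE sensor readings: a new sensor
# type needs no schema change, only a new 'sensor' label.
import json
import time

def to_envelope(device_id, sensor_type, value, unit):
    return {
        "device": device_id,    # BLE device, registered on first sight
        "sensor": sensor_type,  # e.g. "heart_rate", "temperature"
        "value": value,
        "unit": unit,
        "ts_ms": int(time.time() * 1000),  # gateway-side timestamp
    }

# The gateway would serialize and stream this record to the cloud.
print(json.dumps(to_envelope("aa:bb:cc:dd:ee:ff", "heart_rate", 72, "bpm")))
```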


  • Seyed Saeid Masoumzadeh and Helmut Hlavacs. A Gossip-Based Dynamic Virtual Machine Consolidation Strategy for Large-Scale Cloud Data Centers

Abstract: Dynamic virtual machine consolidation strategies refer to a number of resource management algorithms that aim to find the right balance between energy consumption and SLA violations by using live migration techniques in virtualized cloud data centers. Most strategies found in the literature focus on centralized approaches, with a single management node responsible for VM placement. These approaches suffer from poor scalability, as the management node may become a performance bottleneck when the number of physical and virtual machines grows. In this paper we propose a fully decentralized dynamic virtual machine consolidation strategy on top of an unstructured P2P network of physical host nodes, and investigate its performance in terms of energy consumption, average CPU utilization, performance degradation due to overloading, performance degradation due to migration, and the total number of sleeping nodes inside the data center. The experimental results show that the proposed P2P strategy can achieve global efficiency in terms of energy and performance very close to that of a centralized approach, while ensuring scalability as the number of hosts in the data center increases.
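
To convey the decentralized flavor, here is a toy gossip round in Python: each host compares load with one random peer and migrates a VM when that relieves an overload, with no central manager involved. The thresholds and the migration rule are illustrative assumptions rather than the paper's protocol.

```python
# Toy gossip-based consolidation round; illustrative only.
import random

class Host:
    def __init__(self, name, vms):
        self.name, self.vms = name, vms  # vms: list of CPU demands (0..1)

    @property
    def load(self):
        return sum(self.vms)

def gossip_round(hosts, high=0.8, low=0.3):
    for h in hosts:
        peer = random.choice([p for p in hosts if p is not h])
        # Each pair decides locally from its own state: no central manager.
        if h.load > high and peer.load < low and h.vms:
            vm = min(h.vms)  # move the cheapest VM to migrate
            h.vms.remove(vm)
            peer.vms.append(vm)
            print(f"migrate vm({vm}) {h.name} -> {peer.name}")

hosts = [Host("h1", [0.4, 0.3, 0.3]), Host("h2", [0.1]), Host("h3", [0.2, 0.2])]
gossip_round(hosts)
print({h.name: round(h.load, 2) for h in hosts})
```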


  • Gildo Torres and Chen Liu. The Impact on the Performance of Co-running Virtual Machines in a Virtualized Environment

Abstract: The success of cloud computing technologies heavily depends on the underlying hardware as well as on system software support for virtualization. As hardware resources become more abundant with each technology generation, the complexity of managing the resources of computing systems has increased dramatically. Past research has demonstrated that contention for shared resources in modern multi-core multi-threaded microprocessors (MMMP) can lead to poor and unpredictable performance. In this paper we conduct a performance degradation study targeting a virtualized environment. Firstly, we present our findings on the possible performance impact when virtual machines (VMs) are managed by the default Linux scheduler as regular host processes. Secondly, we study how the performance of virtual machines, and of the applications running within them, can be affected by different co-scheduling schemes at the host level. Finally, we conduct a correlation study in which we try to identify which hardware event(s) can be used at runtime to identify performance degradation of the applications running inside the virtual machines.
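
The correlation study described last can be pictured as follows: collect per-run hardware event counts alongside the observed slowdowns, then rank events by how well they track degradation. The sketch uses statistics.correlation (Python 3.10+); all numbers are fabricated for illustration.

```python
# Rank hardware events by Pearson correlation with observed slowdown.
from statistics import correlation  # Python 3.10+

slowdown = [1.0, 1.2, 1.5, 1.9, 2.4]  # runtime relative to solo baseline
events = {
    "LLC_misses":    [1e6, 3e6, 6e6, 9e6, 14e6],
    "branch_misses": [3e5, 2e5, 3e5, 2e5, 3e5],
}

for name, counts in events.items():
    print(name, round(correlation(counts, slowdown), 2))
# In this fabricated data, LLC misses track the slowdown closely while
# branch misses do not, suggesting a runtime signal for VM contention.
```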


  • Dominik Meiländer and Sergei Gorlatch. Modelling the Scalability of Real-Time Online Interactive Applications on Clouds

Abstract: We address the scalability of Real-Time Online Interactive Applications (ROIA) on Clouds. Popular examples of ROIA include multi-player online computer games, simulation-based e-learning, and training in real-time virtual environments. Cloud computing makes it possible to combine ROIA's high demands on QoE (Quality of Experience) with the requirement of efficient and economic utilization of computation and network resources. We propose a generic scalability model for ROIA on Clouds that monitors application performance at runtime and informs load-balancing decisions: by weighing the potential benefits of particular load-balancing actions against their time and resource overhead, our model recommends whether and how often to redistribute workload or add/remove Cloud resources when the number of users changes. We describe how scalability is modelled w.r.t. two kinds of resources, computation (CPU) and communication (network), and how we combine these models together. We experimentally evaluate the quality of our combined model using a challenging multi-player shooter game as a use case.
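
The benefit-versus-overhead weighing that the model performs can be reduced to a one-line decision rule, sketched below with stand-in linear terms that are not the paper's formulas.

```python
# Act only when the predicted benefit of a load-balancing action
# outweighs its one-time cost; terms are illustrative stand-ins.

def should_rebalance(predicted_gain_s, migration_overhead_s,
                     expected_horizon_s):
    """Gain accrues each second until the load changes again;
    the overhead is paid once."""
    return predicted_gain_s * expected_horizon_s > migration_overhead_s

# E.g. saving 0.02 s per second for ~60 s vs. a 0.5 s redistribution pause:
print(should_rebalance(0.02, 0.5, 60))  # True: redistribute workload
```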