ARMS-CC-2017 Schedule – Clouds Resource Management and Execution Environments

Session Chair: Stelios Sotiriadis

July 28, 2017, Marvin Center, Rm 307, 08:45-11:30

08:45-09:00 – Organizers' Welcome

09:00-09:30 – Pelagia Tsiachri Renta, Stelios Sotiriadis and Euripides Petrakis. Healthcare Sensor Data Management on the Cloud
Abstract: The quality of medical services can be significantly improved by supporting health care procedures with new technologies such as Cloud computing and the Internet of Things (IoT). The need to monitor a patient's health remotely and in real time is increasingly a vital requirement, especially for chronic patients and the elderly. A possible solution is the use of wearable sensors connected to the Internet and capable of transmitting the patient's health status to the Cloud, so that health care personnel can access it and make decisions quickly. Here, we focus on the management of health care related information on the Cloud. We present a Sensor Management System that collects vital user data (e.g. cardiac pulse rate and blood oxygen saturation). We develop a Cloud solution that enables fast and efficient sensor data collection and processing on the Cloud, while users such as medical personnel can subscribe to a patient's real-time data. The Cloud application includes a new awareness service for both health care providers and patients, minimizing the risk of data loss or late response to emergency conditions. This service is based on predefined rules encoding patient-specific reaction plans, for example for cases where medical measurements exceed their normal predefined limits. We present an experimental study in which we evaluate our system with real-world sensors, and generate a synthetic dataset to simulate thousands of users. The results are promising, as the system responds in close to real time even under heavy loads approaching the limits of the web server that receives the service requests. In particular, the heaviest workload, which simulates 2,000 user requests (80 of which execute concurrently), completes in less than 13 seconds.
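The awareness service described above triggers patient-specific reaction plans when a measurement leaves its predefined limits. A minimal sketch of that kind of rule check follows; the function and parameter names are illustrative assumptions, not the authors' actual API.

```python
def check_vitals(measurements, limits):
    """Return the vitals that violate their (low, high) limits.

    measurements: dict mapping vital name -> latest reading
    limits: dict mapping vital name -> (low, high) patient-specific bounds
    """
    alerts = []
    for vital, value in measurements.items():
        low, high = limits[vital]
        if not (low <= value <= high):
            alerts.append((vital, value))  # out of range: raise an alert
    return alerts

# Illustrative limits: pulse rate in bpm, blood oxygen saturation in percent
limits = {"pulse": (50, 110), "spo2": (94, 100)}
print(check_vitals({"pulse": 125, "spo2": 97}, limits))  # → [('pulse', 125)]
```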

09:30-10:00 – Swagata Biswas, Swarnava Dey, Arijit Mukherjee and Rimita Lahiri. A Distributed and Fault Tolerant Robotic Localisation and Mapping in Network Edge
Abstract: Of late, Cloud Robotics technologies are being used to augment low-end robots with enhanced sensor data processing, storage and communication capabilities. In an era when commodity hardware is replacing costly, specialized hardware in most scenarios, software reliability within Cloud Robotics middleware will allow its distributed execution on lightweight, low-cost robots and network edge devices. However, the successful functioning of multi-robot systems in critical missions requires resilience in the middleware, such that overall functioning degrades gracefully in the face of hardware failures and loss of connectivity to the Cloud server. In the current work, reliable, distributed execution capability is added to a well-known robotic localization and mapping task such that minimal data transfer is required between participating nodes and the application degrades gracefully when participating robots fail. To ensure fault tolerance, a reliable execution model based on the failure probabilities of individual robots and their components is proposed. A lightweight time-series analysis scheme is presented that enables the robots to estimate their individual failure probabilities and use them to enhance system reliability in a distributed manner. Both the distribution and predictive recovery schemes are evaluated using standard datasets on virtual machines running robotic middleware.
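The execution model above builds on individual failure probabilities. As a generic sketch of the underlying arithmetic (not the paper's actual model), if a task is replicated across robots whose failures are assumed independent, the probability that at least one replica survives is one minus the product of the individual failure probabilities:

```python
from math import prod

def survival_probability(failure_probs):
    """Probability that at least one replica of a task survives,
    assuming independent robot failures (a simplifying assumption).

    failure_probs: per-robot failure probabilities for the robots
    hosting replicas of the task.
    """
    return 1.0 - prod(failure_probs)

# Three replicas on robots with failure probabilities 0.1, 0.2, 0.3:
print(round(survival_probability([0.1, 0.2, 0.3]), 3))  # → 0.994
```

Estimating the per-robot probabilities from component health time series, as the paper proposes, then lets each robot decide locally how much replication it needs.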

10:00-10:30 – Coffee Break

10:30-11:00 – Carlos Ruiz, Hector Duran-Limon and Nikos Parlavantzas. A RLS memory-based mechanism for the automatic adaptation of VMs on Cloud environments
Abstract: One key factor in the success of Cloud computing is the resource flexibility it provides. Because of this characteristic, academia and industry have focused their efforts on making efficient use of cloud computational resources without sacrificing performance. One way to achieve this is through the automatic adaptation of VM computational capabilities according to resource utilization and performance. In this paper we present the design and preliminary results of our resource adaptation solution, which proactively adapts VMs (memory-based vertical scaling) to meet an expected performance level. Our solution targets multi-tier applications deployed on Cloud environments; its core resides in RLS-based (recursive least squares) resource and performance predictors. Our results show that, compared with VMs with larger, permanently allocated computational resources, our solution is able to maintain the expected performance while reducing resource wastage.
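The predictors at the core of this solution are RLS-based. A minimal sketch of the standard recursive least squares update with a forgetting factor follows; it illustrates the general technique only, since the abstract does not specify the paper's feature set or predictor design.

```python
import numpy as np

class RLSPredictor:
    """Standard recursive least squares with forgetting factor `lam`.

    A generic sketch of the RLS technique; hyperparameter values here
    are illustrative defaults, not the paper's.
    """
    def __init__(self, n_features, lam=0.98, delta=100.0):
        self.w = np.zeros(n_features)        # model weights
        self.P = np.eye(n_features) * delta  # inverse correlation matrix
        self.lam = lam                       # forgetting factor (0 < lam <= 1)

    def update(self, x, y):
        """Incorporate one observation (x, y) and return the a priori error."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # gain vector
        err = y - self.w @ x                 # prediction error before update
        self.w += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

    def predict(self, x):
        return self.w @ np.asarray(x, dtype=float)
```

Fed a stream of (utilization, performance) samples, such a predictor tracks a slowly drifting linear relationship, with `lam` controlling how quickly old samples are forgotten.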

11:00-11:30 – Suejb Memeti, Lu Li, Sabri Pllana, Joanna Kolodziej and Christoph Kessler. Benchmarking OpenCL, OpenACC, OpenMP, and CUDA: programming productivity, performance, and energy consumption
Abstract: Many modern parallel computing systems are heterogeneous at the node level. Such nodes may comprise general-purpose CPUs and accelerators (such as GPUs or the Intel Xeon Phi) that provide high performance with suitable energy-consumption characteristics. However, exploiting the available performance of heterogeneous architectures can be challenging. There are various parallel programming frameworks (such as OpenMP, OpenCL, OpenACC, and CUDA), and selecting the one suitable for a target context is not straightforward. In this paper, we study empirically the characteristics of OpenMP, OpenACC, OpenCL, and CUDA with respect to programming productivity, performance, and energy. To evaluate programming productivity we use our in-house tool CodeStat, which enables us to determine the percentage of code lines required to parallelize the code using a specific framework. We use our tool x-MeterPU to evaluate energy consumption and performance. Experiments are conducted using the industry-standard SPEC benchmark suite and the Rodinia benchmark suite for accelerated computing on heterogeneous systems that combine Intel Xeon E5 processors with a GPU accelerator or an Intel Xeon Phi co-processor.
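The productivity metric described above is a fraction of code lines attributable to a parallelization framework. A simplified stand-in for that kind of measurement (not the actual CodeStat implementation, whose internals the abstract does not describe) could look like this:

```python
def parallelization_fraction(source, markers=("#pragma omp",)):
    """Fraction of non-blank source lines containing a framework marker.

    source: program text as a string
    markers: substrings identifying framework-specific lines
             (e.g. "#pragma omp" for OpenMP, "#pragma acc" for OpenACC)
    """
    lines = [ln for ln in source.splitlines() if ln.strip()]
    hits = sum(any(m in ln for m in markers) for ln in lines)
    return hits / len(lines)

code = """int main() {
#pragma omp parallel for
    for (int i = 0; i < n; i++) work(i);
}"""
print(parallelization_fraction(code))  # → 0.25 (1 of 4 non-blank lines)
```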