Posts from March, 2019

Understanding and Auto-Adjusting Performance-Sensitive Configurations

Authors: Shu Wang, Chi Li, Henry Hoffmann, Shan Lu, William Sentosa, Achmad Imam Kistijantoro
Venue: ASPLOS 2018

This paper presents a control-theoretic approach to solving performance problems in workloads with many configurable parameters. The authors evaluate against distributed systems such as Cassandra, HBase, HDFS, and Hadoop MapReduce. Their system, SmartConf, employs control theory with two key components beyond the traditional formulation: a dynamic pole (an error-tolerance factor) and a virtual goal. Combined, these allow SmartConf to meet performance goals and hard constraints better than previous approaches. The authors also go into detail about how their approach could be integrated into commercial software. See Yukta (ISCA 2018) for a similar-flavor paper that also uses control theory. The remainder of this post will be subjective. This paper is exceptionally well-written, using many real-world examples to build motivation. Objectively, the paper's novelty is software ...
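To make the control loop concrete, here is a minimal sketch of the kind of feedback controller SmartConf builds on, assuming a single configuration knob being steered toward a latency goal. The class name, gains, and constraint handling are illustrative assumptions, not the paper's implementation; in particular, SmartConf adapts the pole at runtime and layers its virtual-goal mechanism on top of a loop like this.

```python
# Minimal sketch of a feedback controller in the spirit of SmartConf.
# Illustrative only: names, gains, and constraint handling are assumptions.

class SimpleConfigController:
    def __init__(self, perf_goal, pole=0.9, knob_init=64.0):
        self.perf_goal = perf_goal  # e.g., target latency in ms
        self.pole = pole            # closer to 1 => smaller, more cautious adjustments
        self.knob = knob_init       # the performance-sensitive configuration value

    def update(self, measured_perf):
        # Error between the goal and what we observe. This sketch assumes
        # increasing the knob increases the measured metric, so a measurement
        # above the goal drives the knob down.
        error = self.perf_goal - measured_perf
        # First-order control step: (1 - pole) scales how aggressively the knob
        # reacts to error. SmartConf's "dynamic pole" adapts this value at
        # runtime rather than fixing it ahead of time.
        self.knob += (1.0 - self.pole) * error
        # Hard constraints (e.g., the knob must stay positive) are clamped here;
        # SmartConf's "virtual goal" lets the controller respect such
        # constraints without oscillating around them.
        self.knob = max(self.knob, 1.0)
        return self.knob

# Hypothetical usage: feed in the measured latency each control period.
controller = SimpleConfigController(perf_goal=100.0)
for latency in [150.0, 130.0, 115.0, 105.0]:
    new_value = controller.update(latency)
```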

PADDLE: Performance Analysis using a Data-Driven Learning Environment

Authors: Jayaraman Thiagarajan, Rushil Anirudh, Bhavya Kailkhura, Nikhil Jain, Tanzima Islam, Abhinav Bhatele, Jae-Seung Yeom, Todd Gamblin
Venue: IEEE International Parallel and Distributed Processing Symposium (IPDPS)

In HPC, machine learning is gaining traction as an aid to performance analysis and tuning. However, this approach involves a pipeline of data collection, data pre-processing, testing various machine learning algorithms, tuning them, and finally trying to understand the resulting model. The paper observes that while this process is repetitive, insights can rarely be reused from one domain to another. To address this gap, they propose PADDLE. PADDLE has three key steps: deep feature extraction, model design, and visualization. The first step allows users to throw extensive amounts of data at the problem, and an automated solution determines the key inputs, mapping them to a new feature space. The next step in PADDLE automatically tests a number of machine lea...
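Below is a simplified stand-in for the pipeline PADDLE automates: reduce many collected inputs to the important ones, then automatically test several model families. It uses ordinary scikit-learn components rather than the paper's deep feature extraction, and the dataset and function names are hypothetical.

```python
# Simplified stand-in for a PADDLE-style pipeline (feature extraction, then
# automated model testing). Not the paper's implementation.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def analyze_performance_data(X, y, n_features=10):
    # Step 1: reduce a large set of collected counters/inputs to the few that
    # matter most (PADDLE does this with learned deep features).
    selector = SelectKBest(score_func=f_regression, k=min(n_features, X.shape[1]))
    X_reduced = selector.fit_transform(X, y)

    # Step 2: automatically try several model families and report how well
    # each explains the performance metric (cross-validated R^2 here).
    candidates = {
        "ridge": Ridge(),
        "random_forest": RandomForestRegressor(n_estimators=100),
        "gradient_boosting": GradientBoostingRegressor(),
    }
    scores = {
        name: cross_val_score(model, X_reduced, y, cv=5).mean()
        for name, model in candidates.items()
    }
    return selector.get_support(indices=True), scores

# Hypothetical usage with synthetic data standing in for run-time measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # 50 collected inputs per run
y = X[:, 3] * 2.0 + rng.normal(size=200)  # runtime dominated by one input
important_idx, model_scores = analyze_performance_data(X, y)
```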

Flexible and Efficient Decision-Making for Proactive Latency-Aware Self-Adaptation

Venue: Transactions on Autonomous and Adaptive Systems (TAAS)
Authors: Gabriel A. Moreno, Javier Camara, David Garlan, Bradley Schmerl

The title essentially encapsulates the problem this paper addresses. The setting is a Markov Decision Process, but with deterministic adaptations that do not affect the evolution of the environment. Stated another way, decisions do not impact the environment's state, but rather the reward (and penalty). This leads to a separate notion of environment state and system state; note that the utility (reward) may be a function of both. The innovation of this paper lies in capturing that, while deterministic, different adaptation actions have different delays before their effect on the system state is realized. The environment's evolution is modeled via a discrete-time Markov chain (DTMC), which can be used to generate a partial probability tree, specifically using an Extended Pearson-Tukey (EP-T) three-point approximation. To encode latency, the progress of an adaptation is e...
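For concreteness, here is a minimal sketch of how an Extended Pearson-Tukey three-point approximation can turn a continuous environment variable into a small probability tree: the distribution is summarized by its 5th, 50th, and 95th percentiles with probabilities 0.185, 0.630, and 0.185. The normal arrival-rate distribution and the independence of branches across steps are assumptions for illustration; the paper derives the branches from its DTMC environment model.

```python
# Minimal sketch of the Extended Pearson-Tukey (EP-T) three-point
# approximation used to build a small probability tree over an uncertain
# environment variable. The normal distribution and step independence are
# illustrative assumptions, not the paper's DTMC-based construction.

from statistics import NormalDist
from itertools import product

# EP-T: approximate a continuous distribution by its 5th, 50th, and 95th
# percentiles with probabilities 0.185, 0.630, and 0.185 respectively.
EPT_QUANTILES = [(0.05, 0.185), (0.50, 0.630), (0.95, 0.185)]

def ept_points(dist):
    """Return [(value, probability)] for the three-point approximation."""
    return [(dist.inv_cdf(q), p) for q, p in EPT_QUANTILES]

def probability_tree(dist, horizon):
    """Enumerate environment paths over a short look-ahead horizon.

    Each path is a tuple of per-step values with an associated probability;
    branches are treated as independent here purely to keep the sketch small.
    """
    step_points = ept_points(dist)
    tree = []
    for branches in product(step_points, repeat=horizon):
        values = tuple(v for v, _ in branches)
        prob = 1.0
        for _, p in branches:
            prob *= p
        tree.append((values, prob))
    return tree

# Hypothetical usage: a 2-step look-ahead over an uncertain arrival rate.
arrival_rate = NormalDist(mu=100.0, sigma=15.0)
for path, prob in probability_tree(arrival_rate, horizon=2):
    pass  # a planner would evaluate each path's utility, weighted by prob
```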