QoS Policies and Architecture for Cache/Memory in CMP Platforms
Authors: Ravi Iyer, Li Zhao, Fei Guo, Steve Reinhardt (Intel, NC State, U Michigan)
Venue: SIGMETRICS 2007
First and foremost, the biggest thing this paper has going for it is the evaluation methodology. While the paper does not have space to show many test cases (only a few workloads appear), it provides performance numbers for both a purely trace-based methodology and a full-system mock-up. What makes this particularly impressive is that the full-system evaluation uses a modified Linux kernel that adds a priority level to processes as well as new system calls for changing QoS bits in platform registers.
Unlike other QoS papers, this one focuses on application prioritization. It does so by adjusting two key shared resources: memory bandwidth and cache allocation. The ideas presented don't offer any surprising insight: giving a high-priority app more resources (more than its fair share) boosts that app's performance at the cost of the low-priority apps. The trade-off is, not surprisingly, workload-dependent. A high-priority streaming app doesn't need much cache space, while a low-priority cache-friendly workload does. While my read-through was a bit light, I didn't see any specific notes on how the algorithm avoids this pathological case. Additionally, the workloads studied (at least over the intervals tested) don't display any phase behavior, which makes the static allocations nearly identical to (or in some cases better than) the dynamic allocations.
TL;DR: Resource allocation via application priority. Good test methodology. App (process) priority exists today in current Linux kernels.