Dynamic Partitioning of Shared Cache Memory
Authors: G. E. Suh, L. Rudolph, S. Devadas
Venue: The Journal of Supercomputing, 2004
This paper appeared around the time multi-core CPUs began to go mainstream, which makes it one of the first works to address shared resource partitioning, specifically partitioning of the last-level cache (LLC). The authors propose a framework that partitions the cache based on the marginal gain of allocating additional cache to each process, assigning cache chunks (groups of blocks) according to this metric. However, to keep hardware overhead low, they can only sample marginal gains at way granularity, which they cite as one reason their scheme performs sub-optimally. The results show a few outliers with significant gains (30%+), but excluding those, the improvements are modest. The significance of this work lies primarily in being an early treatment of the problem and in highlighting its importance for future systems.
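To make the allocation idea concrete, here is a minimal sketch (not from the paper) of greedy marginal-gain partitioning at way granularity. The partition_ways function and the miss-reduction curves are illustrative assumptions; in the actual proposal the marginal gains would be estimated online with hardware counters.

# Sketch of greedy marginal-gain cache partitioning at way granularity.
# The gain values (estimated misses avoided by granting one more way)
# are hypothetical; the paper derives them from hardware miss counters.

def partition_ways(marginal_gains, total_ways):
    """Hand out cache ways one at a time to the process whose next way
    yields the largest estimated reduction in misses.

    marginal_gains: dict mapping process id -> list where entry i is the
                    estimated misses avoided by going from i to i+1 ways.
    total_ways:     associativity of the shared cache.
    """
    allocation = {pid: 0 for pid in marginal_gains}
    for _ in range(total_ways):
        # Pick the process with the largest gain for its next way.
        best = max(
            allocation,
            key=lambda pid: (
                marginal_gains[pid][allocation[pid]]
                if allocation[pid] < len(marginal_gains[pid])
                else float("-inf")
            ),
        )
        allocation[best] += 1
    return allocation


# Made-up miss-reduction curves for two processes sharing an 8-way cache:
# process A benefits steeply from its first few ways, B has a flatter curve.
gains = {
    "A": [100, 80, 20, 5, 1, 0, 0, 0],
    "B": [60, 50, 40, 30, 20, 10, 5, 2],
}
print(partition_ways(gains, total_ways=8))  # -> {'A': 3, 'B': 5}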
Full Text