Title:
Active management of Cache resources

dc.contributor.advisor Yalamanchili, Sudhakar
dc.contributor.author Ramaswamy, Subramanian en_US
dc.contributor.committeeMember Davis, Jeffrey
dc.contributor.committeeMember Ramachandran, Umakishore
dc.contributor.committeeMember Schimmel, David
dc.contributor.committeeMember Wardi, Yorai
dc.contributor.department Electrical and Computer Engineering en_US
dc.date.accessioned 2008-09-17T19:28:24Z
dc.date.available 2008-09-17T19:28:24Z
dc.date.issued 2008-07-08 en_US
dc.description.abstract This dissertation addresses two sets of challenges facing processor design as the industry enters the deep sub-micron region of semiconductor design. The first set of challenges relates to the memory bottleneck. As the focus shifts from scaling processor frequency to scaling the number of cores, performance growth demands increasing die area. Scaling the number of cores also places a concurrent area demand in the form of larger caches. While on-chip caches occupy 50-60% of die area and consume 20-30% of the energy expended on-chip, their performance and energy efficiencies are less than 15% and 1% respectively for a range of benchmarks. The second set of challenges is posed by transistor leakage and process variation (inter-die and intra-die) at future technology nodes. Leakage power is anticipated to increase exponentially and to sharply lower defect-free yield with successive technology generations. For performance scaling to continue, cache efficiencies have to improve significantly. This thesis proposes and evaluates a broad family of such improvements. The dissertation first contributes a model of cache efficiency and finds efficiencies to be extremely low: performance efficiencies below 15% and energy efficiencies on the order of 1%. Studying the sources of inefficiency leads to a framework for efficiency improvement based on two interrelated strategies. Improving energy efficiency relies primarily on sizing the cache to match the application's memory footprint during a program phase while powering down all remaining cache sets; importantly, the sized cache remains fully functional, with no references to inactive sets. Improving performance efficiency relies primarily on cache shaping, i.e., changing the placement function and thereby the manner in which memory shares the cache. Sizing and shaping are applied at different phases of the design cycle: i) post-manufacturing and offline, ii) at compile time, and iii) at run time.
This thesis proposes and explores techniques at each phase, collectively realizing a repertoire of techniques for future memory system designers. The techniques combine hardware and software mechanisms and are demonstrated to provide substantive improvements with modest overheads. en_US
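The abstract's two knobs, sizing (matching active sets to the application footprint) and shaping (changing the placement function), can be illustrated with a toy model. This is a minimal sketch, not code from the dissertation: the `ToyCache` class, its methods, and the XOR placement function are all hypothetical illustrations of the general idea.

```python
# Toy direct-mapped cache model illustrating the abstract's two knobs:
# "sizing" (deactivating sets) and "shaping" (swapping the set-index
# placement function). Names and details are illustrative only.

class ToyCache:
    def __init__(self, num_sets, line_bytes=64):
        self.line_bytes = line_bytes
        self.active_sets = num_sets                       # sizing knob
        self.index = lambda blk: blk % self.active_sets   # default placement
        self.tags = {}                                    # set -> resident block

    def resize(self, active_sets):
        """Power sets up/down; the sized cache stays fully functional."""
        self.active_sets = active_sets
        self.tags.clear()  # flush so no reference lands in an inactive set

    def reshape(self, placement_fn):
        """Swap the placement (set-index) function, e.g. an XOR hash."""
        self.index = placement_fn
        self.tags.clear()

    def access(self, addr):
        blk = addr // self.line_bytes
        s = self.index(blk)
        hit = self.tags.get(s) == blk
        self.tags[s] = blk       # direct-mapped: new block evicts old
        return hit

# A stride pattern whose blocks all collide under modulo indexing
# never hits; an XOR-based placement spreads them across sets.
cache = ToyCache(8)
addrs = [b * 64 for b in (0, 8, 16, 24)]   # blocks 0, 8, 16, 24: all set 0
for a in addrs:
    cache.access(a)                         # warm-up pass
hits_modulo = sum(cache.access(a) for a in addrs)

cache.reshape(lambda blk: (blk ^ (blk >> 3)) % cache.active_sets)
for a in addrs:
    cache.access(a)                         # warm-up pass under new placement
hits_shaped = sum(cache.access(a) for a in addrs)
```

Under the default modulo placement the four blocks conflict in one set and every re-access misses; the XOR placement maps them to distinct sets, so the second pass hits on all four.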
dc.description.degree Ph.D. en_US
dc.identifier.uri http://hdl.handle.net/1853/24663
dc.publisher Georgia Institute of Technology en_US
dc.subject Efficiency en_US
dc.subject Customized placement en_US
dc.subject Reconfigurable caches en_US
dc.subject Customized caches en_US
dc.subject Efficient caches en_US
dc.subject.lcsh Cache memory
dc.subject.lcsh Memory management (Computer science)
dc.subject.lcsh Energy conservation
dc.title Active management of Cache resources en_US
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.corporatename School of Electrical and Computer Engineering
local.contributor.corporatename College of Engineering
relation.isOrgUnitOfPublication 5b7adef2-447c-4270-b9fc-846bd76f80f2
relation.isOrgUnitOfPublication 7c022d60-21d5-497c-b552-95e489a06569
Files
Original bundle
Name: ramaswamy_subramanian_200808_phd.pdf
Size: 1.06 MB
Format: Adobe Portable Document Format