Organizational Unit:
School of Computer Science


Publication Search Results

Now showing 1 - 2 of 2
  • Item
    Accelerating microarchitectural simulation via statistical sampling principles
    (Georgia Institute of Technology, 2012-12-05) Bryan, Paul David
    The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected that the times required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user toward representative sampling regimens. Unfortunately, the extension of cluster sampling to multi-threaded architectures is non-trivial and raises many interesting challenges; this thesis discusses how they can be overcome. It also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit.
    Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
  • Item
    Exploring and visualizing the impact of multiple shared displays on collocated meeting practices
    (Georgia Institute of Technology, 2009-05-18) Plaue, Christopher M.
    A tremendous amount of information is produced in the world around us, both as a product of our daily lives and as artifacts of our everyday work. An emerging area of Human-Computer Interaction (HCI) focuses on helping individuals manage this flood of information. Prior research shows that multiple displays can improve an individual user's ability to deal with large amounts of information, but it is unclear whether these advantages extend to teams of people. This is particularly relevant as more employees are spending large portions of their workdays in meetings. My contribution to HCI research is empirical fieldwork and laboratory studies investigating how multiple shared displays improve aspects of teamwork. In particular, I present an insight-based evaluation method for analyzing how teams collaborate on a data-intensive sensemaking task. Using this method, I show how the presence and location of multiple shared displays impacted the meeting process with respect to performance, collaboration, and satisfaction. I also illustrate how multiple shared displays engaged team members who might not have otherwise contributed to the collaboration process. Finally, I present Mimosa, a software tool developed to visualize large volumes of time series data. Mimosa combines aspects of information visualization with data analysis, facilitating a deep and iterative exploration of relationships within large datasets.
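
The cluster-sampling idea in the first abstract, which draws multiple groups of contiguous population elements and estimates a population statistic from them, can be illustrated with a minimal sketch. This is not the thesis's implementation; the function name, the synthetic per-instruction metric, and the error estimate are illustrative assumptions only:

```python
# Hypothetical sketch of cluster sampling: rather than measuring every
# population element (e.g., every instruction's cycle cost), draw several
# clusters of contiguous elements and estimate the mean with a standard error.
import random
import statistics

def cluster_sample_mean(population, n_clusters, cluster_len, seed=0):
    """Estimate the population mean from randomly placed contiguous clusters."""
    rng = random.Random(seed)
    # Choose non-repeating cluster start positions within the population.
    starts = rng.sample(range(len(population) - cluster_len + 1), n_clusters)
    # Each cluster's mean is one observation; contiguity is what keeps the
    # cost of reaching each measured region low in a simulation setting.
    cluster_means = [
        statistics.fmean(population[s:s + cluster_len]) for s in starts
    ]
    estimate = statistics.fmean(cluster_means)
    # Spread across cluster means yields an error estimate for the sample.
    stderr = statistics.stdev(cluster_means) / (n_clusters ** 0.5)
    return estimate, stderr
```

The standard-error term is what the abstract refers to as retaining "reliable methods to estimate error": the sample is small, but its uncertainty is quantifiable.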