A Comparative Evaluation of Techniques for Studying Parallel Systems
Abstract
This paper presents a comparative and qualitative survey of techniques for
evaluating parallel systems. We also survey metrics that have been proposed
for capturing and quantifying the details of complex parallel system
interactions. Experimentation, theoretical/analytical modeling and simulation
are three frequently used techniques in performance evaluation. Experimentation
uses real or synthetic workloads, usually called benchmarks, to measure and
analyze performance on actual hardware. Theoretical and analytical models
abstract away the details of a parallel system, presenting a simplified view
parameterized by a limited number of degrees of freedom so that the analysis
remains tractable. Simulation and related performance
monitoring/visualization tools have become extremely popular because of their
ability to capture the dynamic nature of the interaction between applications
and architectures.
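As a minimal illustration of the first two techniques, the sketch below times a small synthetic workload (a toy benchmark, in the spirit of experimentation) and, separately, evaluates Amdahl's law, a classic analytical model in which speedup on n processors is limited by the fraction of work that is inherently serial. The workload and its size are hypothetical choices for illustration, not taken from the paper.

```python
import time

def amdahl_speedup(serial_fraction, n_processors):
    """Analytical model (Amdahl's law): predicted speedup when a fixed
    fraction of the work is inherently serial and the rest is
    perfectly parallelizable across n_processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

def workload(n):
    """A toy synthetic workload standing in for a benchmark kernel."""
    return sum(i * i for i in range(n))

# Experimentation: measure the workload's running time on real hardware.
start = time.perf_counter()
workload(100_000)
elapsed = time.perf_counter() - start

# Analytical modeling: predict speedup for a hypothetical 10%-serial program.
predicted = amdahl_speedup(0.1, 16)
```

With a 10% serial fraction, the model caps speedup below 10 regardless of processor count, which is the kind of simplified-but-tractable insight the abstract attributes to analytical modeling.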
We first present the figures of merit that are important for any performance
evaluation technique. With respect to these figures of merit, we survey the
three techniques and make a qualitative comparison of their pros and cons.
In particular, for each of the above techniques we discuss: representative case
studies; the underlying models that are used for the workload and the
architecture; the feasibility and ease of quantifying standard performance
metrics from the available statistics; the accuracy/validity of the output
statistics; and the cost/effort that is expended in each evaluation
strategy.
Date
1994
Extent
190280 bytes
Resource Type
Text
Resource Subtype
Technical Report