Person:
Harrold, Mary Jean

Publication Search Results

  • Item
    Automated Concolic Testing of Smartphone Apps
    (Georgia Institute of Technology, 2012) Anand, Saswat ; Naik, Mayur ; Yang, Hongseok ; Harrold, Mary Jean
    We present an algorithm and a system for generating input events to exercise smartphone apps. Our approach is based on concolic testing and generates sequences of events automatically and systematically. It alleviates the path-explosion problem by checking a condition on program executions that identifies subsumption between different event sequences. We also describe our implementation of the approach for Android, the most popular smartphone app platform, and the results of an evaluation that demonstrates its effectiveness on five Android apps.
  • Item
    Matching Test Cases for Effective Fault Localization
    (Georgia Institute of Technology, 2011) Baah, George K. ; Podgurski, Andy ; Harrold, Mary Jean
Finding the cause of a program’s failure from a causal-analysis perspective requires, for each statement, tests that cover the statement and tests that do not cover the statement. However, in practice the composition of test suites can be detrimental to effective fault localization for two reasons: (1) lack-of-balance, which occurs if the coverage characteristics of tests that cover a statement differ from those of tests that do not cover the statement, and (2) lack-of-overlap, which occurs if the test cases that reach the control-dependence predecessor of a statement either all cover or all fail to cover the statement. This paper addresses these two problems. First, the paper presents empirical results showing that, for effective fault localization, the composition of test suites should exhibit balance and overlap. Second, the paper presents new techniques to overcome these problems—matching to address lack-of-balance and causal-effect imputation to overcome lack-of-overlap—and presents empirical evidence that these techniques increase the effectiveness of fault localization.
  • Item
    Probabilistic Slicing for Predictive Impact Analysis
    (Georgia Institute of Technology, 2010) Santelices, Raul ; Harrold, Mary Jean
    Program slicing is a technique that determines which statements in a program affect or are affected by another statement in that program. Static forward slicing, in particular, can be used for impact analysis by identifying all potential effects of changes in software. This information helps developers design and test their changes. Unfortunately, static slicing is too imprecise—it often produces large sets of potentially affected statements, limiting its usefulness. To reduce the resulting set of statements, other forms of slicing have been proposed, such as dynamic slicing and thin slicing, but they can miss relevant statements. In this paper, we present a new technique, called Probabilistic Slicing (p-slicing), that augments a static forward slice with a relevance score for each statement by exploiting the observation that not all statements have the same probability of being affected by a change. P-slicing can be used, for example, to focus the attention of developers on the “most impacted” parts of the program first. It can also help testers, for example, by estimating the difficulty of “killing” a particular mutant in mutation testing and prioritizing test cases. We also present an empirical study that shows the effectiveness of p-slicing for predictive impact analysis and we discuss potential benefits for other tasks.
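To make the idea concrete: a static forward slice can be computed as reachability over a dependence graph, and p-slicing attaches a relevance score to each sliced statement. The sketch below is a toy illustration only, not the paper’s algorithm; the dependence graph, the per-edge propagation probabilities, and the best-path scoring model are all illustrative assumptions.

```python
from collections import deque

def forward_slice(dep_graph, changed):
    """Static forward slice: every statement reachable from `changed`
    via dependence edges (dep_graph maps stmt -> set of dependents)."""
    seen = {changed}
    work = deque([changed])
    while work:
        s = work.popleft()
        for t in dep_graph.get(s, ()):
            if t not in seen:
                seen.add(t)
                work.append(t)
    return seen

def relevance_scores(dep_graph, edge_prob, changed):
    """Toy relevance model: a statement's score is the best-path product
    of per-edge propagation probabilities from the changed statement.
    (Illustrative stand-in for the paper's probabilistic model.)"""
    scores = {changed: 1.0}
    work = deque([changed])
    while work:
        s = work.popleft()
        for t in dep_graph.get(s, ()):
            p = scores[s] * edge_prob.get((s, t), 0.5)  # 0.5 = assumed default
            if p > scores.get(t, 0.0):
                scores[t] = p
                work.append(t)
    return scores

# Hypothetical dependence graph: s1 affects s2 and s3; s2 affects s4.
g = {"s1": {"s2", "s3"}, "s2": {"s4"}}
print(sorted(forward_slice(g, "s1")))                      # full static slice
print(relevance_scores(g, {("s1", "s2"): 0.9,
                           ("s2", "s4"): 0.8}, "s1"))      # scored slice
```

Under this toy model, every statement in the static slice is retained, but statements farther from the change (or reachable only through low-probability edges) receive lower scores, which is the intuition behind ranking the "most impacted" parts first.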
  • Item
    Applying Aggressive Propagation-based Strategies for Testing Changes
    (Georgia Institute of Technology, 2010) Santelices, Raul ; Harrold, Mary Jean
Test-suite augmentation for evolving software—the process of augmenting a test suite to adequately test software changes—is necessary for any program that undergoes modifications as part of its development and maintenance cycles. Recently, we presented a new technique for test-suite augmentation based on leveraging the propagation conditions for the effects of changes. Although empirical studies show that this technique can be quite effective for testing changes, the experiments have been limited because of the complexity of the implementation. In this paper, we present a new and more efficient approach for propagation-based testing of changes that can reach much longer propagation distances and can focus the testing more precisely on those behaviors of changes that can actually affect the output. Using an implementation of this new approach, we performed a study on a set of changes on Java programs for which we compared, to a much larger extent, our propagation-based strategy with other existing techniques for testing changes. The results of the study not only confirm the superior effectiveness of propagation-based strategies over these other techniques for testing changes, but also quantify that superiority and clarify the conditions under which our approach is most effective.