Person: Orso, Alessandro

Publication Search Results

Now showing 1 - 10 of 11
  • Item
    BugRedux: Reproducing Field Failures for In-house Debugging
    (Georgia Institute of Technology, 2011) Jin, Wei ; Orso, Alessandro
    When a software system fails in the field, on a user machine, and the failure is reported to the developers, developers in charge of debugging the failure must be able to reproduce the failing behavior in house. Unfortunately, reproducing field failures is a notoriously challenging task that has little support today. Typically, developers are provided with a bug report that contains data about the failure, such as memory dumps and, in the best case, some additional information provided by the user. However, this data is usually insufficient for recreating the problem, as recently reported in a survey conducted among developers of the Apache, Eclipse, and Mozilla projects. Even more advanced approaches for gathering field data and supporting in-house debugging tend to collect either too little information, which results in inexpensive but often ineffective techniques, or too much information, which makes the techniques effective but too costly. To address this issue, we present a novel general approach for supporting in-house debugging of field failures, called BUGREDUX. The goal of BUGREDUX is to synthesize, using execution data collected in the field, executions that mimic the observed field failures. We define several instances of BUGREDUX that collect different types of execution data and perform, through an empirical study, a cost-benefit analysis of the approach and its variations. In the study, we use a tool that implements our approach to recreate 17 failures of 15 real-world programs. Our results are promising and lead to several findings, some of which are unexpected. In particular, they show that, by collecting a suitable yet limited set of execution data, the approach can synthesize in-house executions that reproduce the observed failures.
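    As a hedged illustration of the kind of lightweight execution data such an approach might collect in the field, the sketch below records a bounded call sequence that could accompany a bug report; the class and method names are hypothetical and are not part of BugRedux.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Hypothetical, minimal recorder for the kind of lightweight field data
     *  (here, a bounded call sequence) that a BugRedux-style approach could
     *  collect to guide in-house reproduction of a failure. */
    public final class CallSequenceRecorder {
        private static final int MAX_ENTRIES = 10_000;          // keep the trace small
        private static final Deque<String> calls = new ArrayDeque<>();

        /** Instrumentation hook: invoked at every instrumented method entry. */
        public static synchronized void enter(String methodId) {
            if (calls.size() == MAX_ENTRIES) {
                calls.removeFirst();                             // drop the oldest entry
            }
            calls.addLast(methodId);
        }

        /** Called when a failure is detected; the trace is attached to the report. */
        public static synchronized String dump() {
            return String.join("\n", calls);
        }

        public static void main(String[] args) {
            // Simulated instrumented execution ending in a failure.
            enter("Parser.parse");
            enter("Parser.readToken");
            enter("Evaluator.eval");
            System.out.println("call sequence sent with the bug report:\n" + dump());
        }
    }
    ```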
  • Item
    Execution Hijacking: Improving Dynamic Analysis by Flying off Course
    (Georgia Institute of Technology, 2010) Tsankov, Petar ; Jin, Wei ; Orso, Alessandro ; Sinha, Saurabh
    Typically, dynamic-analysis techniques operate on a small subset of all possible program behaviors, which limits their effectiveness and the representativeness of the computed results. To address this issue, a new paradigm is emerging: execution hijacking, a family of techniques that explore a larger set of program behaviors by forcing executions along specific paths. Although hijacked executions are infeasible for the given inputs, they can still produce feasible behaviors that could be observed under other inputs. In such cases, execution hijacking can improve the effectiveness of dynamic analysis without requiring the (expensive) generation of additional inputs. To evaluate the usefulness of execution hijacking, we defined, implemented, and evaluated several variants of it. Specifically, we performed an empirical study in which we assessed whether execution hijacking could improve the effectiveness of two common dynamic analyses: software testing and memory error detection. The results of the study show that execution hijacking, if suitably performed, can indeed help dynamic-analysis techniques.
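    A toy illustration of the hijacking idea (not the authors' implementation): an instrumented branch consults a policy that can force its outcome, so that behavior guarded by a rarely taken branch is exercised without generating a new input.

    ```java
    import java.util.Set;

    /** Toy illustration of execution hijacking: an instrumented branch asks a
     *  policy whether its outcome should be forced, so that code behind rarely
     *  taken branches can be exercised without generating new inputs.
     *  Names and structure are illustrative only. */
    public final class HijackDemo {
        /** Branch ids whose outcome the analysis wants to force to "true". */
        private static final Set<String> forcedTrue = Set.of("checkQuota:line42");

        /** Instrumentation wrapper around the original branch condition. */
        static boolean branch(String branchId, boolean originalOutcome) {
            return forcedTrue.contains(branchId) || originalOutcome;
        }

        static void checkQuota(int used, int limit) {
            // Original code: if (used > limit) { handleOverQuota(); }
            if (branch("checkQuota:line42", used > limit)) {
                System.out.println("error-handling path exercised (possibly hijacked)");
            } else {
                System.out.println("normal path");
            }
        }

        public static void main(String[] args) {
            checkQuota(10, 100);  // infeasible for this input, but hijacked onto the error path
        }
    }
    ```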
  • Item
    Camouflage: Automated Sanitization of Field Data
    (Georgia Institute of Technology, 2009) Clause, James ; Orso, Alessandro
    Privacy and security concerns have adversely affected the usefulness of many types of techniques that leverage information gathered from deployed applications. To address this issue, we present a new approach for automatically sanitizing failure-inducing inputs. Given an input I that causes a failure f, our technique can generate a sanitized input I' that is different from I but still causes f. I' can then be sent to the developers to help them debug f, without revealing the possibly sensitive information contained in I. We implemented our approach in a prototype tool, camouflage, and performed an empirical evaluation. In the evaluation, we applied camouflage to a large set of failure-inducing inputs for several real applications. The results of the evaluation are promising; they show that camouflage is both practical and effective at generating sanitized inputs. In particular, for the inputs that we considered, I and I' shared no sensitive information.
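    As a toy illustration of the relationship between I and I' (the failure condition and all names below are invented; Camouflage's actual mechanism is more general), suppose the failure is triggered by any input longer than eight characters that contains an '@'; a sanitizer then only needs to produce some other input with those properties.

    ```java
    /** Toy illustration of input sanitization: I' differs from the original
     *  failure-inducing input I but still satisfies the (here, hard-coded)
     *  condition that triggers the failure. The condition and names are
     *  hypothetical, not Camouflage's actual mechanism. */
    public final class SanitizeDemo {
        /** Stand-in for the failing program: fails on long inputs containing '@'. */
        static boolean triggersFailure(String input) {
            return input.length() > 8 && input.contains("@");
        }

        /** Produce a sanitized input with the same failure-relevant properties. */
        static String sanitize(String original) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < original.length(); i++) {
                char c = original.charAt(i);
                sb.append(c == '@' ? '@' : 'x');   // keep structure, drop content
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            String i = "alice.smith@example.com";   // sensitive failure-inducing input I
            String iPrime = sanitize(i);            // "xxxxxxxxxxx@xxxxxxxxxxx"
            System.out.println(iPrime + " still fails: " + triggersFailure(iPrime));
        }
    }
    ```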
  • Item
    Understanding Data Dependences in the Presence of Pointers
    (Georgia Institute of Technology, 2003) Orso, Alessandro ; Sinha, Saurabh ; Harrold, Mary Jean
    Understanding data dependences in programs is important for many software-engineering activities, such as program understanding, impact analysis, reverse engineering, and debugging. The presence of pointers, arrays, and structures can cause subtle and complex data dependences that can be difficult to understand. For example, in languages such as C, an assignment made through a pointer dereference can assign a value to one of several variables, none of which may appear syntactically in that statement. In the first part of this paper, we describe two techniques for classifying data dependences in the presence of pointer dereferences. The first technique classifies data dependences based on definition type, use type, and path type. The second technique classifies data dependences based on span. We present empirical results to illustrate the distribution of data-dependence types and spans for a set of real C programs. In the second part of the paper, we discuss two applications of the classification techniques. First, we investigate different ways in which the classification can be used to facilitate data-flow testing and verification. We outline an approach that uses types and spans of data dependences to determine the appropriate verification technique for different data dependences; we present empirical results to illustrate the approach. Second, we present a new slicing paradigm that computes slices based on types of data dependences. Based on the new paradigm, we define an incremental slicing technique that computes a slice in multiple steps. We present empirical results to illustrate the sizes of incremental slices and the potential usefulness of incremental slicing for debugging.
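    The abstract's example concerns C pointer dereferences; a hypothetical Java analogue, shown below, is a definition made through an aliased reference or a computed array index, where the memory location actually written never appears syntactically in the statement.

    ```java
    /** Hypothetical Java analogue of the abstract's C example: the assignment
     *  through `p` defines the field of whichever object `p` aliases, so the
     *  variable actually written never appears syntactically in the statement. */
    public final class AliasDemo {
        static final class Counter { int value; }

        public static void main(String[] args) {
            Counter a = new Counter();
            Counter b = new Counter();
            Counter p = (args.length > 0) ? a : b;  // which object p aliases depends on input

            p.value = 42;          // defines a.value OR b.value; neither name appears here
            int use = a.value;     // data dependence on the line above only if p aliased a

            int[] buf = new int[4];
            int i = args.length % buf.length;
            buf[i] = 7;            // similarly, which element is defined depends on i
            System.out.println(use + " " + buf[0]);
        }
    }
    ```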
  • Item
    Interclass Testing of Object Oriented Software
    (Georgia Institute of Technology, 2002) Martena, Vincenzo ; Orso, Alessandro ; Pezze, Mauro
    The characteristics of object-oriented software affect the type and relevance of faults. In particular, the state of the objects may cause faults that cannot be easily revealed with traditional testing techniques. This paper proposes a new technique for interclass testing, that is, for deriving test cases that suitably exercise interactions among clusters of classes. The proposed technique uses data-flow analysis to derive a suitable set of test-case specifications for interclass testing. The paper then shows how to automatically generate feasible test cases that satisfy the derived specifications using symbolic execution and automated deduction. Finally, the paper demonstrates the effectiveness of the proposed technique by deriving test cases for a microscope controller developed for the European Space Laboratory of the Columbus Orbital Facility.
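    A hypothetical illustration of why object state calls for interclass testing: the fault below is revealed only by a particular sequence of interactions between two classes, not by exercising either class in isolation. The classes and the failing sequence are invented for illustration.

    ```java
    /** Hypothetical state-dependent interclass fault: Account only misbehaves
     *  after a specific interaction sequence with a shared AuditLog, so a test
     *  specification must exercise that sequence across both classes. */
    public final class InterclassDemo {
        static final class AuditLog {
            private boolean closed;
            void close() { closed = true; }
            void record(String msg) {
                if (closed) throw new IllegalStateException("log already closed");
                System.out.println("AUDIT: " + msg);
            }
        }

        static final class Account {
            private final AuditLog log;
            private int balance;
            Account(AuditLog log) { this.log = log; }
            void deposit(int amount) {
                balance += amount;
                log.record("deposit " + amount);   // fails if the shared log was closed first
            }
        }

        public static void main(String[] args) {
            AuditLog log = new AuditLog();
            Account acct = new Account(log);
            acct.deposit(10);        // fine when the classes are exercised in isolation
            log.close();
            try {
                acct.deposit(5);     // interclass sequence that reveals the fault
            } catch (IllegalStateException e) {
                System.out.println("fault revealed by the interaction sequence: " + e.getMessage());
            }
        }
    }
    ```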
  • Item
    A Framework for Understanding Data Dependences
    (Georgia Institute of Technology, 2002) Orso, Alessandro ; Liang, Donglin ; Sinha, Saurabh ; Harrold, Mary Jean
    Identifying and understanding data dependences is important for a variety of software-engineering tasks. The presence of pointers, arrays, and dynamic memory allocation introduces subtle and complex data dependences that may be difficult to understand. In this paper, we present a refinement of our previously developed classification that also distinguishes the types of memory locations, considers interprocedural data dependences, and further distinguishes such data dependences based on the kinds of interprocedural paths on which they occur. This new classification enables reasoning about the complexity of data dependences in programs using features such as pointers, arrays, and dynamic memory allocation. We present an algorithm for computing interprocedural data dependences according to our classification. To evaluate the classification, we compute the distribution of data dependences for a set of real C programs and we discuss how the distribution can be useful in understanding the characteristics of a program. We also evaluate how alias information provided by different algorithms, varying in precision, affects the distribution. Finally, we investigate how the classification can be exploited to estimate the complexity of the data dependences in a program.
  • Item
    Gamma System: Continuous Evolution of Software after Deployment
    (Georgia Institute of Technology, 2002) Orso, Alessandro ; Liang, Donglin ; Harrold, Mary Jean ; Lipton, Richard J.
    In this paper, we present the Gamma system---a new approach for continuous improvement of software systems after their deployment. The Gamma system facilitates remote monitoring of deployed software using a revolutionary approach that exploits the opportunities presented by a software product being used by many users connected through a network. Gamma splits monitoring tasks across different instances of the software, so that partial information can be collected from different users by means of light-weight instrumentation, and integrated to gather the overall monitoring information. This system enables software producers (1) to perform continuous, minimally intrusive analyses of their software's behavior, and (2) to use the information thus gathered to improve and evolve their software. We describe the Gamma system and its underlying technology in detail, and illustrate the different components of the system. We also present a prototype implementation of the system and show our initial experiences with it.
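    A minimal sketch of the split-monitoring idea, assuming a simple modulo rule for assigning probes to instances (the assignment and merging scheme are illustrative, not the Gamma implementation): each deployed instance activates only a subset of the probes, and the partial results are merged on the producer's side.

    ```java
    import java.util.BitSet;

    /** Sketch of split monitoring: each deployed instance activates only a
     *  subset of coverage probes (chosen here by a simple modulo rule, an
     *  assumption for illustration), and partial results are OR-merged. */
    public final class SplitMonitoringDemo {
        static final int NUM_PROBES = 12;

        /** Coverage from one instance that only watches probes p with p % parts == part. */
        static BitSet runInstance(int part, int parts, int[] executedProbes) {
            BitSet partial = new BitSet(NUM_PROBES);
            for (int p : executedProbes) {
                if (p % parts == part) {      // this instance's lightweight subset
                    partial.set(p);
                }
            }
            return partial;
        }

        public static void main(String[] args) {
            int[] userRun = {0, 1, 3, 4, 7, 10};           // probes actually executed
            BitSet merged = new BitSet(NUM_PROBES);
            for (int part = 0; part < 3; part++) {         // three deployed instances
                merged.or(runInstance(part, 3, userRun));  // merge partial coverage
            }
            System.out.println("merged coverage: " + merged);
        }
    }
    ```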
  • Item
    A Technique for Dynamic Updating of Java Software
    (Georgia Institute of Technology, 2002) Orso, Alessandro ; Rao, Anup ; Harrold, Mary Jean
    During maintenance, systems are updated to correct faults, improve functionality, and adapt the software to changes in its execution environment. The typical software-update process consists of stopping the system to be updated, performing the actual update of the code, and restarting the system. For systems such as banking and telecommunication software, however, the cost of downtime can be prohibitive. The situation is even worse for systems such as air-traffic controllers and life-support software, for which a shutdown is in general not an option. In those cases, the use of some form of on-the-fly program modification is required. In this paper, we propose a new technique for dynamic updating of Java software. Our technique is based on the use of proxy classes and does not require any support from the runtime system. The technique allows for updating a running Java program by substituting, adding, and deleting classes. We also present Dusc (Dynamic Updating through Swapping of Classes), a tool that we developed and that implements our technique. Finally, we describe an empirical study that we performed to validate the technique on a real Java subject. The results of the study show that our technique can be effectively applied to Java software with little overhead in both execution time and program size.
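    A minimal sketch of the proxy-class idea: clients hold a reference to a proxy whose delegate can be replaced at run time. The interface, classes, and swap protocol below are invented for illustration and are not Dusc's actual implementation.

    ```java
    /** Minimal sketch of updating behavior through a proxy: clients are
     *  compiled against the proxy, and the class it delegates to can be
     *  swapped while the program runs. All names are illustrative only. */
    public final class ProxyUpdateDemo {
        interface Greeter { String greet(String name); }

        static final class GreeterV1 implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        static final class GreeterV2 implements Greeter {
            public String greet(String name) { return "Hi, " + name + "!"; }
        }

        /** The proxy class clients reference; it never changes. */
        static final class GreeterProxy implements Greeter {
            private volatile Greeter delegate = new GreeterV1();
            public String greet(String name) { return delegate.greet(name); }
            void swap(Greeter newVersion) { delegate = newVersion; }   // dynamic update
        }

        public static void main(String[] args) {
            GreeterProxy greeter = new GreeterProxy();
            System.out.println(greeter.greet("world"));  // old version
            greeter.swap(new GreeterV2());               // update while "running"
            System.out.println(greeter.greet("world"));  // new version, same reference
        }
    }
    ```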
  • Item
    Using Component Metadata to Support the Regression Testing of Component-Based Software
    (Georgia Institute of Technology, 2000) Harrold, Mary Jean ; Orso, Alessandro ; Rosenblum, David S. ; Rothermel, Gregg ; Soffa, Mary Lou ; Do, Hyunsook
    Interest in component-based software continues to grow with the recognition of its potential in managing the increasing complexity of software systems. However, the use of externally provided components has serious drawbacks for a wide range of activities in the engineering of component-based applications, in most cases due to the lack of information about the components. Consider the activity of regression testing, whose high cost has been, and continues to be, a problem. In the case of component-based applications, regression testing can be even more expensive. When a new version of one or more components is integrated into an application, the lack of information about such externally developed components makes it difficult to effectively determine the test cases that should be rerun on the resulting application. In previous work, we proposed the use of metadata, which are additional data provided with a component, to support software engineering tasks. In this paper, we present two new metadata-based techniques that address the problem of regression test selection for component-based applications: a code-based approach and a specification-based approach. First, using an example, we illustrate the two techniques. Then, we present a case study that applies the code-based technique to a real component-based system. The results of the study indicate that, on average, 26% of the overall testing effort can be saved over seven releases of the component-based system studied, with a maximum savings of 99% of the testing effort for one version. This reduction demonstrates that metadata can produce benefits in regression testing by reducing the costs related to this activity.
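    A minimal sketch of how code-based regression test selection from component metadata might look. The metadata shape (a map from test cases to covered component elements) and all names are assumptions for illustration, not the paper's actual format: select for re-execution every test case that covered at least one element changed in the new component version.

    ```java
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    /** Sketch of code-based regression test selection using component metadata:
     *  the metadata maps each test case to the component elements it covered in
     *  the previous version; tests covering a changed element are re-run.
     *  The metadata shape and element names are illustrative assumptions. */
    public final class MetadataRtsDemo {
        public static void main(String[] args) {
            Map<String, Set<String>> coverageMetadata = Map.of(
                    "testLogin",    Set.of("Auth.check", "Session.open"),
                    "testCheckout", Set.of("Cart.total", "Payment.charge"),
                    "testBrowse",   Set.of("Catalog.list"));

            Set<String> changedElements = Set.of("Payment.charge");   // new component version

            List<String> selected = coverageMetadata.entrySet().stream()
                    .filter(e -> e.getValue().stream().anyMatch(changedElements::contains))
                    .map(Map.Entry::getKey)
                    .sorted()
                    .collect(Collectors.toList());

            System.out.println("tests to re-run: " + selected);       // [testCheckout]
        }
    }
    ```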
  • Item
    Incremental Slicing Based on Data-Dependences Types
    (Georgia Institute of Technology, 2000) Orso, Alessandro ; Sinha, Saurabh ; Harrold, Mary Jean
    Program slicing is useful for assisting with software-maintenance tasks, such as program understanding, debugging, impact analysis, and regression testing. The presence and frequent usage of pointers, in languages such as C, cause complex data dependences. To function effectively on such programs, slicing techniques must account for pointer-induced data dependences. Although many existing slicing techniques function in the presence of pointers, none of those techniques distinguishes data dependences based on their types. This paper presents a new slicing technique, in which slices are computed based on types of data dependences. This new slicing technique offers several benefits and can be exploited in different ways, such as identifying subtle data dependences for debugging purposes, computing reduced-size slices quickly for complex programs, and performing incremental slicing. In particular, this paper describes an algorithm for incremental slicing that increases the scope of a slice in steps, by incorporating different types of data dependences at each step. The paper also presents empirical results to illustrate the performance of the technique in practice. The experimental results show how the sizes of the slices grow for different small- and medium-sized subjects. Finally, the paper presents a case study that explores a possible application of the slicing technique for debugging.
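    A minimal sketch of the incremental idea, under the assumption that dependences are available as a graph whose edges carry dependence types: each step enables additional types and extends the slice computed in the previous step instead of recomputing it from scratch. The graph, type names, and encoding are illustrative, not the paper's classification.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    /** Sketch of incremental slicing: a backward traversal over a dependence
     *  graph whose edges carry dependence types; each step enables more types
     *  and grows the slice computed so far. Graph and type names are
     *  illustrative assumptions. */
    public final class IncrementalSliceDemo {
        record Dep(int source, String type) {}   // statement depended on, and the dependence type

        /** statement -> the statements it depends on, with a (made-up) dependence type. */
        static final Map<Integer, List<Dep>> DEPS = Map.of(
                5, List.of(new Dep(3, "direct"), new Dep(4, "pointer-induced")),
                4, List.of(new Dep(2, "pointer-induced")),
                3, List.of(new Dep(1, "direct")));

        /** Extend `slice` from the statements already in it, following only `enabledTypes`. */
        static void extendSlice(Set<Integer> slice, Set<String> enabledTypes) {
            Deque<Integer> worklist = new ArrayDeque<>(slice);
            while (!worklist.isEmpty()) {
                int stmt = worklist.pop();
                for (Dep d : DEPS.getOrDefault(stmt, List.of())) {
                    if (enabledTypes.contains(d.type()) && slice.add(d.source())) {
                        worklist.push(d.source());
                    }
                }
            }
        }

        public static void main(String[] args) {
            Set<Integer> slice = new HashSet<>(Set.of(5));            // slicing criterion: statement 5
            extendSlice(slice, Set.of("direct"));                     // step 1: simple dependences only
            System.out.println("step 1 slice: " + slice);             // {1, 3, 5}
            extendSlice(slice, Set.of("direct", "pointer-induced"));  // step 2: add pointer-induced ones
            System.out.println("step 2 slice: " + slice);             // {1, 2, 3, 4, 5}
        }
    }
    ```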