Person:
Rugaber, Spencer


Publication Search Results

Now showing 1 - 5 of 5
  • Item
    MASTERMIND project final report
    (Georgia Institute of Technology, 1998) Rugaber, Spencer
  • Item
    An Example of Program Understanding
    (Georgia Institute of Technology, 1998) Rugaber, Spencer
    What does it mean to understand a program? What sorts of questions can be answered about a program? What background knowledge is required to answer them? What tools can help the process? To answer questions like these, we will look at an example of program understanding in action. Imagine the following scenario: You are assigned responsibility for maintaining a program you have never seen before. It is written in the FORTRAN language and is concerned with finding the roots of a function. We will assume that you know the FORTRAN language but are not an expert at it. That is, you still occasionally have to consult the reference manual to answer questions about the language. We will also assume that you have a computer science background, either through formal education or by on-the-job experience, so that you know how to design and compose programs similar to the one you are about to maintain. Finally, we will assume that you have a passing acquaintance with numerical analysis, possibly from a course you took. Hence, you are familiar with the idea of finding a root of a function, but you would have to look up an actual algorithm in order to write a root-finding program yourself. As you are responsible for long-term maintenance of the program, you want to understand it better. So you decide to read it systematically. This is distinct from the situation where you have a specific task to accomplish, such as finding a bug, adding a new feature, or updating the program to conform to a change in the language or operating environment. In those cases, instead of systematic reading, you might direct your efforts to accomplishing the specific task. Here, we will assume that you are going to make a single, sequential pass through the program text, with perhaps a few side trips to answer small questions as they arise. The following program text is taken directly from its source [1].
We have added three digits of line numbers in the left margin for expository purposes; they are not part of the program itself. We will annotate each line in the program as we come to it. The idea is to express some idea of what that line is there for. Sometimes we will raise questions, sometimes we will generate hypotheses that need to be confirmed, and sometimes we will speculate about the application domain. The intent is to get a feel for the kinds of knowledge required to understand a program on a line-by-line basis.
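    The root-finding task in the scenario above can be made concrete with a sketch. The FORTRAN program under study is not reproduced here, so the following is not that program's algorithm; it is a minimal bisection root-finder, a standard textbook method, written in Python purely to illustrate the kind of computation the maintained program performs.

    ```python
    def bisect(f, lo, hi, tol=1e-10):
        """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
        if f(lo) * f(hi) > 0:
            raise ValueError("f must change sign on [lo, hi]")
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            # Keep the half-interval on which f still changes sign.
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    # Example: the root of x^2 - 2 on [0, 2] is sqrt(2) ~ 1.41421356
    root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
    ```

    Bisection is the simplest bracketing method; production root-finders typically combine it with faster techniques such as the secant method, which is the sort of detail a maintainer might need to look up when reading the real program.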
  • Item
    Automating UI Generation by Model Composition
    (Georgia Institute of Technology, 1998) Stirewalt, R. E. Kurt ; Rugaber, Spencer
    Automated user-interface generation environments have been criticized for their failure to deliver rich and powerful interactive applications. To specify more powerful systems, designers need multiple, specialized modeling notations. The model composition problem is concerned with automatically deriving powerful, correct, and efficient user interfaces from multiple models specified in different notations. Solutions balance the advantages of separating code generation into specialized code generators with deep, model-specific knowledge against the correctness and efficiency obstacles that result from such separation. We present a solution that maximizes the advantages of separating code generation. In our approach, highly specialized, model-specific code generators synthesize run-time modules from individual models. We address the correctness and efficiency obstacles by formalizing composition mechanisms that code generators may assume and that are guaranteed by a run-time infrastructure. The mechanisms operate to support run-time module composition as conjunctions in the sense defined by Zave and Jackson.
  • Item
    Automating the Design of Specification Interpreters
    (Georgia Institute of Technology, 1996) Stirewalt, R. E. Kurt ; Rugaber, Spencer ; Abowd, Gregory D.
    In this paper, we demonstrate the use of model checking in an automated technique to verify the operationalization of a declarative specification language. An interpreter synthesizer is a software tool that transforms a declarative specification into an executable interpreter. Iterative approaches to synthesizer generation refine initial synthesizer designs by validating them over a test suite of specifications. Carefully chosen test suites and structural constraints enable inductive reasoning, with support from a model checker, to assert the correctness of generated interpreters. This iterative approach to synthesizer generation occurred naturally in our work on developing interpreters for declarative human-computer dialogue languages as part of the DARPA MASTERMIND project. We will discuss the issues underlying the translation, operationalization, and verification of the hierarchical task language for MASTERMIND. We will also discuss the importance of this semi-automated, iterative approach for assessing non-functional design tradeoffs.
  • Item
    The MASTERMIND User Interface Generation Project
    (Georgia Institute of Technology, 1996) Browne, Thomas ; Davila, David ; Rugaber, Spencer ; Stirewalt, R. E. Kurt
    Graphical user interfaces are difficult to construct and, consequently, suffer from high development and maintenance costs. Automatic generation from declarative descriptions can reduce costs and enforce design principles. MASTERMIND is a model-based approach to user interface generation. Designers model different aspects of an interface using declarative modeling languages, and tools synthesize these models into run-time code. The design process begins with user task and application modeling. These models are then refined into dialogue, presentation, and interaction models and an application API. These latter models drive the synthesis of run-time code. A design tool called Dukas is employed to support the refinement of task models into dialogue models.