Person:
Schwan, Karsten

Publication Search Results

Now showing 1 - 10 of 63
  • Item
    Data staging on future platforms: Systems management for high performance and resilience
    (Georgia Institute of Technology, 2014-05) Schwan, Karsten ; Eisenhauer, Greg S. ; Wolf, Matthew
  • Item
    DDDAS-TMRP: Dynamic, simulation-based management of surface transportation systems
    (Georgia Institute of Technology, 2009-12-21) Fujimoto, Richard M. ; Leonard, John D., II ; Guensler, Randall L. ; Schwan, Karsten ; Hunter, Michael D.
  • Item
    Interactive program steering of high performance parallel systems
    (Georgia Institute of Technology, 2009-09-21) Schwan, Karsten
  • Item
    I/O Virtualization - from self-virtualizing devices to metadata-rich information appliances
    (Georgia Institute of Technology, 2009-07-12) Schwan, Karsten ; Eisenhauer, Greg S. ; Gavrilovska, Ada
  • Item
    XCHANGE: High Performance Data Morphing in Distributed Applications
    (Georgia Institute of Technology, 2005) Lofstead, Jay ; Schwan, Karsten
    Distributed applications in which large volumes of data are exchanged between components that generate, process, and store or display data are common in both the high performance and enterprise domains. A key issue in these domains is a mismatch between the data being generated and the data required by end users or by intermediate components. Mismatches are due to the need to customize or personalize data for certain end users, or they arise from natural differences in the data representations used by different components. In either case, mismatch correction, termed 'data morphing', requires either servers or clients to perform extensive data processing. This paper describes automated methods and associated tools for morphing data in overlay networks that connect data producers with consumers. These methods automatically generate data transformation codes from declarative specifications, 'just in time', i.e., when and as needed. By describing data transformations declaratively, code generation can take into account the current nature of the data being generated, the current needs of data sinks, and the current resources available in the overlay connecting sources to sinks. In addition, code generation can consider the shared requirements of multiple consumers, to reduce redundant data transmissions and transformations. Data morphing is realized with the XCHANGE toolset, and in this paper, it is applied to both high performance and enterprise applications. Runtime generation and deployment of data morphing codes for filtering and transforming the large data volumes exchanged in a high performance remote data visualization application is shown to improve network usage, by generating code that matches the data volumes exchanged to the available network resources. Morphing codes dynamically generated and deployed for an enterprise application in the healthcare domain demonstrate that generated code can also improve server scalability by reducing server loads.
  • Item
    Java Mirrors: Building Blocks for Interacting with High Performance Applications
    (Georgia Institute of Technology, 2005) Chen, Yuan ; Schwan, Karsten ; Rosen, David W.
    Mirror objects are the key building blocks in the virtual 'workbenches' and 'portals' for scientific and engineering applications constructed by our group. This paper uses mirror objects in the implementation of the RTTB design workbench, which controls components of the RTTB rapid tooling and prototyping testbed. Mirror objects continuously mirror the states of remote software or even hardware entities, and the operations performed on mirrors are automatically propagated to these entities. Thus, end users perceive mirrors as virtualizations of remote entities. This paper presents the concept of mirror objects, their JMOSS Java-based implementation, the interoperation of JMOSS Java mirrors with the CORBA-based MOSS mirror object implementation, demonstrations of mirror functionality and utility with a virtual 'design workbench' used by engineers for rapid tooling and prototyping processes, and performance evaluations of mirror objects. We also present initial evaluations of JMOSS mirrors in mobile environments, where workbench users can continue their PC-based online interactions via handheld devices carried to the shop floor.
  • Item
    Leveraging Block Decisions and Aggregation in the ShareStreams QoS Architecture
    (Georgia Institute of Technology, 2003) Krishnamurthy, Rajaram B. ; Yalamanchili, Sudhakar ; Schwan, Karsten ; West, Richard
    ShareStreams (Scalable Hardware Architectures for Stream Schedulers) is a canonical architecture for realizing a range of scheduling disciplines. This paper discusses the design choices and tradeoffs made in the development of an Endsystem/Host-based router realization of the ShareStreams architecture. We evaluate the impact of block decisions and aggregation on the ShareStreams architecture. Using processor resources for queuing and data movement, and FPGA hardware for accelerating stream selection and stream priority updates, ShareStreams can easily meet the wire-speeds of 10Gbps links. This allows provision of customized scheduling solutions and interoperability of scheduling disciplines. FPGA hardware uses a single-cycle Decision block to compare multiple stream attributes simultaneously for pairwise ordering, and a Decision block arrangement in a recirculating network to conserve area and improve scalability. Our hardware implemented in the Xilinx Virtex family easily scales from 4 to 32 stream-slots on a single chip. A running FPGA prototype in a PCI card under systems software control can provide scheduling support for a mix of EDF, static-priority and fair-share streams based on user specifications and meet the temporal bounds and packet-time requirements of multi-gigabit links.
  • Item
    A Practical Approach for Zero Downtime in an Operational Information System
    (Georgia Institute of Technology, 2002) Gavrilovska, Ada ; Schwan, Karsten ; Oleson, Van
    An Operational Information System (OIS) supports a real-time view of an organization's information critical to its logistical business operations. A central component of an OIS is an engine that integrates data events captured from distributed, remote sources in order to derive meaningful real-time views of current operations. This Event Derivation Engine (EDE) continuously updates these views and also publishes them to a potentially large number of remote subscribers. This paper describes a sample OIS and EDE in the context of an airline's operations. It then defines the performance and availability requirements to be met by this system, specifically focusing on the EDE component. One particular requirement for the EDE is that subscribers to its output events should not experience downtime due to EDE failures and crashes or increased processing loads. This paper describes a practical technique for masking failures and for hiding the costs of recovery from EDE subscribers. This technique utilizes redundant EDEs that coordinate view replicas with a relaxed synchronous fault tolerance protocol. A combination of pre- and post-buffering replicas is used to attain an optimal solution, which still prevents system-wide failure in the face of deterministic faults, such as ill-formed messages. By minimizing the amount of synchronization used across replicas, the resulting zero downtime EDE can be scaled to support the large number of subscribers it must service.
  • Item
    Method Partitioning - Runtime Customization of Pervasive Programs without Design-time Application Knowledge
    (Georgia Institute of Technology, 2002) Zhou, Dong ; Pande, Santosh ; Schwan, Karsten
    Heterogeneity, decoupling, and dynamics in distributed, component-based applications indicate the need for dynamic program customization and adaptation. Method Partitioning is a dynamic, unit-placement-based technique for customizing performance-critical message-based interactions between program components, at runtime and without the need for design-time application knowledge. The technique partitions message handling functions, and offers high customizability and low-cost adaptation of such partitioning. It consists of (a) static analysis of message handling methods to produce candidate partitioning plans for the methods, (b) cost models for evaluating the costs/benefits of different partitioning plans, (c) a Remote Continuation mechanism that "connects" the distributed parts of a partitioned method at runtime, and (d) Runtime Profiling and Reconfiguration Units that monitor actual costs of candidate partitioning plans and that dynamically select "best" plans from the candidates. A prototypical implementation of Method Partitioning in the JECho distributed event system is applied to two distributed applications: (1) a communication-bound application running on a wireless-connected mobile platform, and (2) a compute-intensive code mapped to power-limited and therefore computationally limited embedded processors. Experiments with Method Partitioning demonstrate significant performance improvements for both types of applications, derived from the fine-grain, low overhead adaptation actions applied whenever necessitated by changes in program behavior or environment characteristics.
  • Item
    Optimizing Dynamic Producer/Consumer Style Applications in Embedded Environments
    (Georgia Institute of Technology, 2002) Zhou, Dong ; Pande, Santosh ; Schwan, Karsten
    Many applications in pervasive computing environments are subject to resource constraints in terms of limited bandwidth and processing power. As such applications grow in scale and complexity, these constraints become increasingly difficult to predict at design and deployment times. Runtime adaptation is therefore required to cope with the dynamics of such constraints. However, to keep such adaptation lightweight, it is important to statically gather relevant program information and thereby reduce the runtime overhead of dynamic adaptation. This paper presents methods that use both static program analysis and runtime profiling to support the adaptation of producer/consumer-style pervasive applications. It demonstrates these methods with a network traffic-centric cost model and a program execution time-centric cost model. A communication bandwidth-critical application and a computation-intensive application are used to demonstrate the significant performance improvement opportunities offered by these methods in the presence of the respective resource constraints.
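The XCHANGE abstract above describes generating data transformation code 'just in time' from a declarative specification of what a consumer needs. The following is a minimal sketch of that idea, not the actual toolset API; the spec format, field names, and `generate_morph` function are all hypothetical illustrations:

```python
# Minimal sketch of declarative, runtime-generated data morphing in the
# spirit of XCHANGE. All names here are hypothetical, not the real API.

def generate_morph(spec):
    """Compile a declarative field spec into a transformation function.

    spec: dict mapping output field name -> (input field name, converter).
    The returned function projects and converts producer records into
    exactly the shape one consumer asked for.
    """
    def morph(record):
        return {out: conv(record[src]) for out, (src, conv) in spec.items()}
    return morph

# The producer emits verbose records; this consumer wants only two fields,
# one converted to a cheaper representation before transmission.
spec = {
    "temp_c": ("temperature_millikelvin", lambda mk: mk / 1000.0 - 273.15),
    "node":   ("node_id", str),
}
morph = generate_morph(spec)

sample = {"temperature_millikelvin": 300150, "node_id": 42, "padding": [0] * 64}
print(morph(sample))
```

In the paper's setting the generated code would be deployed into the overlay between producer and consumer, so the large `padding`-style payload never crosses the network at all.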
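The mirror-object idea from the Java Mirrors abstract, a local proxy whose reads hit cached remote state while operations are propagated to the remote entity, can be caricatured as follows; `RemoteEntity` and `Mirror` are illustrative stand-ins, not the JMOSS implementation:

```python
# Toy illustration of a mirror object: the mirror virtualizes a remote
# entity by caching its state and forwarding operations. Hypothetical names.

class RemoteEntity:
    """Stands in for a remote software or hardware component."""
    def __init__(self):
        self.state = {"power": "off"}

    def apply(self, key, value):
        self.state[key] = value
        return dict(self.state)  # reply with the updated state

class Mirror:
    """Local virtualization of a remote entity: reads are served from the
    cached state; operations propagate to the entity, then refresh the cache."""
    def __init__(self, entity):
        self._entity = entity
        self._cache = dict(entity.state)

    def get(self, key):
        return self._cache[key]  # local read, no remote round trip

    def set(self, key, value):
        self._cache = self._entity.apply(key, value)  # propagate, refresh

machine = RemoteEntity()
mirror = Mirror(machine)
mirror.set("power", "on")
print(mirror.get("power"), machine.state["power"])  # on on
```

The real system adds continuous state propagation in the other direction as well, so that a mirror tracks changes the entity makes on its own.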
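The ShareStreams abstract describes Decision blocks that compare stream attributes pairwise and an arrangement of such blocks that selects the next stream to service. A software caricature of pairwise ordering over stream slots follows; the attribute names and the particular comparison rule are illustrative only, not the FPGA design:

```python
# Sketch of pairwise Decision blocks selecting among stream slots.
# Attributes and the ordering rule below are hypothetical examples.

def decide(a, b):
    """Pairwise ordering: the earlier deadline wins (EDF-style);
    ties break on the higher static priority."""
    if a["deadline"] != b["deadline"]:
        return a if a["deadline"] < b["deadline"] else b
    return a if a["priority"] >= b["priority"] else b

def select_stream(slots):
    """Run the slots through a chain of decision blocks, as a
    recirculating comparator network would, and return the winner."""
    winner = slots[0]
    for s in slots[1:]:
        winner = decide(winner, s)
    return winner

slots = [
    {"id": "edf-1", "deadline": 40, "priority": 1},
    {"id": "edf-2", "deadline": 25, "priority": 0},
    {"id": "sp-1",  "deadline": 25, "priority": 3},
]
print(select_stream(slots)["id"])  # sp-1
```

In hardware each `decide` is a single-cycle block comparing multiple attributes at once; recirculating a small number of blocks over many slots is what conserves chip area.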
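The Method Partitioning abstract describes Runtime Profiling and Reconfiguration Units that monitor the actual costs of candidate partitioning plans and dynamically select the best one. A sketch of that selection loop, with entirely hypothetical plan names and cost units rather than the JECho implementation:

```python
# Sketch of runtime plan selection: profile candidate partitioning plans,
# then pick the cheapest under current conditions. Names are illustrative.

class PlanSelector:
    def __init__(self, plans):
        # plans: mapping plan name -> handler callable (unused in this sketch)
        self.plans = plans
        self.costs = {name: [] for name in plans}

    def record(self, name, cost):
        """Profiling hook: record an observed cost for one plan."""
        self.costs[name].append(cost)

    def best(self):
        """Select the plan with the lowest mean observed cost; plans with
        no samples yet are treated as zero-cost so they get tried first."""
        def mean(samples):
            return sum(samples) / len(samples) if samples else 0.0
        return min(self.plans, key=lambda name: mean(self.costs[name]))

selector = PlanSelector({"all_local": None, "split_at_filter": None})
selector.record("all_local", 12.0)       # e.g. ms per message, handler local
selector.record("all_local", 14.0)
selector.record("split_at_filter", 5.0)  # part of the handler runs remotely
print(selector.best())  # split_at_filter
```

The same shape fits the producer/consumer paper above: swap in a network-traffic cost model or an execution-time cost model, and reselection happens whenever newly recorded costs change which plan is cheapest.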