Person: Schwan, Karsten

Publication Search Results

Now showing 1 - 10 of 14
  • Item
    Data staging on future platforms: Systems management for high performance and resilience
    (Georgia Institute of Technology, 2014-05) Schwan, Karsten ; Eisenhauer, Greg S. ; Wolf, Matthew
  • Item
    I/O Virtualization - from self-virtualizing devices to metadata-rich information appliances
    (Georgia Institute of Technology, 2009-07-12) Schwan, Karsten ; Eisenhauer, Greg S. ; Gavrilovska, Ada
  • Item
    ITR: collaborative research: morphable software services: self-modifying programs for distributed embedded systems
    (Georgia Institute of Technology, 2008-12-14) Schwan, Karsten ; Pu, Calton ; Pande, Santosh ; Eisenhauer, Greg S. ; Balch, Tucker
  • Item
    Utility-Driven Availability-Management in Enterprise-Scale Information Flows
    (Georgia Institute of Technology, 2006) Cai, Zhongtang ; Kumar, Vibhore ; Cooper, Brian F. ; Eisenhauer, Greg S. ; Schwan, Karsten ; Strom, Robert E.
    Enterprises rely critically on the timely and sustained delivery of information, supported by middleware that ensures high availability for such information flows. Our goal is to augment such middleware to create resilient information flows that deliver information while maximizing the utility end user applications derive from it. Towards this end, this paper presents a 'proactive availability-management' technique to offer (1) information flows that dynamically self-determine their availability requirements based on high-level utility specifications, (2) flows that can trade recovery time for performance based on the 'perceived' stability of, and failure predictions (early alarms) for, the underlying system, and (3) methods, based on real-world case studies, to deal with both transient and non-transient failures. We have incorporated proactive availability-management into information flow middleware, and experiments reported in this paper demonstrate its capability to self-determine availability guarantees, to offer improved performance over a statically configured system, and to be resilient to a wide range of faults.
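
The availability/performance trade-off described above can be made concrete with a small calculation. Below is a minimal Python sketch of the idea, assuming a checkpoint-based recovery scheme; it uses Young's classic checkpoint-interval approximation rather than the paper's own model, and all names and constants are illustrative.

```python
import math

# Illustrative sketch, not the paper's actual algorithm: pick a checkpoint
# interval that balances runtime overhead against expected recovery cost,
# driven by a failure-rate estimate from a monitoring layer.
def checkpoint_interval(checkpoint_cost_s: float, predicted_mtbf_s: float) -> float:
    """Young's approximation: interval ~ sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * predicted_mtbf_s)

def adjust_for_utility(interval_s: float, availability_weight: float) -> float:
    """Shorten the interval when the utility spec weights availability
    (fast recovery) over raw throughput; availability_weight in [0, 1]."""
    return interval_s * (1.0 - 0.5 * availability_weight)

if __name__ == "__main__":
    base = checkpoint_interval(checkpoint_cost_s=2.0, predicted_mtbf_s=3600.0)
    print(f"base interval: {base:.1f}s")                      # 120.0s
    print(f"availability-biased: {adjust_for_utility(base, 0.8):.1f}s")  # 72.0s
```
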
  • Item
    Autonomic Information Flows
    (Georgia Institute of Technology, 2005) Schwan, Karsten ; Cooper, Brian F. ; Eisenhauer, Greg S. ; Gavrilovska, Ada ; Wolf, Matthew ; Abbasi, Hasan ; Agarwala, Sandip ; Cai, Zhongtang ; Kumar, Vibhore ; Lofstead, Jay ; Mansour, Mohamed S. ; Seshasayee, Balasubramanian ; Widener, Patrick M. (Patrick McCall)
    Today's enterprise systems and applications implement functionality that is critical to the ability of society to function. These complex distributed applications, therefore, must meet dynamic criticality objectives even when running on shared, heterogeneous, and dynamic computational and communication infrastructures. Focusing on the broad class of applications structured as distributed information flows, the premise of our research is that it is difficult, if not impossible, to meet their dynamic service requirements unless these applications exhibit autonomic or self-adjusting behaviors that are 'vertically' integrated with underlying distributed systems and hardware. Namely, their autonomic functionality should extend beyond the dynamic load balancing or request routing explored in current web-based software infrastructures to (1) exploit the ability of middleware or systems to be aware of underlying resource availability, (2) dynamically and jointly adjust the behaviors of interacting elements of the software stack being used, and even (3) dynamically extend distributed platforms with enterprise functionality (e.g., network-level business rules for data routing and distribution). The resulting vertically integrated systems can meet stringent criticality or performance requirements, reduce potentially conflicting behaviors across applications, middleware, systems, and resources, and prevent breaches of the 'performance firewalls' that isolate critical from non-critical applications. This paper uses representative information flow applications to argue the importance of vertical integration for meeting criticality requirements. This is followed by a description of the AutoFlow middleware, which offers methods that drive the control of application services with runtime knowledge of current resource behavior. Finally, we demonstrate the opportunities derived from AutoFlow's additional ability to enhance such methods by dynamically extending and controlling the underlying software stack, first to better understand its behavior and second to dynamically customize it to better meet current criticality requirements.
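
As a rough illustration of the vertical-integration argument, here is a minimal Python sketch of a control loop that consumes resource measurements and degrades non-critical flows first, preserving a 'performance firewall' around critical ones. The classes, threshold, and policy are invented for illustration and are not AutoFlow's API.

```python
from dataclasses import dataclass

# Illustrative names, not AutoFlow's actual interface.
@dataclass
class FlowEdge:
    name: str
    critical: bool
    degraded: bool = False

def control_step(edges: list[FlowEdge], bandwidth_mbps: dict[str, float],
                 threshold_mbps: float = 100.0) -> None:
    for edge in edges:
        starved = bandwidth_mbps.get(edge.name, 0.0) < threshold_mbps
        # Non-critical edges absorb the shortfall first, keeping the
        # 'performance firewall' around critical flows intact.
        edge.degraded = starved and not edge.critical

edges = [FlowEdge("ops-dashboard", critical=True), FlowEdge("audit-log", critical=False)]
control_step(edges, {"ops-dashboard": 80.0, "audit-log": 40.0})
print([(e.name, e.degraded) for e in edges])  # audit-log degrades, ops-dashboard does not
```
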
  • Item
    IQ-Services: Network-Aware Middleware for Interactive Large-Data Applications
    (Georgia Institute of Technology, 2004) Cai, Zhongtang ; Eisenhauer, Greg S. ; He, Qi ; Kumar, Vibhore ; Schwan, Karsten ; Wolf, Matthew
    IQ-Services are application-specific, resource-aware code modules executed by data transport middleware. They constitute a 'thin' layer between application components and the underlying computational and communication resources that implements the data manipulations necessary to permit wide-area collaborations to proceed smoothly, despite dynamic resource variations. IQ-Services interact with the application and resource layers via dynamic performance attributes, and end-to-end implementations of such attributes also permit clients to interact with data providers. Joint middleware/resource and provider/consumer interactions implement a cooperative approach to data management for the large-data applications targeted by our research. Experimental results in this paper demonstrate substantial performance improvements attained by coordinating network-level with service-level adaptations of the data being transported and by permitting end users to dynamically deploy and use application-specific services for manipulating data in ways suitable for their current needs.
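
A minimal sketch of how an IQ-Service-style module might consult a dynamic performance attribute before data crosses the wide area, in Python. The attribute name, thresholds, and function signature are assumptions for illustration, not the middleware's actual interface.

```python
# Illustrative sketch: a 'thin' service module reads a performance
# attribute published by the transport layer and downsamples the
# outgoing data accordingly.
def downsample_service(data: list[float], attributes: dict[str, float]) -> list[float]:
    bw = attributes.get("available_bw_mbps", float("inf"))  # hypothetical attribute
    stride = 1 if bw >= 100 else (2 if bw >= 50 else 4)
    return data[::stride]

samples = [float(i) for i in range(16)]
print(downsample_service(samples, {"available_bw_mbps": 40.0}))  # keeps every 4th value
```
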
  • Item
    Native Data Representation: an Efficient Wire Format for High Performance Computing
    (Georgia Institute of Technology, 2001) Bustamante, Fabián Ernesto ; Schwan, Karsten ; Eisenhauer, Greg S.
    Flexible and high-performance data exchange is becoming increasingly important. This trend is due in part to the growing interest among high-performance researchers in tool- and component-based approaches to software development. In trying to reap the well-known benefits of these approaches, the question arises of what communications infrastructure should be used to link the various application components. Traditional HPC-style communication libraries such as MPI offer good performance, but are not intended for loosely coupled systems. Object- and metadata-based approaches like XML offer the needed plug-and-play flexibility, but with significantly lower performance. We observe that the flexibility and baseline performance of data exchange systems are strongly determined by their "wire formats," or how they represent data for transmission in heterogeneous environments. Upon examining the performance implications of using a number of different wire formats, we propose an alternative approach to flexible high-performance data exchange, Native Data Representation, and evaluate its current implementation in the Portable Binary I/O library.
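
To make the wire-format trade-off concrete, here is a minimal Python sketch of the native-representation idea: the sender transmits data in its own native layout together with a small format descriptor, and the receiver converts only when layouts differ. This is a simplification for illustration; the descriptor encoding is invented and is not PBIO's actual wire protocol.

```python
import struct
import sys

# Invented descriptor scheme, not PBIO's: one record of int32 + float64
# in this host's native byte order.
NATIVE_DESC = ("<" if sys.byteorder == "little" else ">") + "id"

def marshal(record: tuple[int, float]) -> bytes:
    desc = NATIVE_DESC.encode()
    return bytes([len(desc)]) + desc + struct.pack(NATIVE_DESC, *record)

def unmarshal(wire: bytes) -> tuple[int, float]:
    n = wire[0]
    desc = wire[1:1 + n].decode()
    if desc == NATIVE_DESC:                    # fast path: layouts match, no conversion
        return struct.unpack(NATIVE_DESC, wire[1 + n:])
    return struct.unpack(desc, wire[1 + n:])   # slow path: receiver converts

print(unmarshal(marshal((42, 3.14))))          # (42, 3.14)
```
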
  • Item
    JECho - Supporting Distributed High Performance Applications with Java Event Channels
    (Georgia Institute of Technology, 2000) Schwan, Karsten ; Eisenhauer, Greg S. ; Chen, Yuan ; Zhou, Dong
    This paper presents JECho, a Java-based communication infrastructure for collaborative high performance applications. JECho implements a publish/subscribe communication paradigm, permitting distributed, concurrently executing sets of components to provide interactive service to collaborating end users via event channels. JECho's efficient implementation enables it to move events at rates higher than other Java-based event system implementations. In addition, using JECho's eager handler concept, individual event subscribers can dynamically tailor event flows to adapt to runtime changes in component behaviors and needs, and to changes in platform resources. JECho has been used to build distributed collaborative scientific codes as well as ubiquitous applications. Its event interface and eager handler mechanism have been shown to be flexible and, in some scenarios, critical to the successful implementation of such applications. This paper's micro-benchmarks demonstrate that, with optimizations and customizations of the runtime system and the object transport layer, TCP-based reliable group communication in Java can reach good performance levels. These benchmark results also suggest that it is viable to use JECho to build large-scale, high-performance event delivery systems. JECho's implementation is in pure Java. Its group-cast communication layer is based on Java sockets, and it also runs in some embedded environments that currently lack standard object serialization support.
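
A minimal Python sketch of a publish/subscribe event channel with an eager-handler-style filter that executes on the publisher side, so uninteresting events never consume network bandwidth. The classes here are illustrative stand-ins, not JECho's Java API.

```python
from typing import Any, Callable

class EventChannel:
    """Illustrative event channel: subscribers supply a handler plus an
    optional filter that runs where the event is published (eager)."""

    def __init__(self) -> None:
        self._subs: list[tuple[Callable[[Any], bool], Callable[[Any], None]]] = []

    def subscribe(self, handler: Callable[[Any], None],
                  event_filter: Callable[[Any], bool] = lambda e: True) -> None:
        self._subs.append((event_filter, handler))

    def publish(self, event: Any) -> None:
        for event_filter, handler in self._subs:
            if event_filter(event):   # eager: filter runs on the sender side
                handler(event)

chan = EventChannel()
chan.subscribe(lambda e: print("got", e), event_filter=lambda e: e["temp"] > 30)
chan.publish({"temp": 25})  # filtered out at the source
chan.publish({"temp": 35})  # delivered
```
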
  • Item
    Open Metadata Formats: Efficient XML-Based Communication for Heterogeneous Distributed Systems
    (Georgia Institute of Technology, 2000) Schwan, Karsten ; Eisenhauer, Greg S. ; Widener, Patrick M. (Patrick McCall)
    Definition and translation of metadata are part of all systems that exchange structured data. We observe that the manipulation of this metadata can be decomposed into three separate steps: discovery of the metadata, binding of program objects to the message formats represented in the metadata, and marshaling of data to and from wire formats using the metadata. We have designed a method of representing message formats in XML, using datatypes available in the XML Schema specification. We have implemented a tool, xml2wire, that uses such metadata and exploits this decomposition to provide flexible metadata definition facilities for an efficient binary communications mechanism. We also observe that xml2wire makes such flexibility possible without intolerable performance cost.
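
The three-step decomposition can be shown compactly. The sketch below uses an invented XML dialect and Python's struct module rather than xml2wire's actual XML Schema input: it discovers field metadata from the XML (step 1), binds it to a binary layout (step 2), and marshals data with that layout (step 3).

```python
import struct
import xml.etree.ElementTree as ET

# Invented format-description dialect, for illustration only.
FORMAT_XML = """
<message name="reading">
  <field name="id" type="int"/>
  <field name="value" type="double"/>
</message>
"""
TYPE_CODES = {"int": "i", "double": "d"}

def bind(xml_text: str) -> str:
    root = ET.fromstring(xml_text)                                  # step 1: discovery
    return "<" + "".join(TYPE_CODES[f.get("type")] for f in root)   # step 2: binding

layout = bind(FORMAT_XML)
wire = struct.pack(layout, 7, 2.5)                                  # step 3: marshaling
print(struct.unpack(layout, wire))                                  # (7, 2.5)
```
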
  • Item
    Real-Time Visualization in Distributed Computational Laboratories
    (Georgia Institute of Technology, 1999) King, Davis ; Schwan, Karsten ; Eisenhauer, Greg S. ; Plale, Beth ; Isert, Carsten
    Large data volumes cannot be transported, processed, or displayed in real time unless we apply general or application-specific compression and filtering techniques to them. In addition, when multiple end users inspect such data sets, or when multiple programs access or consume them, data distribution and display should be performed differentially, in accordance with the queries generated by programs or end users. Finally, if dynamic access queries cannot be formulated precisely, then they must be refined as they progress, to avoid unnecessary data retrievals and transfers and to avoid overloading programs or end users with uninteresting or unimportant data. The principal idea of our research is to create Active User Interfaces (AUIs) that continuously emit events describing their internal states and/or current information needs. Based on these events, we then develop methods for controlling the information streams directed at these interfaces, for single and for multiple, collaborating end users. The purposes of stream control are twofold. First, stream control deals with heterogeneous underlying hardware and software systems, where streams may originate at secondary storage media or be generated dynamically, may have to be moved across the Internet or may utilize local-area or high-performance interconnects, and where collaborating user interfaces may range from low-end PC-based displays to high-end immersive visualization engines. Second, stream control aims to achieve scalability for user interfaces facing large-scale, complex data streams, by offloading computations from visualizations to information generators or information routing sites, by dynamically migrating such computations to appropriate locations, and by adapting these computations to effect tradeoffs between the amount of data moved across network links and the computations required for data rendering, compression, filtering, and routing.
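
A minimal Python sketch of the Active User Interface idea, with invented names: the interface continuously reports its current region of interest as a state event, and the stream controller applies that interest at the data generator rather than at the display.

```python
from typing import Iterable, Iterator

class ActiveUI:
    """Illustrative AUI stand-in: exposes its current zoom window as an event."""

    def __init__(self) -> None:
        self.region = (0.0, 1.0)

    def emit_interest(self) -> tuple[float, float]:
        return self.region            # state event consumed upstream

def stream_control(source: Iterable[tuple[float, float]],
                   ui: ActiveUI) -> Iterator[tuple[float, float]]:
    lo, hi = ui.emit_interest()
    for x, y in source:
        if lo <= x <= hi:             # filter at the generator, not the display
            yield (x, y)

ui = ActiveUI()
ui.region = (0.25, 0.5)
data = [(i / 10, i * i) for i in range(10)]
print(list(stream_control(data, ui)))  # only points inside the region of interest
```
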