Person: Schwan, Karsten

Publication Search Results

Now showing 1 - 10 of 46

Experimentation with Event-Based Methods of Adaptive Quality of Service Management

1999, West, Richard; Schwan, Karsten

Many complex distributed applications require quality of service (QoS) guarantees on the end-to-end transfer of information across a distributed system. A major problem faced by any system, or infrastructure, providing QoS guarantees to such applications is that resource requirements and availability may change at run-time. Consequently, adaptive resource (and, hence, service) management mechanisms are required to guarantee quality of service to these applications. This paper describes different methods of adaptive quality of service management, implemented with the event-based mechanisms offered by the Dionisys quality of service infrastructure. Dionisys allows applications to influence: (1) how service should be adapted to maintain required quality, (2) when such adaptations should occur, and (3) where these adaptations should be performed. In Dionisys, service managers execute application-specific functions to monitor and adapt service (and, hence, resource usage and allocation), in order to meet the quality of service requirements of adaptable applications. This approach allows service managers to provide service in a manner specific to the needs of individual applications. Moreover, applications can monitor and pin-point resource bottlenecks, adapt their requirements for heavily-demanded resources, or adapt to different requirements of alternative resources, in order to improve or maintain their overall quality of service. Likewise, service managers can cooperate with other service managers and, by using knowledge of application-specific resource requirements and adaptation capabilities, each service manager can make better decisions about resource allocation. Using a real-time client-server application, built on top of Dionisys, we compare alternative strategies for adapting and coordinating CPU and network services. In this fashion, we demonstrate the importance of flexibility in quality of service management.
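
To make the how/when/where split concrete, the sketch below shows one way an event-driven service manager of the kind described here could invoke application-supplied monitoring and adaptation functions. All names (MonitorEvent, ServiceManager, the example frame-rate policy) are invented for illustration and are not the actual Dionisys interfaces.

```python
# Hypothetical sketch of an event-driven service manager; the classes,
# method names, and policy below are illustrative, not the Dionisys API.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class MonitorEvent:
    resource: str         # e.g. "cpu" or "network"
    utilization: float    # observed utilization, 0.0 - 1.0

class ServiceManager:
    """Runs application-specific monitor/adapt functions on resource events."""

    def __init__(self) -> None:
        self._policies: List[Tuple[Callable, Callable]] = []

    def register(self, should_adapt: Callable[[MonitorEvent], bool],
                 adapt: Callable[[MonitorEvent], None]) -> None:
        # 'should_adapt' captures *when* to adapt; 'adapt' captures *how*.
        self._policies.append((should_adapt, adapt))

    def on_event(self, event: MonitorEvent) -> None:
        # *Where* adaptation happens is wherever this manager is deployed
        # (client, server, or an intermediate node).
        for should_adapt, adapt in self._policies:
            if should_adapt(event):
                adapt(event)

# Example application policy: lower the requested frame rate when the CPU
# is saturated, raise it again when there is headroom.
state = {"target_fps": 30.0}

def cpu_pressure(ev: MonitorEvent) -> bool:
    return ev.resource == "cpu" and (ev.utilization > 0.9 or ev.utilization < 0.5)

def rescale_rate(ev: MonitorEvent) -> None:
    factor = 0.8 if ev.utilization > 0.9 else 1.1
    state["target_fps"] = min(30.0, max(5.0, state["target_fps"] * factor))

manager = ServiceManager()
manager.register(cpu_pressure, rescale_rate)
manager.on_event(MonitorEvent("cpu", utilization=0.95))
print(state["target_fps"])   # 24.0: the application adapted its demand
```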

Min-cut Methods for Mapping Dataflow Graphs

1999, Elling, Volker Wilhelm; Schwan, Karsten

High performance applications and the underlying hardware platforms are becoming increasingly dynamic; runtime changes in the behavior of both are likely to result in inappropriate mappings of tasks to parallel machines during application execution. This fact is prompting new research on mapping and scheduling the dataflow graphs that represent parallel applications. In contrast to recent research which focuses on critical paths in dataflow graphs, this paper presents new mapping methods that compute near-min-cut partitions of the dataflow graph. Our methods deliver mappings that are an order of magnitude more efficient than those of DSC, a state-of-the-art critical-path algorithm, for sample high performance applications.
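
As a rough illustration of the min-cut objective (as opposed to a critical-path objective), the toy sketch below greedily improves a two-way mapping of a weighted dataflow graph by reducing the communication volume that crosses the partition boundary. The graph and refinement pass are assumptions for illustration, not the mapping algorithm evaluated in the paper.

```python
# Toy sketch of the min-cut objective for mapping a dataflow graph onto two
# processing elements; not the paper's algorithm, and it ignores the load
# balance that a real mapper must also respect.

def cut_weight(edges, side):
    # edges: {(u, v): communication volume}; side: {node: 0 or 1}
    return sum(w for (u, v), w in edges.items() if side[u] != side[v])

def greedy_mincut_pass(edges, side):
    best = cut_weight(edges, side)
    improved = True
    while improved:
        improved = False
        for node in list(side):
            side[node] ^= 1                        # tentatively move the node
            new_cut = cut_weight(edges, side)
            ones = sum(side.values())
            if new_cut < best and 0 < ones < len(side):   # keep both parts non-empty
                best, improved = new_cut, True
            else:
                side[node] ^= 1                    # revert the move
    return side, best

edges = {("a", "b"): 5, ("b", "c"): 1, ("c", "d"): 4, ("a", "c"): 1}
side = {"a": 0, "b": 1, "c": 0, "d": 1}            # initial mapping, cut = 10
print(greedy_mincut_pass(edges, side))             # ends with cut = 2
```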

IR-DOMS project (SBIR)

1998, Schwan, Karsten

Dynamic Authentication for High-Performance Networked Applications

1998, Schneck, Phyllis Adele; Schwan, Karsten

Both government and business are increasingly interested in addressing the growing threats posed by the lack of adequate information security. Consistent with these efforts, our work focuses on the integrity and protection of information exchanged in high-performance networked computing applications such as video teleconferencing and other streamed interactive data exchanges. For these applications, security procedures are often omitted in the interest of performance. Since this may not be acceptable when using public communications media, our research makes explicit and then exploits the inherent tradeoffs between performance and security in communications. In this paper, we expand the notion of QoS to include the level of security that can be offered within performance and CPU resource availability constraints. To address performance and security tradeoffs in asymmetric and dynamic client-server environments, we developed Authenticast, a dynamically configurable, user-level communications protocol offering variable levels of security throughout execution. The Authenticast protocol comprises a suite of heuristics to realize dynamic security levels, as well as heuristics that decide when and how to apply dynamic security. To demonstrate this protocol, we have implemented a prototype of a high performance privacy system. We have also developed and experimented with a novel security control abstraction with which tradeoffs between security and performance may be made explicit and then exploited under dynamic client-server asymmetries. This abstraction, called a security thermostat [12], interacts directly with Authenticast to enable adaptive security processing. Our results demonstrate overall increased scalability and improved performance when adaptive security is applied to the client-server platform with varying numbers of clients and varying resource availability at the clients.
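
The sketch below illustrates the general idea of trading security processing against available CPU, in the spirit of the security thermostat described above. The levels, thresholds, and HMAC-based tagging scheme are assumptions made for this illustration; they are not the actual Authenticast mechanisms or parameters.

```python
# Hypothetical "security thermostat": pick how much per-packet
# authentication a stream gets from the CPU headroom currently available.
# Levels, thresholds, and the HMAC tagging scheme are invented for this
# sketch; they are not Authenticast's actual parameters.
import hashlib
import hmac

LEVELS = [
    ("none", 0.0),       # no per-packet authentication
    ("sampled", 0.25),   # authenticate 1 in 4 packets
    ("full", 1.0),       # authenticate every packet
]

def choose_level(cpu_headroom):
    # More idle CPU lets the endpoint afford stronger per-packet security.
    if cpu_headroom < 0.2:
        return LEVELS[0]
    if cpu_headroom < 0.6:
        return LEVELS[1]
    return LEVELS[2]

def frame_stream(packets, key, cpu_headroom):
    name, fraction = choose_level(cpu_headroom)
    stride = int(1 / fraction) if fraction else 0
    framed = []
    for i, payload in enumerate(packets):
        tag = b""
        if stride and i % stride == 0:
            tag = hmac.new(key, payload, hashlib.sha256).digest()
        framed.append((payload, tag))
    return name, framed

level, framed = frame_stream([b"frame%d" % i for i in range(8)], b"secret", 0.5)
tagged = sum(1 for _, tag in framed if tag)
print(level, tagged, "of", len(framed), "packets authenticated")  # sampled 2 of 8
```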

AASERT : dynamic configuration of distributed objects : on-line monitoring

1999, Schwan, Karsten

Scalable Scheduling Support for Loss and Delay Constrained Media Streams

1998, West, Richard; Schwan, Karsten; Poellabauer, Christian

Real-time media servers need to service hundreds and, possibly, thousands of clients, each with its own quality of service (QoS) requirements. Guaranteeing such diverse QoS requires fast and efficient scheduling support at the server. This paper describes the practical issues involved in implementing a scalable real-time packet scheduler, resident on a server, that is designed to meet service constraints on information transferred across a network to many clients. Specifically, we describe the implementation issues and performance achieved by Dynamic Window-Constrained Scheduling (DWCS), which is designed to meet the delay and loss constraints on packets from multiple streams with different performance objectives. In fact, DWCS is designed to limit the number of late packets over finite numbers of consecutive packets in loss-tolerant and/or delay-constrained, heterogeneous traffic streams. We show how DWCS can be efficiently implemented to provide service guarantees to hundreds of streams, and we compare the costs of different implementations, including an approximation algorithm that trades service quality for speed of execution.
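
To give a rough feel for window-constrained scheduling, the sketch below services the stream that is closest to violating its x-out-of-y late-packet tolerance and otherwise picks the earliest deadline. This simplified selection rule and all of its parameters are assumptions for illustration; they omit the details of the published DWCS algorithm.

```python
# Simplified sketch in the spirit of window-constrained scheduling: each
# stream tolerates at most x late packets per window of y consecutive
# packets. The selection rule below is illustrative and omits details of
# the real DWCS algorithm.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Stream:
    name: str
    x: int                 # late packets tolerated per window
    y: int                 # window length, in packets
    period: float          # spacing between packet deadlines (seconds)
    next_deadline: float = 0.0
    window: deque = field(default_factory=deque)   # 1 = late, 0 = on time

    def tolerance_left(self):
        return self.x - sum(self.window)

    def record(self, late):
        # A packet is marked late if it was sent after its deadline.
        self.window.append(1 if late else 0)
        if len(self.window) == self.y:             # window complete: reset it
            self.window.clear()
        self.next_deadline += self.period

def pick_next(streams):
    # Streams that have used up their late-packet tolerance for the current
    # window are served first; otherwise the earliest deadline wins.
    return min(streams, key=lambda s: (s.tolerance_left() > 0, s.next_deadline))

streams = [Stream("audio", x=1, y=4, period=0.020),
           Stream("video", x=2, y=8, period=0.033)]
now = 0.0
for _ in range(6):
    s = pick_next(streams)
    s.record(late=now > s.next_deadline)
    print(f"t={now:.3f}s send {s.name}")
    now += 0.010           # assume each packet takes 10 ms to transmit
```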

Adaptation and Specialization for High Performance Mobile Agents

1998, Zhou, Dong; Schwan, Karsten

Mobile agents, a new design paradigm for distributed computing, potentially permit network applications to operate across dynamic and heterogeneous systems and networks. Agent computing, however, is subject to inefficiencies arising from the complexities of the environments in which agents are deployed. Agent-based programs therefore rely on underlying agent systems to mask some of those complexities, by using system-wide, uniform representations of agent code and data and by 'hiding' the volatility in agents' 'spatial' relationships. This paper explores two approaches for improving the performance of agent-based programs: (1) runtime adaptation and (2) agent specialization. Our general aim is to enable programmers to employ these techniques to improve program performance without sacrificing the fundamental advantages promised by mobile agent programming. The specific results in this paper demonstrate the beneficial effects of agent adaptation both for a single mobile agent and for several cooperating agents, using the adaptation techniques of agent morphing and agent fusion. Experimental results are obtained with two sample high performance distributed applications, derived from the scientific domain and from sensor-based codes, respectively.

Real-Time Visualization in Distributed Computational Laboratories

1999, King, Davis; Schwan, Karsten; Eisenhauer, Greg S.; Plale, Beth; Isert, Carsten

Large data volumes cannot be transported, processed, or displayed in real-time unless general or application-specific compression and filtering techniques are applied to them. In addition, when multiple end users inspect such data sets, or when multiple programs access or consume them, data distribution and display should be performed differentially, in accordance with the queries generated by programs or end users. Finally, if dynamic access queries cannot be formulated precisely, then they must be refined as they progress, in order to avoid unnecessary data retrievals and transfers and to avoid overloading programs or end users with uninteresting or unimportant data. The principal idea of our research is to create Active User Interfaces (AUIs) that continuously emit events describing their internal states and/or current information needs. Based on these events, we then develop methods for controlling the information streams directed at these interfaces, for single and for multiple, collaborating end users. The purposes of stream control are twofold. First, stream control is performed to deal with heterogeneous underlying hardware and software systems, where streams may originate at secondary storage media or may be generated dynamically, may have to be moved across the Internet or may utilize local area or high performance interconnects, and where collaborating user interfaces may range from low-end PC-based displays to high-end immersive visualization engines. Second, stream control aims to achieve scalability of user interfaces to the large-scale, complex data streams directed at them, by offloading computations from visualizations to information generators or to information routing sites, by dynamically migrating such computations to appropriate locations, and by adapting these computations in order to trade off the amount of data moved across network links against the computations required for data rendering, compression, filtering, and routing.
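
The sketch below gives a minimal flavor of the AUI idea: the display emits an event describing what it currently needs, and an upstream stream-control stage uses the most recent such event to drop data the viewer would not display. The event format and function names are invented for this illustration, not the paper's interfaces.

```python
# Minimal sketch of an Active User Interface: the display emits events
# describing its current information needs, and an upstream stream-control
# stage filters data accordingly. Event format and names are invented.
import queue

interest_events = queue.Queue()    # AUI -> stream-control channel

def aui_zoom_to(xmin, xmax):
    """The user interface announces the region it is currently inspecting."""
    interest_events.put({"xmin": xmin, "xmax": xmax})

def stream_filter(samples, current_interest):
    # Pick up the newest interest event, if any, before filtering this batch.
    while not interest_events.empty():
        current_interest = interest_events.get()
    kept = [s for s in samples
            if current_interest["xmin"] <= s["x"] <= current_interest["xmax"]]
    return kept, current_interest

interest = {"xmin": 0.0, "xmax": 100.0}          # initially show everything
aui_zoom_to(40.0, 60.0)                          # the user zooms in
batch = [{"x": float(x), "value": x * x} for x in range(0, 100, 5)]
kept, interest = stream_filter(batch, interest)
print(len(kept), "of", len(batch), "samples forwarded")   # 5 of 20
```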

Interactors: Capturing Tradeoffs in Bandwidth Versus CPU Usage for Quality of Service Constrained Objects

1998, West, Richard; Schwan, Karsten

Complex distributed applications, including virtual environments and real-time multimedia, require performance guarantees in the end-to-end transfer of information across a network. Making such guarantees requires the management of processing, memory, and network resources. This paper describes the Dionisys end-system quality of service (QoS) approach to specifying, translating, and enforcing end-to-end, object-level QoS constraints. Dionisys differs from previous work on QoS architectures by supporting QoS constraints on distributed shared objects, as well as on multimedia streams. Consequently, we introduce 'interactors', which capture the QoS constraints and resource requirements at each stage in the generation, processing, and transfer of information between multiple cooperating objects. Using interactors, Dionisys is able to coordinate both thread-level and packet-level scheduling, so that information is processed and transmitted at matching rates. However, there are tradeoffs between the CPU cycles spent on scheduling and the need to meet QoS constraints on the information transferred between interacting objects. We show, empirically, the packet scheduling frequency that minimizes CPU overheads while maximizing bandwidth usage.
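
A back-of-the-envelope sketch of the tradeoff mentioned in the last sentence: invoking the packet scheduler more often costs CPU but is needed to keep the link busy, so some frequency balances the two. The overhead, packet size, and link speed below are invented numbers, not measurements from the paper.

```python
# Back-of-the-envelope sketch of the packet-scheduling frequency vs. CPU
# tradeoff; the overhead, packet size, and link speed are invented numbers.
SCHED_OVERHEAD_S = 50e-6      # assumed cost of one scheduler invocation
PACKET_BYTES = 1500
LINK_BPS = 100e6              # assumed 100 Mbit/s link

def evaluate(frequency_hz, packets_per_invocation):
    cpu_fraction = frequency_hz * SCHED_OVERHEAD_S
    offered_bps = frequency_hz * packets_per_invocation * PACKET_BYTES * 8
    return cpu_fraction, min(offered_bps, LINK_BPS)

for freq in (100, 1000, 10000):
    cpu, bw = evaluate(freq, packets_per_invocation=8)
    print(f"{freq:>6} Hz: {cpu:5.1%} CPU on scheduling, {bw / 1e6:6.1f} Mbit/s")
# Low frequencies leave the link idle; high frequencies burn CPU. Batching
# more packets per invocation shifts where the balance point lies.
```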

IR-DOMS project (SBIR) : phase II

1998, Schwan, Karsten