ArchiveSpace Name Record
Publication Search Results
Now showing 1 - 10 of 11
GeoCast: An Efficient Overlay System for Multicast Applications (Georgia Institute of Technology, 2009)
Liu, Ling; Pu, Calton; Wang, Yuehua; Zhang, Gong
In this paper, we present GeoCast, a geographical-location-aware overlay network framework designed to provide efficient group communication services. GeoCast can be seen as an extension of the CAN network in terms of topology management and routing protocol. GeoCast's design has three important properties that are attractive to group communication applications. First, it uses a geographical mapping of nodes to regions to take advantage of the similarity between physical and network proximity. Second, GeoCast employs a shortcut-enabled geo-distance routing protocol, which is more resilient than Chord-like or Pastry-like overlay networks due to the availability of multiple independent routing paths. Third and most importantly, a novel routing-table management scheme allows applications built on GeoCast to manage their maintenance overhead in terms of network resource constraints.
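The general idea of geo-distance routing that the abstract describes can be sketched as greedy geographic forwarding: each node forwards toward the neighbor closest to the destination. This is a minimal illustrative sketch, not GeoCast's actual protocol; all names and the local-minimum handling are assumptions.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(neighbors, pos, src, dst):
    """Greedy geo-distance forwarding: at each hop, pick the neighbor
    geographically closest to the destination. Stops at a local minimum,
    where a real protocol would fall back to another strategy
    (e.g. GeoCast's shortcuts)."""
    path = [src]
    cur = src
    while cur != dst:
        nxt = min(neighbors[cur], key=lambda n: dist(pos[n], pos[dst]))
        if dist(pos[nxt], pos[dst]) >= dist(pos[cur], pos[dst]):
            break  # local minimum: no neighbor improves on the current node
        cur = nxt
        path.append(cur)
    return path
```

The availability of multiple neighbors per region is what gives such routing several independent paths to fall back on.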
Cosmos: A Wiki Data Management System (Georgia Institute of Technology, 2009)
Wu, Qinyi; Pu, Calton; Irani, Danesh
Wiki applications are becoming increasingly important for knowledge sharing among large numbers of users. To protect against vandalism and recover from destructive edits, wiki applications need to maintain the revision histories of all documents. Due to the large amounts of data and traffic, a wiki application needs to store the data economically and retrieve documents efficiently. Current wiki data management systems (WDMSs) make a trade-off between storage requirements and access time for document update and retrieval. We introduce a new data management system, Cosmos, to balance this trade-off. To compare Cosmos with other WDMSs, we use a 68 GB data sample from the English Wikipedia. Our experiments show that Cosmos uses one-fifth of the disk space of MediaWiki (Wikipedia's backend) and retrieves documents faster than other WDMSs.
Consistency in Real-time Collaborative Editing Systems Based on Partial Persistent Sequences (Georgia Institute of Technology, 2009)
Wu, Qinyi; Pu, Calton
In real-time collaborative editing systems, users create a shared document by issuing insert, delete, and undo operations on their local replicas anytime and anywhere. Data consistency issues arise from concurrent editing conflicts. Traditional consistency models put restrictions on editing operations that update different portions of a shared document; these restrictions are unnecessary for many editing scenarios and make their view synchronization strategies less efficient. To address these problems, we propose a new data consistency model that preserves convergence and synchronizes editing operations only when they access overlapping or contiguous characters. Our view synchronization strategy is implemented by a novel data structure, the partial persistent sequence: an ordered set of items indexed by persistent and unique position identifiers. It captures the data dependencies of editing operations and encodes them so that they can be correctly executed on any document replica. As a result, a simple and efficient view synchronization strategy can be implemented.
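The core idea of persistent, unique, ordered position identifiers can be sketched with dense rational identifiers and tombstoned deletes. This is an illustrative sketch only; the paper's actual identifier encoding and class names differ.

```python
from fractions import Fraction

class PartialPersistentSequence:
    """Illustrative sketch: items keyed by persistent, unique, totally
    ordered position identifiers (dense rationals here). Deletes leave
    tombstones, so identifiers referenced by concurrent operations stay
    valid on any replica."""

    def __init__(self):
        self.items = {}  # Fraction id -> (char, visible)

    def id_between(self, left, right):
        """A fresh identifier strictly between two existing ones
        (None means the sequence boundary)."""
        left = Fraction(0) if left is None else left
        right = Fraction(1) if right is None else right
        return (left + right) / 2

    def insert(self, pid, char):
        self.items[pid] = (char, True)

    def delete(self, pid):
        char, _ = self.items[pid]
        self.items[pid] = (char, False)  # tombstone: the id remains valid

    def text(self):
        """Visible characters in identifier order."""
        return "".join(c for _, (c, vis) in sorted(self.items.items()) if vis)
```

Because identifiers never change or get reused, an operation's target position means the same thing on every replica regardless of concurrent edits.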
ITR/SI: Guarding the next internet frontier: countering denial of information (Georgia Institute of Technology, 2008-12-19)
Ahamad, Mustaque; Omiecinski, Edward; Pu, Calton; Mark, Leo; Liu, Ling
ITR: collaborative research: morphable software services: self-modifying programs for distributed embedded systems (Georgia Institute of Technology, 2008-12-14)
Schwan, Karsten; Pu, Calton; Pande, Santosh; Eisenhauer, Greg S.; Balch, Tucker
Enforcing Configurable Trust in Client-side Software Stacks by Splitting Information Flow (Georgia Institute of Technology, 2007)
Singaravelu, Lenin; Kauer, Bernhard; Boettcher, Alexander; Härtig, Hermann; Pu, Calton; Jung, Gueyoung; Weinhold, Carsten
Current client-server applications such as online banking employ the same client-side software stack to handle information with differing security and functionality requirements, thereby increasing the size and complexity of software that must be trusted. While the high complexity of existing software is a significant hindrance to testing and analysis, existing software and interfaces are too widely used to be abandoned entirely. We present a proxy-based approach called FlowGuard to address the problem of large and complex client-side software stacks. FlowGuard's proxy uses mappings from the sensitivity of information to the trustworthiness of software stacks to demultiplex incoming messages among multiple client-side software stacks. One of these stacks is a fully functional legacy software stack; another is a small, simple stack designed to handle sensitive information. In contrast to previous approaches, FlowGuard not only reduces the complexity of the software handling sensitive information but also minimizes modifications to legacy software stacks. By allowing users and service providers to define the mappings, FlowGuard also provides flexibility in determining functionality-security trade-offs. We demonstrate the feasibility of our approach by implementing a FlowGuard, called BLAC, for HTTPS-based applications. BLAC relies on text patterns to identify sensitive information in HTTP responses and redirects such responses to a small and simple TrustedViewer, while an unmodified legacy software stack handles the remaining responses. We developed a prototype implementation that works with a prominent bank's online banking site. Our evaluation shows that BLAC reduces the size and complexity of the software that needs to be trusted by an order of magnitude, with a manageable overhead of a few tens of milliseconds per HTTP response.
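The pattern-based demultiplexing step that BLAC performs can be sketched as follows. The patterns and stack names are illustrative assumptions, not BLAC's actual rules.

```python
import re

# Hypothetical sensitivity patterns: a response matching any of them is
# routed to the small trusted viewer; everything else goes to the legacy
# stack. Real deployments would let users/providers define these mappings.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like number
    re.compile(r"account\s+balance", re.IGNORECASE),
]

def demultiplex(response_body):
    """Decide which client-side stack should render an HTTP response."""
    if any(p.search(response_body) for p in SENSITIVE_PATTERNS):
        return "trusted_viewer"
    return "legacy_stack"
```

Only responses flagged as sensitive ever reach the small stack, which is what keeps the trusted code base an order of magnitude smaller than the full legacy stack.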
A Secure Middleware Architecture for Web Services (Georgia Institute of Technology, 2007)
Singaravelu, Lenin; Wei, Jinpeng; Pu, Calton
Current web service platforms (WSPs) often perform all web-service-related processing, including the handling of security-sensitive information, in the same protection domain. Consequently, the entire WSP may have access to security-sensitive information such as credit card numbers, forcing us to trust a large and complex piece of software. To address this problem, we propose ISO-WSP, a new middleware architecture that decomposes current WSPs into two parts executing in separate protection domains: (1) a small trusted T-WSP that handles security-sensitive data, and (2) a large, untrusted legacy U-WSP that provides the normal WSP functionality but uses the T-WSP for security-sensitive data handling. By restricting security-sensitive data access to the T-WSP, ISO-WSP reduces the software complexity of the trusted code, thereby improving its testability. To achieve end-to-end security, the application code is also decomposed into two parts, isolating a small trusted part from the remaining untrusted code. The trusted part encapsulates all accesses to security-sensitive data behind a Secure Functional Interface (SFI). To ease the migration of legacy applications to ISO-WSP, we developed tools that translate direct manipulations of security-sensitive data by the untrusted part into SFI invocations. Using a prototype implementation based on the Apache Axis2 WSP, we show that ISO-WSP reduces the software complexity of trusted components by a factor of five while incurring a modest performance overhead of a few milliseconds per request. We also show that existing applications can be migrated to ISO-WSP with minimal effort: a few tens of lines of new and modified code.
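The SFI idea of confining sensitive data to a small trusted part can be sketched with opaque handles: untrusted code never holds raw values, only references it passes back through a narrow interface. All names here are illustrative assumptions, not the paper's actual API.

```python
import itertools

class TrustedStore:
    """Illustrative trusted part: keeps the raw sensitive values and
    exposes only a few narrow, auditable operations (the SFI)."""

    def __init__(self):
        self._values = {}
        self._ids = itertools.count(1)

    def put(self, secret):
        handle = next(self._ids)
        self._values[handle] = secret
        return handle  # untrusted code sees only this opaque handle

    def last4(self, handle):
        return self._values[handle][-4:]

    def redacted(self, handle):
        value = self._values[handle]
        return "*" * (len(value) - 4) + value[-4:]

def render_receipt(store, card_handle):
    """Untrusted part: composes output without ever touching the raw
    card number, invoking the SFI instead."""
    return f"Paid with card ending in {store.last4(card_handle)}"
```

Only `TrustedStore` needs to be tested and trusted; the rendering code can be arbitrarily large without expanding the trusted computing base.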
File-based Race Condition Attacks on Multiprocessors Are a Practical Threat (Georgia Institute of Technology, 2006)
Wei, Jinpeng; Pu, Calton
TOCTTOU (time-of-check-to-time-of-use) attacks exploit race conditions in file systems. Although TOCTTOU attacks have been known for 30 years, they have been considered "low risk" due to their typically low probability of success, which depends on fortuitous interleaving between the attacker and victim processes. For example, the recent discovery of a TOCTTOU vulnerability in vi showed a success rate in the low single-digit percentages for files smaller than 1 MB. In this paper, we show that on a multiprocessor the uncertainties due to scheduling are reduced, and the success probability of the vi attack increases to almost 100% for files of 1 byte. Similarly, another recently discovered vulnerability in gedit, which had an almost zero probability of success, reaches an 83% success rate on a multiprocessor. The main reason for the near-certain success rate is the speedup of the attacker process when it runs on a dedicated processor. These case studies show the sharply increased risks posed by file-based race condition attacks such as TOCTTOU on next-generation multiprocessors, e.g., those with multi-core processors.
Reliable End System Multicasting with a Heterogeneous Overlay Network (Georgia Institute of Technology, 2004-05-03)
Zhang, Jianjun; Liu, Ling; Pu, Calton; Ammar, Mostafa H.
This paper presents PeerCast, a reliable and self-configurable peer-to-peer system for End System Multicast (ESM). Our approach has three unique features compared with existing application-level multicast systems. First, we propose a capacity-aware overlay construction technique to balance the multicast load among peers with heterogeneous capabilities. Second, we use the landmark signature technique to cluster the peer nodes of the ESM overlay network, aiming to exploit the network proximity of end-system nodes for efficient multicast group subscription and fast dissemination of information across wide-area networks. Third and most importantly, we develop a dynamic passive replication scheme to provide reliable subscription and multicast dissemination of information in an environment of inherently unreliable peers. We also present an analytical model to discuss its fault-tolerance properties, and report a set of initial experiments showing the feasibility and effectiveness of the proposed approach.
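Landmark-based clustering, in its general form, works by having each peer measure its distance (e.g. RTT) to a fixed set of landmark nodes and grouping peers whose landmark orderings agree. This sketch shows the general technique under that assumption; PeerCast's actual signature scheme may differ.

```python
def landmark_signature(rtts):
    """Order landmark indices from nearest to farthest. Peers that are
    physically close tend to see the landmarks in the same order."""
    return tuple(sorted(range(len(rtts)), key=lambda i: rtts[i]))

def cluster_peers(peer_rtts):
    """Group peers by identical landmark signatures.
    peer_rtts: dict mapping peer name -> list of RTTs to each landmark."""
    clusters = {}
    for peer, rtts in peer_rtts.items():
        clusters.setdefault(landmark_signature(rtts), []).append(peer)
    return clusters
```

Clustering by signature lets the overlay place proximate peers near each other without any peer knowing the full network topology.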
Distributed Workflow Restructuring: An Experimental Study (Georgia Institute of Technology, 2002)
Ruiz, Duncan Dubugras; Liu, Ling; Pu, Calton
Workflow systems have been one of the enabling technologies for the automation of business processes in corporate enterprises. Many modern production workflows need to incorporate deadline control throughout the workflow management system. However, the increasing volume and diversity of digital information available online, and the unpredictable amount of network delays or server failures, have led to a growing problem that conventional workflow management systems do not address: how to reorganize existing workflow activities in order to meet deadlines in the presence of unexpected delays. We refer to this as the workflow-restructuring problem. This paper describes the notation and issues of workflow restructuring and discusses a set of workflow-activity restructuring operators. We illustrate the inherent semantics of these restructuring operators using the Transactional Activity Model (TAM). The paper makes two main contributions. First, we study the environmental instabilities (e.g., resource shortages and network delays) that cause workflows to perform sub-optimally, and how workflow restructuring can address this problem. Second, we evaluate the effectiveness of the workflow-restructuring operators through simulation. Our initial experiments demonstrate that run-time workflow restructuring can significantly improve response time in unstable environments.