CERCS Technical Report Series

Publication Search Results

Now showing 1 - 10 of 12
  • Item
    Redactable Signatures on Data with Dependencies
    (Georgia Institute of Technology, 2009) Bauer, David ; Blough, Douglas M. ; Mohan, Apurva
    The storage of personal information by service providers entails a significant risk of privacy loss due to data breaches. One way to mitigate this problem is to limit the amount of personal information that is provided. Our prior work on minimal disclosure credentials presented a computationally efficient mechanism to facilitate this capability. In that work, personal data was broken into individual claims, which could be released in arbitrary subsets while still being cryptographically verifiable. In expanding the applications for that work, we encountered the problem of connections between different claims, which manifest as dependencies on the release of those claims. In this new work, we provide an efficient way to achieve the same selective disclosure, but with cryptographic enforcement of dependencies between claims, as specified by the certifier of the claims. This constitutes a mechanism for redactable signatures on data with release dependencies. Our scheme was implemented and benchmarked over a wide range of input set sizes, and shown to verify thousands of claims in tens to hundreds of milliseconds. We also describe ongoing work in which the approach is being used within a larger system for holding and dispensing personal health records.
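The core idea described here — salted claim digests bound to the digests of their dependencies and signed together — can be illustrated with a small sketch. This is not the paper's construction: the claim data is hypothetical, and the dependency check below is procedural (the verifier refuses non-closed disclosures), whereas the paper enforces dependencies cryptographically.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

# Claims and release dependencies: claim 2 may only be released
# together with claim 1 (hypothetical example data).
claims = [b"name=Alice", b"dob=1980-01-01", b"age>21"]
deps = {2: [1]}

salts = [os.urandom(16) for _ in claims]
leaves = [h(s, c) for s, c in zip(salts, claims)]
# bind each claim's digest to the digests of its dependencies
bound = [h(leaves[i], *(leaves[j] for j in deps.get(i, ())))
         for i in range(len(claims))]
root = h(*bound)  # the certifier would sign this root

def disclose(subset):
    # refuse disclosures that are not closed under the dependencies
    assert all(j in subset for i in subset for j in deps.get(i, ()))
    opened = {i: (salts[i], claims[i]) for i in subset}
    hidden = {i: leaves[i] for i in range(len(claims)) if i not in subset}
    return opened, hidden

def verify(opened, hidden, signed_root):
    lv = dict(hidden)
    for i, (s, c) in opened.items():
        lv[i] = h(s, c)
    b = [h(lv[i], *(lv[j] for j in deps.get(i, ())))
         for i in range(len(lv))]
    return h(*b) == signed_root
```

Hidden claims contribute only their digests, so the verifier can recompute the signed root without learning their contents.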
  • Item
    Analysis of a Redactable Signature Scheme on Data With Dependencies
    (Georgia Institute of Technology, 2009) Bauer, David ; Blough, Douglas M.
    Storage of personal information by service providers risks privacy loss from data breaches. Our prior work on minimal disclosure credentials presented a mechanism to limit the amount of personal information provided. In that work, personal data was broken into individual claims, which can be released in arbitrary subsets while still being cryptographically verifiable. In applying that work, we encountered the problem of connections between claims, which manifest as disclosure dependencies. In further prior work, we provided an efficient way to achieve minimal disclosure, but with cryptographic enforcement of dependencies between claims, as specified by the claims certifier. Now, this work provides security proofs showing that the scheme is secure against forgery and the violation of dependencies in the random oracle model. Additional motivation is given for preserving privacy and security in the standard model.
  • Item
    A Patient-centric, Attribute-based, Source-verifiable Framework for Health Record Sharing
    (Georgia Institute of Technology, 2009) Mohan, Apurva ; Bauer, David ; Blough, Douglas M. ; Ahamad, Mustaque ; Bamba, Bhuvan ; Krishnan, Ramkumar ; Liu, Ling ; Mashima, Daisuke ; Palanisamy, Balaji
    The storage of health records in electronic format, and the widespread sharing of these records among different health care providers, have enormous potential benefits to the U.S. healthcare system. These benefits include both improving the quality of health care delivered to patients and reducing the costs of delivering that care. However, maintaining the security of electronic health record systems and the privacy of the information they contain is paramount to ensure that patients have confidence in the use of such systems. In this paper, we propose a framework for electronic health record sharing that is patient centric, i.e., it provides patients with substantial control over how their information is shared and with whom; provides for verifiability of original sources of health information and the integrity of the data; and permits fine-grained decisions about when data can be shared based on the use of attribute-based techniques for authorization and access control. We present the architecture of the framework, describe a prototype system we have built based on it, and demonstrate its use within a scenario involving emergency responders' access to health record information.
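An attribute-based access decision of the kind the framework describes can be sketched minimally. The policy format, attribute names, and values below are hypothetical illustrations, not the paper's actual policy language.

```python
# a policy grants access when every required attribute of the requester
# takes one of the allowed values (hypothetical policy format)
policy = {
    "resource": "ehr/allergies",
    "require": {
        "role": {"physician", "emergency_responder"},
        "context": {"emergency", "treatment"},
    },
}

def permitted(attrs: dict, policy: dict) -> bool:
    # missing attributes evaluate to None, which matches no allowed set
    return all(attrs.get(name) in allowed
               for name, allowed in policy["require"].items())
```

In the emergency-responder scenario the abstract mentions, a requester presenting role "emergency_responder" in an "emergency" context would be granted access, while an unrelated role would be denied.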
  • Item
    The Design and Evaluation of Techniques for Route Diversity in Distributed Hash Tables
    (Georgia Institute of Technology, 2007) Harvesf, Cyrus ; Blough, Douglas M.
    To achieve higher efficiency over their unstructured counterparts, structured peer-to-peer systems hold each node responsible for serving a specified set of keys and correctly routing lookups. Unfortunately, malicious participants can abuse these responsibilities to deny access to a set of keys or misroute lookups. We look to address both of these problems through replica placement. We present a replica placement scheme for any distributed hash table that uses a prefix-matching routing scheme and evaluate the number of replicas necessary to produce a desired number of disjoint routes. We show through simulation that this placement can make a significant improvement in routing robustness over other placements. Furthermore, we consider another route diversity mechanism that we call neighbor set routing and show that, when used with our replica placement, it can successfully route messages to a correct replica even with a quarter of the nodes in the system failed at random. Finally, we demonstrate a family of replica query strategies that can trade off response time and system load. We present a hybrid query strategy that keeps response time low without producing too high a load.
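The flavor of the placement can be sketched for a base-16 prefix-routing DHT: forcing replicas into distinct top-level subtrees (distinct first identifier digits) makes prefix-matching routes toward them diverge from the very first hop. This is a simplification under stated assumptions, not the paper's full scheme, and the helper names are illustrative.

```python
import hashlib

DIGITS = 8  # hex digits in a (toy) key identifier

def key_id(name: str) -> str:
    # hash a key name into a short hex identifier
    return hashlib.sha1(name.encode()).hexdigest()[:DIGITS]

def replica_ids(key: str, d: int) -> list:
    # put replica i under the i-th top-level subtree: replace the first
    # hex digit so the d prefix routes differ from the first hop on
    assert d <= 16
    return [format(i, "x") + key[1:] for i in range(d)]

reps = replica_ids(key_id("some-key"), 4)
```

Because prefix routing resolves one digit per hop, routes to identifiers with pairwise-distinct first digits share no intermediate prefix, which is the disjointness the placement aims for.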
  • Item
    Minimum Information Disclosure with Efficiently Verifiable Credentials
    (Georgia Institute of Technology, 2007) Bauer, David ; Blough, Douglas M. ; Cash, David
    Public-key based certificates provide a standard way to prove one's identity, as certified by some certificate authority (CA). However, standard certificates provide a binary identification: either the whole identity of the subject is known, or nothing is known. We propose using a Merkle hash tree structure, whereby it is possible for a single certificate to certify many separate claims or attributes, each of which may be proved independently, without revealing the others. Additionally, we demonstrate how trees from multiple sources can be combined together by modifying the tree structure slightly. This allows claims by different authorities, such as an employer or professional organization, to be combined under a single certificate, without the CA needing to know (let alone verify) all of the claims. In addition to describing the hash tree structure and protocols for constructing and verifying our proposed credential, we formally prove that it provides unforgeability and privacy, and we present initial performance results demonstrating its efficiency.
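The underlying mechanism — salted claims at the leaves of a Merkle tree, with the CA signing only the root — can be sketched as follows, assuming a power-of-two number of claims and illustrative claim data; this is a generic Merkle construction, not the paper's exact credential format.

```python
import hashlib
import os

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

claims = [b"name=Alice", b"employer=GT", b"dob=1980", b"role=student"]
salts = [os.urandom(16) for _ in claims]         # hide sibling claims
leaves = [h(s + c) for s, c in zip(salts, claims)]

def build(nodes):
    # build all tree levels, leaves first, root last
    levels = [nodes]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

levels = build(leaves)
root = levels[-1][0]  # the CA would sign only this value

def proof(i):
    # authentication path: one sibling digest per level
    path, idx = [], i
    for lvl in levels[:-1]:
        path.append(lvl[idx ^ 1])
        idx //= 2
    return path

def verify(salt, claim, i, path, signed_root):
    node = h(salt + claim)
    for sib in path:
        node = h(sib + node) if i & 1 else h(node + sib)
        i //= 2
    return node == signed_root
```

Proving one claim reveals only that claim, its salt, and sibling digests; the random salts keep the withheld claims unguessable from their digests.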
  • Item
    Copy-Resistant Credentials with Minimum Information Disclosure
    (Georgia Institute of Technology, 2006) Bauer, David ; Blough, Douglas M.
    Public-key based certificates provide a standard way to prove one's identity, as certified by some certificate authority (CA). But standard certificates provide a binary identification: either the whole identity of the subject is known, or nothing is known. By using a Merkle hash tree structure, it is possible for a single certificate to certify many separate claims or attributes, each of which may be proved independently, without revealing the others. Additionally, trees from multiple sources can be combined together by modifying the tree structure slightly. This allows claims by different authorities, such as an employer or professional organization, to be combined under a single tree, without the CA needing to know (let alone verify) all of the claims.
  • Item
    Distributed Global Identification for Sensor Networks
    (Georgia Institute of Technology, 2005) Ould-Ahmed-Vall, ElMoustapha ; Blough, Douglas M. ; Ferri, Bonnie H. ; Riley, George F.
    A sensor network consists of a set of battery-powered nodes, which collaborate to perform sensing tasks in a given environment. It may contain one or more base stations to collect sensed data and possibly relay it to a central processing and storage system. These networks are characterized by scarcity of resources, in particular the available energy. We present a distributed algorithm to solve the unique ID assignment problem. The proposed solution starts by assigning long unique IDs and organizing nodes in a tree structure. This tree structure is used to compute the size of the network. Then, unique IDs are assigned using the minimum number of bytes. Globally unique IDs are useful in providing many network functions, e.g., configuration, monitoring of individual nodes, and various security mechanisms. Theoretical and simulation analysis of the proposed solution have been performed. The results demonstrate that a high percentage of nodes (more than 99%) are assigned globally unique IDs at the termination of the algorithm when the algorithm parameters are set properly. Furthermore, the algorithm terminates in a relatively short time that scales well with the network size. For example, the algorithm terminates in about 5 minutes for a network of 1,000 nodes.
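The two-phase idea — subtree sizes flow up the tree, then disjoint ID ranges flow down, sized to the minimum number of bytes — can be sketched as a centralized simulation. The tree shape and class names are illustrative; the actual algorithm runs distributed over radio links.

```python
import math

class Node:
    def __init__(self):
        self.children = []
        self.size = 1
        self.uid = None

def count(node):
    # phase 1: subtree sizes flow up (simulated here by recursion)
    node.size = 1 + sum(count(c) for c in node.children)
    return node.size

def assign(node, start):
    # phase 2: each node takes the first ID of its range and hands
    # disjoint sub-ranges to its children
    node.uid = start
    nxt = start + 1
    for c in node.children:
        assign(c, nxt)
        nxt += c.size

def all_uids(node):
    yield node.uid
    for c in node.children:
        yield from all_uids(c)

# a toy 6-node tree: root with three children, one of which has two
root = Node()
root.children = [Node() for _ in range(3)]
root.children[0].children = [Node(), Node()]
n = count(root)                            # network size
assign(root, 0)                            # IDs 0 .. n-1, each unique
id_bytes = math.ceil(n.bit_length() / 8)   # minimum bytes per ID
```

With six nodes, one byte suffices for every ID, versus the long provisional IDs used during tree construction.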
  • Item
    Practical Share Renewal for Large Amounts of Data
    (Georgia Institute of Technology, 2005) Subbiah, Arun ; Blough, Douglas M.
    Threshold secret sharing schemes encode data into several shares such that a threshold number of shares can be used to recover the data. Such schemes provide confidentiality of stored data without using encryption, thus avoiding the problems associated with key management. To provide long-term confidentiality, proactive secret sharing techniques can be used, where shares are refreshed or renewed periodically so that an adversary who obtains fewer than the threshold shares in each time period does not learn any information on the encoded data. Share renewal is an expensive process, in terms of the computation and network communication involved. In the proactive model, this share renewal process must complete as soon as possible so that an adversary who compromises servers in the present time period does not learn shares stored in the last time period. This paper proposes an algorithm where the shares of all the stored data are renewed by the share renewal of only one secret. The computation and network communication overheads are thus drastically reduced, allowing for the share renewal of all the stored data to complete quickly. These benefits are gained at the expense of some performance penalty during reads and writes, which is shown to be worthwhile.
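For context, ordinary proactive renewal of a single Shamir-shared secret can be sketched as adding a fresh random sharing of zero to each share; the paper's contribution — renewing the shares of all stored data via the renewal of one secret — is not reproduced here. The field prime and parameters are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    # Shamir sharing: degree t-1 polynomial with the secret at x = 0
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 from any t shares
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def renew(shares, t):
    # proactive renewal: add a fresh sharing of zero, so the secret is
    # unchanged but old and new shares are mutually useless
    zero = share(0, t, len(shares))
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zero)]

s = 123456789
old = share(s, 3, 5)   # 3-of-5 sharing
new = renew(old, 3)
```

The renewal step is what the paper makes cheap at scale: instead of running it per stored datum, all data shares are refreshed through the renewal of a single secret.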
  • Item
    An Approach for Fault Tolerant and Secure Data Storage in Collaborative Work Environments
    (Georgia Institute of Technology, 2005) Subbiah, Arun ; Blough, Douglas M.
    We describe a novel approach for building a secure and fault tolerant data storage service in collaborative work environments. In such environments, sensitive data must be accessible only to a select group of people, whose membership may change over time. Key management issues are a recognized problem in such environments. We eliminate this problem for confidential and secure data storage by using perfect secret sharing techniques for storing data. Perfect secret sharing schemes have found little use in managing generic data because of the high computation overheads incurred by existing schemes. Our proposed approach uses a novel combination of XOR secret sharing and replication mechanisms, which drastically reduces the computation overheads and achieves speeds comparable to standard encryption schemes. The combination of secret sharing and replication manifests itself as an architectural framework, which has the attractive property that its dimension can be varied to trade off among different performance metrics. We evaluate the properties and performance of the proposed framework to show that the combination of perfect secret sharing and replication can be used to build efficient fault-tolerant and secure distributed data storage systems for collaborative work environments.
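The n-of-n XOR splitting at the heart of the approach is simple to sketch; the replication dimension of the framework (each share stored on several servers for fault tolerance) is omitted here.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list:
    # n-of-n XOR sharing: n-1 uniformly random shares, plus a final
    # share chosen so the XOR of all n shares equals the data; any
    # n-1 shares alone are indistinguishable from random noise
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

Splitting and combining cost only XORs, which is why the abstract can claim speeds comparable to standard encryption; replicating each share then recovers the availability that pure n-of-n sharing gives up.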
  • Item
    Using Byzantine Quorum Systems to Manage Confidential Data
    (Georgia Institute of Technology, 2004-04-01) Subbiah, Arun ; Ahamad, Mustaque ; Blough, Douglas M.
    This paper addresses the problem of using proactive cryptosystems for generic data storage and retrieval. Proactive cryptosystems provide high security and confidentiality guarantees for stored data, and are capable of withstanding attacks that may compromise all the servers in the system over time. However, proactive cryptosystems are unsuitable for generic data storage uses for two reasons. First, proactive cryptosystems are usually used to store keys, which are rarely updated. On the other hand, generic data could be actively written and read. The system must therefore be highly available for both write and read operations. Second, existing share renewal protocols (the critical element to achieve proactive security) are expensive in terms of computation and communication overheads, and are time-consuming operations. Since generic data will be voluminous, the share renewal process will consume substantial system resources and cause a significant amount of system downtime. Two schemes are proposed that combine Byzantine quorum systems and proactive secret sharing techniques to provide high availability and security guarantees for stored data, while reducing the overhead incurred during the share renewal process. Several performance metrics that can be used to evaluate proactively secure generic data storage schemes are identified. The proposed schemes are thus shown to render proactive systems suitable for confidential generic data storage.
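As background for the quorum side of the construction, the sizing of classical masking quorums (in the sense of Malkhi and Reiter) can be checked in a few lines; this illustrates the intersection property such systems rely on, not the paper's specific protocols.

```python
import math

def masking_quorum_size(n: int, f: int) -> int:
    # masking quorums tolerate f Byzantine servers out of n >= 4f + 1;
    # quorums of size ceil((n + 2f + 1) / 2) pairwise intersect in at
    # least 2f + 1 servers, so correct servers outnumber faulty ones
    # in every intersection
    assert n >= 4 * f + 1
    return math.ceil((n + 2 * f + 1) / 2)

n, f = 9, 2
q = masking_quorum_size(n, f)
min_intersection = 2 * q - n  # lower bound on |Q1 ∩ Q2|
```

With 9 servers and up to 2 Byzantine faults, quorums of 7 servers always overlap in at least 5, enough for the 2f + 1 correct-majority argument behind Byzantine quorum reads.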