Person:
Blough, Douglas M.


Publication Search Results

Now showing 1 - 10 of 20
  • Item
    Distributed MIMO Interference Cancellation for Interfering Wireless Networks: Protocol and Initial Simulation
    (Georgia Institute of Technology, 2013-02) Cortes-Pena, Luis Miguel ; Blough, Douglas M.
    In this report, the problem of interference in dense wireless network deployments is addressed. Two example scenarios are: 1) overlapping basic service sets (OBSSes) in wireless LAN deployments, and 2) interference among nearby femtocells. The proposed approach is to exploit the interference cancellation and spatial multiplexing capabilities of multiple-input multiple-output (MIMO) links to mitigate interference and improve the performance of such networks. Both semi-distributed and fully distributed protocols for 802.11-based wireless networks are presented and evaluated. The philosophy of the approach is to minimize modifications to existing protocols, particularly within client-side devices; thus, modifications are primarily made at the access points (APs). The semi-distributed protocol was fully implemented within the 802.11 package of ns-3 to evaluate the approach. Simulation results with two APs, and with either one or two clients per AP, show that within 5 seconds of network operation, our protocol increases downlink goodput by about 50% compared with a standard 802.11n implementation.
  • Item
    Fast and Accurate Link Discovery Integrated with Reliable Multicast in 802.11
    (Georgia Institute of Technology, 2013) Lertpratchya, Daniel ; Blough, Douglas M. ; Riley, George F.
    Maintaining accurate neighbor information in wireless networks is an important operation upon which many higher-layer protocols rely. However, this operation is not supported in the IEEE 802.11 MAC layer, forcing applications that need it to each include their own neighborhood mechanism, creating redundancies and inefficiencies and failing to capitalize on potential synergies with other MAC-layer operations. In this work, we propose to integrate link discovery and neighborhood maintenance with a reliable multicast extension to the IEEE 802.11 MAC. We show through simulations that our protocol adapts to neighborhood changes faster than traditional neighborhood maintenance mechanisms, thereby allowing MAC-layer multicast operations to achieve higher delivery rates. We also demonstrate that our protocol can quickly and reliably distinguish between unidirectional and bidirectional links. Traditional mechanisms assume links are bidirectional based on one-way reception of a short “hello” packet, which causes significant problems for higher-layer operations such as routing when many unidirectional links are misclassified as bidirectional.
  • Item
    CT-T: MedVault-ensuring security and privacy for electronic medical records
    (Georgia Institute of Technology, 2011-08-31) Blough, Douglas M. ; Liu, Ling ; Sainfort, Francois ; Ahamad, Mustaque
  • Item
    Exploring the Design Space of Greedy Link Scheduling Algorithms for Wireless Multihop Networks
    (Georgia Institute of Technology, 2011) Lertpratchya, Daniel ; Blough, Douglas M.
    It is known that using a spatial TDMA (STDMA) access scheme can increase the capacity of a wireless network over a CSMA-based access scheme. Modern wireless devices can transmit at different data rates depending on current network conditions, but little attention has been paid to how best to use this multi-rate capability. In this report, we focus on greedy link scheduling algorithms that work with variable rates, where devices can transmit at lower data rates to accommodate lower-quality links. We propose criteria that can be used in the scheduling algorithms and investigate the performance of scheduling algorithms that employ these different criteria. We use the more realistic physical interference model, where packet reception depends on the signal-to-interference-plus-noise ratio (SINR). Our investigation shows that the variable-rate approach increases the overall capacity of the network over traditional single-threshold-based algorithms.
  • Item
    Detection of Conflicts and Inconsistencies in Taxonomy-based Authorization Policies
    (Georgia Institute of Technology, 2011) Mohan, Apurva ; Blough, Douglas M. ; Kurc, Tahsin ; Post, Andrew ; Saltz, Joel
    The values of data elements stored in biomedical databases often draw from biomedical ontologies. Authorization rules can be defined on these ontologies to control access to sensitive and private data elements in such databases. Authorization rules may be specified by different authorities at different times for various purposes, and as such policy rules may conflict with each other, inadvertently allowing access to sensitive information. Detecting policy conflicts is nontrivial because it involves identification of applicable rules and detecting conflicts among them dynamically during execution of data access requests. It also requires dynamically verifying conformance with required policies and logging relevant information about decisions for audit. Another problem in biomedical data protection is inference attacks, in which a user who has legitimate access to some data elements is able to infer information related to other data elements. This type of inadvertent data disclosure should be prevented by ensuring policy consistency; that is, data elements which can lead to inference about other data elements should be protected by the same level of authorization policies as the other data elements. We propose two strategies: one for checking policy consistency to avoid potential inference attacks and the other for detecting policy conflicts. We have implemented these algorithms in Java and evaluated their execution times experimentally.
  • Item
    Fast, Lightweight Virtual Machine Checkpointing
    (Georgia Institute of Technology, 2010) Sun, Michael H. ; Blough, Douglas M.
    Virtual machine checkpoints provide a clean encapsulation of the full state of an executing system. Due to the large amount of state involved, VM checkpointing can be slow and costly. We describe the implementation of a fast and lightweight VM checkpointing mechanism for the Xen virtual machine monitor that uses copy-on-write techniques to reduce the downtime and performance overhead incurred by other forms of VM checkpointing.
  • Item
    Redactable Signatures on Data with Dependencies
    (Georgia Institute of Technology, 2009) Bauer, David ; Blough, Douglas M. ; Mohan, Apurva
    The storage of personal information by service providers entails a significant risk of privacy loss due to data breaches. One way to mitigate this problem is to limit the amount of personal information that is provided. Our prior work on minimal disclosure credentials presented a computationally efficient mechanism to facilitate this capability. In that work, personal data was broken into individual claims, which could be released in arbitrary subsets while still being cryptographically verifiable. In expanding the applications for that work, we encountered the problem of connections between different claims, which manifest as dependencies on the release of those claims. In this new work, we provide an efficient way to provide the same selective disclosure, but with cryptographic enforcement of dependencies between claims, as specified by the certifier of the claims. This constitutes a mechanism for redactable signatures on data with release dependencies. Our scheme was implemented and benchmarked over a wide range of input set sizes, and shown to verify thousands of claims in tens to hundreds of milliseconds. We also describe ongoing work in which the approach is being used within a larger system for holding and dispensing personal health records.
  • Item
    Analysis of a Redactable Signature Scheme on Data With Dependencies
    (Georgia Institute of Technology, 2009) Bauer, David ; Blough, Douglas M.
    Storage of personal information by service providers risks privacy loss from data breaches. Our prior work on minimal disclosure credentials presented a mechanism to limit the amount of personal information provided. In that work, personal data was broken into individual claims, which can be released in arbitrary subsets while still being cryptographically verifiable. In applying that work, we encountered the problem of connections between claims, which manifest as disclosure dependencies. In further prior work, we provided an efficient way to achieve minimal disclosure with cryptographic enforcement of dependencies between claims, as specified by the claims certifier. This work now provides security proofs showing that, in the random oracle model, the scheme is secure against forgery and against violation of dependencies. Additional arguments are given for the preservation of privacy and security in the standard model.
  • Item
    A Patient-centric, Attribute-based, Source-verifiable Framework for Health Record Sharing
    (Georgia Institute of Technology, 2009) Mohan, Apurva ; Bauer, David ; Blough, Douglas M. ; Ahamad, Mustaque ; Bamba, Bhuvan ; Krishnan, Ramkumar ; Liu, Ling ; Mashima, Daisuke ; Palanisamy, Balaji
    The storage of health records in electronic format, and the widespread sharing of these records among different health care providers, have enormous potential benefits to the U.S. healthcare system. These benefits include both improving the quality of health care delivered to patients and reducing the costs of delivering that care. However, maintaining the security of electronic health record systems and the privacy of the information they contain is paramount to ensure that patients have confidence in the use of such systems. In this paper, we propose a framework for electronic health record sharing that is patient-centric, i.e., it provides patients with substantial control over how their information is shared and with whom; provides for verifiability of original sources of health information and the integrity of the data; and permits fine-grained decisions about when data can be shared based on the use of attribute-based techniques for authorization and access control. We present the architecture of the framework, describe a prototype system we have built based on it, and demonstrate its use within a scenario involving emergency responders' access to health record information.
  • Item
    The Design and Evaluation of Techniques for Route Diversity in Distributed Hash Tables
    (Georgia Institute of Technology, 2007) Harvesf, Cyrus ; Blough, Douglas M.
    To achieve higher efficiency over their unstructured counterparts, structured peer-to-peer systems hold each node responsible for serving a specified set of keys and correctly routing lookups. Unfortunately, malicious participants can abuse these responsibilities to deny access to a set of keys or misroute lookups. We address both of these problems through replica placement. We present a replica placement scheme for any distributed hash table that uses a prefix-matching routing scheme and evaluate the number of replicas necessary to produce a desired number of disjoint routes. We show through simulation that this placement significantly improves routing robustness over other placements. Furthermore, we consider another route diversity mechanism that we call neighbor set routing and show that, when used with our replica placement, it can successfully route messages to a correct replica even with a quarter of the nodes in the system failed at random. Finally, we demonstrate a family of replica query strategies that can trade off response time and system load. We present a hybrid query strategy that keeps response time low without producing too high a load.
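The unidirectional/bidirectional distinction discussed in "Fast and Accurate Link Discovery Integrated with Reliable Multicast in 802.11" above can be illustrated with a minimal sketch (node names and the `heard` data structure are hypothetical, not the protocol's actual state): a link should be treated as bidirectional only when each endpoint has heard the other, which one-way reception of a hello packet alone cannot establish.

```python
# Minimal sketch: classifying links from hello receptions.
# heard[x] is the set of nodes from which x has received a hello.
# One-way reception (a in heard[b]) only proves the link a -> b;
# treating it as bidirectional without the reverse evidence is the
# failure mode of traditional hello-based mechanisms.

def classify_link(a, b, heard):
    ab = a in heard.get(b, set())   # a's hellos reach b
    ba = b in heard.get(a, set())   # b's hellos reach a
    if ab and ba:
        return "bidirectional"
    if ab or ba:
        return "unidirectional"
    return "no-link"
```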
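The greedy, SINR-aware scheduling idea from "Exploring the Design Space of Greedy Link Scheduling Algorithms for Wireless Multihop Networks" can be sketched as follows. This is a simplified single-threshold illustration under the physical interference model; the noise power, path-loss exponent, and link representation are assumptions, not the report's actual parameters (the report's variable-rate algorithms would additionally map the achieved SINR to a data rate).

```python
import math

# A link is (tx_pos, rx_pos, tx_power); positions are 2-D points.
# Greedy STDMA scheduling: try to add each link to an existing slot,
# admitting it only if every link in the slot still meets the SINR
# threshold; otherwise open a new slot.

NOISE = 1e-9        # noise power (assumed)
PATH_LOSS_EXP = 3   # path-loss exponent (assumed)

def gain(tx, rx):
    return math.dist(tx, rx) ** -PATH_LOSS_EXP

def sinr(link, others):
    tx, rx, p = link
    signal = p * gain(tx, rx)
    interference = sum(op * gain(otx, rx) for otx, _, op in others)
    return signal / (NOISE + interference)

def greedy_schedule(links, threshold):
    """Assign links to time slots greedily; each slot stays SINR-feasible."""
    slots = []
    for link in links:
        placed = False
        for slot in slots:
            trial = slot + [link]
            if all(sinr(l, [o for o in trial if o is not l]) >= threshold
                   for l in trial):
                slot.append(link)
                placed = True
                break
        if not placed:
            slots.append([link])
    return slots
```

Two well-separated links can share a slot, while two adjacent links with the same threshold cannot, which is exactly the spatial-reuse trade-off the scheduling criteria in the report explore.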
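The conflict-detection problem from "Detection of Conflicts and Inconsistencies in Taxonomy-based Authorization Policies" can be made concrete with a toy sketch (the taxonomy, rule format, and effects here are hypothetical, and real detection must happen dynamically at access time): a rule attached to a taxonomy node applies to its descendants, so rules with opposite effects on an ancestor/descendant pair conflict.

```python
# Toy taxonomy-based conflict detection. `parent` maps child -> parent;
# a rule is (node, effect) with effect "permit" or "deny".

def ancestors(node, parent):
    """All ancestors of `node` in the taxonomy."""
    out = []
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def find_conflicts(rules, parent):
    """Return pairs of rules with opposite effects on related nodes."""
    conflicts = []
    for i, (n1, e1) in enumerate(rules):
        for n2, e2 in rules[i + 1:]:
            related = (n1 == n2 or n1 in ancestors(n2, parent)
                       or n2 in ancestors(n1, parent))
            if related and e1 != e2:
                conflicts.append(((n1, e1), (n2, e2)))
    return conflicts
```

For example, a "permit" on a whole record conflicts with a "deny" on an SSN field nested inside it, since both rules cover the SSN.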
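The copy-on-write idea behind "Fast, Lightweight Virtual Machine Checkpointing" can be illustrated with a toy in-memory model (everything here is a simplification; the real mechanism operates on guest memory pages inside Xen). A checkpoint initially shares all pages with the running VM, and a page is copied only when the running state first writes to it, so the checkpoint stays frozen without an up-front full copy.

```python
# Toy copy-on-write checkpointing over an array of byte "pages".

class CowMemory:
    def __init__(self, pages):
        self.pages = {i: bytearray(p) for i, p in enumerate(pages)}
        self.checkpoint = {}
        self.protected = set()

    def take_checkpoint(self):
        """Share all pages with the checkpoint instead of copying them."""
        self.checkpoint = dict(self.pages)
        self.protected = set(self.pages)

    def write(self, page, data):
        """Copy a shared page just before its first modification."""
        if page in self.protected:
            self.pages[page] = bytearray(self.pages[page])  # private copy
            self.protected.discard(page)
        self.pages[page][:] = data

    def read(self, page):
        return bytes(self.pages[page])

    def read_checkpoint(self, page):
        return bytes(self.checkpoint[page])
```

Only written pages are ever copied, which is why the approach reduces downtime relative to stop-and-copy checkpointing.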
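The selective-disclosure idea behind "Redactable Signatures on Data with Dependencies" can be sketched with salted hash commitments (illustrative only: the report's scheme is more efficient, uses a real digital signature rather than the placeholder digest below, and enforces release dependencies cryptographically rather than by the explicit policy check shown here).

```python
import hashlib

# Each claim is committed to with a salted hash; redacted claims are
# replaced by their commitments, which still verify against the
# certifier's digest over all commitments.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def certify(claims, salts):
    commitments = [h(s + c.encode()) for c, s in zip(claims, salts)]
    digest = h(b"".join(commitments))  # stand-in for a real signature
    return commitments, digest

def redact(claims, salts, commitments, reveal):
    """Open the claims whose indices are in `reveal`; redact the rest."""
    return [("open", claims[i], salts[i]) if i in reveal
            else ("redacted", commitments[i])
            for i in range(len(claims))]

def verify(disclosure, digest):
    parts = []
    for entry in disclosure:
        if entry[0] == "open":
            _, claim, salt = entry
            parts.append(h(salt + claim.encode()))
        else:
            parts.append(entry[1])
    return h(b"".join(parts)) == digest

def deps_satisfied(reveal, deps):
    """Releasing claim i requires releasing every claim in deps[i]."""
    return all(deps.get(i, set()) <= reveal for i in reveal)
```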
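The replica-placement idea from "The Design and Evaluation of Techniques for Route Diversity in Distributed Hash Tables" can be sketched as follows (the digit base and ID length are hypothetical, and the route model is idealized; real DHT IDs are much longer). In a prefix-matching DHT, routing corrects the key one digit at a time starting from the most significant digit, so routes toward replica IDs that differ in the first digit diverge from the very first hop.

```python
# Toy prefix-based replica placement. IDs are lists of digits.

BASE = 16   # digits per level of the routing scheme (assumed)

def replica_ids(key_digits, num_replicas):
    """Derive replica IDs by rotating the key's most significant digit."""
    first = key_digits[0]
    return [[(first + i) % BASE] + key_digits[1:]
            for i in range(num_replicas)]

def route_prefixes(key_digits):
    """Prefixes matched at successive hops of idealized prefix routing."""
    return {tuple(key_digits[:i]) for i in range(1, len(key_digits) + 1)}
```

Because the replica IDs share no common prefix, the idealized routes toward any two of them are disjoint, which is the robustness property the paper quantifies.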
    To achieve higher efficiency over their unstructured counterparts, structured peer-to-peer systems hold each node responsible for serving a specified set of keys and correctly routing lookups. Unfortunately, malicious participants can abuse these responsibilities to deny access to a set of keys or misroute lookups. We look to address both of these problems through replica placement. We present a replica placement scheme for any distributed hash table that uses a prefix-matching routing scheme and evaluate the number of replicas necessary to produce a desired number of disjoint routes. We show through simulation that this placement can make a significant improvement in routing robustness over other placements. Furthermore, we consider another route diversity mechanism that we call neighbor set routing and show that, when used with our replica placement, it can successfully route messages to a correct replica even with a quarter of the nodes in the system failed at random. Finally, we demonstrate a family of replica query strategies that can trade off response time and system load. We present a hybrid query strategy that keeps response time low without producing too high a load.