Organizational Unit: School of Computational Science and Engineering

Publication Search Results

Now showing 1 - 10 of 35
  • Item
    Reverse computing compiler technology
    (Georgia Institute of Technology, 2011-09-15) Fujimoto, Richard M. ; Vulov, George
  • Item
    Enabling collaborative behaviors among cubesats
    (Georgia Institute of Technology, 2011-07-08) Browne, Daniel C.
    Future spacecraft missions are trending toward distributed systems, or fractionated spacecraft. Initiatives such as DARPA's System F6 are encouraging the satellite community to explore collaborative spacecraft teams as a way to achieve lower cost, lower risk, and greater data value than the conventional monoliths in LEO today. Extensive research has indicated, and continues to indicate, the advantages of distributed spacecraft systems in terms of both capability and cost. Enabling collaborative behaviors among teams or formations of pico-satellites requires technology development in several subsystem areas, including attitude determination and control, orbit determination and maintenance, and a means of maintaining accurate knowledge of team members' positions and attitudes. All of these developments require reductions in mass and power in order to fit pico-satellite platforms such as the CubeSat. This thesis presents a solution to the last of these areas. Accurate knowledge of each spacecraft's state in a formation, beyond improving collision avoidance, provides a means to schedule sensor data gathering optimally, thereby increasing power budget efficiency. Our solution is composed of multiple software and hardware components. First, finely tuned flight software maintains state knowledge by propagating the equations of motion (a minimal propagation sketch appears after this results list). Additional software, including an extended Kalman filter implementation, and commercially available hardware provide on-board determination of both orbit and attitude. Lastly, an inter-satellite communication message structure and protocol enable team members to update position and attitude as required; the same messaging structure also supports sharing of payload sensor and telemetry data. To satisfy the needs of many different missions, the software can vary the accuracy limits on knowledge of team member position, velocity, and attitude. Such flexibility provides power savings for simpler applications while still enabling missions that need finer-accuracy knowledge of the distributed team's state. Simulation results indicate the accuracy and efficiency of formation structure knowledge obtained with the described solution. More importantly, they indicate the collaborative module's ability to maintain formation knowledge within user-prescribed bounds. Simulation has included hardware-in-the-loop setups utilizing an S-band transceiver: two "satellites" (computers set up with S-band transceivers and running the software components of the collaborative module) are provided GPS inputs comparable to the outputs of commercial hardware, demonstrating the overall capabilities of the collaborative module. Details on each component of the module are provided. Although the module is designed with the 3U CubeSat framework as the initial demonstration platform, it extends easily to other small satellite platforms. Using this collaborative module as a base, future work can add attitude control, orbit and formation control, and further capabilities, with the end goal of autonomous clusters of small spacecraft.
  • Item
    Detecting Communities from Given Seeds in Social Networks
    (Georgia Institute of Technology, 2011-02-22) Riedy, Jason ; Bader, David A. ; Jiang, Karl ; Pande, Pushkar ; Sharma, Richa
    Analyzing massive social networks challenges both high-performance computers and human understanding. These massive networks cannot be visualized easily, and their scale makes applying complex analysis methods computationally expensive. We present a region-growing method for finding a smaller, more tractable subgraph, a community, given a few example seed vertices (a greedy-expansion sketch appears after this results list). Unlike existing work, we focus on a small number of seed vertices, from two to a few dozen. We also present the first comparison between five algorithms for expanding a small seed set into a community. Our comparison applies these algorithms to an R-MAT-generated graph component with 240 thousand vertices and 32 million edges and evaluates the community size, modularity, Kullback-Leibler divergence, conductance, and clustering coefficient. We find that our new algorithm, with a local modularity-maximizing heuristic based on Clauset, Newman, and Moore, performs very well when the output is limited to 100 or 1000 vertices. When run without a vertex limit, a heuristic from McCloskey and Bader generates communities containing around 60% of the graph's vertices, with a small conductance and a modularity appropriate to the result size. A personalized PageRank algorithm based on Andersen, Lang, and Chung also performs well with respect to our metrics.
  • Item
    Computational methods for nonlinear dimension reduction
    (Georgia Institute of Technology, 2010-11-30) Zha, Hongyuan ; Park, Haesun
  • Item
    High-performance computing for massive graph analysis
    (Georgia Institute of Technology, 2010-10-30) Bader, David A.
  • Item
    Algorithms and software with tunable parallelism
    (Georgia Institute of Technology, 2010-09-30) Vuduc, Richard
  • Item
    Domain knowledge, uncertainty, and parameter constraints
    (Georgia Institute of Technology, 2010-08-24) Mao, Yi
  • Item
    Algorithm design on multicore processors for massive-data analysis
    (Georgia Institute of Technology, 2010-06-28) Agarwal, Virat
    Analyzing massive data sets and streams is computationally very challenging. Data sets in systems biology, network analysis, and security use a network abstraction to construct large-scale graphs. Graph algorithms such as traversal and search are memory-intensive and typically require very little computation, with access patterns that are irregular and fine-grained. Increasing streaming data rates in domains such as security, mining, and finance leave algorithm designers with only a handful of clock cycles (with current general-purpose computing technology) to process every incoming byte of data in-core in real time. This, along with the increasing complexity of mining patterns and other analytics, puts further pressure on already high computational requirements. Processing streaming data in finance comes with an additional low-latency constraint, which precludes common throughput-oriented techniques such as batching. The primary contributions of this dissertation are novel parallel algorithms for graph traversal on large-scale graphs, pattern recognition and keyword scanning on massive streaming data, financial market data feed processing and analytics, and data transformation. These algorithms capture the machine-independent aspects of each problem, to guarantee performance portability to future processors, together with high-performance multicore implementations that embed processor-specific optimizations. Our breadth-first search graph traversal algorithm processes massive graphs with billions of vertices and edges on commodity multicore processors at rates competitive with supercomputing results in the recent literature (a level-synchronous traversal sketch appears after this results list). We also present high-performance, scalable keyword scanning on streaming data using a novel automata-compression algorithm, a model of computation based on small software content-addressable memories (CAMs), and a unique data layout that forces data reuse and minimizes memory traffic. Using a high-level algorithmic approach to processing financial feeds, we present a solution that decodes and normalizes option market data at rates an order of magnitude beyond the current needs of the market, yet remains portable and flexible enough for other feeds in this domain. We discuss in detail the algorithm design challenges of processing massive data and present solutions and techniques that we believe can be used and extended to solve future research problems in this domain.
  • Item
    Matrix algorithms for data clustering and nonlinear dimension reduction
    (Georgia Institute of Technology, 2008-10-03) Zha, Hongyuan ; Zhang, Ming
  • Item
    Workshop on Future Direction in Numerical Algorithms and Optimization
    (Georgia Institute of Technology, 2008-01-15) Park, Haesun ; Golub, Gene ; Wu, Weili ; Du, Ding-Zhu
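
The collaborative-cubesat thesis above (Browne, 2011-07-08) rests on two mechanisms: each satellite propagates the equations of motion on-board, and a state-update message is sent only when the prediction drifts past a user-prescribed accuracy bound. The sketch below illustrates that idea only; the two-body model, RK4 integrator, and the needs_update trigger are illustrative assumptions, not the thesis's implementation, which adds an extended Kalman filter and flight-grade force modeling.

import numpy as np

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def deriv(state):
    # state = [x, y, z, vx, vy, vz] in km and km/s.
    r, v = state[:3], state[3:]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3  # point-mass two-body acceleration
    return np.concatenate([v, a])

def propagate(state, dt, steps):
    # Classical RK4 integration of the two-body equations of motion.
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return state

def needs_update(predicted_pos, measured_pos, bound_km):
    # Message teammates only when prediction error exceeds the prescribed bound;
    # looser bounds mean fewer radio transmissions and a smaller power draw.
    return np.linalg.norm(predicted_pos - measured_pos) > bound_km

# One minute of a circular LEO orbit at roughly 500 km altitude.
r0 = np.array([6878.0, 0.0, 0.0])                      # km
v0 = np.array([0.0, np.sqrt(MU_EARTH / 6878.0), 0.0])  # km/s, circular speed
state = propagate(np.concatenate([r0, v0]), dt=1.0, steps=60)
print(state[:3])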
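
The community-detection report above (Riedy et al., 2011-02-22) grows a community outward from a few seed vertices. A minimal greedy-expansion sketch follows, using a deliberately simplified score (internal edges minus boundary edges); the heuristics actually compared in the report (Clauset-Newman-Moore-style local modularity, McCloskey-Bader, personalized PageRank) score candidates differently, and adj, expand_seed_set, and the stopping rule here are illustrative assumptions.

def expand_seed_set(adj, seeds, max_size=100):
    # adj: dict mapping each vertex to a set of neighboring vertices.
    # Greedily absorb the frontier vertex whose neighborhood lies most
    # inside the current community.
    community = set(seeds)
    while len(community) < max_size:
        frontier = {u for v in community for u in adj[v]} - community
        if not frontier:
            break
        def gain(u):
            inside = sum(1 for w in adj[u] if w in community)
            return inside - (len(adj[u]) - inside)  # internal minus boundary edges
        best = max(frontier, key=gain)
        if gain(best) < 0:
            break  # the best candidate would add more boundary than internal edges
        community.add(best)
    return community

# Toy example: two triangles joined by a single bridge edge.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(expand_seed_set(adj, seeds={1}))  # grows to {1, 2, 3} and stops at the bridge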
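
The dissertation abstract above (Agarwal, 2010-06-28) centers on breadth-first search over massive graphs on multicore processors. For reference only, here is the serial skeleton of a level-synchronous BFS, the pattern such parallel implementations typically build on; the processor-specific optimizations the abstract describes are the point of the dissertation and are not reproduced here.

def bfs_levels(adj, source):
    # adj: dict mapping each vertex to an iterable of neighbors.
    # Level-synchronous BFS: expand the entire frontier each round. On a
    # multicore machine the loop over the frontier runs in parallel, with
    # the visited check handled atomically.
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for v in frontier:
            for u in adj[v]:
                if u not in level:
                    level[u] = depth
                    next_frontier.append(u)
        frontier = next_frontier
    return level

# Small example graph.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}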