Fujimoto, Richard M.
ArchiveSpace Name Record
Publication Search Results
Now showing 1–10 of 34
Reverse computing compiler technology (Georgia Institute of Technology, 2011-09-15). Fujimoto, Richard M.; Vulov, George.
DDDAS-TMRP: Dynamic, simulation-based management of surface transportation systems (Georgia Institute of Technology, 2009-12-21). Fujimoto, Richard M.; Leonard, John D., II; Guensler, Randall L.; Schwan, Karsten; Hunter, Michael D.
Collaborative research: ITR: global multi-scale kinetic simulations of the earth's magnetosphere using parallel discrete event simulation (Georgia Institute of Technology, 2009-11-30). Fujimoto, Richard M.; Pande, Santosh; Perumalla, Kalyan S.; Omelchenko, Yuri; Driscoll, Jonathan.
Agent-based simulations using human performance models for national airspace system risk assessment (Georgia Institute of Technology, 2009-07-14). Goldsman, David; Alexopoulos, Christos; Fujimoto, Richard M.; Loper, Margaret L.; Pritchett, Amy R.
Scalable Simulation of Electromagnetic Hybrid Codes (Georgia Institute of Technology, 2005). Perumalla, Kalyan S.; Dave, Jagrut Durdant; Fujimoto, Richard M.; Karimabadi, Homa; Driscoll, Jonathan; Omelchenko, Yuri.

New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation, which requires tightly coupled updates to the entire system state at regular time intervals, the new discrete event simulation (DES) approaches evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to significantly improve their parallel performance. The mapping of model to simulation processes is optimized via aggregation techniques, and the parallel runtime engine is optimized for communication and memory efficiency. The net result of the enhancements is the capability to simulate hybrid particle-in-cell (PIC) model configurations containing over 2 billion particles using 512 processors on supercomputing platforms.
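The contrast the abstract draws between time-stepped and discrete-event execution can be sketched with a minimal event-queue loop (illustrative only; all names are my own, not the paper's code). Each sub-model schedules its own next update, so a slow component costs nothing between its events while a fast one advances on its own timescale:

```python
import heapq

class Simulator:
    """Minimal discrete-event simulator: a min-heap of timestamped events."""
    def __init__(self):
        self.now = 0          # current simulation time
        self.queue = []       # (timestamp, seq, handler) min-heap
        self.seq = 0          # tie-breaker for equal timestamps

    def schedule(self, delay, handler):
        heapq.heappush(self.queue, (self.now + delay, self.seq, handler))
        self.seq += 1

    def run(self, until):
        # Process events in timestamp order up to the horizon.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self.queue)
            handler(self)

# Two sub-models on widely differing timescales: the fast one fires
# 100x more often, but the slow one incurs no work in between.
ticks = {"fast": 0, "slow": 0}

def fast(sim):
    ticks["fast"] += 1
    sim.schedule(1, fast)      # reschedule itself every 1 time unit

def slow(sim):
    ticks["slow"] += 1
    sim.schedule(100, slow)    # reschedule itself every 100 time units

sim = Simulator()
sim.schedule(1, fast)
sim.schedule(100, slow)
sim.run(until=1000)
```

A time-stepped code would instead update every component at every tick; here the slow sub-model is touched only 10 times over the same interval.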
Space–Parallel Network Simulations Using Ghosts (Georgia Institute of Technology, 2004-05). Riley, George F.; Jaafar, Talal Mohamed; Fujimoto, Richard M.; Ammar, Mostafa H.

We discuss an approach for creating a federated network simulation that eases the burdens on the simulator user that typically arise from more traditional methods for defining space-parallel simulations. Previous approaches have difficulties that arise from the need for global topology knowledge when forwarding simulated packets between the federates. In all but the simplest cases, proper packet forwarding decisions between federates require routing tables of size O(mn) (m is the number of nodes modeled in a particular simulator instance, and n is the total number of network nodes in the entire topology) in order to determine how packets should be routed between federates. Further, the benefits of the well-known NIx-Vector routing approach cannot be fully achieved without global knowledge of the overall topology. We seek to overcome these difficulties by utilizing a topology partitioning methodology that uses Ghost Nodes. A ghost node is a simulator object in a federate that represents a simulated network node that is spatially assigned to some other federate, and thus that other federate is responsible for maintaining all state associated with the node. However, ghost nodes do retain topology connectivity information with other nodes, allowing every federate in a space-parallel simulation to obtain a global picture of the network topology. We show with experimental results that the memory overhead associated with the ghosts is minimal relative to the overall memory footprint of the simulation.
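The ghost-node idea described above can be sketched roughly as follows (the class and field names are my own, not the paper's API): a federate holds the full topology as lightweight connectivity-only entries, but allocates full simulation state only for its own partition, and can still make routing decisions locally from the global picture:

```python
from collections import deque

class Federate:
    """One partition of a space-parallel simulation (illustrative sketch)."""
    def __init__(self, topology, my_nodes):
        # Ghost view: every node's neighbor list, connectivity only.
        self.topology = topology
        # Full per-node state is kept only for locally assigned nodes.
        self.local_state = {n: {"queues": [], "counters": {}} for n in my_nodes}

    def next_hop(self, src, dst):
        """First hop on a shortest path, found by BFS over the ghost topology."""
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                # Walk back to the node adjacent to src.
                while parent[u] != src:
                    u = parent[u]
                return u
            for v in self.topology[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        return None   # dst unreachable

# A 6-node line topology (0-1-2-3-4-5) split across two federates;
# this federate owns nodes 0-2, the rest exist only as ghosts.
topo = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
fed_a = Federate(topo, my_nodes=[0, 1, 2])
```

The memory argument in the abstract follows from this split: a ghost entry is just an adjacency list, so replicating connectivity everywhere is cheap relative to the per-node queues, protocol state, and counters that only the owning federate stores.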
Enabling Large-Scale Multicast Simulation by Reducing Memory Requirements (Georgia Institute of Technology, 2003-06). Xu, Donghua; Riley, George F.; Ammar, Mostafa H.; Fujimoto, Richard M.

The simulation of large-scale multicast networks often requires a significant amount of memory that can easily exceed the capacity of current computers, both because of the inherently large amount of state necessary to simulate message routing and because of design oversights in the multicast portion of existing simulators. In this paper we describe three approaches to substantially reduce the memory required by multicast simulations: 1) We introduce a novel technique called "negative forwarding table" to compress multicast routing state. 2) We aggregate the routing state objects from one replicator per router per group per source to one replicator per router. 3) We employ the NIx-Vector technique to replace the original unicast IP routing table. We implemented these techniques in the ns2 simulator to demonstrate their effectiveness. Our experiments show that these techniques enable packet level multicast simulations on a scale that was previously unachievable on modern workstations using ns2.
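The "negative forwarding table" compression can be illustrated with a small sketch (my own simplified encoding, not the paper's data structure): when a multicast group spans most of a router's interfaces, storing the few interfaces that do *not* receive a packet is cheaper than storing the many that do:

```python
def compress(all_ifaces, members):
    """Store whichever of the member set or its complement is smaller."""
    members = set(members)
    negative = set(all_ifaces) - members
    if len(negative) < len(members):
        return ("neg", negative)      # negative table: interfaces to skip
    return ("pos", members)           # conventional positive table

def forward_targets(entry, all_ifaces):
    """Recover the actual set of outgoing interfaces from either encoding."""
    kind, stored = entry
    if kind == "neg":
        return set(all_ifaces) - stored
    return set(stored)

# A router with 8 interfaces where 7 of them join the group:
# the negative table stores 1 entry instead of 7.
ifaces = list(range(8))
entry = compress(ifaces, members=[0, 1, 2, 3, 4, 5, 6])
```

Dense membership is common near a multicast tree's core, which is why the complement encoding pays off in exactly the simulations that are largest.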
Scalable RTI-Based Parallel Simulation of Networks (Georgia Institute of Technology, 2003-06). Perumalla, Kalyan S.; Park, Alfred; Fujimoto, Richard M.; Riley, George F.

Federated simulation interfaces such as the High Level Architecture (HLA) were designed for interoperability, and as such are not traditionally associated with high performance computing. In this paper, we present results of a case study examining the use of federated simulations using runtime infrastructure (RTI) software to realize large-scale parallel network simulators. We examine the performance of two different federated network simulators, and describe RTI performance optimizations that were used to achieve efficient execution. We show that RTI-based parallel simulations can scale extremely well and achieve very high speedup. Our experiments yielded more than 80-fold scaled speedup in simulating large TCP/IP networks, demonstrating performance of up to 6 million simulated packet transmissions per second on a Linux cluster. Networks containing up to two million network nodes (routers and end systems) were simulated.
Exploiting the Predictability of TCP's Steady-state Behavior to Speed Up Network Simulation (Georgia Institute of Technology, 2002-10). He, Qi; Ammar, Mostafa H.; Riley, George F.; Fujimoto, Richard M.

In discrete-event network simulation, a significant portion of resources and computation are dedicated to the creation and processing of packet transmission events. For large-scale network simulations with a large number of high-speed data flows, the processing of packet events is the most time consuming aspect of the simulation. In this work we develop a technique that saves on the processing of packet events for TCP flows using the well-established results showing that the average behavior of a TCP flow is predictable given a steady-state path condition. We exploit this to predict the average behavior of a TCP flow over a future period of time where steady-state conditions hold, thus allowing for a reduction (or elimination) of the processing required for packet events during this period. We consider two approaches to predicting TCP's steady-state behavior: using throughput formulas or by direct monitoring of a flow's throughput in a simulation. We design a simulation framework that provides the flexibility to incorporate this method of simulating TCP packet flows. Our goal is 1) to accommodate different network configurations, on/off flow behaviors and interaction between predicted flows and packet-based flows; and 2) to preserve the statistical behavior of every entity in the system, from hosts to routers to links, so as to maintain the accuracy of the network simulation as a whole. In order to illustrate the promise of this idea we implement it in the context of the ns2 simulation system. A set of experiments illustrate the speedup and approximation quality of the simulation framework under different scenarios and for different network performance metrics.
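One of the "throughput formulas" such a predictor could draw on is the simple steady-state TCP model of Mathis et al., B = (MSS/RTT) · sqrt(3/(2p)), where p is the steady-state loss rate. The sketch below (my own code, not the paper's implementation) shows the payoff: a flow's traffic over a quiescent interval becomes one computation instead of one event per packet:

```python
import math

def tcp_throughput(mss_bytes, rtt_s, loss_rate):
    """Predicted steady-state TCP throughput (bytes/s), simple Mathis model:
    B = (MSS / RTT) * sqrt(3 / (2 * p))."""
    return (mss_bytes / rtt_s) * math.sqrt(3.0 / (2.0 * loss_rate))

def bytes_without_packet_events(mss_bytes, rtt_s, loss_rate, interval_s):
    """Traffic carried over a steady-state interval: one multiply,
    no per-packet event processing."""
    return tcp_throughput(mss_bytes, rtt_s, loss_rate) * interval_s

# 1460-byte segments, 100 ms RTT, 1% loss: roughly 180 KB/s.
rate = tcp_throughput(1460, 0.1, 0.01)
```

The abstract's second approach, monitoring a flow's observed throughput directly, would simply replace the formula with a measured rate while keeping the same event-elision step.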
Integrated Fluid and Packet Network Simulations (Georgia Institute of Technology, 2002-10). Riley, George F.; Jaafar, Talal Mohamed; Fujimoto, Richard M.

A number of methods exist that can be used to create simulation models for measuring the performance of computer networks. The most commonly used method is packet level simulation, which models the detailed behavior of every packet in the network, and results in a highly accurate picture of overall network behavior. A less frequently used, but sometimes more computationally efficient, method is the fluid model approach. In this method, aggregations of flows are modeled as fluid flowing through pipes, and queues are modeled as fixed capacity buckets. The buckets are connected via pipes, where the maximum allowable flow rate of fluid in the pipes represents the bandwidth of the communication links being modeled. Fluid models generally result in a less accurate picture of the network's behavior since they rely on aggregation of flows and ignore actions specific to individual flows. We introduce a new hybrid simulation environment that leverages the strong points of each of these two modeling methods. Our hybrid method uses fluid models to represent aggregations of flows for which less detail is required, and packet models to represent individual flows for which more detail is needed. The result is a computationally efficient simulation model that yields a high level of accuracy and detail in some of the flows, while abstracting away details of other flows. We show a computational speedup of more than twenty in some cases, with little reduction in accuracy of the simulation results.
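The bucket-and-pipe abstraction described in the abstract can be sketched as follows (variable names are mine, not the paper's): an aggregate of flows becomes a constant inflow rate into a fixed-capacity bucket (the queue), drained at the link bandwidth, so queue state evolves by a rate equation rather than per-packet events:

```python
class FluidQueue:
    """Fixed-capacity bucket fed and drained by fluid rates (illustrative)."""
    def __init__(self, capacity_bits, link_rate_bps):
        self.capacity = capacity_bits     # bucket size (queue limit)
        self.link_rate = link_rate_bps    # pipe drain rate (link bandwidth)
        self.backlog = 0.0                # bits currently queued
        self.dropped = 0.0                # bits lost to overflow

    def advance(self, inflow_bps, dt):
        """Integrate the queue over dt seconds at a constant inflow rate."""
        level = self.backlog + (inflow_bps - self.link_rate) * dt
        if level > self.capacity:         # bucket overflows: fluid is lost
            self.dropped += level - self.capacity
            level = self.capacity
        self.backlog = max(0.0, level)

# A 10 Mb/s link with a 1 Mbit queue, offered 12 Mb/s of aggregate
# fluid for one second: 2 Mbit of excess half fills then overflows
# the bucket, all in a single update.
q = FluidQueue(capacity_bits=1e6, link_rate_bps=10e6)
q.advance(inflow_bps=12e6, dt=1.0)
```

In the hybrid scheme, packet-level flows would then see this bucket's backlog as queueing delay, while the aggregate traffic around them is advanced by such rate updates alone.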