Organizational Unit:
Undergraduate Research Opportunities Program

Publication Search Results

Now showing 1 - 4 of 4
  • Item
    Linear Promises: Towards Safer Concurrent Programming
    (Georgia Institute of Technology, 2022-05) Rau, Ohad
    In this paper, we introduce a new type system based on linear typing, and show how it can be incorporated in a concurrent programming language to track ownership of promises. By tracking write operations on each promise, the language is able to guarantee exactly one write operation is ever performed on any given promise. This language thus precludes a number of common bugs found in promise-based programs, such as failing to write to a promise and writing to the same promise multiple times. We also present an implementation of the language, complete with an efficient type checking algorithm and high-level programming constructs. This language serves as a safer platform for writing high-level concurrent code.
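    The write-once guarantee can be roughly illustrated in Rust, where fulfilling a promise consumes the resolver by value, so a second write is rejected at compile time. This is only an analogy sketch, not the paper's language or type system (Rust's ownership is affine, so it does not force that a write happens, unlike the linear discipline described above), and all names here are invented:

      use std::sync::mpsc;
      use std::thread;

      // Hypothetical write-once promise: the Resolver is the single write capability.
      struct Promise<T> { rx: mpsc::Receiver<T> }
      struct Resolver<T> { tx: mpsc::Sender<T> }

      fn make_promise<T>() -> (Resolver<T>, Promise<T>) {
          let (tx, rx) = mpsc::channel();
          (Resolver { tx }, Promise { rx })
      }

      impl<T> Resolver<T> {
          // Takes `self` by value: the resolver is gone after one fulfill.
          fn fulfill(self, value: T) {
              let _ = self.tx.send(value);
          }
      }

      impl<T> Promise<T> {
          fn get(self) -> T {
              self.rx.recv().expect("promise dropped without being fulfilled")
          }
      }

      fn main() {
          let (resolver, result) = make_promise::<i32>();
          thread::spawn(move || resolver.fulfill(42));
          // resolver.fulfill(43); // would not compile: resolver was already moved
          println!("{}", result.get());
      }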
  • Item
    Automatic Future-Based Parallelism in Intrepyyd
    (Georgia Institute of Technology, 2021-05) Sklar, Matthew J.
    Hardware requirements are reaching record highs, but in the modern post-Moore computing world hardware improvements are decelerating. With fields such as Artificial Intelligence (AI) and data analysis demanding ever higher code performance, new approaches are needed to meet those demands. Rather than relying on faster hardware, an effective approach for AI and data-analysis workloads is to add more hardware for programs to run on. Adding hardware allows independent computations within the same program to run simultaneously on different processing units, a technique known as parallelization. Parallelization can greatly increase program efficiency and is becoming essential for modern programs. In this paper we propose a method to automatically parallelize non-parallel code, and provide an implementation of it in Intrepyyd, a Python extension designed specifically to improve AI and data analysis code. The gains depend on how parallelizable the code is and on the computing system, but we expect the method to improve performance whenever the code can be parallelized.
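    The paper's system targets Intrepyyd (a Python extension) and performs the transformation automatically; the hand-written Rust sketch below only illustrates the future-based pattern such a parallelizer produces, with two independent computations turned into joinable tasks (the function names and workloads are invented):

      use std::thread;

      // Two computations with no data dependence on each other.
      fn expensive_a(n: u64) -> u64 { (0..n).sum() }
      fn expensive_b(n: u64) -> u64 { (0..n).map(|x| x * x).sum() }

      fn main() {
          // Sequential form would call expensive_a then expensive_b.
          // Future-based form: each becomes a task (here, a joinable thread).
          let a = thread::spawn(|| expensive_a(1_000_000));
          let b = thread::spawn(|| expensive_b(1_000_000));

          // Force (join) the futures only where both values are actually needed.
          let total = a.join().unwrap() + b.join().unwrap();
          println!("{}", total);
      }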
  • Item
    An Analysis of Register Allocation Techniques in the Context Of A RISC-V Processor
    (Georgia Institute of Technology, 2020-05) Viszlai, Joshua
    This research examines the register allocation phase of a compiler for programs running on a RISC-V machine. Register allocation algorithms were applied to a test program compiled through an LLVM-based toolchain and run on a RISC-V simulator. Four register allocation algorithms were used in compiling the libquantum test case from the SPECint2006 CPU test suite. The number of loads and stores executed on the RISC-V simulator was observed, and the results showed that a large determinant of performance was the extent of saving and restoring registers during function calls.
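    For reference, the sketch below shows a minimal linear-scan style register allocator in Rust; it is not the LLVM-based toolchain used in the study, and the interval format, spill policy, and names are invented. Virtual registers whose live ranges cannot all fit in the available physical registers are spilled, and each spill later costs the kind of loads and stores counted above:

      // A live interval for one virtual register: live from `start` to `end`.
      struct Interval { vreg: usize, start: usize, end: usize }

      // Simplified linear scan: assign physical registers in order of interval
      // start; if none is free, spill the current interval. (A full linear-scan
      // allocator would instead spill the interval with the furthest end point.)
      fn linear_scan(mut intervals: Vec<Interval>, num_regs: usize)
          -> (Vec<(usize, usize)>, Vec<usize>) {
          intervals.sort_by_key(|iv| iv.start);
          let mut free: Vec<usize> = (0..num_regs).collect();
          let mut active: Vec<(Interval, usize)> = Vec::new();
          let mut assigned: Vec<(usize, usize)> = Vec::new(); // (vreg, phys reg)
          let mut spilled: Vec<usize> = Vec::new();

          for iv in intervals {
              // Expire intervals that ended before this one starts, freeing registers.
              active.retain(|entry| {
                  if entry.0.end < iv.start { free.push(entry.1); false } else { true }
              });
              if let Some(reg) = free.pop() {
                  assigned.push((iv.vreg, reg));
                  active.push((iv, reg));
              } else {
                  spilled.push(iv.vreg); // would be kept on the stack: loads/stores
              }
          }
          (assigned, spilled)
      }

      fn main() {
          let intervals = vec![
              Interval { vreg: 0, start: 0, end: 5 },
              Interval { vreg: 1, start: 1, end: 3 },
              Interval { vreg: 2, start: 2, end: 6 },
              Interval { vreg: 3, start: 4, end: 7 },
          ];
          let (assigned, spilled) = linear_scan(intervals, 2);
          println!("assigned (vreg, reg): {:?}; spilled vregs: {:?}", assigned, spilled);
      }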
  • Item
    Single-Job Dynamic Parallelism Scaling through Lock Contention Monitoring
    (Georgia Institute of Technology, 2020-05) Khanwalkar, Mahesh
    Harnessing available parallelism resources is an important but complicated task. Lock contention is one factor that complicates it, and it is a major concern because locks and locking constructs are used heavily in multithreaded code. When an application experiences changing levels of lock contention, allocating a fixed level of parallelism may waste resources and hurt performance. The work presented here introduces the idea of dynamically scaling the level of parallelism up or down by monitoring the current level of lock contention. This is done by tracking lock acquisition failures and using that count as an estimate of the current level of lock contention. The dynamic scaling approach was evaluated using the parallel Boruvka’s MST algorithm, which exhibits rising levels of lock contention. The algorithm was tested on different input graphs, and the speedup of the dynamically scaled version was recorded relative to the original parallel version.
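    A minimal Rust sketch of the monitoring idea follows; the worker count, thresholds, and names are invented, and a real controller would scale the number of workers rather than just report the estimate. Failed try_lock attempts are counted, and that count serves as the contention estimate described above:

      use std::sync::atomic::{AtomicUsize, Ordering};
      use std::sync::{Arc, Mutex};
      use std::thread;
      use std::time::Duration;

      fn main() {
          let shared = Arc::new(Mutex::new(0u64));
          let failed_acquisitions = Arc::new(AtomicUsize::new(0));

          let mut workers = Vec::new();
          for _ in 0..8 {
              let shared = Arc::clone(&shared);
              let failed = Arc::clone(&failed_acquisitions);
              workers.push(thread::spawn(move || {
                  for _ in 0..10_000 {
                      // try_lock instead of lock: a failure is recorded as
                      // evidence of contention rather than silently waited out.
                      match shared.try_lock() {
                          Ok(mut guard) => *guard += 1,
                          Err(_) => {
                              failed.fetch_add(1, Ordering::Relaxed);
                              let mut guard = shared.lock().unwrap();
                              *guard += 1;
                          }
                      }
                  }
              }));
          }

          // A controller thread would periodically read this counter and raise
          // or lower the worker count; here we only report the estimate once.
          thread::sleep(Duration::from_millis(50));
          println!("lock acquisition failures so far: {}",
                   failed_acquisitions.load(Ordering::Relaxed));

          for w in workers { w.join().unwrap(); }
      }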