Series: Doctor of Philosophy with a Major in Electrical and Computer Engineering
Series Type: Degree Series
Publication Search Results
Now showing 1 - 10 of 1749
-
Item: Ferroelectric Field Effect Transistors (FEFETs) With Gate Stack Engineering for Embedded and Storage Memory Applications (Georgia Institute of Technology, 2024-12-17) Park, Chinsung
Advanced data-intensive computing models demand memory technologies that go beyond conventional von Neumann architectures, requiring high density and low power. Among emerging candidates, the ferroelectric field effect transistor (FEFET) stands out for its advantages over other emerging devices. However, applying the FEFET to existing logic-embedded or 3D NAND memory platforms presents several challenges. First, for embedded applications the FEFET must be logic compatible, which requires a write voltage below 1.5 V, a memory window greater than 0.5 V, and an endurance of over 10^10 cycles. Additionally, for a 3D NAND flash memory platform, the ferroelectric (FE) stack must be thinner than 20 nm and, to meet triple-level cell (TLC) standards, the memory window must exceed 6.5 V. In the first part of this dissertation, gate stack engineering was used to achieve a logic-compatible FEFET by ensuring a coercive voltage Vc < 1.5 V. The correlation between gate stack parameters (gate metal, ferroelectric layer, interfacial layer (IL), semiconductor, etc.) and Vc was evaluated using MOS capacitors to identify the parameters that enable a logic-compatible low write voltage. Changing the semiconductor (to a Ge substrate) and IL engineering (a scavenging technique) proved most effective for this purpose. It was also confirmed that switching the substrate from Si to Ge reduced the IL, demonstrating that the IL has the greatest influence on Vc reduction. Finally, a FEFET with this gate stack structure was fabricated to verify the reduced Vc.
Additionally, FEFETs on SOI substrates were evaluated, and the Vc and memory-window characteristics of ferroelectric layers deposited by thermal atomic layer deposition (ALD) and plasma-enhanced ALD were compared. The second part of this dissertation focused on applying an FE layer that meets two constraints within a 3D NAND flash platform: an FE thickness below 20 nm and a memory window (MW) greater than 6.5 V. Theoretically, MW increases in proportion to FE thickness, but a practical 20 nm FE layer cannot achieve MW > 3 V. A structure with the HZO thickness fixed at 10 nm and HZO/TiN layers stacked in series was therefore considered, but its memory window still fell short of the 6.5 V target. Consequently, a scheme of inserting an Al2O3 layer into the FE layer was applied. The evaluation stack was first validated using MOS capacitors, and the actual MW was then verified by fabricating FEFETs. The results confirmed an MW greater than 7.5 V and a linear increase of MW with Al2O3 thickness. This dissertation thus explores gate stack engineering to apply FE layers to two different application platforms: a low write voltage for logic applications was achieved by adopting scavenging techniques and a Ge semiconductor, and an Al2O3 insertion layer was employed to attain a large MW for NAND applications; both approaches were validated in FEFETs. Beginning with MOS capacitors and culminating in fabricated FEFETs that validate the final gate stack, this work demonstrates how next-generation ferroelectric devices can serve logic and 3D NAND flash applications. Future studies are needed to improve endurance using the gate stack engineering methods described in this dissertation; identifying such avenues could broaden FEFET applications and expedite their commercialization.
-
Item: Data and Computation-efficient Deep Learning for Multi-agent Systems (Georgia Institute of Technology, 2024-12-10) Kang, Beomseok
The primary goal of this research is to build data- and computation-efficient deep learning methods for multi-agent systems. Multi-agent systems appear in a wide range of domains, from physical systems (e.g., molecules, planets) and biological systems (e.g., host-pathogen interactions, neurons) to social systems (e.g., COVID-19 spread, games with human players). Although these systems have significant real-world applications, mathematically modeling their often unknown dynamics is challenging. Deep learning offers a data-driven approach to modeling these systems without requiring extensive domain knowledge. However, collecting sufficient training data is difficult, as these systems evolve over time, and we may not even detect when the underlying dynamics change. Moreover, multi-agent systems are often driven by a large number of agents, making learning and prediction computationally expensive and inefficient. This thesis addresses these challenges by developing algorithms and neural network designs that efficiently learn representations of the spatial arrangement of agents, forecast their trajectories and state transitions, and uncover hidden interaction graphs in unstructured and structured multi-agent systems under data and computation constraints.
-
Item: Error Resilient and Adaptive Deep Learning Systems (Georgia Institute of Technology, 2024-12-09) Ma, Kwondo
As deep learning systems become integral to a wide array of applications, including autonomous systems, healthcare, and finance, their complexity and hardware deployment bring new challenges. In particular, the susceptibility of deep learning systems to hardware-induced errors, manufacturing process variability, and resource constraints presents critical obstacles to reliable and efficient operation. This dissertation addresses these issues by introducing methodologies that enhance error resilience, adaptability, and energy efficiency in deep learning systems. The motivation for this work stems from the increasing integration of deep learning systems into real-world applications where reliability and robustness are paramount. The inherent variability in hardware such as resistive RAM (RRAM), together with the need for efficient testing and tuning processes, highlights the need for adaptive systems that can mitigate the impact of these variabilities. Additionally, the demand for low-power, high-performance hardware accelerators in edge computing presents further challenges in balancing computational efficiency and energy consumption. In response, this research proposes a signature-based predictive testing framework for detecting performance degradation caused by process variability in hardware implementations of deep neural networks (DNNs). The framework introduces a compact, efficient testing mechanism that significantly improves the identification of defective devices during manufacturing, while adapting to evolving manufacturing conditions through continuous retraining. Furthermore, a learning-assisted post-manufacture tuning framework is developed to optimize the performance of DNN accelerators, ensuring higher yields and greater reliability in fault-sensitive environments. This framework allows the system to adapt its tuning strategies over time, reducing the need for exhaustive retraining while maintaining operational efficiency.
The dissertation also addresses the resilience of Transformer architectures to soft errors, a growing concern in high-performance applications such as natural language processing and vision and image processing. The proposed approach combines error detection and suppression techniques to restore model performance under various error conditions, demonstrating the robustness of Transformer networks deployed in real-world, error-prone environments. Finally, the work presents a novel energy-efficient DNN accelerator design that replaces traditional multiplication operations with shift-add computations, substantially reducing power consumption and latency. This architecture is particularly suited to low-power edge and Internet of Things (IoT) devices, offering a practical solution for deploying deep learning models in energy-constrained settings. Overall, this research makes significant contributions toward improving the reliability and adaptability of deep learning systems, addressing key limitations in error resilience, manufacturing yield, and energy efficiency. These methodologies pave the way for robust, efficient AI technologies capable of thriving in diverse and challenging environments.
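The shift-add idea behind the accelerator described above can be illustrated in software: if each weight is rounded to a signed power of two, every multiplication in a dot product reduces to a bit shift and an add. The sketch below is an illustrative software analogy under that assumption, not the dissertation's hardware design; the function names are hypothetical.

```python
import math

def quantize_pow2(w: float) -> tuple[int, int]:
    """Round a weight to sign * 2**exp (signed power of two); 0 maps to (0, 0)."""
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exp = round(math.log2(abs(w)))
    return sign, exp

def shift_add_dot(xs: list[int], ws: list[float]) -> int:
    """Dot product on integer activations using only shifts and adds."""
    acc = 0
    for x, w in zip(xs, ws):
        sign, exp = quantize_pow2(w)
        if sign == 0:
            continue
        # multiply by 2**exp with a shift instead of a hardware multiplier
        term = x << exp if exp >= 0 else x >> -exp
        acc += sign * term
    return acc
```

For example, `shift_add_dot([3, 4], [2.0, 0.5])` evaluates 3*2 + 4*0.5 using one left shift and one right shift; weights that are not exact powers of two incur the quantization error the accelerator design must budget for.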
-
Item: Assessing viscoelastic properties of skeletal muscle using shear wave elasticity imaging (Georgia Institute of Technology, 2024-12-08) Lee, Jeehyun
Understanding muscle mechanics is critical for studying the progression of neuromuscular disorders such as Duchenne muscular dystrophy (DMD) and recovery from acute muscle injuries. Despite advancements in diagnostic and therapeutic techniques, noninvasive quantification of mechanical changes remains a challenge. Ultrasound shear wave elastography (SWE) offers a real-time, quantitative method to assess tissue viscoelastic properties, addressing this critical need. This dissertation applies SWE to preclinical models to investigate muscle mechanics, focusing on respiratory and limb muscles in the contexts of DMD and cryo-injury. In the diaphragm, longitudinal assessments uncovered viscoelastic changes across early- and late-stage DMD, highlighting disease progression. Additionally, microneedle applications were examined as a preliminary investigation into potential therapeutic interventions. In limb muscles, a cryo-injury model revealed distinct recovery dynamics, with dystrophic mice showing delayed and incomplete healing compared to wild-type mice. By combining SWE with histological and functional evaluations, this research establishes SWE as a valuable tool for advancing muscle research. The findings bridge gaps between preclinical studies and potential clinical applications, laying the groundwork for translational advances in the diagnosis and management of muscle-related pathologies.
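For context on how SWE quantifies stiffness: under a simple linear elastic, nearly incompressible tissue model, the shear modulus follows from the measured shear wave speed as mu = rho * c^2. The snippet below is a minimal illustration of that textbook relation, not the viscoelastic analysis used in the dissertation; the function name is hypothetical.

```python
def shear_modulus_kpa(wave_speed_m_s: float, density_kg_m3: float = 1000.0) -> float:
    """Shear modulus mu = rho * c**2 (linear elastic approximation), returned in kPa."""
    return density_kg_m3 * wave_speed_m_s ** 2 / 1000.0

# a ~3 m/s shear wave in muscle (density ~1000 kg/m^3) corresponds to ~9 kPa
stiffness = shear_modulus_kpa(3.0)
```

Viscoelastic characterization, as in the dissertation, additionally fits the frequency dependence of the wave speed, which this elastic relation ignores.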
-
Item: Gridless Sparse Super-Resolution Direction-of-Arrival Estimation for Arbitrary Array Geometries (Georgia Institute of Technology, 2024-12-08) Govinda Raj, Anupama
The objective of this thesis is to develop gridless super-resolution direction-of-arrival (DOA) estimation methods for arbitrary array geometries that exploit sparsity in the continuous parameter space, eliminating the off-grid problem associated with grid-based compressed sensing methods. The focus is on designing search-free gridless DOA methods that achieve higher resolution and accuracy. We also focus on designing efficient methods that require fewer sensors and fewer bits in the data representation, reducing the hardware complexity of sensor arrays. Exploiting the periodicity of the array manifold, the dual function of the infinite-dimensional primal atomic norm minimization problem is represented as a trigonometric polynomial via a truncated Fourier series. The dual problem is then converted to a finite semidefinite program, and the source directions are recovered through polynomial rooting. The proposed approach is used to design search-free gridless methods applicable to coherent sources, limited snapshots, and one-bit sensor measurements. Extensions that improve the degrees of freedom using spatial correlations and higher-order statistics are also developed. The improved performance of the proposed method is demonstrated through computer simulations for various array geometries and parameters.
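The final recovery step described above, rooting a polynomial whose unit-circle roots encode the source directions, can be illustrated with a toy round trip: build a polynomial with roots at known electrical angles, then read the angles back from its computed roots. This is a simplified stand-in for the dual-polynomial construction in the thesis, with hypothetical names.

```python
import numpy as np

def angles_from_roots(electrical_angles: np.ndarray) -> np.ndarray:
    """Toy rooting step: encode angles as unit-circle roots, then recover them."""
    roots = np.exp(1j * electrical_angles)
    coeffs = np.poly(roots)              # polynomial with exactly these roots
    return np.sort(np.angle(np.roots(coeffs)))

# e.g., omega = pi*sin(theta) for a half-wavelength-spaced linear array
true_angles = np.array([-0.8, 0.3, 1.2])
est_angles = angles_from_roots(true_angles)
```

In the actual method, the polynomial comes from the semidefinite-program dual solution rather than from the known angles, and only its roots on (or near) the unit circle correspond to sources.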
-
Item: Modeling and Simulation of Power System with High Penetration of Inverter-based Resources (Georgia Institute of Technology, 2024-12-08) Cai, Siyao
This dissertation presents my research on modeling and simulation of power systems with high penetration of inverter-based resources (IBRs). The grid-forming (GFM) inverter, a critical device in IBR-dominated systems, is modeled in the quasi-dynamic domain and protected using dynamic state estimation-based protection (EBP). The EBP method is evaluated in a real-world PV-integrated distribution system, demonstrating its effectiveness in detecting faults in non-radial distribution systems with bidirectional current flow and low fault current levels. The GFM inverter is also modeled in the time domain to study its performance and limitations in a system with up to 100% IBR penetration. The proposed GFM inverter is promising for supporting high-IBR-penetration distribution systems with complex loads at larger scales, and it maintains voltage stability after loss of the synchronous generator. A transfer learning (TL) model for battery pack state prediction that accounts for battery degradation and different operating conditions is also presented. A generalized dataset derived from publicly available datasets is generated to train the model. Two prediction models are implemented and compared at the cell level; the better-performing one is then used as the pre-trained model for transfer learning at the battery pack level. The test results indicate that the proposed model can accurately predict the SoC vs. OCV relationship as well as the SoH under different operating conditions. The work in this dissertation paves the road to accurate simulation of high-IBR-penetration power systems; future research to further improve it is also discussed.
-
Item: Self-Checking Error Resilient Smart Autonomous System Design (Georgia Institute of Technology, 2024-12-07) Amarnath, Chandramouli
The increasing complexity of intelligent sense-and-control systems composed of interacting subsystems in safety-critical roles such as autonomous driving has driven research into failure detection, diagnosis, and correction methods for reliable operation under stringent safety standards such as ISO 26262. In this work we assert that cross-layer failure detection, diagnosis, and correction are essential for the safety and scalability of intelligent autonomous systems composed of multiple interacting subsystems. This is accomplished using multi-domain, scalable outlier-detection-driven failure diagnosis and domain-specific failure correction. This work has two key focuses. (1) The first is cross-layer synergy for system resilience, enhancing the autonomous system's safety through information sharing and coordination between its subsystems. Online detection of, and adaptation to, failures in actuators and sensors as well as errors in control or state-estimator computation are investigated, and methods leveraging cross-layer subsystem interactions allow rapid, online failure adaptation. (2) The second is secure, resilient execution of machine learning (ML) subsystems in roles such as semantic understanding (image classification) and reinforcement-learning-based control. Compute errors and security threats to ML subsystems during training and inference are detected in real time using reduced-dimension representations of intermediate features within the deep neural networks that make up these subsystems. Compute errors are suppressed for safe execution through adaptive statistical thresholding of neuron values followed by zeroing of potentially erroneous values.
These two thrusts enable resilient intelligent autonomous system design, using bottom-up cross-layer synergies for subsystem failures and module-level resilience methodologies (concurrent error detection, suppression, and security modules) in ML subsystems.
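The error-suppression mechanism described for the ML subsystems, statistical thresholding of neuron values followed by zeroing of potentially erroneous values, can be sketched as follows. This is an illustrative mean/standard-deviation variant with an assumed threshold parameter k, not the exact adaptive scheme used in the work.

```python
import numpy as np

def suppress_outliers(activations: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Zero any activation lying more than k standard deviations from the mean."""
    mu, sigma = activations.mean(), activations.std()
    keep = np.abs(activations - mu) <= k * sigma
    return np.where(keep, activations, 0.0)

# a corrupted activation (1e6, e.g. from a bit flip) is zeroed; nominal values pass through
clean = suppress_outliers(np.array([1.0, 1.1, 0.9, 1.0, 1e6]), k=1.0)
```

In practice such thresholds would be set per layer from error-free profiling runs, so that suppression rarely fires on clean inputs.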
-
Item: Advancing Distribution Automation through Model-based and Machine Learning Approaches (Georgia Institute of Technology, 2024-12-07) Chen, Zhengrong
With the increasing number of distributed energy resources and electric vehicles, power systems, especially distribution systems, are transforming into active systems with renewable and low-carbon energy resources. The shift from passive to active operation is one of the most significant characteristics of modern distribution systems, allowing bidirectional power flow from renewables. This change brings challenges, including growing complexity, heightened uncertainty, frequent voltage violations, dynamic load demand, and cybersecurity issues. With the development of advanced metering infrastructure, leveraging extensive sampled and historical data enables real-time monitoring and control to enhance the resilience of active distribution networks under uncertainty and cyberattacks. This dissertation aims to advance distribution automation in protection, control, and optimization to ensure secure, reliable, and resilient distribution system operation. Specifically, it explores advanced model-based and data-driven methodologies within this framework, including state estimation, fault diagnosis, voltage control, and load management. The main contributions of this dissertation are: (1) the design of a distribution automation platform that utilizes smart meter data for distribution system applications to enhance resilience; (2) the development of a real-time fault diagnosis framework with high accuracy and robustness; (3) a robust deep reinforcement learning method for solving optimization problems under uncertainty; (4) the formulation of, and a mitigation methodology for, pricing integrity attacks on the energy market.
-
Item: Improving Power System Approximations Through Machine Learning-Inspired Optimization Methods (Georgia Institute of Technology, 2024-12-07) Taheri, Babak
This dissertation aims to improve electric power system optimization algorithms using optimization methods and techniques inspired by machine learning. Power system optimization problems are inherently nonlinear and involve large-scale computations, making them challenging for real-time applications and for scenarios requiring complex models, such as bilevel formulations, mixed-integer nonlinear programs, and stochastic programs. To address these challenges, researchers and practitioners frequently simplify these problems using relaxations, approximations, machine learning models, and reduced networks. However, such simplifications often introduce approximation errors, potentially leading to suboptimal or infeasible operational decisions. Drawing inspiration from computational methods used in machine learning, this dissertation develops algorithms to optimize parameter selection in power flow approximations, construct reduced network models, and restore feasibility in alternating current (AC) power flow solutions derived from simplified models.
First, an improved version of the direct current (DC) power flow model is proposed. This model adaptively selects coefficients and bias parameters through machine learning-inspired techniques, significantly improving the accuracy of the DC power flow approximation while preserving the model's simple structure and enabling seamless integration into existing computational workflows. Next, the dissertation introduces a novel network reduction algorithm. This method optimizes the construction of reduced network models so that their DC power flow solutions align closely with the AC power flow results of the original, larger networks across a variety of operational scenarios. This advancement enhances the accuracy of inter-zonal flow predictions, providing a more dependable tool for power system analysis.
The dissertation also tackles the nonlinearities of the DistFlow model commonly used for distribution systems. A parameter optimization algorithm is developed to enhance the accuracy of the linearized DistFlow approximation for both single-phase equivalent and three-phase distribution network models. By optimizing the coefficients and bias parameters in the linearized model using sensitivity information, the algorithm reduces errors in voltage magnitude predictions relative to the nonlinear DistFlow model. Furthermore, this work proposes an algorithm to improve the accuracy of DC optimal power flow (DC-OPF) solutions relative to nonlinear AC optimal power flow (AC-OPF) solutions under various operating conditions. Using machine learning-inspired methods, this algorithm adjusts coefficients and bias parameters in the DC-OPF model, yielding more accurate generator set points and better alignment with AC-OPF results. Additionally, the dissertation enhances the DC optimal transmission switching (DC-OTS) model. Traditional DC-OTS formulations, which simplify the AC optimal transmission switching (AC-OTS) problem into a mixed-integer linear program, often produce suboptimal or infeasible outcomes due to errors in the DC power flow approximation. The proposed DC-OTS algorithm addresses this by optimizing the parameters of the DC-OPF model to better represent AC-OPF results; in particular, it captures both real and reactive power flows, improving congestion modeling and the accuracy of transmission switching decisions. This reduces approximation errors, ultimately enhancing system reliability and operational efficiency.
Finally, the dissertation introduces an AC power flow feasibility restoration algorithm. It employs a state estimation-based post-processing approach to adjust solutions from simplified optimization problems so that they satisfy the AC power flow equations. Leveraging techniques inspired by machine learning, the algorithm learns the reliability of outputs from simplified optimization models, optimizing weight and bias parameters to improve the accuracy of these adjustments.
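The parameter-selection idea can be illustrated on a single transmission line: choose a coefficient and bias for a linear flow model p ≈ b*Δθ + c by least squares against sampled nonlinear "AC" flows (here p = 10 sin Δθ stands in for the AC relation). This is a hypothetical sketch of the general approach, not the dissertation's algorithm.

```python
import numpy as np

def fit_line_params(theta_diff: np.ndarray, p_ac: np.ndarray) -> tuple[float, float]:
    """Least-squares coefficient b and bias c so that b*dtheta + c tracks the AC flows."""
    A = np.column_stack([theta_diff, np.ones_like(theta_diff)])
    (b, c), *_ = np.linalg.lstsq(A, p_ac, rcond=None)
    return b, c

# synthetic "AC" line flows p = 10*sin(dtheta); the fitted coefficient lands
# slightly below the nominal susceptance of 10, absorbing the sine's curvature
dtheta = np.linspace(-0.3, 0.3, 50)
b, c = fit_line_params(dtheta, 10 * np.sin(dtheta))
```

Fitting over the operating range expected in practice, rather than using the nominal susceptance, is what lets the simple linear model track the nonlinear flows more closely.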
-
Item: H-BN integration in III-nitride devices for green hydrogen applications (Georgia Institute of Technology, 2024-12-04) Tijent, Fatima Zahrae
This thesis explores the initial steps in developing an integrated photoelectrochemical (PEC) cell for green hydrogen production via PEC water splitting. The focus is on integrating InGaN (indium gallium nitride), GaN (gallium nitride), and h-BN (hexagonal boron nitride), materials that are promising for hydrogen production and storage thanks to their unique properties. InGaN, for example, offers a tunable band gap, high chemical stability, and good catalytic activity, making it suitable for hydrogen production; however, its efficiency remains low and its production costs are relatively high. To address these challenges, the thesis investigates integrating h-BN, a two-dimensional layered material, into InGaN PEC systems. h-BN can reduce production costs by enabling the reuse of growth substrates when used as a release layer, and it can enhance hydrogen production efficiency by reducing the dislocation density when used as an interfacial layer during growth. This research includes, first, a study of InGaN multiple-quantum-well photoanodes grown on h-BN/sapphire for hydrogen generation via PEC water splitting, revealing the effect of h-BN on surface morphology and charge transfer kinetics. Second, the hydrogen production efficiency of a nanostructured InGaN photoanode is evaluated through numerical simulations. The most efficient InGaN photoelectrode identified in these studies is then selected for a techno-economic analysis assessing the technology's viability at commercial scale; this analysis evaluates the competitiveness of a fully integrated InGaN nanopyramid photoanode with an h-BN proton exchange membrane (PEM).
Additionally, the dual functionality of h-BN is highlighted: not only as a PEM that enhances device lifetime under long-term operating conditions, but also as a potential system for micro-scale hydrogen storage via the formation of h-BN bubbles under UV light irradiation.
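For context on the efficiency figures behind such a techno-economic analysis, PEC photoanode performance is commonly summarized by the solar-to-hydrogen (STH) efficiency at zero bias, STH = J_photo * 1.23 V / P_in. The snippet below is a minimal calculation of that standard formula, not a result from the thesis; the function name is hypothetical.

```python
def sth_efficiency(j_photo_ma_cm2: float, p_in_mw_cm2: float = 100.0) -> float:
    """Solar-to-hydrogen efficiency (%) at zero bias: J_photo * 1.23 V / P_in."""
    return j_photo_ma_cm2 * 1.23 / p_in_mw_cm2 * 100.0

# e.g., 10 mA/cm^2 of photocurrent under 1-sun illumination (100 mW/cm^2)
eta = sth_efficiency(10.0)
```

The 1.23 V factor is the thermodynamic water-splitting potential; any applied bias would have to be subtracted in a more complete accounting.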