Organizational Unit:
School of Cybersecurity and Privacy
Publication Search Results
Now showing 1 - 10 of 157
-
Roots of Distrust: Modern Technology and the Impact of a 19th Century Voter Suppression Plan (Georgia Institute of Technology, 2025-01-16)
DeMillo, Richard

Much effort is devoted these days to understanding the root causes of distrust in election systems. Little effort is devoted to understanding the relationship between election technology and the historically significant distrust among populations whose rights have been denied. In this talk, I will first draw connections between the modern language used to justify the computerization of elections and the language of the Post-Reconstruction revision of the constitution of the state of Mississippi. I will use this analogy to bolster the argument that, in modern times, building "trust" in elections is counter-productive and that energy is better spent developing confidence-building, evidence-based methods for reaching agreement on election outcomes.
-
Empirical Measurements of the Security, Privacy, and Usability of Website Password Authentication Workflows (Georgia Institute of Technology, 2024-07-31)
Alroomi, Suood

In an era where digital interactions are integral to daily life, the security and privacy of online authentication mechanisms are crucial for protecting user data and maintaining trust in web services. Passwords, though decades old, remain the most common form of authentication and are likely to stay ubiquitous. The web ecosystem's security therefore depends on how users and websites handle passwords and manage authentication. Researchers have extensively explored user behavior with passwords, offering insights into how websites should handle authentication and leading to significant updates in modern guidelines. A significant gap remains, however, in understanding how websites themselves handle authentication and whether they adhere to best practices. This dissertation aims to bridge that gap through large-scale empirical measurements of website authentication practices. I develop measurement techniques to systematically evaluate websites' authentication policies and implementation decisions and apply them at scale to assess their authentication workflows, revealing a disparity between modern recommendations and real-world implementations. My studies show that while guidelines inform policy decisions, barriers prevent the adoption of recent recommendations, highlighting the need for education and outreach efforts. Further, I found that poor policy decisions often align with the default configurations of web software, which frequently compromise security, privacy, or usability; updating these defaults to match modern guidelines could significantly reduce vulnerabilities and promote best practices. Moreover, incorporating security features such as blocking common passwords and rate limiting could significantly enhance the security of websites, as many are found to lack these defenses.
I also identify concerning practices in authentication workflows, such as insecure communication, misconfigured HTTPS deployments, and mixed-content vulnerabilities. While TLS deployment has improved, work remains to migrate all sensitive resources to HTTPS. Standardized authentication workflows with centralized security controls, together with outreach efforts, can further mitigate inconsistencies and improve authentication security.
-
Hardening and Adapting Trusted Execution Environments for Emerging Platforms (Georgia Institute of Technology, 2024-07-25)
Sang, Fan

The rise of cloud computing, IoT, and edge computing has led users to cede control of their data to third-party providers, raising security concerns. Trusted Execution Environments (TEEs), initially developed for cloud computing, create secure processor regions that protect sensitive data. However, because they are relatively recent and still under active development, TEEs have not yet been integrated into emerging platforms. Rising security expectations and new privacy regulations nonetheless necessitate adapting TEEs for these platforms. This thesis focuses on hardening and adapting TEEs for emerging platforms. To harden existing TEEs, it first presents PRIDWEN, a novel framework that dynamically synthesizes a TEE program optimally hardened against multiple side-channel attacks (SCAs) simultaneously. It then presents SENSE, an architectural extension that allows TEE programs to subscribe to fine-grained microarchitectural events, improving the microarchitectural awareness of TEEs and enabling proactive defenses that were previously unfeasible. To enable TEEs on emerging platforms, this thesis presents PORTAL, a secure and efficient device I/O interface for the Arm Confidential Compute Architecture (CCA) on modern mobile Arm processors. PORTAL addresses the challenges that memory encryption poses given the architectural trend toward more integrated devices within Arm processors. By leveraging Arm CCA's memory isolation mechanism, PORTAL enforces hardware-level access control without memory encryption, offering robust security guarantees while eliminating encryption overhead and maintaining the performance and energy requirements crucial for emerging mobile platforms.
-
Achieving Security and Reliability of Industrial Control Systems Using Data-Driven Models Informed by Physical Domain Knowledge (Georgia Institute of Technology, 2024-07-01)
Landen, Matthew D.

Industrial control systems (ICS) are responsible for controlling and monitoring critical infrastructure, such as power grids, that is vital to national security and public health. Modern ICS comprise interconnected information technology and operational technology systems that monitor and control physical processes. Although this increased connectivity provides operators with enhanced monitoring and control capabilities, it also increases the cyber threat surface. Cyberattacks on ICS commonly begin by infiltrating either the supervisory control and data acquisition (SCADA) systems or programmable logic controllers (PLCs) and disrupting process activity. To cause these disruptions, attacks inject malicious commands or falsify sensor data so that the physical process deviates from reliable states. The longer these attacks remain in the system, the more damage they can cause to the physical process. It is therefore critical to detect such attacks quickly and precisely to minimize the damage they cause, and to maintain reliable operations during an attack so that the system continues functioning properly. To address these challenges, this thesis presents a framework that uses structured domain knowledge about the physical process underlying the ICS to inform data-driven models that detect attacks on ICS and maintain reliable operations. We first present Dragon, which applies this framework to the security and reliability of power grids. Dragon aims to maintain reliable power operations while also detecting cyberattacks on the grid by training deep reinforcement learning agents.
To train these agents, we designed reward functions based on the physical properties of the grid. In an evaluation with independent attacks, Dragon accurately detected attacks and maintained reliable power grid operations for longer than a state-of-the-art autonomous grid operator. The second work of this thesis, Pi-Localize, uses the physics of the power grid to increase the interpretability of attack alerts by localizing attacks to a subset of the grid while adapting to different grid topologies. Specifically, Pi-Localize uses a physics-informed graph neural network, trained with a custom loss function defined by the power flow model in addition to training data, to quickly localize attacks and adapt to different topologies. By embedding knowledge of the grid's physics, the resulting data-driven model can transfer knowledge about attacks to unfamiliar grid topologies without retraining. Together, these two systems demonstrate that infusing physical domain knowledge into data-driven solutions improves their ability to maintain reliable operations and detect attacks on ICS in an interpretable manner.
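The physics-informed training idea described above, a supervised data loss combined with a penalty derived from the power flow model, can be illustrated with a generic sketch. This is not Pi-Localize's actual loss or architecture (which uses a graph neural network); the DC power-flow residual, the NumPy formulation, and all names (`physics_informed_loss`, `lam`) are assumptions made for illustration only:

```python
import numpy as np

def physics_informed_loss(pred_angles, true_labels, pred_labels,
                          B, injections, lam=0.1):
    """Supervised loss plus a DC power-flow consistency penalty.

    data term   : mean squared error on the localization labels
    physics term: || B @ theta - P ||^2, the DC power-flow residual,
                  where B is the bus susceptance matrix, theta the
                  predicted bus voltage angles, and P the injections.
    """
    data_loss = np.mean((pred_labels - true_labels) ** 2)
    residual = B @ pred_angles - injections   # zero when physics is satisfied
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss
```

When the predicted angles satisfy `B @ theta = P` exactly and the labels match, both terms vanish; states that violate the power-flow equations are penalized even where labeled attack data is scarce, which is the intuition behind embedding grid physics in the loss.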
-
Fuzzing with Advanced Program Exploration and Bug Modeling for Software Security (Georgia Institute of Technology, 2024-06-13)
Chen, Yongheng

Fuzzing is a widely adopted software testing technique. It operates by generating random inputs and executing them against a target program, probing various program states to pinpoint anomalies. Despite its proven utility, fuzzing has limitations. Like other dynamic testing methods, it struggles with inadequate exploration of the program state space, a limitation that stems from issues such as the unstructured nature of the generated inputs and the inefficient use of computational resources across multiple cores. A more critical shortcoming of traditional fuzzing lies in its approach to bug modeling: it primarily detects bugs through program crashes, overlooking a myriad of bugs that do not crash the program but are equally consequential. While the development of dedicated oracles represents a stride toward refined bug modeling, this solution is often impractical due to the high cost of crafting oracles that are typically bug-specific or tailored to individual programs. To address these limitations, we propose improvements along two dimensions, which scale fuzzing's program exploration capability and enhance its bug modeling. To explore more program states, we propose POLYGLOT and µFUZZ, which scale program exploration vertically and horizontally, respectively. POLYGLOT utilizes a unified intermediate representation to handle diverse programming languages, effectively generating semantically valid inputs that drive deeper program exploration and finding over 170 new bugs in 21 language processors. µFUZZ, on the other hand, employs a microservice architecture to maximize the efficiency of parallel fuzzing, reducing synchronization overhead and improving the utilization of computational resources; it found 11 new bugs in well-tested, popular programs.
To enhance bug modeling, we introduce PROPGUARD, a framework that enables the specification and automatic detection of a wide range of bug patterns, moving beyond mere crash detection to identify subtle, non-crashing bugs. By allowing users to define bug patterns through an intuitive specification language, PROPGUARD facilitates the development of targeted fuzzing oracles, significantly broadening the spectrum of detectable software vulnerabilities and uncovering two new non-crashing issues in open-source projects.
-
Leveraging AI to Combat Misinformation by Empowering Crowds and Evaluating Detectors (Georgia Institute of Technology, 2024-05-20)
He, Bing

Online misinformation poses a global risk with threatening real-world implications. To combat misinformation, existing research either leverages the expertise of professionals, including journalists and fact-checkers, to annotate and debunk misinformation, or develops automatic ML methods to detect misinformation and its spreaders. However, the efficacy of professionals is limited because their manual processes do not scale with the volume of misinformation, and while ML methods rely on deep sequence embedding-based classifiers to detect misinformation spreaders, the vulnerabilities of those classifiers are rarely examined. To complement professionals, non-expert ordinary users (a.k.a. crowds) can act as eyes on the ground who proactively question and counter misinformation, showing promise in overcoming the limitations of relying solely on professionals. However, little is known about how these crowds organically combat misinformation. Concurrently, AI has progressed dramatically, demonstrating the potential to help combat misinformation. In this thesis, we aim to use AI to investigate these challenges and to provide insights and solutions that empower crowds to better counter misinformation. We first characterize the crowds who counter misinformation on social media platforms and how users respond to their counter-misinformation messages, and then assist crowds by generating more effective counter-misinformation replies. We apply advanced AI techniques to characterize the spread and textual properties of crowd-generated counter-misinformation, as well as the characteristics of the crowds themselves, during the COVID-19 pandemic. Interestingly, we found that 96% of counter-misinformation posts are made by crowds, confirming their prominent role in combating misinformation.
We also analyze user responses to crowd-generated counter-misinformation replies in conversations to investigate their impact. As expected, we discovered that counter-misinformation replies that are polite, positive, and evidenced are more likely to have a corrective effect on users. This analysis provides insights into how online misinformation is organically countered by crowds and how users respond to such counter-misinformation. Alarmingly, we also noticed that two out of three crowd messages are rude and lack evidence, and that impolite, unevidenced replies may backfire. Generating an effective counter-misinformation response is thus crucial but challenging due to the absence of high-quality datasets and communication theory-backed models. To address these challenges, we first create two novel datasets of misinformation and counter-misinformation response pairs from in-the-wild social media and in-lab crowdsourcing, and then propose a reinforcement learning-based AI algorithm, called MisinfoCorrect, that learns to generate high-quality counter-misinformation responses for an input misinformation post. This work illustrates the promise of AI for empowering crowds to combat misinformation. On the other hand, deep sequence embedding-based classification methods, which use a sequence of user posts to generate user embeddings and detect malicious users, are also employed to identify misinformation spreaders on social media platforms. Although deep learning models are known to be vulnerable to adversarial attacks in the computer vision and natural language processing domains, the vulnerability of deep sequence embedding-based detectors remains unknown. We therefore evaluate existing detectors by proposing a novel end-to-end AI algorithm, called PETGEN (PErsonalized Text GENerator), that simultaneously reduces the efficacy of the detection model and generates high-quality personalized posts.
Next, to improve the robustness of these detection models against such next-post attacks, we propose a novel transformer-based detection model. The algorithm first comprehensively encodes local and global information (i.e., the post and the sequence) with transformer encoder and decoder blocks, and then deploys a contrastive learning-enhanced classification loss to account for the adversarial attack scenario during training. Building on these efforts, we pave the path toward the next generation of adversary-aware deep sequence embedding-based classification models that robustly identify misinformation spreaders. Our AI-based approaches yield solutions that empower crowds and build better automated detectors for efficiently and effectively combating misinformation.
-
Cyberpsychology & Future of Cybersecurity Research (Georgia Institute of Technology, 2022-04-15)
Crooks, Courtney

Cyberpsychology is the interdisciplinary study of the psychology of cyberspace and those who use its tools. This field identifies and explores the overlap between online and offline life through the application of psychological concepts and research. Key concepts discussed briefly include cyber presence, digital identity, the online disinhibition effect, digital deviance, dark personalities, and deception in cyberspace. Psychologically informed conceptualizations of cyber behavior can give cybersecurity researchers, practitioners, and decision makers insight into psychological motivations and vulnerabilities, and support a better understanding of how to develop and implement effective cybersecurity tools, measures, policy, and legislation.
-
Protecting Intellectual Property in Additive Manufacturing Systems Against Optical Side-Channel Attacks (Georgia Institute of Technology, 2022-04-08)
Liang, Sizhuang

Additive Manufacturing (AM), also known as 3D printing, is gaining popularity in industry sectors such as aerospace, automobiles, medicine, and construction. As the market value of the AM industry grows, so does the potential risk of cyberattacks on AM systems. One of the highest-value assets in AM systems is intellectual property, which is essentially the blueprint of a manufacturing process. In this lecture, we present an optical side-channel attack that extracts intellectual property from AM systems via deep learning. We found that a deep neural network can successfully recover the printing path for an arbitrary printing process. With data augmentation, the network can tolerate a certain level of variation in the position and angle of the camera as well as in lighting conditions, and it can intelligently interpolate to accurately recover the coordinates of an image not seen in the training dataset. To defend against this optical side-channel attack, we propose using an optical projector to inject carefully crafted optical noise onto the printing area. We found that existing noise generation algorithms can effortlessly defeat a naive attacker who is unaware of the injected noise. However, an advanced attacker who knows about the injected noise and incorporates noisy images into the training dataset can defeat all existing noise generation algorithms. To address this problem, we propose three novel noise generation algorithms, one of which successfully defends against the advanced attacker.
-
Anubis Clock (Georgia Institute of Technology, 2022-03-31)
Lakhani, Aamir

Bad guys live forever; they adapt and become legends. The threat landscape has completely changed. In the last twelve months we have seen supply chain attacks, an increase in ransomware (alongside an explosion in cryptocurrency value), and dedicated attacks against industrial control and IoT systems. Attackers are using increasingly sophisticated methods for cybercrime, hacking, and disruption. We will look behind the curtain and show how attackers use technology to attack systems, socially engineer victims, and bypass security defense solutions. We now study, work, communicate, and interact in ways different than in the past, and attackers are taking advantage of post-pandemic lifestyles. They are targeting VPNs, remote desktop systems, home-based IoT devices, remote conferencing applications, and gaming systems, and are growing more advanced with phishing attacks. Let me introduce you to the bad guy and show how we are all on borrowed time against the attackers and the Anubis Clock.
-
The Evolving Landscape of Privacy, Technology and Data Governance (Georgia Institute of Technology, 2022-03-18)
Brannon, Blake

Why are all your favorite websites asking you to accept cookies? Why should you use and trust facial recognition software at the airport to help you get through security? How are businesses using your personal data to innovate new cures for complex health challenges? What does it all mean for humankind and the sharing of data? In recent years, the processing of personal data, transparency requirements, and automated decision making have become more heavily governed by a growing number of global and local privacy laws and regulations. Notably, the EU's General Data Protection Regulation (GDPR) went into effect in 2018, and since then California, Colorado, Virginia, India, China, Brazil, Japan, South Korea, and Canada have been actively updating their data protection and usage policies across the commercial and public sectors. At the core of it all is a growing set of societal expectations for data privacy and governance. The companies that meet consumer privacy expectations are also those building bigger and bolder data strategies. Why? Because they recognize that proper and ethical use of data equates to customer and investor loyalty, which in turn creates a competitive advantage and increased market capitalization. Simply put, doing what's right for consumers' privacy does not have to be at odds with using more data; it just means you need to show the value exchange for that data and assure consumers that their information is being protected. This is why privacy-enhancing technologies and operational processes are fueling the future of how organizations will use and govern data. In this session, you will learn about the current landscape of emerging privacy, data governance, and localization regulations.
We will discuss how organizations are implementing privacy-enhancing technologies to safely expand their use of data while respecting individuals' personal data rights.