Let me just add my welcome to everyone here this afternoon to the Friday afternoon lecture series on cybersecurity and privacy. As Brandon said, we're really pleased to have Philip Stark with us today. Philip is a longtime friend and collaborator. He's a statistician. He's been an Associate Dean of mathematical and physical sciences at Berkeley, and he just told me he's really proud to be a plain old professor again. But Philip is not just a plain old professor. Philip has been deeply involved in what we're going to talk about today: elections and election security. Among other things, in his career he had the foresight to apply statistics to the question of whether or not an election has been fairly decided and called. The technique he invented is the risk-limiting audit. You read a lot about audits in the news these days; these are not those kinds of audits. These are audits with, as you'll see, real mathematics behind them. His method, the risk-limiting audit, has been written into law in a number of states. This is a real example of taking pure scientific research, not only applying it to the real world, but actually following through with the real world to make sure it gets applied properly. He's going to talk to us today about the general idea of evidence-based elections, and I hope he gets a little bit into the meat of risk-limiting audits, because it's a fascinating topic. So I'm not going to prolong this any further. Welcome.

Thank you. Thank you so much, Rich. Thank you for your generous hospitality in inviting me here; I'm very happy to be here. My brain is a little bit out of it because of flight delays — I ended up getting to the hotel at about 2:15 in the morning, and I'm on West Coast time — but I'll do my best.

This is not work that I've done alone: many, many collaborators over the years. I've been working in this area since about 2007.

You might know that there is a little controversy over the 2020 presidential election — you might have heard something about that. Without taking any sides on whether the outcome of the election is right or wrong, we have a serious problem: a large fraction of the population believes that it was wrong, and we have no way to produce convincing evidence otherwise, because of the way we run elections in this country. What I'm going to try to argue is that there is actually a pretty straightforward way to solve the problem, and it involves things we already know how to do; we just have to do them. It doesn't mean we can't use technology in elections, but it means we have to use technology in a way that lets us make an end run around it and figure out who really won even if the technology fails — if there was malware, misconfiguration, insider attacks, or anything else.

So I would like to enlist all of your help in making sure that in 2024 we really do have strong evidence of who actually won the presidential election. And of course the presidential election is not the only consequential election in our country; we should be doing this routinely for elections of all sizes in all jurisdictions. Let's talk about what the problem is and what we can do about it. I guess this slide is here for entertainment purposes. You might have heard that Lin Wood sued — a case that went up toward the Supreme Court — to try to overturn election results.
That lawsuit was against Raffensperger, the Georgia Secretary of State, and here is something I find kind of sketchy behavior. The thing on the left was filed in Wood's case, and you'll notice it bears a striking resemblance to the document on the right, which was filed in Curling v. Raffensperger in Georgia. Wood took my declaration from that case, stripped the case caption off of it, and filed it in his own case without talking to me. That's kind of sketchy behavior.

That aside — I'm also talking about Georgia for a moment — a lot has been made of the fact that Georgia counted votes in the election three times. There was an original machine tally, there was a hand tally, and there was a second machine tally. I don't find any of them persuasive, and I can talk a little bit about why. In particular, Georgia purported to use this method that Rich was talking about, risk-limiting audits, to check the election, and to do a full hand count, and this and that. The way it was done is leaky, let's say. So at the end of the day it really doesn't provide compelling evidence that the results were accurate enough to determine who actually won. I'm not saying the answer is wrong; I'm saying we don't know what the answer is, nor do I think we ever will, because of the way the paper has been handled and so forth.

All right. So the problem we have is that US elections as currently conducted are neither tamper-evident nor resilient. They aren't run in such a way that you could necessarily tell if someone altered the software in the devices, misreported results, et cetera. The resilience aspect is: if you do detect that there has been an error, do you have the ability to recover from that error and figure out who really won the election despite whatever failures might have occurred? We're not doing things that way, but we could.

We need systems and procedures that can give strong evidence that the reported winners really won. Strong evidence that the reported winners really won is not the same thing as the absence of a smoking gun. We're not talking about an absence of warning signs; we're talking about actual affirmative evidence that the answer is right — not just "we looked for this kind of failure and didn't see it, and we looked for that kind of failure and didn't see it." That is what I think is doomed about things like the "forensic audits" there's a big push for right now. There is a chance that a forensic audit will turn up malware or misconfiguration or something like that, but even if it doesn't, that is not affirmative evidence that the outcome is right, because malware can escape detection, and malware can delete itself after it's done its harm. Starting from that perspective, there's basically no way to end up with affirmative evidence that the outcome is right.

So what I want to argue for is a pyramid that has as its foundation a trustworthy paper trail, which is then taken care of well, and verified to have been taken care of well. Then, however you tabulate it — whether it's using machines programmed by North Korea or whatever — you can still have confidence that the outcome is right. If you deal with the paper properly, you can make an end run around the technology and still have strong evidence that the outcome is right. Every electronic system — as people here know better than I — is subject to being subverted or misconfigured.
All right, so it turns out that paper has wonderful security properties for elections. I'm an advocate of paper not because I'm a Luddite, but because I actually think it is the right tool for this particular job. It's tangible and accountable: unlike electronic records in general, you really can count it, and you can keep track of it if you try hard — I'm not saying it's easy, but it's possible. It's tamper-evident: not that you can't erase or change marks, it's just hard to do without leaving some kind of trace, and it's very hard to do on a large scale without leaving some kind of trace. A big attack would require physical access to the ballots and a lot of accomplices, and people aren't good at keeping secrets; something like that would probably surface if it were to happen. That said, it doesn't necessarily take a large attack to change the outcome of a close contest. And it's human-readable: we don't have to rely on technology — well, I need to rely on glasses — but in general we don't need electronics to mediate between us and the record.

That said, not all paper is trustworthy, and part of the mess that our country is digging itself further into — and that Georgia is in up to its ears — is creating a paper trail that is intrinsically untrustworthy no matter how good care you take of it. That's the result of putting technology between the voter and the paper. Georgia switched after the 2018 election from its paperless DREs to universal-use ballot-marking devices for all in-person voters. These are a bad idea. Some of them are worse than others; the panel on the right is about one of the worst examples. With that particular device, you interact with the touchscreen to make your selections, and then the machine asks: should I cast this ballot for you, or would you like me to spit it out for you to have a look at? It does not print your selections until you tell it whether you're going to look at the printout. If you can't see how that could be exploited, you probably shouldn't be working in cybersecurity. At any rate, those are devices that have permission to cheat: they basically say, "I'm not going to look — print whatever you want."

This is a paper with Rich and Andrew Appel explaining some of the problems with ballot-marking devices and why they really cannot be relied upon to reflect the will of the voters. Some people say, oh, it's no problem, you just do pre-election logic and accuracy testing, or maybe you do forensics on the software later. That other paper — with a coauthor who was an undergraduate at the time and is now a PhD student at Stanford — explains why there's basically no way you can possibly do enough testing of the device to tell whether it altered enough votes to change the outcome of an election. There just isn't. The amount of testing required is way beyond prohibitive.

All right. To summarize this piece: hand-marked paper ballots are a record of what the voter did, but machine-marked paper ballots are a record of what the machine did. Not the same thing. People talk about voter verifiability and say: well, if you get a BMD printout, the voter can verify the printout, and that's the same thing as with hand-marked paper ballots, where the voter needs to verify the hand-marked ballot. Voters can make mistakes when they hand-mark paper ballots, and they can miss mistakes on the machine-marked BMD printout — so, the argument goes, there's no difference. But there is an enormous difference with hand-marked paper ballots.
If the voter makes a mistake on a hand-marked ballot, it's on the voter. With machine-marked paper ballots, you are relying on the voter to be an essential piece of the security of the system: the voter is the only entity in a position to notice whether their votes have been altered and to ask for that to be corrected. I'll talk about this a little bit more: most voters don't look, and those who do look, more often than not, don't notice errors in the printout. An even bigger problem is that if someone does notice an error in the printout, there is no evidence that person can present to a poll worker, an election official, a reporter, or anybody else to prove that the machine misbehaved. It's the voter's word against the computer's, because you're supposed to be voting in private: only you know what went into the touchscreen. At the end of the day, the best thing that can happen is for the poll worker to give you another chance to mark a ballot. But the poll worker has no evidence that the machine is misbehaving and should be taken offline — unless you think election officials are eager to take machines offline, interrupt elections, and start over. And of course, if you do believe there's a problem, there's no way to reconstruct what the right answer is, and no way to know how many other ballots were affected by whatever problem occurred. It is basically a broken security model.

And this is an enormous open question in usable security: how do you design an accessible interface to create a durable paper record that someone can independently check reflects their selections — check in a way that's effective — and that can generate evidence, if the printout isn't what the voter did, proving that the machine misbehaved? Open question.

All right. This is a summary of a piece of research that was done here in Georgia by Haynes and Hood from the University of Georgia. They were commissioned to do the study by the Georgia Secretary of State's office. They went to some randomly selected precincts in a number of Georgia counties and watched how long people looked — glanced — at their ballot-marking-device printouts before casting them. For reasons that seem suspicious to me, they considered "a long time" to be five seconds or more. Fewer than 20 percent of voters looked at their ballots for five seconds or more; more than 80 percent looked for less than that. But if you start thinking about how long it would take to verify the selections on one of these ballot-marking-device printouts, it's a lot more than five seconds. These are estimates based on the number of contests on the ballot in these various counties, and on the assumption that you have to read at least four words per contest — the name of the contest and the name of a candidate — in order to verify that it's right. That's how long it would take just to read the contents. I timed myself just counting the contests on a number of BMD printouts, and it took me about half a second per contest. If you get up to 20 contests, at half a second each, that's ten seconds just to count the contests — a whole lot longer than five seconds. So people are not verifying their printouts in practice.
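(For a rough sense of the arithmetic here — the reading speed is my assumption for illustration, not a figure from the talk:

\[ 20\ \text{contests} \times 4\ \text{words} = 80\ \text{words}, \qquad \frac{80\ \text{words}}{200\ \text{words/min}} \approx 24\ \text{s} \gg 5\ \text{s}, \]

and even just counting the contests takes \( 20 \times 0.5\ \text{s} = 10\ \text{s} \).)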
Rich and Marilyn Marks did a study as well and found essentially the same thing; I think they found even fewer people looked, period. There have also been laboratory studies, one from the University of Michigan and one from Rice, that find that even when the experimenters deliberately altered selections — these were experiments on real people, but not on an actual election day — something like 7% of participants noticed, in the University of Michigan experiments. They were able to get that up slightly by giving people verbal prompts; signage didn't make any difference whatsoever. Reminding the voter to check just doesn't really help. The only thing that helped was giving people a written slate of candidates to vote for and having them check against that written slate. That's kind of what I do in California: I mark a ballot at home, I use a sample ballot to do it, I check my work, and then I deposit the ballot in person.

All right. So I would characterize what we have in the US right now as procedure-based elections rather than evidence-based elections. A procedure-based election is like a brain surgeon saying: I performed the surgery following the general rules, I maintained a sterile environment, I used a sterile scalpel — therefore the patient is fine. You might instead want to look at the patient to decide whether the patient is fine, right? Similarly, the way we run elections right now is: we have certified equipment, we have procedures, we follow those procedures, and we say, I used the right equipment and I followed the procedures, therefore the outcome is right. Well, maybe we ought to look at the outcome in more detail to actually figure that out.

Any way of counting votes can make mistakes — even hand counts make mistakes; there is no perfect way of counting. At the end of the day, what we'd really like to do is the equivalent of "measure twice, cut once" in carpentry: we want to vote once and count a little more than once — not necessarily all the way to twice; generally a lot less than twice will do. But we need to do a little bit more. So the question I'm trying to address is: did errors — bugs, hacking, malfeasance, procedural failures, from any cause whatsoever — cause the wrong candidate to appear to win? Did they change the electoral outcome? There's no way to get the vote tally exactly right, but the minimum standard for accuracy, I argue, is: accurate enough to figure out who really won. That's a minimum standard; you might want more than that.

So this leads to the notion of evidence-based elections. I wrote a paper with David Wagner back in 2012 on this, and the thesis is that elections should be structured to provide convincing affirmative evidence that the reported outcomes really reflect how people voted. There's a more recent piece of this with Andrew Appel, where we go into a little more detail about some things that are important for evidence-based elections — in particular, compliance audits, which I mentioned briefly: that's about assessing the trustworthiness of the paper trail.

Some of the properties we need election systems — voting systems — to have in order to justify public trust in the outcome have been formalized using various terms. One of them is software independence, and strong software independence. This was introduced by Rivest and Wack.
I actually can't remember the year they did that now — 2006 or 2008, something like that. The idea is that an undetected change to the software should not be able to cause an undetectable change to the election outcome. That's basically a form of tamper evidence: if you tamper with the software, you ought to be able to catch that in some way. Strong software independence says that, in addition, you should be able to reconstruct the right outcome of the election without rerunning it. That's a resilience aspect.

In this paper with Rich and Andrew Appel, we introduce some new terms that actually grew out of thinking hard about some of the problems that ballot-marking devices introduce. Contestability addresses the problem that only the voter is in a position to know whether a ballot-marking device misbehaved in practice. An election system is contestable if, when someone has evidence that there was a problem, there's a way to make public evidence that there was a problem. For ballot-marking devices there isn't: only the voter has that evidence, and there's no way to persuade other people that you saw what you think you saw. Defensibility is the other direction: the idea is that our systems should be such that, whatever does go wrong, if the outcome is still right, election officials should be able to provide convincing public evidence that the outcome is right. Right now — because of ballot-marking devices in Georgia and other issues, and not only in Georgia — our systems aren't defensible either. There's no way to provide affirmative evidence that the outcome is right despite whatever might have gone wrong, and something always goes wrong. Direct-recording electronic voting systems and online voting have none of those properties.

All right. So if we have a trustworthy record of the votes, we can check whether the reported winner really won. As a last resort, we could do a careful full hand count of the paper. But we might be able to do a whole lot less work than that if we're willing to accept some chance of not correcting wrong outcomes — and that's what risk-limiting audits are about. I want to talk about how we can make this technically precise and run elections and audits in such a way that we end up with genuinely quantitative evidence about what happened.

So what is a risk-limiting audit? A risk-limiting audit is any procedure that has a known maximum chance of not correcting the outcome if the outcome is wrong — and that never alters a correct outcome. By "outcome" here I mean the political outcome, not the exact vote tally. That isn't possible unless you have a trustworthy paper trail. You can use procedures that would be risk-limiting audits if you applied them to a trustworthy paper trail; if you apply them to some pile of paper you just found, you are probably not limiting the risk of anything. This is a place where I am butting heads with some other members of the election integrity community, including some of the people who are peddling and promoting my work on risk-limiting audits, because they're promoting the use of these procedures on untrustworthy paper and then overclaiming what the results show. If you're applying them to untrustworthy paper, you have not established that the outcome is right.
You have not presented evidence that the outcome is right. At best, you've presented evidence that if the outcome is wrong, it's not because of how the paper was tabulated — maybe it's because the paper is just not the right paper.

All right. So what is the risk limit in a risk-limiting audit? It's the largest chance that your procedure will not correct the reported outcome, if the reported outcome is wrong. And that maximum is a sort of minimax thing: imagine an intelligent adversary who is trying to undermine the outcome of the election in the way that will be most difficult for your auditing method to detect, and you need to be protected against that. So if I say an audit is risk-limiting with a risk limit of 5%, it means that no matter what malware there might be, no matter which machines fail, no matter whether the problem is in one precinct or spread across dozens or hundreds of precincts — if the outcome is wrong, I have a large probability (at least 95%) of correcting it.

That requires a trustworthy paper trail, and establishing whether the paper trail is trustworthy generally requires other processes — as I mentioned before, the idea of a compliance audit. You've got to start by generating paper that records the votes in a trustworthy way, which means minimal use of ballot-marking devices: it should be mostly hand-marked paper ballots. But then you've got to keep the paper safe, and you've got to be able to demonstrate that you kept it safe. Among other things, you need a thorough canvass. You need ballot accounting: you need to keep track of how many ballots went to each polling place and how many came back — voted, unvoted, spoiled; how many ballots went out by mail and how many came back. Does the number of voters in a particular polling place or precinct line up with the number of signatures you have? Does it line up with the number of people eligible to get a ballot of a particular style? There are all these accounting checks and paperwork, in the broad sense, that you can do to ensure that the paper trail is trustworthy. We need a secure chain of custody; we need to examine custody logs; we need to be doing a lot more physical security work, because without physical security, you don't have any kind of cybersecurity. And of course, if you start with paper that isn't trustworthy in the first place, it doesn't matter how well you take care of it.

So what's the basic algorithm for a risk-limiting audit? As long as you haven't done a full hand count and you don't have strong evidence that the outcome is right, look at more ballots. Eventually either you have looked at all the ballots, and then you know what the outcome is — if you do end up doing a full hand count, that replaces the reported result — or at some point in the process you have convincing evidence that there's no point going on, that you'd be wasting your time.

This idea has caught on. This is the National Academies report from 2018, if I recall correctly: the first recommendation is paper ballots, and the second recommendation is risk-limiting audits. The National Academies, the Presidential Commission on Election Administration, the American Statistical Association, the League of Women Voters, and a bunch of other entities concerned with election integrity have endorsed risk-limiting audits as best practice. I've lost track, but there have been on the order of 60 pilots in a bunch of different states.
A number of California counties have done risk-limiting audits on a pilot basis. They've been routine in Colorado since 2017 — Colorado passed a law back in 2012 or 2013, and it took them a while to implement it, for a variety of reasons. There have been statewide audits in a number of other states, and there are laws in, it looks like, eight or nine states right now that refer to risk-limiting audits. The term is being misused by a lot of people, though. Some people equate any audit that involves taking a sample with a risk-limiting audit. That doesn't meet the definition. The definition is that you're limiting the risk that an incorrect outcome won't be corrected; it doesn't just mean "I'm using statistics."

Okay. So what's the role of math and statistics here? It's pretty easy to have a risk-limiting audit: just do a full hand count. An accurate full hand count is a risk-limiting audit with a risk limit of zero. That's what Secretary Raffensperger claims to have done here — I can talk about that if you want. So that's not hard. What's hard is actually saving effort when the outcome is right. What we'd like is an intelligent incremental recount: one that stops as soon as it's clear that it's pointless to continue, but doesn't stop before that point. You're waiting for strong evidence, and if that evidence isn't forthcoming, you need a remedy.

So this led to framing audits as sequential statistical hypothesis tests. Normally in a hypothesis test, the null hypothesis is that there's nothing going on here — the drug doesn't work, the thing doesn't make people sick, whatever it is: nothing to see here, go home. What's different about risk-limiting audits is that they turn that on its head: the null hypothesis is that there is something to see here, the outcome is wrong, we have a problem. Why do we want to formulate things backwards? Well, in hypothesis testing, the kind of error we control is the type I error rate, the significance level: the chance that you incorrectly reject the null hypothesis when the null hypothesis is true. The error we want to control in election auditing is concluding that the outcome is right when it isn't. If that's the error we want to control, then the null hypothesis needs to be that the outcome is wrong. So this is turning things a little bit sideways.

A result that's just a couple of years old — the paper came out in 2020 — shows that we can reduce that question, for a broad variety of social choice functions including plurality, multi-winner plurality, supermajority, Borda count, approval voting, instant-runoff voting, and a bunch of other things, to one coarse statistical question: I have a finite list of bounded numbers; is the mean of that list bigger than one half? And this means that any statistical method for solving that problem can be brought to bear. We want these to be sequential methods, which means you can take multiple bites at the apple. Normally, if I wanted to test a hypothesis, I would say: okay, I want to have this much power against this alternative, a sample size of 270 is going to be big enough, and so I'm going to take a sample of size 270 and make a decision.
If you use a method like that, you are not statistically allowed to say, "Oh, I didn't get the answer I wanted, so I'm going to look at another 100 or another 200." Your chance of making a type I error — erroneously rejecting the null hypothesis when it's true — goes up and up and up the more times you look at the data, if you keep doing that. But there is a class of methods called sequential hypothesis tests that allow you to do exactly that and still control the probability of making a type I error. I'm going to walk you through a little bit of the theory of this. The paper came out in 2020, and there's open-source software for it — I try to put things on GitHub as I'm working on them; I might not make them public immediately, but by the time I publish the paper I try to make them visible. So the code is there if people want it.
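To make the "peeking" problem concrete, here is a quick Monte Carlo sketch (my illustration, not something from the talk). It tests a fair coin — so the null hypothesis is true — with a fixed-sample, normal-approximation test at a 5% threshold, once with a single look and once peeking at five interim sample sizes; the overall false positive rate climbs well above 5% with peeking, which is exactly what sequential tests are designed to avoid.

    # Monte Carlo illustration of how repeated "looks" with a fixed-sample
    # test inflate the type I error rate.
    import random

    def peeking_false_positive_rate(looks=(100, 200, 300, 400, 500),
                                    z_cut=1.96, trials=10_000):
        """Fraction of trials in which a fair coin (null true) is declared
        biased at ANY of the interim looks, all at the same threshold."""
        false_pos = 0
        for _ in range(trials):
            heads = tosses = 0
            for n in looks:
                while tosses < n:
                    heads += random.random() < 0.5
                    tosses += 1
                z = (heads - n / 2) / (n * 0.25) ** 0.5  # normal approx
                if abs(z) > z_cut:       # "reject" at this look
                    false_pos += 1
                    break
        return false_pos / trials

    print(peeking_false_positive_rate(looks=(500,)))  # ~0.05: one look is fine
    print(peeking_false_positive_rate())              # noticeably above 0.05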
All right, so I want to explain how to reduce these different social choice functions to this one canonical question: is the mean of a finite list of bounded numbers greater than one half or not? What I'm going to do is define something called an assorter. An assorter sorts ballots — it puts them into categories by assigning numbers to ballots. Let's talk about a plurality contest: Alice and Bob are running against each other, and we'll assume Alice is the reported winner. I want to assign numbers to ballots so that the mean of the corresponding list tells you whether or not Alice won. I'll use an indicator function, 1 if candidate k has a valid vote on ballot i: it assigns the value one if that candidate has a valid vote on the ballot, and zero otherwise. Now I define the assorter. For telling whether Alice beat Bob, the number I assign to a ballot is: one if Alice has a valid vote, minus one if Bob has a valid vote, plus one, all divided by two. So if neither has a valid vote, this has the value one half; if there's a valid vote for Alice but not Bob, the value is one; and if there's a valid vote for Bob but not Alice, the value is zero. Every ballot is assigned the value 0, 1/2, or 1, depending on what the votes on the ballot are.

If the average of that list of numbers — one for every ballot — is bigger than one half, that means Alice got more votes than Bob, and Alice really won. If it's equal to one half, it's a tie; if it's less than one half, we have a problem. It's really straightforward. And the same thing can be generalized to any number of candidates in a plurality contest, including multi-winner plurality contests: you make this kind of pairwise comparison of each winner against each loser, and if the average is greater than one half for every comparison, then every winner really beat every loser, and you're done.

Now, normally in hypothesis testing you'd have to say: wait a second, you're doing a whole bunch of tests using the same sample of data — don't you have to worry about multiplicity? Aren't you running up your false positive rate? And the answer is no, because of the structure: you only confirm the election if you can confirm every one of these assertions; if you fail to confirm even one of them, you don't confirm the election. It's a conjunction that you're looking at, so there's no multiplicity problem in that direction.

This slide is notation for what I've already said in words: we look at the average value of this list of numbers, and if the average is bigger than one half, then Alice really beat Bob. This generalizes: if we have k winners and c candidates — so c minus k losers — then the outcome is correct if all of these averages are bigger than one half, and there are k times (c minus k) inequalities to check. The same approach works for proportional representation, like D'Hondt and Hamilton and whatnot — election schemes used in Europe; this has actually been used in Denmark.

For supermajority contests you can construct a similar thing; the way you assign numbers to ballots is a little different. If you need a fraction f of the valid votes to win, then a ballot with a mark for the reported winner gets the value 1/(2f), a ballot with a vote for one of the opponents gets 0, and a ballot with no valid vote for anybody gets 1/2. Again, the reported winner of the supermajority contest really won — really is Alice — if the average of those values is bigger than one half.

The same thing works for other social choice functions — STAR voting, Borda count, et cetera. In general, if you have a scoring rule where each voter can assign points to each candidate up to some maximum number of points, s-plus, then everything goes through: the number assigned to a ballot is the score Alice got on the ballot, minus the score the other candidate got, plus s-plus, divided by twice s-plus. That works, and it works for ranked-choice voting — instant-runoff voting — too. There is one social choice function used in political elections in the US and elsewhere for which we do not know how to do an efficient audit: single transferable vote (STV), a sort of multi-winner ranked-choice voting. That's another open question.

All right. So how do we establish these things statistically? Well, as is usual in science, you don't prove things are true; you prove things are false. The idea is: you start with the assumption that the average of this list of numbers is less than or equal to one half, and you test that as the null hypothesis. If you get compelling evidence that that is wrong, you conclude that the average is bigger than one half. So again, we're testing the complementary hypothesis — just like in the audit as a whole, we're trying to find evidence that the outcome is right, and we do that by disproving that it's wrong. Same idea. If you audit until either you have strong statistical evidence that all of the complementary null hypotheses are false, or until you've looked at all the ballots and know the right answer, then you've conducted a risk-limiting audit at risk limit alpha. You can do this for any number of contests simultaneously.

There are enormous improvements in efficiency if you have a way of keeping track of which ballots contain which contests. Georgia, I think, has generally been able to stick with one-page ballots — one-card ballots. In California, I routinely get a ballot that has six separate pieces of paper, six separate cards.
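Going back to the assorters defined above, here is a minimal sketch in code (my illustration, not the reference implementation from the 2020 paper; ballots are represented as simple dictionaries for the example):

    # Assorters: each maps one ballot to a number in [0, u].  The reported
    # outcome is correct iff the average over all ballots exceeds 1/2.

    def plurality_assorter(ballot, winner="Alice", loser="Bob"):
        """1 for a valid vote for the reported winner, 0 for the loser,
        1/2 otherwise (upper bound u = 1)."""
        if ballot.get(winner):
            return 1.0
        if ballot.get(loser):
            return 0.0
        return 0.5

    def supermajority_assorter(ballot, winner="Alice", f=2/3):
        """1/(2f) for a vote for the reported winner (who needs a fraction
        f of the valid votes), 0 for a vote for anyone else, 1/2 for no
        valid vote.  Note the upper bound here is u = 1/(2f), not 1."""
        if ballot.get(winner):
            return 1 / (2 * f)
        if any(ballot.values()):
            return 0.0
        return 0.5

    # Toy example: 55 votes for Alice, 45 for Bob.
    ballots = [{"Alice": True}] * 55 + [{"Bob": True}] * 45
    mean = sum(plurality_assorter(b) for b in ballots) / len(ballots)
    print(mean)  # 0.55 > 0.5: Alice really beat Bob in this toy population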
So I'll talk about ballot cards rather than ballots, because that's more general: a ballot consists of some number of cards. All right, so how do we do this now? How do we actually conduct a statistical test? This is where things get fun — for me, especially. There's this idea that you can take multiple bites at the apple, keep collecting more data, and not pay a penalty — not increase your chance of committing a type I error, of erroneously concluding that the null is false. The first methods for this were published in 1945 by Abraham Wald. He developed the methods during World War II, and they were seen as so powerful that they were a military secret; he was not allowed to publish until the war was over.

The key mathematical object that lies behind this — not the way he derived it, but in retrospect it would have been a much shorter proof — is called a nonnegative supermartingale, or a nonnegative martingale. What is a martingale? What is a supermartingale? It's a sequence of random variables Z_j — one random variable, another random variable, another random variable — with the property that they all have finite expected absolute value. And here is the interesting part: the expected value of the next term in the sequence, given what you've seen so far, is equal to the current value for a martingale, or less than or equal to the current value for a supermartingale. These objects originated in the study of betting and gambling. The idea is: if you have a fair bet, then what you expect your wealth to be after the next bet is your current wealth — your chance of winning, weighted by what you get if you win, equals your chance of losing times what you lose — so your fortune is expected to stay where it is after the next bet. This turns out to be an incredibly powerful constraint on how rapidly such a sequence can grow; we'll get to that result in a second. And then there's the nonnegativity: with probability 100 percent, each of these random variables is greater than or equal to zero.

All right, so this is the underlying theorem that all of the current stuff is based on — the most modern methods for risk-limiting audits are all based on this. If I have a nonnegative supermartingale, then for any alpha between 0 and 1, the chance that the sequence ever gets bigger than one over alpha is at most alpha times the expected value of the first term. This looks a little bit like Markov's inequality, for those of you who have seen that. What makes it different is that Markov's inequality applies to a single random variable: for one nonnegative random variable, the chance that it exceeds a multiple of its mean is at most one over that multiple. The miracle here is that this doesn't apply just to a single random variable — it applies to the maximum of a whole sequence of them. That's where the martingale property comes in.
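Stated compactly, the maximal inequality just described — usually credited to Ville (1939) — is: if \( (Z_j) \) is a nonnegative supermartingale, then for any \( 0 < \alpha < 1 \),

\[ \Pr\Bigl\{ \sup_j Z_j \ge \tfrac{1}{\alpha} \Bigr\} \;\le\; \alpha \, \mathbb{E}[Z_1], \]

whereas Markov's inequality bounds only a single nonnegative random variable \( Y \): \( \Pr\{ Y \ge c \} \le \mathbb{E}[Y]/c \).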
This next part is actually quite recent — it was, like, yesterday that I posted this version. It's a family of martingale tests that works for a really broad variety of auditing strategies and social choice functions. I'll talk about it a little bit; it basically constructs a martingale. Think of theta as the mean of that finite list of numbers — the thing we want to know is whether it's less than or equal to one half or not — and mu as the null value, in this case one half: the threshold. I'm going to be drawing ballots at random, and X_1, X_2, and so on are the values of the assorter for the ballots I pull. So I pull a ballot at random and look at it, and I apply the assorter: if it's a vote for Alice, the value of X is one; if it's a vote for Bob, the value of X is zero; and if it's not a vote for either of them, the value is one half, in the example we've been talking about. I observe some sequence of them, and X-superscript-j is basically my history up to time j — what I've seen up to that point. This mu_j is the expected value of the next draw, given what I've seen so far, computed under the assumption that the null hypothesis is true. And eta_j is basically an estimate of the actual value of the mean of the list, based on what I've seen so far — perhaps based on the reported results or something like that. I'm not assuming it's true: the performance of the test will depend on how I construct this, but whether the test is risk-limiting does not. So this affects how many ballots I'm going to have to look at, but it doesn't affect whether I'm rigorously limiting the risk. The idea is that eta_j can depend on the data up to just before the j-th draw occurs; it can include the election results, it can include beliefs, it can include whatever you like — but it can't depend on the future, only on what you've seen so far.

So I start the sequence off at the value one. How do I get the next value? I take the previous value and divide it by u, an upper bound on the assorter values on any of the ballots — in the case we've been talking about, the biggest number the assorter ever assigns is one, so u would be one. Then I multiply what I see by the ratio of the alternative to the null, and add the upper bound minus what I see, times a complementary ratio. In symbols:

\[ T_j = T_{j-1} \cdot \frac{1}{u}\left[ X_j \,\frac{\eta_j}{\mu_j} + (u - X_j)\,\frac{u - \eta_j}{u - \mu_j} \right]. \]

Depending on how you tune these etas and so forth, the performance can be really superb in different circumstances, depending on the social choice function, the nature of the error, and so on. This turns out to be a nonnegative martingale — it's relatively straightforward to verify — under the null that theta equals mu, and a nonnegative supermartingale if theta is less than mu, so Ville's inequality — from 1939, with even earlier antecedents — holds regardless.

All right. So here's a pseudo-algorithm for conducting an audit using this procedure. You first need to set the audit parameters. You need to pick what your risk limit is — and that's something that should be defined in statute, not in regulation, and not left to the discretion of the Secretary of State, because it's something that can in some sense be weaponized, right?
You could raise or lower the standard depending on which contest you're looking at. You need to know how many cards you're sampling from. You need to know your sampling method — I'll talk about that a little more later. You could imagine sampling individual ballots with replacement: you pull a ballot, you put it back, you stir things around, you draw again. Or, as I'll point out in a moment, you can sample without replacement. You could sample clusters of ballots — all of the ballots tabulated on a given device, or cast in a given precinct. You can stratify the sample; you could have counties draw their own samples independently of each other. You might want to sample batches with probability proportional to how many ballots are in the batch, because that tells you how much error there could be in the batch. There are all kinds of reasons you might want to modify the sampling plan. And you need to pick the rule for estimating the actual average based on the data you've seen so far — the eta. The one I've explored most is to start with the reported results and then update them based on what the audit actually finds; there's a knob for how much weight you give the reported results — how quickly you adapt to the data you see — and that can affect the performance.

Then you run the audit. You start with a sample size of zero; the test statistic starts at one; the sample sum of what you've seen so far starts at zero; and you track the threshold you're interested in — the mean the rest of the population would have to have if the null hypothesis were true. What do you do? You draw a ballot. You note that you've looked at another ballot. You apply the assorter to the ballot to get the current value X_j. If the mean that the remaining ballots would need to have under the null is less than zero, then you know the null hypothesis is false, so you can set the test statistic to infinity and reject the null. Otherwise, you update the test statistic using the formula we saw before. If you're sampling without replacement, you adjust the population for the fact that you've pulled this item out of it: if the null hypothesis is true, the population mean is one half, so the population total is the number of ballots dividedued by two. You subtract the sum of what you've actually pulled out of the population so far — adjusting the total for what you've seen — and then take the average over how many ballots are still left, the ones you haven't looked at yet. And that's how the threshold could end up being negative: if you had a thousand ballots in the population, you've looked at 300 of them, and their sum is already 600 or something like that, you know the population total can't be only 500, so you're done.
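Here is a compact sketch of that audit loop in code. This is my illustration, not the published implementation; the estimator for eta and the parameters d and eta0 below are simple assumed choices — tuning them affects efficiency, not the risk limit.

    # Sketch of the martingale audit loop described above: sampling assorter
    # values without replacement, testing the null "population mean <= 1/2".

    def martingale_audit(draws, N, u=1.0, alpha=0.05, d=100, eta0=0.55):
        """draws: assorter values in the order drawn (without replacement)
        from a population of N ballot cards, each value in [0, u].
        Returns the number of draws at which the null is rejected, or None
        (meaning: proceed to a full hand count)."""
        T = 1.0  # test supermartingale, starts at 1
        S = 0.0  # sum of assorter values seen so far
        for j, x in enumerate(draws, start=1):
            # conditional null mean: what the remaining cards would have to
            # average, given the sample, for the population mean to be 1/2
            mu = (N / 2 - S) / (N - j + 1)
            if mu < 0:      # sample sum already exceeds N/2: null impossible
                return j
            if mu >= u:     # null can no longer be rejected by sampling
                return None
            # estimate of the true mean: reported-result prior eta0,
            # shrinking toward the sample mean; kept strictly inside (mu, u)
            eta = (d * eta0 + S) / (d + j - 1)
            eta = min(max(eta, mu + (u - mu) / 1000), u - (u - mu) / 1000)
            # the multiplicative update from the talk
            T *= (x * eta / mu + (u - x) * (u - eta) / (u - mu)) / u
            if T >= 1 / alpha:  # Ville's inequality => risk limit alpha
                return j
            S += x
        return None

For example, shuffling a population of 10,000 assorter values in which 55% are 1 (votes for Alice) and the rest are 0, and feeding it to this function, typically rejects the null — confirms the outcome — after several hundred draws at a 5% risk limit.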
Okay. At any point you like, you can stop and conduct a full hand count. Why would you want to do that? Well, some of these sampling methods — the ones that involve retrieving individual ballots from stacks of ballots — are relatively hard to do, relatively expensive in human time and concentration. If I've got a stack of 350 ballots and I'm supposed to pull three particular ballots out of that stack — the 17th and the 90th and whatever — that process involves a lot of friction compared to just hand-counting all the ballots in the stack. So there's some threshold where, if your sample size gets to be a big enough fraction of the population, it's just more efficient to cut to the chase and do a hand count. And anything you do that increases the chance of a full hand count does not increase the risk — it actually decreases the risk of the audit. So that's a completely legitimate thing to do.

All right. What I've been talking about implicitly so far is: you pull a ballot, you look at it, and that's the only information you use. But that turns out to be a relatively inefficient way to audit. You can audit much more efficiently if you can tell how the equipment interpreted the very same ballot. That is a requirement on the voting system that not all voting systems can meet. For legacy equipment in particular, it often wasn't even possible to get subtotals for arbitrary batches. Most systems can output precinct-level totals — they can report by political geography. But if the jurisdiction doesn't also physically organize its ballots the same way — if you have vote centers, and the votes for, I don't know, some town's school board are spread across the entire county — then the reporting and the physical organization don't line up, and it just isn't helpful. Ideally, the system can tell you how it interpreted each piece of paper, in such a way that you can find the corresponding piece of paper and see whether the interpretation was accurate. The system's interpretation of an individual piece of paper is called a cast vote record. Some systems can export those; some can't. And there are problems around things like ballots cast in person in polling places — especially smaller precincts — because ballots tend to fall into the ballot box in the same order in which people voted, and if you keep track of the order in which people voted, you could make a pretty good guess as to which cast vote record goes with which voter and break the privacy of the ballot. So in some circumstances you may be limited to using the system's reported total for a batch rather than its interpretation of each individual ballot. It takes some logistical organization to be able to make that association.

Anyway, if you have that information, you can audit in a way that tends to be much, much more efficient. At some level, the difference is between looking at individual ballots without comparing them to their machine interpretation (ballot polling) and looking at individual ballots and comparing them to their interpretation (ballot-level comparison). The sample size you need for the first approach scales like one over the margin squared; for the second approach, it scales like one over the margin. So as the margin gets small, there are enormous efficiency advantages to the comparison approach. I'm not going to walk you through all of it, but even when you're using this auxiliary information, there is a way to turn the problem into an instance of the same statistical question: here is a finite list of bounded numbers; is the mean of the list bigger than one half or not?
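As a rough illustration of that scaling (ignoring constants — actual sample sizes depend on the risk limit and on the error rates encountered):

\[ n_{\text{polling}} \propto \frac{1}{m^2}, \qquad n_{\text{comparison}} \propto \frac{1}{m}, \]

so at a margin of \( m = 10\% \) polling needs on the order of 10 times as many ballots as comparison, while at \( m = 1\% \) it needs on the order of 100 times as many.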
We can deal with stratified sampling too — I want to leave some time for questions. All right: sampling designs. I spoke a little bit about this. Are we looking at individual ballots or groups? Do we stratify? There are a lot of reasons you might want to draw a stratified sample instead of an unstratified sample. In some states, like California, counties are basically empowered to do their own sampling; that's a stratified sample, with each county sampling independently. You may want to do it because of equipment capability: there have been a number of audits where the comparison strategy was used for vote-by-mail ballots, where you can associate a cast vote record with an individual piece of paper, but the strategy that doesn't involve comparing the human interpretation to the machine's interpretation was used for the polling-place ballots. One way to deal with that is to divide the ballots into those two strata, sample from each independently, and then there's a way of combining the information from the two to test these hypotheses.

Bernoulli sampling is an interesting idea. It hasn't been piloted yet, but there's a paper on it. It allows you to do the audit in a way that's sort of massively parallel in each polling place. It amounts to saying: I'm going to take a biased coin — say, for the sake of argument, one with a one-in-a-hundred chance of landing heads — and for every ballot, I toss the coin. If the coin lands heads, I audit the ballot; if not, I don't. That decision is made independently ballot by ballot, independently across polling places, independently across counties, et cetera, and you can still analyze the data in a way that leads to a risk-limiting audit. However, when you start, you don't know what the chance of heads needs to be for that coin to yield a big enough sample to have a reasonable chance of confirming the outcome, because you don't know the margin yet — you don't know how much error you can tolerate. So the rub is that you will probably have to oversample by quite a bit, or else you may have to go back and collect more ballots, and that can be very difficult once you've already packed things up, et cetera.
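A sketch of the Bernoulli-sampling idea in code (illustrative; the one-in-a-hundred rate is just the example from the talk, and in a real audit the seed would come from a public ceremony):

    # Bernoulli ballot sampling: toss a p-coin independently for each ballot.
    # Each polling place can run this in parallel; the union of the selected
    # ballots forms the audit sample.
    import random

    def bernoulli_sample(num_ballots, p=0.01, seed=None):
        """Return the indices of the ballots to audit."""
        rng = random.Random(seed)
        return [i for i in range(num_ballots) if rng.random() < p]

    sample = bernoulli_sample(100_000, p=0.01, seed=12345)
    print(len(sample))  # around 1,000 ballots, but the size is random

The drawback noted above shows up here: p must be chosen before the margin is known, so you either oversample or risk having to go back for more ballots.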
Then there's fully sequential auditing versus an escalation schedule. The methods I've been talking about allow you to stop whenever you want: you pull a ballot, you look at it, you make a decision about whether to stop. Or you can pull 100 ballots at a time, or 1,000 at a time, and make your decision anywhere along the way, provided you enter the data from the ballots into the calculation in the same order in which the ballots were selected. But in practice, what people want is for the initial sample to be big enough to have a good chance of confirming the election outcome if the outcome is right, so that they don't have to do lots and lots of rounds — every round is logistical friction. So it would probably be more typical to say: I want a 90 percent chance of confirming the outcome if the outcome is right; then I might take one more sample of some number of ballots if I'm not able to confirm at that point; and if that doesn't confirm it, I'm just going to do a full hand count. If you're taking a smaller number of chunks and you pre-specify the sizes of those chunks, you may be able to find something that is more efficient than a method that allows you to make a decision after every draw — you have fewer opportunities for error, which may allow you to be more precise in the decisions you make. There has been some work in that direction by Poorvi Vora, Filip Zagórski, and some other folks. Anyway, it's promising.

So what are some open questions? Some of these were actually just published by the CCC, the Computing Community Consortium. What are the limits on the class of social choice functions this SHANGRLA approach can handle — converting things to lists of numbers such that the outcome is right iff the averages of all the lists are bigger than one half? We know there are a bunch of things we can do it for. We know, for example, that if the social choice function depends on the order in which ballots are cast, then SHANGRLA doesn't work — there are actually social choice functions where, depending on the order in which you count the ballots, you get a different answer. This happens in some single-transferable-vote situations; I think Ireland has an election rule like that. In SHANGRLA you can write down sufficient conditions; are there always necessary and sufficient conditions? Can you find a set of assertions such that if all of them are true, the outcome is right, and if any of them is false, the outcome is wrong? Or is there room to optimize over sets of sufficient conditions? And I mentioned the round-by-round sampling question before.

All right, so what are some of the wrinkles? We still have about 20 percent of US voters not voting on paper at all, and a lot of states are adopting these universal-use ballot-marking devices, which basically means the paper is no longer trustworthy. There are not adequate rules for chain of custody, ballot accounting, pollbook reconciliation, eligibility determination, et cetera, in a lot of places. We need, among other things, a transparent, high-quality source of randomness — I'll actually show a slide about this at the end. One way some states have dealt with this is to use a cryptographically secure pseudorandom number generator seeded by 20 rolls of a 10-sided die, rolled in public by members of the community who are participating in the ceremony. That's a way to know that nobody's gaming anything. The PRNG algorithm is disclosed, so anybody can verify that the sequence of numbers really was generated from that seed. And before you do that, you need to specify how you're going to map those random numbers to individual ballots or groups of ballots — the mapping has to be set up in advance, or it could be gamed.
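Here is a sketch of that kind of construction (illustrative only — real audits use vetted tools, such as Rivest's "sampler" or the cryptorandom library, whose details differ; the 20-digit seed below is made up):

    # Public-randomness sketch: a SHA-256-based PRNG seeded with 20 public
    # rolls of a 10-sided die, used to pick ballots to audit.
    import hashlib

    def dice_seeded_picks(seed_digits, num_ballots, sample_size):
        """Map SHA-256(seed,counter) to ballot numbers 1..num_ballots,
        without replacement.  seed_digits: string of 20 die rolls."""
        picks, counter = [], 0
        while len(picks) < sample_size:
            counter += 1
            h = hashlib.sha256(f"{seed_digits},{counter}".encode()).hexdigest()
            ballot = int(h, 16) % num_ballots + 1  # simple mod: slight bias
            if ballot not in picks:
                picks.append(ballot)
        return picks

    # Anyone can rerun this with the publicly rolled seed and check the sample.
    print(dice_seeded_picks("73950162848205917364", 10_000, 5))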
There are also issues like: you go looking for a ballot and you're not able to find it — how do you deal with that? What are the methods for dealing with that? Is there a way to produce cast vote records for polling-place ballots while still preserving anonymity? There are states like Colorado that redact the cast vote records, which can introduce problems with the matching in auditing. At the end of the day, what we'd like laws and regulations to do is preserve the privacy of the voting booth — not the privacy of the vote; the vote is a public record, but the privacy of your choices — while ensuring that there's enough public information that the public can confirm that the audit did not stop too soon: that if it didn't go to a full hand count, it's because it didn't need to, not because something went wrong with the audit itself.

Maybe I won't talk about this — I'm suing, together with Free Speech For People, the Election Assistance Commission over some procedural violations involved in substantially weakening version 2 of the Voluntary Voting System Guidelines. And there are lots of open-source tools for this kind of thing.

At the end of the day, things are really pretty simple. We want to create a complete, durable, voter-verified audit trail — hand-marked paper ballots primarily, with some accessible accommodations. Local election officials need to care for that audit trail to ensure that it stays complete and accurate. And we need checks of the reported results against the paper, done in such a way that we have a large probability of correcting the result if the result is wrong.

This is Ron Rivest — the cryptographers in the audience, I'm sure, know who he is — at an audit in Napa County, California, back in 2010 or 2011 or something like that. These are translucent 10-sided dice; they were used to generate the seed for the pseudorandom number generator for the audit itself. Ron gave me these dice; I'm very proud of them — they're a prized possession. The fact that they are translucent rather than opaque is a security measure: if somebody loaded the dice, they'd have to load them with a material whose index of refraction matched the surrounding Lucite or whatever, or you'd be able to see that they were loaded. Thank you very much.

Fantastic — thank you so much for that. I guess we have a few minutes for Q&A, if you're up for that. Any questions from the room? Go ahead.

I'm sorry, I can't quite hear you. I apologize — my hearing isn't very good to begin with, and it's worse with a mask on. [The question was whether there's a web page with all of the papers.] Yeah, my CV is online, but I also have a web page where I try to keep most of my voting-related things, including a list of publications, testimony, slides from previous talks, and whatnot. I'm pretty findable on the web if you search for "Stark statistics Berkeley."

I'm going to pull a question from our online audience. Someone asks: have you found that there's any more appetite within statehouses to revisit voting system defensibility after the 2020 election? No. After 2016, the Republicans didn't want anybody looking at the security of voting systems, and after 2020, the Democrats don't want anybody looking at the security of voting systems. It just seems that if you're an elected official, whatever happened must have worked. So it's a hard thing. There's been a lot of legislation in the aftermath of the 2020 election, but most of it does not really address election security per se. It's more about — well, in Georgia, it's actually the ability of the state legislature to take over the operation of elections in individual counties, and things like that — things related more to access to the polls than to the trustworthiness of the election outcome, given who shows up.

Are there other questions from the audience? Yes: you touched on different election systems — you talked about runoff voting, and obviously here in the US we mainly have first-past-the-post elections.
I'm just curious whether, in your research, you've come across anything that suggests, or might indicate, that it's the nature of the voting system itself that causes untrustworthiness or not. I'm so sorry, I'm catching about one word in three. I guess my question is whether you've come across anything that's influenced by the way the system works, like in a first-past-the-post system. You're asking about the social choice function, right? Yeah.

So most of the arguments for things like ranked-choice voting and instant-runoff voting are about whether you can have viable third or fourth or whatever parties, whether you can make protest votes without throwing your vote away, things like that. The technical issues around auditing, at the end of the day, revolve primarily around whether the margin is going to be big or small, the margin suitably defined. And the definition of the margin, in some sense, is how many pieces of paper would have needed to be misinterpreted to change the outcome. For something like instant-runoff voting, the margin in the final elimination round turns out, just empirically, often to be the actual margin. But sometimes the margin is smaller than that, because in the elimination of candidates you can have smaller margins: if things had gone another way, it would have led to a very different elimination path and potentially to a different winner (there is a toy illustration of this below). So there are instabilities in some voting rules like that. Some ranked-choice voting can have very counterintuitive outcomes, where the candidate who wins is somebody nobody expected to win; it's hard to think about in some ways. But it isn't harder to audit, in the sense that it's the same procedure; the margin might just be smaller.

Now, a different issue comes up when you get into something like proportional representation, as in many European countries. There we've done a pilot, and we've done a bunch of simulations using the actual numbers from elections there and elsewhere. The vote shares for parties are relatively easy to audit. But within parties, for the individual candidates who end up getting seats, those margins can be really, really small, like a vote or two, and there's just no way you're going to be able to audit that statistically without looking at virtually every ballot; the margins are in effect too small. When you start dividing things up into smaller pieces, it's kind of natural that you'd get smaller differences.

The one over here. Thanks, that was great. You started by saying 50 percent of Republicans don't trust the election outcome, and in this age people do their own research by going through some website. Even if you do all this, how do we solve the problem of getting people to believe the results?

So that's a really good point. There are two pieces to the puzzle. One is, well, there's Onora O'Neill, Baroness O'Neill, a British philosopher, and she has a wonderful talk called "Trustworthiness before trust." What this is about, for me, is establishing trustworthiness first, and then figuring out how to turn trustworthiness into trust. Political scientists, for some reason, like to approach trust first, before trustworthiness. That makes me very uncomfortable. Part of this is figuring out how to communicate why these things work the way they do.
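On the instant-runoff point above, here is the toy example referred to a moment ago. It is my own construction, not from the talk: a contest whose final-round margin is three ballots, yet misreading a single ballot changes the elimination order and the winner.

    # Toy instant-runoff (IRV) election showing that the final-round
    # margin can overstate the true margin.
    from collections import Counter

    def irv_winner(ballots):
        """ballots: list of preference tuples, e.g. ('C', 'B')."""
        remaining = {c for b in ballots for c in b}
        while True:
            tallies = Counter()
            for b in ballots:  # count each ballot for its top remaining choice
                top = next((c for c in b if c in remaining), None)
                if top is not None:
                    tallies[top] += 1
            if 2 * max(tallies.values()) > sum(tallies.values()):
                return max(tallies, key=tallies.get)  # majority winner
            remaining.remove(min(tallies, key=tallies.get))  # drop last place

    profile = [("A",)] * 10 + [("B", "A")] * 7 + [("C", "B")] * 6
    print(irv_winner(profile))  # B: C is eliminated first, B wins 13 to 10

    # Misinterpret just one ("B", "A") ballot as ("C", "B"):
    altered = [("A",)] * 10 + [("B", "A")] * 6 + [("C", "B")] * 7
    print(irv_winner(altered))  # A: now B is eliminated first, A wins 16 to 7

The last round looks like a three-ballot margin, but the smallest number of misinterpreted ballots that changes the winner, which is the margin that matters for auditing, is one.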
So, why should anybody believe that by looking at a few dozen ballots, you can confirm the outcome of an election in which millions of ballots were cast? Well, first and foremost, you can't, unless the paper trail is trustworthy. Then you start thinking, well, okay, why don't you have to look at, say, 10 percent of the ballots? Let's think about tasting soup to tell whether the soup is too salty. How do you figure out how salty the soup is? You stir it up well, you take out a tablespoon, and you taste the tablespoon. And that's enough, and it doesn't matter whether it was a one-quart saucepan or a 50-gallon cauldron: a tablespoon is about enough. Now why is it enough? Well, that has to do with the concentration of salt you're trying to measure, and our human taste buds and whatnot. Similarly, you don't have to look at 10 percent of the ballots cast in an election; you have to look at enough. What is enough? With the soup, that depends on your tolerance for salt; here it basically depends on the margin. It's the same thing. So I try to convey some intuition through things connected to ordinary, everyday experience, to make it seem like, okay, maybe it's not crazy that I don't need to drink 10 percent of my 50-gallon cauldron of soup to know whether it's too salty.

Similarly, you can ask, how could a couple dozen observations tell you whether a coin is fair? Well, suppose we want to know whether a coin is biased in favor of tails; that would be like having an average that's less than a half. So I take a coin, I toss it 20 times, and it lands heads every time. Every time. That's really strong evidence that the chance of heads for that coin is bigger than a half, right? You don't need to do it a million times. So it kind of depends on what you see when you look.

I think we just have time for one quick one. Okay, thanks. I think this follows on the last question really well. You started off with a picture of our particular predicament and then this very convincing talk about accuracy. But to what degree is that upset about trustworthiness versus power? Is it about uncertainty, or are people mad about the outcome? If we think that more certainty, the kind that convinces everybody in this room at the Georgia Institute of Technology, is the solution, then more guarantees should reduce that upset. But it might be the opposite, right? You could imagine that more sophisticated guarantees actually provide more fodder for telling the big lie, or help mobilize people. So, are we solving the political problem by solving the statistical problem?

No, I don't think we're solving the two problems at the same time. But I don't think it is honest or fair to claim to solve the political problem without solving the technical problem first. I don't think we should be asking people to trust things that are not trustworthy; I just don't think that's honest. Now, the pushback that often happens is: you're telling us we can't trust the machines and we can't trust the computer scientists, so why should we trust the statisticians? Aren't we just going from one talking head to another? And it's like, well, it's different from trusting the guts of a device that can't actually be inspected, where you will never know what software it's actually running, and so forth. The theorem is out there; it's published; somebody can check the theorem. And the code for the algorithm is out there.
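As a worked version of that coin-toss intuition (my own arithmetic, not part of the talk): if the chance of heads is really at most one half, then the probability of seeing k heads in a row is at most (1/2)^k, no matter how large the "cauldron" is.

    # If P(heads) <= 1/2, the chance of k heads in a row is <= (1/2)**k,
    # independent of how many tosses (or ballots) exist in total.
    import math

    for k in (5, 10, 20):
        print(k, "heads in a row:", 0.5 ** k)
    # 5:  0.03125   (already below a 5% risk limit)
    # 20: ~9.5e-07  (overwhelming evidence the coin favors heads)

    # Smallest run of heads needed to fall below a risk limit alpha:
    alpha = 0.05
    print(math.ceil(math.log2(1 / alpha)))  # 5

That is the sense in which the number of ballots an audit must inspect is driven by the margin, how one-sided the evidence is, rather than by how many ballots were cast.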
Anybody can replicate that code in parallel, implement it themselves, and verify that it's doing what it's supposed to do. It's transparent and public, which ought to help.

But, a completely different story: I was one of the three auditors in Windham, New Hampshire, of a House contest there, where we did a forensic audit. It's kind of a long story, but the choice of auditors ended up being incredibly politicized. The way it was organized, they actually had to pass a law to authorize an audit at all. According to the law, the town of Windham, where the election was conducted, picked one auditor; the Secretary of State and the Attorney General together picked one auditor; and then those two auditors picked a third. I was the third auditor in that group. There were other people in New Hampshire, other voters, who were extremely upset with the town's choice of auditor. They were just absolutely sure that person was partisan, biased, et cetera, and they wanted their own auditor for partisan reasons, even though their preferred auditor knew nothing about elections.

So what's happening, as part of the same polarization of information and whatnot, is that expertise is now seen as disqualifying rather than qualifying. You know, if you know something about elections, you're suspect. I don't know how to get past that, other than sitting down and talking to people, and I've done as well as I can to demonstrate by my actions that I'm really trying to figure out what's happening. I'm not trying to get a particular candidate into office. I just really want us to be able to be comfortable that our votes are cast as intended and counted as cast. It's a hard problem. Elections are really complicated. One of the things that I think has really muddied the waters in the aftermath of 2020 is that a whole bunch of people who paid no attention to elections in the past are all of a sudden self-styled experts. They have no idea what's involved; it's an incredibly complex thing. And so people tend to misinterpret things: because of a term that's used in a manual or something like that, they think that they know what it means, but it actually means something a little bit different. Or they think that this hash must have been generated here, when in fact it's generated there, or whatever it is. It's really easy for those things to take on a life of their own, and it's very difficult, because there's just an infinite number of opportunities to make mistakes. I don't know how we get there. I mean, it would be great if we could do something in a bipartisan way. I'm just really seriously worried about violence in our streets in January of 2025, regardless of who wins.

What a note to end on. So, there are still a lot of questions both in the room and online. If you're online, please feel free to email the lecture series, and we'll make sure that your questions get sent over to Dr. Stark. And if you're in the room, you're welcome, of course, to hang out for a bit if you don't have to run away; you can also email us, and we'll make sure your questions get answered. Thank you so much for this excellent talk. This was absolutely wonderful hospitality; thank you all for your attention.