So today I'm going to be talking to you about our work on preventing control-flow hijacking, and in particular our new perspectives on this area. As mentioned, I'm currently at Purdue, in Mathias's group.

The root cause of control-flow hijacking attacks is memory corruption. Because C and C++ do not provide memory safety, attackers are able to modify the state of the program's memory and create so-called weird machines, such as gadgets, that allow them to exploit the program and perform arbitrary execution, including things like popping a shell and gaining complete access and control over the system. These vulnerabilities are common in practice: one survey found that there are about sixty vulnerabilities and thirty exploits a month. There are some famous ones I put the graphics for here on the slide, such as Heartbleed, Stagefright, Shellshock, and GHOST, and some of the modern ransomware attacks as well all rely on a memory corruption that leads to a control-flow hijack to take over the system and perform their malicious behavior.

Some more background data on memory safety and how common these vulnerabilities are: on this chart, which runs from 2002 up to this January, I think, when I pulled the data, you can see that stack vulnerabilities continue, but of course the major concern is heap issues, the blue bar that you can see is just increasing like crazy. So these problems are real in practice.

So what does an attacker actually do with a memory safety violation to corrupt the control flow of the program? In particular, they overwrite code pointers. You have a pointer that says "go here and execute these bytes next," and the CPU doesn't care: as long as the bytes it points to are executable, it will happily transfer control there and attempt to start executing them. So attackers can either inject code (though that's prevented by NX, or DEP, whatever you want to call it), or they can reuse existing code sections, and as long as they can maintain control over the application's control flow they can do Turing-complete execution. And we care about these things because C and C++ are ubiquitous, and in particular high-value, network-facing applications such as your web browsers and web servers are written in these languages and commonly exploited through these techniques. I found two thousand plus code-execution CVEs just doing a simple search in the MITRE database.

So of course the research community has attempted to address this problem, so I'll give you a brief bit of background on what I would consider the state-of-the-art deployed mitigation for preventing control-flow hijacking attacks, which is known as control-flow integrity, or CFI. What this is is a piece of static analysis that the compiler performs: for each indirect call site, so anywhere you're using a function pointer, a virtual call, and so forth, the jumps through registers that actually use these code pointers, it computes the set of targets that it thinks can be reached from there. These analyses started off being very conservative, allowing the beginning of any function, and then became more and more precise, to where they're now doing type matching based on the function prototypes and the C++ class hierarchy.
But this is all fundamentally over-approximate. They have to be conservative, because if they miss a target then the program crashes, and no one will use a security technique with false positives. The benefits are sort of twofold. One, it's extremely low overhead, about two percent, because they're just doing a very small set check on indirect calls, which are fairly rare as it is. Two, these sets are embedded in the code section of the program, so they're already read-only, meaning they aren't relying on any sort of state that an attacker can manipulate.

To see how this works with a graphic illustration, because everyone loves a picture, imagine you have a function pointer that's being used for a call. What CFI does is remove some set of edges from the target set; here I'm saying the red edges are the ones that are disallowed and the black edges are the ones that are still allowed. That's the abstract level. To see how this works more concretely, imagine that we have some C++ class hierarchy where the details are fairly irrelevant, other than the fact that each of these bubbles represents a class, and we're imagining that they all have an overriding definition of some virtual function. So the question becomes: at a virtual call site, how many targets does the mechanism have to allow? Well, if the call site statically has the type of the root class, it potentially has to allow all eight targets. If the call site is a little bit lower down the hierarchy, perhaps it allows fewer targets. But of course there's only one target that was intended at runtime, based on the type of the actual object. Say the object was actually this class here with the black dot: that one virtual function is the only one that should be allowed. So can we do things that are more precise than CFI and allow only this one true target rather than a set of targets?
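To make the CFI set check concrete, here is a minimal sketch in C++. All the names here are ours for illustration, not any real compiler's runtime: a real CFI implementation emits this check inline and terminates the process on a violation, whereas this sketch returns a sentinel so the behavior is observable.

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Sketch of a CFI-instrumented indirect call. The compiler computes, per
// indirect call site, the set of functions its static analysis considers
// reachable, and guards the call with a set-membership test.
using fn_t = int (*)(int);

int doubler(int x) { return 2 * x; }
int negate(int x) { return -x; }
int gadget(int x) { return x; }  // stands in for attacker-chosen code

// The target set for one call site (embedded read-only in real CFI).
const std::set<fn_t> kAllowed = { &doubler, &negate };

// Instrumented indirect call: refuse targets outside the set. A real CFI
// scheme terminates the process here; we return a sentinel instead.
int cfi_call(fn_t fp, int arg) {
    if (kAllowed.count(fp) == 0) return -1;  // CFI violation detected
    return fp(arg);
}
```

The point of the sketch is the cost model from the slide: the check is a small membership test against a read-only set, which is why deployed CFI is so cheap.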
So as I mentioned, CFI's tradeoffs are that it's fast and that there's no metadata, but what they're trading off to get this is security, because they are imprecise. To be a little concrete about this, we looked at Chrome and found one class hierarchy with a virtual function that has seventy-eight possible implementations, so this is a rather large target set in practice. And we're starting to see attacks that ask: even if we consider the CFI-restricted control-flow graph of the program, can we still do our attack? The answer is usually yes.

So the question for this talk is: can we provide greater security guarantees at the same performance overhead as CFI? We're going to explore doing this through partial memory safety techniques. Instead of trying to protect all of memory from an attacker, which comes with huge overheads, we ask what the critical pieces of memory are that we really need to be safe, and in particular we're going to look at code pointers: return addresses in the later half of this talk, and C++ virtual tables in the first half.

And so with that, I think we can go ahead and move into the first project here, which is called Object Type Integrity. This is a paper of ours that appeared at NDSS this year, and it's focused, as you might imagine from my introduction, on C++: how can we secure C++ dynamic dispatch so that when you have an object, only one virtual function can be called for it?
A little bit of background here, which we've sort of covered but is worth repeating: virtual calls have strict semantics at the language level based on the type of the object. Even though dispatch is polymorphic, there's a specific virtual function that is supposed to get called, and this information gets lost at compile time because it's fundamentally dynamic. You can allocate the object as one type and then upcast it to some base type and use that throughout, which is one of the core tenets of object-oriented programming, but it creates an information gap that's harmful to security.

So what do attacks on this virtual dispatch mechanism look like? I'm going to use this code example a couple of times, so it's worth understanding it a little. I have a simple class hierarchy here with a base class A that implements some virtual function, and then a class B that inherits from A and overrides this virtual function. Then I've provided a skeleton main method where I show an object of both class A and class B actually getting allocated; there's some amount of code in a vulnerable function that gives an attacker an arbitrary write, so they can overwrite arbitrary areas of memory; and then you call a little example dispatch function on the object and expect to get the correct virtual function call based on whether you passed in an A object or a B object. But this is not always the case.

To see why, consider the actual memory layout. Here in the middle, with these rounded-bottom boxes, I'm showing the objects on the heap: you have an object of class A, an object of class B, and their members. In particular, C++ adds a virtual table pointer to each object, pointing to the virtual table, which is just an array of function pointers giving the correct implementation of every virtual function in the class. So what does an attack look like? The attacker uses their arbitrary write to overwrite the virtual table pointer. They can point it to an existing virtual table, they can point it into the middle of some virtual table, they can point it to an arbitrary region of memory they control; CFI really does not constrain them here at all. And then you can do the same thing for the other object, so that when you get to the dispatch, suddenly you're arriving at the wrong virtual function. There was some work on counterfeit object-oriented programming (COOP) a couple of years ago that shows how attacks of this nature can be used to gain Turing-complete execution and pop shells.

So what are we going to do about this? We propose Object Type Integrity (OTI), which is a new class of defense policy for C++. Its fundamental insight is that by protecting objects and tracking information about them dynamically, we can be fully precise, whereas CFI was focused on call sites and static analysis. This is the key difference between the two: we're interested in protecting objects instead of call sites. Because we require objects to have a known type, we can detect the synthetic objects that the counterfeit object-oriented programming paper made use of, and we can imagine some extensions to this technique around protecting dynamic casts; some type-safety and use-after-free mitigations might be possible as well, again because we have dynamic information about every object and its type.
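The attack on the slide can be sketched in a few lines of C++. This is a hedged illustration, not the talk's exact code: it relies on the common Itanium C++ ABI layout, where the first word of a polymorphic object is its vtable pointer, and overwriting that word is undefined behavior in standard C++, which is exactly the gap being discussed. The `noinline` attribute is a GCC/Clang extension.

```cpp
#include <cassert>
#include <cstring>

// The example hierarchy: A implements a virtual function, B overrides it.
struct A { virtual int f() { return 1; } };
struct B : A { int f() override { return 2; } };

// The dispatch helper from the example; noinline keeps the compiler from
// devirtualizing, so the call really goes through the vtable pointer.
__attribute__((noinline)) int dispatch(A* obj) { return obj->f(); }

// Stands in for the attacker's arbitrary write: copy the vtable-pointer
// word from one object over another's.
void corrupt_vptr(A* victim, A* donor) {
    std::memcpy(victim, donor, sizeof(void*));
}

// Returns 1 before corruption and 2 after: the A object now dispatches
// to B::f, even though no legitimate code ever changed its type.
int hijack_demo() {
    A a; B b;
    int before = dispatch(&a);   // calls A::f
    corrupt_vptr(&a, &b);        // attacker redirects a's vtable pointer
    int after = dispatch(&a);    // now calls B::f
    return before * 10 + after;
}
```

Here the "donor" vtable at least belongs to a real class; in the attacks described above, the pointer can just as well target the middle of a table or attacker-controlled memory.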
Throughout this talk I'm going to refer to CFIXX, which is our compiler, built on top of LLVM, that implements the Object Type Integrity policy. To illustrate what this policy does, you can imagine our object A here in memory and its virtual table, and we're going to put a lock on this association between the object and its virtual table, or equivalently its type. And in the same illustration I used for CFI, the difference here is that only one target remains; it's not a target set but a truly precise, individual target.

So how does our mechanism work against attacks? Here you see the same code setup and memory layout, except we've added a metadata region where we have stored the correct virtual table pointer for each object. When the virtual table pointer is assigned by the object's constructor, we create a copy in our protected region of memory. What this means is that when an attacker overwrites the virtual table pointer and the dispatch happens, we're just going to use our protected copy instead of the modified one, so the correct virtual function gets called and the control-flow hijack does not happen. Is this clear to everyone?
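The bookkeeping just described can be sketched as follows. The names are ours, not CFIXX internals, and an ordinary `std::map` stands in for the protected metadata region (which CFIXX guards with MPX, as described next):

```cpp
#include <cassert>
#include <map>

// Sketch of OTI metadata: constructors record each object's legitimate
// vtable pointer, and dispatch consults the protected copy instead of the
// word inside the object.
std::map<const void*, const void*> g_vtable_metadata;

// Called by instrumented constructors.
void oti_record(const void* obj, const void* vtable) {
    g_vtable_metadata[obj] = vtable;
}

// Called at instrumented dispatch sites; nullptr means "no known type",
// which catches synthetic (counterfeit) objects never constructed at all.
const void* oti_lookup(const void* obj) {
    auto it = g_vtable_metadata.find(obj);
    return it == g_vtable_metadata.end() ? nullptr : it->second;
}

// Even if the attacker scribbles over the object itself, dispatch uses
// the recorded vtable, so the lookup result is unaffected.
bool oti_demo() {
    long fake_object = 0;              // stands in for a real C++ object
    long fake_vtable = 0;              // stands in for its vtable
    oti_record(&fake_object, &fake_vtable);
    fake_object = 0x4141414141414141;  // the attacker's arbitrary write
    return oti_lookup(&fake_object) == &fake_vtable
        && oti_lookup(&fake_vtable) == nullptr;
}
```

The security of the real scheme of course rests on the metadata region itself being unwritable to the attacker, which is the subject of the next part.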
All of this only matters if our metadata region is actually secure, and to provide this we leverage the Intel MPX hardware extension. For those that don't know, briefly, MPX provides a set of four bounds registers and a hardware primitive for upper and lower bounds checks on pointers, based on the bounds stored in the associated register. We have this metadata region, depicted here in yellow, at some arbitrary location in the memory space. What we observe is that if you rotate the memory space so that our metadata region is on top (or equivalently on the bottom, but we chose the top of the rotated virtual address space), you suddenly have a region, depicted by this arrow on the right side of the slide, which is everything that a normal write should be allowed to touch, that is, everything that is not our metadata region. So you can just do an upper bounds check to ensure that each write is not to our protected region. This has a couple of downsides. The main one is that we end up instrumenting essentially every write in the program, and we have to add new instructions for these, so there's some non-trivial code bloat, and the performance overhead, while not as bad as you might imagine, is still more significant than it needs to be. You would really like a technique that does the opposite, where only a special write instruction can write to the metadata region, instead of restricting all the normal writes in the program.

It's worth thinking briefly about how OTI and CFI could be combined, and whether you would even want to do this in the first place. Something I haven't mentioned is that OTI says nothing about which object actually reaches a call site. You might imagine that you have objects stored in a tree structure, say, and by manipulating how the tree is put together, an attacker can change which object is used at a call site, and so they can still cause a different virtual dispatch function to be used, which is troubling. What you can do is limit this a little by adding in CFI, so that at least at the call site the object has to be one in the hierarchy; it can't be a completely unrelated object. So you can mitigate this a little, and we did a couple of experiments with the two combined and found that it worked quite nicely: because they're concerned with very different things, the two defense mechanisms don't interfere with each other.

We performed both a security and a performance evaluation. The highlight of the evaluation section is that we were able to recompile Chromium, which is the open-source subset of Chrome (everything but a couple of media libraries, so it's a fully functional web browser for all intents and purposes), and we ended up with two percent overhead on the JavaScript benchmarks. For security, we came up with microbenchmarks that encapsulate five types of attacks that you might imagine. And I should mention what we're comparing ourselves against: the LLVM implementation, which is basically the state of the art; VTrust, an academic work that does some fancy analysis around the class hierarchies to limit the sets a little bit; code-pointer separation, which simply proposes separating the code pointers entirely and was part of the code-pointer integrity work, and we'll show where we actually improve on it; and then of course CFIXX, which is our own implementation.

The attack scenarios go down the left-hand side of the slide here. Let's say we inject an entirely synthetic virtual table: any of these defense mechanisms can deal with that. Similarly, if you're a little clever about the virtual table you inject, trying to match function prototypes with the ones in the real one, again this is detectable. Even if you start doing things like exchanging arbitrary existing vtables, so you don't respect the class hierarchy, all the defense mechanisms deal with this. Where the difference starts to appear is if you exchange existing vtables that are actually within the class hierarchy: as you might imagine, the CFI techniques cannot deal with this. And where we improve on code-pointer separation is that we have the notion that every object should have an assigned type, so you can't just create an object out of whole cloth under OTI, whereas you can under CPS.

We also of course ran the SPEC benchmarks for performance, because everyone does these regardless of how useful they might be in practice. There are seven C++ benchmarks in SPEC CPU2006, and only three of them actually do any reasonable number of virtual calls. I should mention that of the two bars here, the light pink one is our mechanism just by itself, and the dark red one is when you add in the checks to actually protect the metadata region. The highlight is that we end up with two percent overhead for just the mechanism, and about four percent if you add in the checks.
So in conclusion, I've talked to you about Object Type Integrity, which is a new class of defense policy, and we've shown a prototype implementation of it that we call CFIXX, based on the LLVM compiler infrastructure. This is a deployable defense policy in practice: we were able to recompile Chrome with it with negligible performance overhead. It can be combined with CFI to mitigate some more advanced data-flow-type attacks. As I said, this appeared at NDSS this year, and the code is open source and available on the group's GitHub. I encourage all of you to check it out if you're interested, and please file bug reports; we've already had a couple of research groups use it and file reports that were helpful in getting the open-source repository into a sane state. Are there any questions about this? Yes?

Yes, so VTV is, to my knowledge, equivalent to VTrust and the other CFI policies; they don't have any analysis that is stricter than the class-hierarchy-based things, as I recall. VTV was a relatively early CFI work, and I think some of the work on protecting C++ has progressed past there, and it's mainly been done in LLVM rather than GCC. Any other questions?

All right, then I'll move to the shadow stacks part of the talk, where we're concerned with protecting return addresses, as you might imagine. In the first half of the talk we were concerned with forward-edge control-flow transfers; here we're going to be concerned with the backward edge, that is, returns. So why do we care about shadow stacks? Return addresses are a commonly targeted code pointer. There are lots of them, they're all over the stack, and the entire class of code-reuse attacks got started with an attack on return addresses, to the point that the whole thing is commonly just known as ROP, for return-oriented programming. That gives you some intuition that this is a common thing in practice.
There are numerous exploits using this; for reference, I found five Google Project Zero exploits against return addresses in the last year. And there are no deployed defenses that actually restrict return-address values. We've got this lovely CFI technique that provides fairly good security and is actually being used in practice (Microsoft is building it into Edge, Google built it into the deployed versions of Chrome, and so forth), but it's really missing any protection for the backward edge. So why is this, and how can we improve the situation? Are there questions for this section of the talk?

The state of affairs is kind of interesting, because there's a fairly good technique that the research community has come up with, known as shadow stacks, to protect return addresses. It's the same sort of trick we played in CFIXX: you create a copy of the return address somewhere else, then compare it to the one on the normal stack. If they match, great; otherwise you've detected an attack.
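The prologue/epilogue protocol just described can be sketched in a few lines. This is our own simplified illustration, not the paper's implementation; in the compact register-based scheme discussed below, the top-of-stack index would live in a dedicated general-purpose register:

```cpp
#include <cassert>
#include <cstdint>

// A minimal shadow stack: a separate array holding only return addresses.
constexpr int kSlots = 1024;
uintptr_t g_shadow[kSlots];
int g_top = 0;  // stands in for the dedicated shadow-stack-pointer register

// Function prologue: mirror the return address onto the shadow stack.
void shadow_push(uintptr_t ret_addr) {
    g_shadow[g_top++] = ret_addr;
}

// Function epilogue: true if the on-stack return address still matches
// the shadow copy; false signals a corrupted return address.
bool shadow_check_pop(uintptr_t ret_addr) {
    return g_shadow[--g_top] == ret_addr;
}
```

A real implementation also has to bound-check the shadow stack and handle multi-frame unwinding (exceptions, longjmp), which the design discussion below returns to.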
The design space for shadow stacks is determined by the question of how you map from the normal return address to the shadow return address, and there are two different techniques. There's the parallel technique, where you use a fixed offset from the normal stack to the shadow stack. This is the same across all threads and so forth, so it's a little restrictive, in that you have to find a way to lay out your program in memory such that you can accommodate the fixed offset all of the time, but the benefit is that it's just an immediate value that you're adding, so it should be fairly quick: you're doing one arithmetic operation to find your shadow address. In contrast, you can imagine a compact shadow stack, where you maintain some sort of pointer to the top of the shadow stack and do the normal stack mechanics of pushing and popping as calls and returns happen. You have to do a bit of engineering to deal with, say, exceptions or setjmp and longjmp, where you can unwind multiple stack frames at a time, but these are not scientific problems, they're just engineering issues, and they can be dealt with, not always easily, but they can be dealt with.

The difference between the two is that compact stacks trade off a little performance for less memory overhead: the shadow stack is more compact because it doesn't have to be the same size as the normal program stack, it just has to have enough room for all the return addresses, whereas the parallel version allocates a full shadow stack the same size as the normal stack but gets faster accesses into it.

There have been about three main shadow stack design mechanisms out there within these two design spaces, and we're going to propose two new ones. For compact shadow stacks, you can imagine using a global variable, or the region pointed to by a segment register, to store your shadow stack pointer; for parallel ones, there's the technique of using an immediate offset. Our observation is that x86-64 has sixteen general-purpose registers, so we explore the impact of dedicating one of these to the shadow stack scheme so that you don't need a more expensive storage mechanism. For compact shadow stacks, you can use a dedicated general-purpose register for the shadow stack pointer, and what this gets you is performance: it's faster to access, it's always there, you don't have to load it or anything like that. For parallel shadow stacks, by storing the offset in a register the offset suddenly becomes thread-local, which means you can change it throughout the program, and you can imagine modifying pthread_create, or your favorite threading library, to set it appropriately based on where it puts each thread's stack, and finding space for the shadow stack accordingly.

So in addition to the five shadow stack mechanisms that we're going to evaluate, showing how they stack up against each other and where the sources of overhead are, there's also the issue of what you do in the function epilogue. Traditionally people have done a compare and conditional jump between the shadow return address and the normal return address, but this is a little slower than you would like, and we show that you can do better. Later on I'll discuss the implementation of just directly jumping through the shadow return address, skipping the comparison altogether, but first let's think about how we might do better than a compare and conditional jump.

Imagine that we have the return address and the shadow return address in two registers, call them r1 and r2. If you XOR these two things together, you get all zeros if they're the same and some number of one bits if they're different; this is just how XOR works, which I assume everyone here is familiar with. x86 provides a popcount instruction, which tells you how many bits are one in a given register, so we can leverage this: the result is zero if they match, and otherwise it's some number up to sixty-four. We can then shift this to the left. A number up to sixty-four easily fits in the high-order sixteen bits of an address, so if you shift the result over and then OR it into the actual return address, you've changed nothing if they matched, and otherwise you've set some of the high-order sixteen bits. This is to your advantage, because when you jump through this new register, if the poisoned bits are zero because the addresses matched, you haven't changed anything and the jump works; if there was a mismatch, you're going to get a general protection fault out of the processor. Suddenly you're leveraging the processor's normal flow to do the security check for you. I should note this general idea has been used in a couple of papers now; our contribution is really around how you do the math to set up the situation where the processor can do the check.

OK, so the bulk of this paper is evaluation, and our first evaluation point is the design comparison. We ran the SPEC benchmarks, whose results I'm showing here; we also ran Phoronix and Apache and so forth to look at more relevant user-space programs.
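The XOR/popcount trick reduces to a few lines of arithmetic. This sketch uses the GCC/Clang `__builtin_popcountll` intrinsic in place of the x86 `popcnt` instruction; the shift amount is our choice of a value that lands the 7-bit popcount inside the top 16 bits of the address:

```cpp
#include <cassert>
#include <cstdint>

// If the two addresses match, the jump target is returned unchanged; if
// they differ, the nonzero popcount (1..64) is shifted into the high 16
// bits, yielding a non-canonical x86-64 address, so the indirect jump
// through the result raises a general protection fault. No compare or
// conditional branch is needed.
uintptr_t fuse_target(uintptr_t ret_addr, uintptr_t shadow_addr) {
    uint64_t diff = ret_addr ^ shadow_addr;      // 0 iff the two are equal
    uint64_t ones = __builtin_popcountll(diff);  // number of differing bits
    return ret_addr | (ones << 48);              // poison the high bits
}
```

In the real epilogue this computation replaces the compare-and-jump pair, and the fault itself serves as the attack detector.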
The highlight here is that the compact scheme does better than people thought: it's not slower, and in fact the compact scheme with a register turns out to be faster by a small, but in this context significant, amount. Next comes the parallel scheme with the register, and then some of the other compact schemes. The high-level point is that the compact scheme, which saves memory, can be as fast as the parallel schemes if you're willing to dedicate a general-purpose register to it, which is nice because you're getting both a performance and a space savings.

Then there's the question of what the sources of overhead in a shadow stack generally are: if you were to try to optimize these further, where should you look? The first thing I would note is that changing the return instruction to actually be a pop and a jump adds about half the overhead. The other half comes from just maintaining the shadow stack information; there's going to be some cost for the additional writes to memory that mirror the return address somewhere else. When you put these two together, you get the overhead for a shadow return address scheme, by which I mean one where you just directly jump through the shadow return address without actually comparing it, and when you add in the traditional compare and conditional jump you end up with the red bar, which is the full overhead. So you can see that technologies like Intel CET, when it becomes available, which change the call and return semantics to naturally store the return address somewhere separate, have a large potential for optimization here, and you would expect hardly any overhead out of them, since a large source of overhead here is having to maintain the shadow stack yourself and change the return semantics. Yes, we recompiled everything for this one.
The compiler changes here are fairly small. For those familiar with LLVM, we went into the x86 backend, where there are a couple of functions for emitting the function prologue and epilogue, and we modified them to insert our shadow stack instrumentation as appropriate. So we have a very robust compiler, and we had no issues recompiling and evaluating software, which was a nice piece of this work.

The next thing I wanted to talk about is our epilogue optimizations: how much are we actually saving by doing this fancy bit arithmetic and using the processor instead of a conditional jump? You see the overhead numbers here. If you do the normal conditional jump thing, which I refer to as "comparison" in this graph, the mean is about five percent overhead. If you do one of the two optimized schemes (I described the fault scheme to you because it turned out to be faster than our alternate scheme), you pay about 3.8 percent overhead. So you've gained about twenty percent, going from five to a little under four percent, which is significant in this context, because there are billions of function calls over the life of a program, so any savings here matters. But it is still cheaper to just jump through the shadow return address, and this probably provides equivalent security: you're not going to crash early, you're going to allow the program to continue running, and you're going to return to the correct place, so this is probably good enough in practice. You might actually want the comparison if you're in some sort of debug setting where you want to know as soon as possible that an attack has happened.
The other piece of our evaluation is how you actually integrity-protect the shadow stack, because if an attacker can just modify your shadow stack instead of the normal stack, you've not really gained anything. You can do this through randomization, or information hiding, which is the cheapest thing to do, with the usual caveats that it is relatively easy to defeat. There's a new scheme called MPK, a hardware extension from Intel that stands for memory protection keys. The idea is that you can associate every virtual page with one of sixteen protection keys, and there's a new instruction to toggle the read/write permissions of all pages associated with some key. This is fairly nice, because you can imagine leaving your shadow stack read-only most of the time, and only when there's a function call do you briefly flip it to writable, put in the new shadow return address, and then make it read-only again. Unfortunately, this has terrible performance: there's an outlier here at four hundred percent overhead, and the geomean is something like fifty percent across all the benchmarks, so this is just not going to work in practice. We also evaluated MPX; it works a little less well across this more diverse set of programs than it did on the C++-only ones before, but it's still a relatively reasonable overhead if you actually cared enough, if you were in some sort of very sensitive setting.

This slide has a lot of text. The key takeaway is that what we would really like is some sort of privileged move instruction: we would really like to be able to say, for this region of memory, only a move instruction with this key associated with it can write here. That would take the best of both worlds. Thread-centric solutions like MPK have this nice idea of keys assigned to different pages of memory, but we would really like it to be more code-centric, so that it applies to a given instruction instead of being a property of the thread.

So in conclusion, the shadow stack we recommend for deployment is, as you might imagine, the compact register-based one. We rely on information hiding to protect the shadow stack, because in the common use case MPK is certainly too expensive, and even MPX is going to add a noticeable amount of overhead for users, so it would end up being disabled. We end up with a 3.65 percent performance overhead for our recommended scheme. This work is still under review, so it's not open source yet, but it will be once it's accepted, and it will go up on the group's GitHub.

Yes? Well, our story would be that you have CFI techniques being used to protect the other code pointers, and you're adding this in to beef up the protection for return addresses, so that you're providing at least CFI-level protection for all code pointers and not just the forward-edge ones.

So in conclusion, I've talked about two different partial memory safety schemes that offer relatively strong security guarantees against arbitrary reads and writes at reasonable overhead. In particular, I've shown that if we're willing to track some dynamic information about the program, either objects or return addresses, based on the semantics we care about, we can end up with these higher protection guarantees, and that how you actually protect the shadow stack or the metadata region remains an important and interesting research question. That concludes my presentation, and I would be happy to take any more discussion, either here with the group or afterwards individually. Thank you.
So we are not removing any gadgets, right: we haven't modified the code base at all, so if you can somehow take control over the application's control flow, any gadget that you would use for code reuse is still there. What we're trying to do is short-circuit the attack so that the attacker cannot ever take control of the application's control flow in the first place.

[Inaudible question.] So, I would argue that we need to be a little careful with our terms here. I would prefer that ROP be used only when you're attacking by overwriting a return address; I would prefer to use the term code reuse for more generic techniques that can target any sort of code pointer. And yes, you're still going to be able to jump to the middle of instructions, et cetera, but the core of this work is on preventing the attacker from jumping anywhere we don't want. The first step of such an attack, before you use the gadgets, is that you have to be able to control the application's control flow, which means you either have to overwrite a code pointer, which is what we're preventing, or you have to change the code pointer that gets used through some other data-only technique, which we have not prevented. But we're claiming that we protect the forward-edge code pointers, so this is already vastly restricting where an attacker can jump to, and with the shadow stack we're now providing even stronger protection for return addresses, because it uniquely determines where a return can go. So we're preventing these code reuse attacks from even getting started, by preventing them from diverting the application's control flow in the first place.

[Inaudible question.] I'm not aware that that has happened in our experience; there could be some self-modifying code scenario where it does. In our experience, the difficulty lies with stack unwinding techniques, so things like exceptions in C
plus plus, where you can jump up an arbitrary number of stack frames, and you can mimic that behavior with setjmp and longjmp in C.

[Inaudible question.] So we've tested the SPEC performance benchmarks and Apache, plus Phoronix, which is a set of media and other userspace libraries; there's a database in there and some other things, so it's fairly representative, or the goal of it is to be representative, of normal desktop applications and libraries. Are there any other questions?

[Inaudible question.] Yes, if that register ever got spilled, et cetera. There are some interesting corner cases: can you handle unprotected libraries, and in particular, if you jump to an unprotected library and it then calls back into protected code, what happens, and so forth. There are a few ifs, ands, and buts there; for our evaluation, we just recompiled everything and didn't worry about it.

[Inaudible question.] You have to guarantee the integrity of the pointer as well, right, or you have to worry about how this is designed: is it ever actually some place where it could be modified? So if you've removed a general-purpose register from the pool, then, without an existing control-flow hijack, at which point you're toast anyway, you would like to say that an attacker cannot modify this register.

[Inaudible question.] Yes, I agree: as soon as it's spilled onto the stack or somewhere else, you're very exposed.
[Inaudible question.] So this summer I actually had an internship where I was able to work on the DARPA SSITH program, which is asking exactly this question of how we can do hardware and software co-design. The project that I worked on, which I think is quite interesting, is a tag architecture: can we embed, from software, some information in the form of tags about memory and register locations, and then add some hardware component that knows how to compute policies based on those tags and allow things to happen or not? This is an interesting way to make the hardware truly flexible and, in some sense, programmable to enforce any software security policy you might come up with, while also accelerating things, because it's easier to track and propagate the tags and so on in hardware than with the sorts of things we're currently forced to rely on at the software level.

[Inaudible question.] Yes, that's our goal anyway.

[Inaudible question.] So without knowing the specific countermeasures you're thinking of, it's a little difficult, but in general I can tell you our design philosophy for how we tried to avoid that problem, which was simply to be as simple as possible: to look for semantics that were very easy to identify and that ideally happened rarely, so that we could track some degree of dynamic information about the program. One barrier that CFI techniques run into is that you end up with aliasing of pointers, so they have a very hard time computing the target set at a particular program point, and when they start trying to do flow-, context-, and data-sensitive analysis on these programs, it just explodes out from under them, which is part of why the target sets are still so large and the analysis is still so imprecise.

Well, thank you again, everyone.