Yeah, thanks to the chairman for the introduction. It's great to be here in Atlanta, I'm very excited, honestly. I was not at the first conference in Paris, but you know, Web Audio is a very important topic also for me, my colleagues, and the company I work for. We are working for the IRT, which is the R&D institute of the public broadcasters in Germany, Austria, and Switzerland. And yeah, I will talk about our framework, a JavaScript framework for object-based audio rendering in browsers. Parts of it have been developed in the EU-funded project ORPHEUS. The project launched around two to two and a half years ago, more or less, and its main goal is to develop, implement, and validate an end-to-end object-based media chain. We have ten partners in the project, and you might know some of them; there are some very well-known companies and institutes from all over Europe dealing with broadcast audio, spatial audio, and such things.

So, even after the great keynote from Frank this morning, some of you might ask: what the hell is object-based audio? Maybe a simple answer: object-based audio is not a new format, as it is often equated to; it is rather audio plus metadata. Frank already talked about this, so I'll keep it rather short, but just to get a better understanding of what object-based audio is, we can compare it with how audio is produced and distributed today in the current broadcast networks, the so-called channel-based approach. Let's say we have an orchestra, for instance. We capture it with microphones and record the signals. These signals then go to a kind of post-production step, where EQing, filtering, and effects are applied, and then a mixdown is produced.
This mixdown is produced for one very specific target format, in this case a stereo format, and the assumption is that the user, the audience, has a loudspeaker setup that is exactly what was intended. In this case, if the user has a stereo setup, this works very well; but if the user has a different setup, in this example a 5.1 setup, it is no longer working. So the channel-based approach means making a production for one format, and adaptation afterwards is hardly possible.

If we look at the object-based workflow: during capturing, not very much changes, as the signals again go to the post-production step, but then we don't produce a mixdown. Rather, we combine the single audio signals with additional metadata such as position, semantic information, and so on, and this is the so-called audio object. This audio scene is then transmitted to the audience, and there a so-called renderer produces an audio signal that is suitable for whatever target devices are connected; this can be 5.1, stereo, headphones, whatsoever. The renderer also takes into account different situations, such as a misplaced speaker arrangement, or the user walking in the streets. So object-based audio is format-agnostic, and it basically provides accessibility: one could very easily offer multiple languages, or a speech-only version, within one production. Personalization is a very big point here, and interactivity.

OK, hopefully you got that. Now let's come to the bogJS framework, the reason I am here. On the right side you can see a diagram, an overview of the classes that are implemented in the framework and how they are connected to and with each other. It's basically a JavaScript framework; it's published under the MIT license on GitHub, it's written in the ECMAScript 5 standard, and it uses the Web Audio API.
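To make the renderer idea concrete: it maps each object's position metadata onto gains for the actual reproduction setup. The following is only an illustrative sketch of that principle for a stereo target, not code from the actual framework; the function name and the simple equal-power law are my own assumptions for the example.

```javascript
// Sketch: map one audio object's azimuth metadata (-90° = hard left,
// +90° = hard right) to left/right gains whose combined power is constant.
// Illustrative only -- not the framework's actual rendering code.
function panGains(azimuthDeg) {
  // Clamp to the frontal arc and normalize to 0..1 (0 = left, 1 = right).
  var az = Math.max(-90, Math.min(90, azimuthDeg));
  var x = (az + 90) / 180;
  // Equal-power law: gains follow a quarter cosine/sine curve,
  // so left^2 + right^2 === 1 for every position.
  var theta = x * Math.PI / 2;
  return { left: Math.cos(theta), right: Math.sin(theta) };
}

var g = panGains(0); // centered object: both gains ~0.707, power stays 1
```

A real renderer does the same kind of mapping for 5.1 or binaural targets, just with more output channels and a more sophisticated panning law.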
Yeah, we're also using some third-party libs. For the basic concepts: here on the right third of the screen you can see short code snippets showing how to integrate it in your HTML page. You don't need very much; you only need a scene file, and in the scene file all the relevant information is written and can be read from it.

The framework offers basically three options to load or play audio signals. It can be single audio objects that are rather short; these are files, which are then connected with an AudioBufferSourceNode and loaded with an XMLHttpRequest. It can be multiple single objects grouped together in a scene. And it can be one or more long objects; these can have any duration and can be represented as a file or a stream. Such an object is then connected with a MediaElementSourceNode, so an HTML5 audio or video element. The currently implemented object descriptors are gain, position, and interactivity.

On this slide, just to give you an impression of how this scene file looks and what information is written there. I don't want to go into the details, just to show you how it looks; for now it follows our own proprietary format.

So much for the basic concepts. Next: scheduling, synchronization, and timing. For this we use WAAClock; thankfully, it's a great library that helps us a lot with the timing here. There's also a UIManager class in the framework that provides some very basic functionality for a two-dimensional user interface.

And yeah, as I already said, for files and streams to be used as audio objects, the so-called MediaController class should be used. This MediaController class uses a MediaElementSourceNode, so basically you can connect an HTML5 audio or video element to the framework. But the problem we experienced here, and I'm guessing some of you know it, is the channel order of decoded multichannel streams or files.
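The channel-order problem just mentioned is what the test-tone trick described next solves: put a sine of a known, distinct frequency in each channel of a test file, decode it, and measure which frequency ends up where. Below is a rough sketch of that idea in plain JavaScript; it is illustrative only, not the framework's actual implementation, and the function names are invented. It estimates each channel's frequency by counting zero crossings and maps it back to the index it was authored as.

```javascript
// Illustrative sketch of channel-order detection via per-channel test tones
// (not the framework's real code).

// Estimate a sine tone's frequency from its samples by counting zero crossings.
function estimateFrequency(samples, sampleRate) {
  var crossings = 0;
  for (var i = 1; i < samples.length; i++) {
    if ((samples[i - 1] < 0) !== (samples[i] < 0)) crossings++;
  }
  // A sine has two zero crossings per period.
  return (crossings / 2) * (sampleRate / samples.length);
}

// Given the decoded channels and the known authored test-tone frequencies
// (e.g. channel 0 -> 440 Hz, channel 1 -> 880 Hz, ...), return for each
// decoded channel the index it was originally authored as.
function detectChannelOrder(decodedChannels, testFreqs, sampleRate) {
  return decodedChannels.map(function (samples) {
    var f = estimateFrequency(samples, sampleRate);
    var best = 0;
    for (var i = 1; i < testFreqs.length; i++) {
      if (Math.abs(testFreqs[i] - f) < Math.abs(testFreqs[best] - f)) best = i;
    }
    return best;
  });
}

// Helper to synthesize a test tone for demonstration.
function sine(freq, sampleRate, length) {
  var out = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    out[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return out;
}
```

For example, if a decoder swaps the first two channels of a three-channel test file authored with 440/880/1320 Hz tones, this sketch reports `[1, 0, 2]`, i.e. the authored index of each decoded channel.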
And that's why we implemented a class called ChannelOrderTester. On the right side you can see, again, two code snippets showing how to use it. The class helps you to automatically detect the order of the decoded multichannel tracks for the codec you use. For this, some test files are provided with different channel counts, and in each channel there is a sine tone at an increasing frequency, just to be able to detect the order of the channels after the decoder.

OK, a very short demo. [Demo] Can you hear it? OK. So this is, as I said, a very, very basic user interface, just to demonstrate the object-based approach itself. Just starting it... You can solo objects and play around, to give the user an impression of what object-based audio can do, basically. OK, great, it's not working; anyway, we don't have time. Coming back to the presentation.

Just a short summary: the bogJS framework has multiple options to load and play audio signals, it offers automatic detection of the decoded multichannel track order, and it renders scenes for stereo and binaural reproduction on more or less all modern browsers that support the Web Audio API.

And my plans for the future are quite something. I will implement a very important standard, ITU-R BS.2076; for those of you that are not familiar with these standards, that's the so-called Audio Definition Model, which will become very important in the future, I hope. The scene representation will then no longer be our own proprietary one, but rather a JSON representation of the ADM. And once it is available, I will implement the new spatial panner node, which will offer many more features for spatial positioning. Also, the streaming capabilities should be extended a bit. OK, that's it, thanks very much.

[Audience question] I'm not sure if I got the question. OK... I see, OK, got it, yeah. So the question was whether
we ever experienced, or think people will experience, problems with the amount of data, because of the potentially many channels, many objects, or audio signals that have to be downloaded, right? Yes, that's a very common question that pops up from time to time. It's not like we say we need this particular number; it's rather that we think, especially for this browser case, it should be possible to make something dynamically scalable, in that it just takes into account the computing capabilities of the browser or of the device. We don't have such experience yet, or numbers where I can say we need at least, I don't know, six objects or twelve; it all depends a little bit on the use case and on the piece of content you are transmitting. I hope this answers your question a little bit.

So you ask about personalization: it's not just start and stop, it's also positioning of the objects themselves? Of course, you can enable movements of objects, and you can also restrict them; it's not very much implemented yet in the scene format, honestly, and many things will come in the future, but basically the author could just say the user shall have these degrees of freedom for this object, or for those objects. So all of this is possible, and it will hopefully come with the ADM implementation. Thank you.