[00:00:05] >> Oliver is a pioneer working at the intersection of AI and human-computer interaction, and he specializes in designing and building systems, specifically to help understand human behavior and respond to it using data, and I say this, from sensors. He has had a really fruitful career: he has designed systems and managed teams at [00:00:30] PARC, and he has also founded his own companies. As for recent work, he has a lot of experience working on DARPA-funded projects. Some of the notable ones include work that was part of the DARPA PAL program, since commercialized as Siri and Meshin.com, and also work on human personality prediction as part of the DARPA ADAMS program, since commercialized as part of IBM Watson. In 2017 Oliver joined Adobe as AI architect, building the intelligent systems used by millions of users. Without further ado, I'm going to pass the mic to Oliver for his talk about contextual AI. Let's give him a big round of applause. [00:01:13] Thank you, Paula, for the introduction; I'll try to live up to that. So thanks for having me, and I apologize for my voice, but you know, it's cold season, right, and I flew here from Silicon Valley, and in the plane you get all types of germs, right, so I arrived this morning with a little frog in the throat. [00:01:38] Yes, I want to talk about contextual AI, the next frontier towards human-centric artificial intelligence. That's something that's very important to Adobe, but this is also based on some of the work I did before I joined Adobe. So here is a little bit of the outline: I want to motivate what human-centric AI can actually be, what do we mean by that and why is it important; then I want to talk about the three waves of AI, contextual AI in particular; then I want to give a little bit of an overview of what we do in AI at Adobe; and then offer some conclusions. Now to the motivation. You guys are studying machine learning, right, so you have certainly seen this: there is some criticism that AI today and tomorrow is mostly about mimicking intelligence. We have very powerful tools that can identify certain classes of objects, separate data, classify data, but it's not real intelligence as we understand it, human intelligence, where we have a very deep contextual understanding. This is also reflected by [00:02:59] the recent investments: the two billion dollars that DARPA announced at the end of last year as investment in the next wave of AI technology. What we see right now in deep learning and statistical learning is very, very powerful, and it's kind of [00:03:17] integrating into everyday life in a way, but there's more to come with regard to mimicking or imitating human intelligence. This is kind of a nice slide, because it reflects some of the technologies today that we use, that have been rolled out, that use AI technology: Alexa, the Google Assistant, face authentication, drones, robots, electric cars. All of these are actually powered by AI, but only 33 percent of users believe that [00:04:00] they're using AI-enabled technology. That's based on a recent study run by Pega, OK, so.
In reality, 77 percent of users use an AI-powered service or device. So what does that actually tell us? Well, it tells us that there is AI that's kind of embedded in more and more devices, but the users are not aware of it, which also means that they're not really interacting with the AI itself; it's something that's hidden in some features and so on. But moving forward, there is a need for AI to interact more with the users. [00:04:41] We don't want a vision where AI is in everything and then it's kind of overruling us and being more intelligent than us. That's the vision that some of my colleagues kind of advocate, saying, well, eventually it will kill us all. I don't believe in this, because there are [00:05:02] several reasons why this is not right. First of all, we're all humans; we don't want to be outsmarted by technology. But then also, economically, I believe it doesn't make sense, because AI technology is very powerful, but it has certain strengths and weaknesses that are different from and complementary to human intelligence. If humans and AI can actually interact and form a symbiotic system, it's economically much more sound. What I want to convey here with this image is that you really have the human and the AI working together: you are combining human intelligence, what humans are actually good at, with AI and machine learning technology, what it is good at, but you do this in a way that the two interact and form a real symbiotic system. Now let me step back a little bit and talk about the [00:06:09] waves. If you want to make this happen, this symbiotic system, where are we right now, what has been happening, and what's the future for AI? Can everybody hear me OK? I didn't get any complaints. [00:06:34] OK, that's good. So, the three waves of AI: the first is handcrafted knowledge, then statistical learning, and then contextual adaptation. Contextual adaptation refers to the third wave, the wave of contextual AI that I want to talk about. Now let's dive a little into the different waves. Handcrafted knowledge: what does that mean? That's the wave that was around in AI technology from the 1980s to about 2000. It enables reasoning over narrowly defined problems, but there's no learning capability and poor handling of uncertainty. I think most of you can't remember this, you're too young, but you had the space missions in the 1980s, and you had expert systems automating all the processes that had to happen, because you could not have a human sitting in the rocket. Well, there were some passengers in there, but many things had to happen very quickly and automatically, so you had to have an expert system that does certain shutoffs and certain automations in that process. The famous one that was used is the CLIPS expert system, which was based on rules that the engineers defined, completely specified in advance.
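As a toy sketch of this first-wave idea (hand-written rules fired by forward chaining, with no learning anywhere), here is a minimal rule engine in Python. This is not actual CLIPS syntax, and the facts and rules are invented purely for illustration:

```python
# Toy first-wave "handcrafted knowledge" AI: a minimal forward-chaining
# rule engine. NOT CLIPS syntax; facts and rules are invented examples.

facts = {"fuel_pressure_low", "engine_stage_1_active"}

# Each rule: (name, set of required facts, fact to assert when it fires)
rules = [
    ("shutoff_valve",  {"fuel_pressure_low", "engine_stage_1_active"}, "close_valve_a"),
    ("abort_sequence", {"close_valve_a", "fuel_pressure_low"},         "initiate_abort"),
]

# Forward chaining: keep firing rules whose conditions hold
# until no new facts can be derived.
changed = True
while changed:
    changed = False
    for name, conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            print(f"rule {name} fires -> {conclusion}")
            facts.add(conclusion)
            changed = True
```

Every behavior here was typed in by an engineer in advance; the system can only rediscover what its rules already encode, which is exactly the limitation the talk turns to next.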
[00:07:51] Another example is the chess computer in the late 1980s, which culminated in 1997 with Deep Blue beating Kasparov. That was also all rule-based systems: you have certain rules that chess is based on, and you use these rules to define what should actually happen, and it was a very constrained kind of problem. So the engineers create sets of rules to represent knowledge in a well-defined domain; [00:08:34] the structure of the knowledge is defined by humans, and the specifics are explored and executed by the machine. And that in itself was already quite successful, until around 2004 and 2005, when handcrafted knowledge kind of hit its limits. Some of you might know this, you might maybe even have learned this in class, I don't know: the DARPA Grand Challenge, 2004 and 2005, which was actually the starting point of all these self-driving cars. The challenge was to have an autonomous vehicle cross the desert, and the prize to do that was, I think, $1,000,000. [00:09:17] In 2004, the first year, well, no team succeeded in doing this. They handcrafted the rules of how the car should react and so on, computer vision algorithms and such, but there was not enough generality in the models to cover and recognize all the things that might come up in a random desert that the car actually had to run through. Then, in 2005, five teams completed the challenge. So what changed? What happened was actually the second wave of AI: we moved from handcrafted knowledge to statistical learning. Here the engineers create statistical models for a specific problem domain and train them on big data. So you have big data of images from the desert and so on, you train on this, and you get a model that is much broader. Engineers don't specify what the rules are; the structure that is in the data defines that, and the engineers just build the data pipeline and configure the models that are then trained. That's behind the recent success of AI technologies: statistical learning, voice recognition, DeepFace face recognition, Jeopardy, if you remember that, [00:10:39] AlphaGo; all these things are actually powered by statistical learning. It gives you nuanced classification and prediction capabilities, but the problem is that there's no contextual capability and minimal reasoning ability. You're training on big data, and the model, not the engineer, carries the rules.
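A minimal sketch of that second-wave workflow, with synthetic data standing in for the desert imagery: the engineer only configures the pipeline, and the decision boundary is induced from data rather than written down as rules.

```python
# Second-wave sketch: no hand-written rules; the engineer configures a
# pipeline and the "rules" (decision boundary) are learned from data.
# Synthetic data is a stand-in for real sensor imagery.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                  # boundary induced from the data
print("test accuracy:", clf.score(X_test, y_test))
```

The two interleaved half-moons are, incidentally, a toy case of the entangled manifolds discussed next.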
So why does it work so well? [00:11:02] Well, you have the manifold hypothesis. You say that the natural data that you have, let's say you want to recognize certain obstacles in the desert, forms structures in a lower-dimensional embedding space: the manifolds. Each manifold represents a different entity, so it can be different types of obstacles in the desert and so on, and understanding the data comes from separating these manifolds, so that you can see what is an obstacle and what is not an obstacle. I have put a graph on the right side here: you have the different data points, and you separate them by saying, well, I just need training data, and by projecting this into a subspace I can clearly separate them. And why does this work so well with deep learning? Well, because in deep learning you have all these different layers, and by having the different layers in a deep neural network, you are stretching the data into new dimensions that enable the entangled manifolds to be isolated. You're just adding layers and layers and layers, and each layer stretches and squashes the data space until you can cleanly separate the classes. So you're taking the data, you are squashing it in the different dimensions, you get a classification, you get an error signal, and then you retrain the whole pipeline across the layers. And what's interesting [00:12:32] is that the engineer who, in the beginning, defined these rules, the machine learning programmer, is now actually designing the network structure. Before, he would define the rules for the data directly; now he's just defining the pipeline, the network structure, with experience and by trial and error. You're just saying: these are the components of the pipeline that the data will actually run through, and the data defines it all, and that makes, I think, the core of the success. And the layering of neural networks provides even further depth. You have, in this example from 2015, the deep learning model that classifies an image and generates a caption for the image: you have a convolutional neural network first, which defines what the image content is, an image representation, and then you couple this with a recurrent language neural network, and you get the caption. And by layering more and more, you can actually get more and more fine-grained classifications. The problem is, obviously... yes, it's a recurrent [00:13:43] neural network. The problem with doing this is that you get statistically very impressive results, but they are individually quite unreliable, and I'm sure you have all seen this famous panda example. You can actually fool these systems quite well, because it is unclear, across all these different layers, what exactly is learned and what the features are that the model is trained on. You could just add a little noise, and then a panda becomes a gibbon, even though visually it looks exactly the same; you would not see a difference.
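The panda-to-gibbon trick referred to here is the fast gradient sign method (FGSM, Goodfellow et al. 2015). A minimal sketch, assuming a stock torchvision classifier and a random placeholder tensor in place of the actual panda photo:

```python
# FGSM sketch: nudge the input along the sign of the loss gradient so
# the prediction flips while the image looks unchanged to a human.
# Model choice, epsilon, and the placeholder input are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, true_label, epsilon=0.007):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One step in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# x: a 1x3x224x224 image tensor in [0,1] (random placeholder here);
# y: its true ImageNet class id (388 = giant panda).
x, y = torch.rand(1, 3, 224, 224), torch.tensor([388])
x_adv = fgsm(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # labels often differ
```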
[00:14:36] And even more interesting, a lab at MIT, LabSix, did this in 2017: you can actually fool Google's vision engine. You start off with a picture of rifles and you mix in some noise, and then you... you're not retraining anything, you're basically varying the noise that you are adding, and then you get it classified as a harmless helicopter or rotorcraft vehicle, when it's actually guns. And I think I even have a demo video on this, a little movie that they had up there, where it's kind of just varying the pixels in the picture. You're starting with a dog picture, and then you have the vacation picture, and you mix it in, but in a way that the deep neural network keeps the classification of a dog, and you're maximizing for that, while for you, visually, it's not really changing at each step, because you're gradually mixing it in. What that tells you is that you're not necessarily training on features that are visually perceivable for the human in the image; you're just kind of training on very big data. Another problem is the bias that you might have in the big data that you train on, which is also reflected in the model that you actually train. So here, this was a problem in Google's Allo messaging app in 2017: if you put in a toy gun, well, the recommendation that you get for the next character to put in is a man with a turban. Is there... you look at... is there a pointer here? Probably not. [00:16:11] Maybe I can point with my mouse... OK. So you see here: you get a man with a turban if you just put in a gun, and that's not an appropriate kind of recommendation to get, but that's what you probably trained [00:16:25] from the model. Similar here: gender bias in Google Translate. I'm not speaking Turkish, but I assume that it's a gender-free language, at least for certain things. So you can say 'she is the doctor, he is the nurse,' you translate this into Turkish, which doesn't have gender here, and then you take the same Turkish input and translate it back, and the genders are reversed: obviously, 'he is a doctor' and 'she is the nurse.'
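The same kind of bias is easy to probe in any off-the-shelf trained language model. A quick, purely illustrative check (the model choice and prompts are my own, and results vary by model and version):

```python
# Probing gender bias baked into a pretrained masked language model.
# Illustrative only: outputs depend on the model and version used.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for profession in ["doctor", "nurse"]:
    out = fill(f"[MASK] is a {profession}.", top_k=3)
    print(profession, [o["token_str"] for o in out])
# Typically "he" ranks higher for doctor and "she" for nurse,
# echoing the Turkish round-trip translation example above.
```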
[00:16:53] You know, it tells you what bias you have in the training data that you actually trained on. So, to summarize statistical learning for you: you see it's very successful, but restricted to niche applications, with limited contextual adaptation. I have tried to draw this here a little bit: you have a family of face recognition and face identification applications, and these are applications that are very powerful, but they're not understanding what exactly the user is doing and what other kinds of biases might be in there. And with the third wave of AI that's coming up right now, in different research programs and so on, what you try to get to is contextual adaptation and intelligence. The system constructs contextual explanatory models for classes of real-world phenomena, meaning you're trying to actually have a representation that is richer and models the context around the user to do classification and machine learning. What I've tried to depict here, and I really like this photo (obviously this is not a real application that's out there right now), is someone who's doing a mechanical task, and a robot that exactly understands what the current workflow is, where the user is in that workflow, and what the need of the user is. The user here doesn't even need to look at the robot, but knows he can put out his hand and the robot will provide the right part that the user needs. It's quite [00:18:29] the vision to follow. Now, what are the properties of this third wave? At Adobe we believe there are certain properties, kind of requirements, that we should have for contextual AI, and I want to call out four of them here. The first one, which is very important, is intelligibility: the AI system must be able to represent to its users what it knows, how it knows it, and what it is doing about it. And that's very important: if there's no aspect of the trained model that is inspectable by the user, that the user can relate to, then that's a problem, because there's very likely to be an error and some debugging to do, and if the user doesn't understand it, and you need a machine learning expert to debug your product, that's not really good. Then what you also want is that it's adaptive; you have to have a certain adaptivity. What that means: say I have a smart home environment and a model that learns my preferences and my behaviors. If I now visit my mom, well, I want to take my model with me, and it should work in my mom's home like in my home, meaning the home has a different configuration, there might be different sensors there, and there might be certain preferences I have that I don't want my mom to see, and it should know that it should actually adapt. This is very challenging; it's clear that right now we don't have something like that, but we have to have a certain transferability across the different worlds.
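In current practice, the closest tool for this kind of adaptivity is transfer learning, which the talk names in a moment. A minimal sketch, with a generic image task standing in for the smart-home scenario: keep a pretrained backbone's knowledge and retrain only a small new head for the new environment.

```python
# Transfer-learning sketch: freeze the pretrained backbone (the
# transferred knowledge) and train only a new head for the new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():        # freeze the transferred knowledge
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)  # new head: 5 new classes

# Only model.fc is trainable now; fine-tune it on the (small) dataset
# from the new domain with a standard training loop.
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```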
[00:20:15] Then the third one, which is very central, is customization and control. If you roll out any AI system that is critical, the user has to have the last word, the control, and also the full capability of customization: if there are certain features the user doesn't like, the user must be able to turn them off. And this is obviously connected with the first one, because if I don't have intelligibility, if I as a user don't understand what the system is doing, I cannot customize it or even override it, because I have no clue what the system does. So I have to have a certain amount of intelligibility to actually do that. And the last one is context awareness: it enables all of this by saying, well, the machine needs to be able to see; the machine needs to have the same pair of eyes as I, the user, have. Obviously not literally the same pair of eyes, but if I have a system that is supposed to provide me with services that are based on audio, well, if it doesn't have a microphone, it's not going to work. It's kind of obvious, but often we have systems that just don't have enough sensors or perception. So this is the human perspective, the challenges, the requirements. The AI perspective is the models and the methods, and I think you guys learn this in class, I'm sure you know it. Intelligibility: that is explainable models and explainability, a big hot topic in AI right now, how do you make models explainable. Adaptivity: transfer learning. Customization and control: user preference modeling and model retraining. And context awareness: sensing and sensor fusion. These are just some of the models and methods that are used there. In terms of technologies, what we also use at Adobe, and I think we want to invest quite a bit in this, is ontologies and taxonomy building. [00:22:11] A taxonomy or an ontology is in itself something quite old, it's like 30 years old, all these semantic-web-like things, and it has always been a little dreadful to build all these things, but eventually we will need it to have a common ground between the users and the AI system: you kind of have to have a model with reference points that the user can refer to, and the machine as well. Then, deep NLP, language understanding, and human intent modeling: what does the user actually really want to do, what does the user really need in the current situation? And then finally, new HCI paradigms, a very important one in my current role at Adobe. Say you want to roll out a system: you all know Alexa, and it's very cool to interact with voice, but would you do this if you are with colleagues in a workspace? You talk to your computer, maybe, but it's disruptive if your colleague also talks to the computer; there might be interference, and it might not really work very well. So it's kind of like: what's the right paradigm? Maybe voice sometimes, but not always, and gestures might be good, and what are the new paradigms where you mix all these things together?
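To make the common-ground idea concrete, here is a hypothetical fragment of such a taxonomy in Python (the tag names are invented for illustration, not Adobe's actual schema), where a broad query tag also matches images tagged with any of its descendants:

```python
# Hypothetical image-tag taxonomy as a common ground between user and
# system: a query on a broad tag matches all descendant tags too.
TAXONOMY = {
    "hobbies-and-leisure": {"holidays": {}, "entertainment": {}},
    "people": {"family": {"couples": {}}, "women": {}},
    "feelings": {"happiness": {}},
}

def descendants(tree, tag):
    """Yield all tags at or below `tag`; tag=None means everything."""
    for name, sub in tree.items():
        if tag is None or name == tag:
            yield name
            yield from descendants(sub, None)   # everything below
        else:
            yield from descendants(sub, tag)

images = {"beach.jpg": {"holidays", "family", "happiness"}}

query = set(descendants(TAXONOMY, "hobbies-and-leisure"))
hits = [img for img, tags in images.items() if tags & query]
print(hits)   # beach.jpg matches via its 'holidays' tag
```

The point of the shared structure is that both sides refer to the same named levels of abstraction, which is what the stock-search example below relies on.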
So far so good. Now I want to give you a little bit of a glimpse of what we do at Adobe. [00:23:40] Adobe's AI is called Sensei; Sensei is Japanese for 'teacher,' and the view of Adobe on AI is that it should be something that is helping the user, enhancing the user, teaching the user towards the goal that the user has. So something that's not really automating, but really more kind of assisting the user. [00:24:06] And the theme is centered around what we call creative intelligence. So what does creative intelligence mean here? It's augmentation of the creative skills and capabilities, meaning you have a creator, the creator has certain capabilities, and they should not be automated or replaced, but just enhanced. You want to optimize workflows, and you want to enable auto-fixing, editing, replay, auto-styling, auto-curation; you want to give the user a new set of tools and enhance the workflows that the user has. So in practice, what does that mean? You have deep learning for content understanding. So here, as you might know, we have Adobe Stock, a very, very big [00:24:55] image database, but we also want to see into the image content, so you can do some image classification. First of all, it's useful, you know, to detect the faces; that's the classic, right, where you draw the boxes: OK, detect the faces, tag them, there is a smiling family and so on. But then you can go into a space that's more human-understandable, more contextual, by saying: well, I want to detect the emotions in the picture, I want to tag happiness, laughter, and joy, so I want to go into dimensions that are much more relatable for the user. But then I also want to go into something that's an even more human view of this, which is aesthetics. I want to say, well, how aesthetic is that image? Are there balancing elements? Is there color harmony? Content, depth of field, light; is there an object, are there repetitions in the image, and so on.
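Adobe's face, emotion, and aesthetics models are proprietary, but the output pattern being described here, tags with confidence scores, looks roughly like this with any off-the-shelf classifier (a generic stand-in, with a random tensor as placeholder input):

```python
# Generic stand-in for the auto-tagging pattern described above:
# a pretrained classifier emitting (label, confidence) pairs.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

x = torch.rand(1, 3, 224, 224)             # placeholder for a real photo
probs = model(x).softmax(dim=1)[0]
top = probs.topk(5)
labels = weights.meta["categories"]        # ImageNet class names
for p, idx in zip(top.values, top.indices):
    print(f"{labels[idx]}: {p.item():.0%}")  # e.g. "seashore: 26%"
```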
[00:25:57] And then I want to even build some type of taxonomy structure: what is actually the category of the image? Broadly, you say it's 'hobbies and leisure,' right, but within that, it's 'holidays,' it's 'entertainment,' maybe a little bit of 'home,' and I think that one is not so fitting, it was only 26 percent. But then you have 'people': 'family life,' 'couples,' 'women' are in there; and 'feelings': 'happiness.' And, you know, the taxonomy model is interesting here, with its certain levels of abstraction, because it enables users that come to the Stock page to search for content and very easily find exactly the picture they're looking for. Because if I look for this picture, I might not even have an idea that this is the fitting picture for my task, but I want something that represents happiness and family life and holidays and so on, and by just specifying this in the tree, I can very easily find appropriate pictures. If I just had free-form text, I might even have problems expressing what I'm looking for. Another example here is a more interactive experience. So you can do the search as I described, but I might actually want to directly interact with the search bar. This one, I have to see whether I can speak fast enough to what it shows. This prototype here is actually rolled out right now, and it's called 'Hey Sensei.' So it's an assistant that helps you find the stock images and the art that you want to find. Basically, what you can do is: [00:27:50] you can just say, 'Hey Sensei,' I hope you guys can see this, and then you just get some random images: please provide me with a search request. Let's say I want to find some abstract art for my apartment, so it gives you some abstract art that we have in Adobe Stock. But then you want to add some landscape, so it shouldn't be too abstract: I want some landscape. Then I want to add the ocean as a specific aspect of the landscape. And then, you know, I might want to have a little bit more dramatic elements in there, 'dramatic,' to get some more emotion into it. You see, it's kind of interesting: more red color for 'dramatic,' I don't know why, but that's how it interprets it. So you want it even more dramatic... and for my apartment I also want more red in it, so I can actually say 'enhance red.' And then, this is actually my favorite, this is what I chose in the end. But I think you get the idea, because you can use [00:28:59] concepts that are human-relatable and have an interactive experience, while under the hood you're also using all the AI models, and you can actually stack them together. What we also do is neural stylization. That is [00:29:26] a little bit of an out-of-the-box application of AI still, but you see it here: we can start with a picture of my house, and then I have two artworks, and I say, well, I want to actually combine those two to create a unique painting of my house. And given the structure that is in the artworks and in my house, it's kind of perfectly blended together: you see different elements from the different paintings actually put into this, and then I get a unique representation of my house.
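This kind of stylization is usually some variant of neural style transfer (Gatys et al.); whether Adobe's pipeline works exactly this way is not stated in the talk, so the following is only a generic sketch under that assumption: optimize the output pixels so that VGG features match the content photo while Gram matrices match the style painting.

```python
# Minimal Gatys-style neural stylization sketch (not Adobe's actual
# implementation). Images are random placeholders; layer choice and
# loss weighting are simplified for brevity.
import torch
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:21].eval()
for p in vgg.parameters():
    p.requires_grad = False

def gram(f):                        # style statistics of a feature map
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = torch.rand(1, 3, 256, 256)   # placeholder: photo of the house
style   = torch.rand(1, 3, 256, 256)   # placeholder: the painting
output  = content.clone().requires_grad_(True)

opt = torch.optim.Adam([output], lr=0.02)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(vgg(output), vgg(content)) \
         + 1e4 * torch.nn.functional.mse_loss(gram(vgg(output)), gram(vgg(style)))
    loss.backward()
    opt.step()        # `output` gradually blends structure and style
```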
[00:29:56] Another one is the deep cutout. So this is actually kind of interesting work, because, think of our flagship products like Photoshop: a very important task in Photoshop is to cut out objects, where you have pictures and there is a tourist in your vacation picture and you want to remove the tourist, and stuff like that. So how can you get a deep cutout of objects that have meaning for humans, not necessarily defined just by [00:30:23] specific features and shades and so on, but also by the meaning of a certain area in the picture? You see here a cat on top of the computer: the cat might even have the same color as the computer, but just by it being a cat, you can kind of separate it from the background [00:30:42] and so on. So it's quite interesting work: you get 90 to 96 percent accuracy with respect to what people would perceive as the salient object in the foreground, and it runs pretty fast, about half a second to identify this. And it's used in the next demo I'm going to show, because this one also has audio. [00:31:09] Let's see, I don't have to talk over this one, I hope. ... So I think you get the idea. The interesting part is obviously the individual capabilities of the software to enhance your face, but also orchestrating all of this together, so you can really edit your portrait photograph to be the way you want it, and also to be able to identify the right features that you need, to get to them, and to be able to use them. And that's also, in my current role at Adobe, the challenge: how do we get the right features to the right people that need them, and how can you quickly orchestrate all of this to get the best image quality and image editing for people? Now, some conclusions. [00:33:03] So, as I kind of said: AI needs to evolve into the third wave to respond to the human challenges of intelligibility, adaptivity, customization and control, and context awareness. And with creative AI at Adobe, we are kind of working towards that, in the examples that I showed and in the enabling technologies that we are building out right now. On the technology side, the core one that's enabling most of the things that are going on right now is deep learning: we have Adobe Stock, we have a lot of behavioral data, so we have large-scale infrastructures that we have built and are building, and we have the Sensei platform, which is rolled out and is our core machine learning platform. And then, on top of that, we need representations of knowledge, ontologies and taxonomies: what is creative intent, what is the creativity that people actually want to express with Adobe software? We need to build all these models, and we have to have the people that actually build these models. [00:34:12] Then, and I think this is the key thing, deep language understanding and human intent modeling. You saw this interactive interface, 'give me this and that,' and you could also use voice for this. But how can I really deeply understand what you want? You know, if you're a beginner, you might not even know how to express things; how can you translate this to the feature that the user needs? And then new HCI and UI
paradigms. I mentioned voice and gestures, and what is very essential for us right now is also augmented reality and virtual reality; that's something we invest in quite a bit. So you can say, well, how can I enhance the experiences that people have, and also the interaction? You can have interaction even with an object that is virtual, that you see on your phone and can interact with. Things like this we're currently exploring quite a bit in the products that are coming out on our Experience Cloud side, which enable people to edit these experiences. So that's all I wanted to actually share. I don't know whether we have time for Q&A. [00:35:34] Also, I should point out that there was a link that went out, actually, to connect with Adobe, because we're always interested in great talent, which I think Georgia Tech has. And if you guys want to get in touch with me, obviously you should reach out to me directly, but also through the website you can get a lot of [00:35:59] interesting information about Adobe and roles and such. Now, are there any questions or comments? Anything about this, but also about Adobe in general; I'm happy to talk about anything else. [Audience question, inaudible.] So at Adobe we have... it's Photoshop; this is something that is actually rolled out in Photoshop, so there are certain plugins that are in Photoshop. [00:36:46] We are not partnering, for the core apps like Photoshop and Illustrator, with Alexa or any of that, just because we don't think that pure voice input is the right modality. But we do have collaborations with Alexa in the Experience Cloud, and I'm not sure how familiar you are with that, but [00:37:09] most people, when they think of Adobe, think of PDF and Photoshop. The company actually has three main pillars. One is Documents: that's PDF, and it's very old, but it's still a very profitable business, because PDF is used for things like signing, business processes, and stuff like this. Then you have the Creative Cloud: the Creative Cloud is Illustrator, Photoshop, Spark, all the different tools to actually create things and create content. But then you have the Experience Cloud, and the Experience Cloud is for designing user-facing experiences. This is things like, well, I think Coca-Cola is here in Atlanta, right? So if you look at the web presence of Coca-Cola and the user experience you see there, who do you think powers the analytics and the experience itself? Well, it's Adobe software. The design of how the flow is and how all these things work together, that's Adobe: it's XD, it's experience management. [00:38:17] So Coca-Cola has the actual creativity and the elements, but the whole experience part is by Adobe, and we do this for big corporations like Coca-Cola, General Motors, and so on. But recently we are also moving more towards small and medium enterprises, with the acquisitions last year of Marketo and Magento, which are companies that we acquired. I hope that answered your question. Any more questions? Yes. [00:38:56] [Audience question, partially inaudible.]
As Paula was saying, the question was: what are the data scientist positions that Adobe is actually looking for, or, you know, at companies in general these days? So, what we have: we have a lot of data, just because everything that you do in the apps is logged, right, so the user events are going into our clouds, and we have a whole pillar that is analytics, which is what I mentioned with the Experience Cloud. [00:39:44] So you have the user experience that is rolled out, let's say the Coca-Cola website, and then everything that happens with users on that website, all the analytics, is basically provided by Adobe, which means enormous amounts of data, and data scientists crunching these data and building these models: how can you predict churn, and so on. This is something that is obviously also owned by the actual companies that have that data, like, you know, Coca-Cola, but they use the analytics product that we provide. So we do a lot of these things as a service that we provide, as a tool, as software that we sell to these companies. [00:40:33] Another question? There are a lot of questions today, Paula. [Audience question about the role of data visualization, partially inaudible.] So, that's a very good question. Visualization is very central, and there are several aspects to it. [00:41:17] You have people that are very data-driven, and I assume you guys are very data-driven, doing courses in machine learning and so on, but Adobe is a company that is, at its core, a design and marketing company, which means that many of the people that are actually the users and the consumers of the data, or the outcomes of the data, are visual people. Visual meaning, you know, they want to see a graph; they want to actually get a look and feel of the data. So if I come with a datasheet and a spreadsheet, that's not going to convince them: the designers and so on want very crisp and convincing visuals to understand what the data is. And I think that's also something I learned around data visualization: if you are a data scientist, don't assume that the consumers of your work are [00:42:17] as nerdy as you, so to say, because often you have to sell your results to people, executives, who have no clue, and they actually want visuals. You have to have the tools to present the data in the right way; you have to have the right kind of things to draw and spin a convincing story for someone who only has a minute to look at your stuff. If you come up with a spreadsheet and all types of numbers, they don't even want to look at it; it's just too overwhelming. Yes? [00:42:56] [Audience question, inaudible.]
Yeah, so the question was what type of data science tools and visualization tools we actually use at Adobe. So we use quite a normal tech stack: we use Spark, and also Tableau and all these types of visualization tools. So we have, I think, quite standard things; we're building customized tools for our customers, so in the analytics space, as I said, there are certain tools that we build, but internally we're using the normal data visualization tools, in a way. Another question? [00:43:55] [Audience question, partially inaudible.] Yeah, this is a very good question. The question was: we had the first wave and the second wave of AI, and are the tools of the first wave of AI, like expert systems, still widely used, or are there open areas where they are used? And I think there are some statistics that, for deployed AI systems nowadays, most of them are still expert systems. You know, if you think of many of the AI things that you actually see deployed, [00:44:52] they are not deep learning systems; they are just powered by simple rules and reasoning and so on. So there are more systems out there that are still powered by this than you might think. And at Adobe internally, I mean, for all the stuff we do, we are lucky, because we have a lot of rich data, so when it comes to sensor-level data like audio data and image data, deep learning is very powerful, because you also have a lot of data to train on. So many of the things I showed are deep learning models, but still, for some of the decision making, we have expert rules. [Audience question:] So how did Adobe come up with the name? That is a very good question; that was actually before my time at Adobe, [00:45:45] but I think, you know, Adobe is a little bit of a marketing company too, so they internally said, well, we have to coin our AI and give it a unique spin, and then the designers and the marketing department and the machine learners came up with the notion. That's the story that I know. [00:46:44] [Audience question, partially inaudible.] Well, that is a very good question. The question is whether contextual AI doesn't limit the use of AI when you go to a different environment, like a different area of the world, something like this. And I think, by its definition, contextual AI should actually empower that. So, you know, think of the aspect of adaptivity: if you say you want to contextualize AI, it's true that if you incorporate more and more specific aspects of the context, well, if you have just 'AI in a box,' it's easy to transport, right, so if I have more contextual data that I actually incorporate, [00:47:25] it might limit the use, maybe, in the beginning. But by also having the adaptivity, because you train it to work in your home and your mom's home and so on, it should also have the capability to be adaptive enough to work on a different continent and with different people. So ultimately that's kind of the goal of it. It's a hard problem, so we are not there yet, but I believe that's the vision, and that's what's intended. [00:47:59] [Audience question, inaudible.]
So this is a very good question. The question was: how do you get to classify all these images, how do you get the confidence information for the images? You know, if you say it's, like, 65 percent happiness or something, how do you come up with these numbers, how do you actually get that image data even labeled? So we spend a lot of money and a lot of time doing the image labeling. If you think about this image with the beach: there have been at least 50, 60, or more people labeling this image, with different labels and different judgments of what amount of happiness might be in the image, if any. We use different third-party services to get this image labeling done, because it's subjective. [00:50:08] The only way you can get even some type of numbers, when you have so much subjectivity, is that you are aggregating a lot of subjective judgments and say, well, maybe if I add them up and average, at some point I get something that's commonly accepted. As for the bias: I'd say we still have a bias, a certain bias that we train in. That's a problem with the current wave of AI that's still a little hard to solve, but by getting a lot of the data and a lot of these labels, we think we can get a good kind of gist of the images we present. Yes, please; can you speak a little louder? I can't hear. [Audience question, partially inaudible.] So you mean, because they released this and didn't open-source it, it's kind of a proprietary thing; that's what you mean, right? Having that access. [00:51:53] Well, I think there's still a lot of work to be done on this, and I'm not sure I completely acoustically understood; basically, are we ready for the third wave of AI? Well, I think it's ultimately the only chance we have to make AI a success in everyday life. If you say you want to have it really in the workflows, in everything that we do every day, we have to have a way to actually interact with it. You might have noticed: we don't say 'AI,' we say 'powered by Sensei,' and I mean, essentially it's AI, right. But, so we don't do... we maybe have research on it, but we don't do GANs and stuff like this, where you say, well, we are generating, we are dreaming up images. Right now we don't do that as much, because our user base [00:53:06] is a little bit more conservative: if you take designers and so on, they don't want to be replaced by generative models. [Audience comment.] Yes, that's actually a good point: the house that I showed was auto-generated, that combination, but the choice of the images was forced by the user. So you can say, well, you could have done it by yourself, in a way; you're using that, but you're not saying, [00:53:37] 'I just want some kind of artwork,' and it's making up something that's not even real. That's because our user base, the designers and all, are very skeptical of that. Cool, yeah, thank you.