[00:00:05]
>> Hi everyone. I'm very pleased to welcome Alex Gray from IBM Research today, as the start of the Spring 2021 session of our IDEaS AI seminar series. He'll be talking to us about logical neural networks, unifying statistical and symbolic AI. Alex serves in a foundations-of-AI role at IBM and currently leads a global research program in neuro-symbolic AI there. He received his bachelor's degrees in applied math and computer science, and his Ph.D. in computer science from CMU. He's an old-timer here, because he was faculty here before he moved on to start a company and then joined IBM. He also worked at NASA JPL on machine learning, statistics, and algorithms for massive data sets. He has been honored with a number of research honors, including an NSF CAREER award, selection as a Kavli Fellow by the National Academy of Sciences, and membership on the National Research Council's Committee on the Analysis of Massive Data. So again, let's welcome Alex back; we're really excited to hear what he's been up to. And just a reminder to everyone:
[00:01:17]
if you have any questions in the middle of the talk, the easiest way to ask is to type them out on BlueJeans, because he can't hear you. Over to you, Alex. >> All right, well, thanks for the introduction, and thanks for the invitation. It's a great honor to be back, even virtually.
[00:01:41]
Maybe another time I can come in person and see the other faces I know from back in the day. I'll answer all questions at the end; hopefully I'll leave enough time. This is actually one of the first presentations of this material, so I'm looking forward to your feedback. So, this general area is neuro-symbolic AI.
[00:02:15]
There was a recent invited talk at AAAI that gave its own, somewhat informal, categorization of the different approaches, which I thought was nice; I've listed them here, six or so approaches. I have another survey talk where I go into more detail, which you can see here. But more or less, these are all different ways of trying to bridge neural networks, roughly acting as the main representative of statistical AI, and logic, roughly first-order logic, representing
[00:03:01]
different kinds of cases in symbolic AI. Most of them, however, are just bridging techniques; I'll say what that means in a second. But they almost always have these kinds of goals. One is understandability: you could say this is one of the main things holding back current statistical AI, deep learning models, from many more
[00:03:34]
critical applications. But understandability is closely related to controllability. And then the second thing is generalization, what you might call generalization-plus-plus: not just small deviations from a data set, but larger jumps, say to different tasks, and large changes in the situation. And why? Because the hope is that by injecting knowledge, which was generally cast in terms of logic in the symbolic AI days, you can have a more general
[00:04:24]
picture of how the world works that you can use in different ways. And then finally, reasoning; knowledge and reasoning sort of go together, and that, roughly speaking, is how you might define symbolic AI. More complex problems would seem to require it. Now, however, the current batch of approaches, I'll argue, are missing some properties.
[00:04:59]
One is that the understandability is arguably not true understandability: just by having the logic side by side, more or less, with a black-box model, you still have the black box. Almost all of these approaches really maintain two representations at the same time.
[00:05:22]
Many of them, we'll see, are not compositional; arguably, then, they're not modular, and you can't reuse knowledge. And thirdly, the reasoning that you get in many of these approaches is actually not rigorous: it does not rigorously reason the way a theorem prover works on some logic, say first-order logic.
[00:05:54]
OK, so you could say what we're doing here, introducing a second type of neuro-symbolic ambition, and it's perhaps a surprising one, is to show that there is actually an equivalence: there is a very big overlap between neural nets and logic, which would be surprising, where they are the same. And by doing this we'll actually have one representation, which will have the capabilities of both, or hopefully will preserve the key capabilities.
[00:06:30]
It will be compositional, modular, reusable, because logic is; and it will do true reasoning, as you can do with logic. So that's what we're going to try to do. Here's some more information on this. So let me tell you a little story; this will take about half the talk, and then the rest of the talk is
[00:07:01]
about extensions. I'm going to lead up to the punchline of how exactly it is that we can see that neural nets and logic are actually not two things that are wildly opposed; there's a sense in which they are the same. OK.
[00:07:23]
I'm going to take you through eight easy mental steps to build up to that conclusion. First, we go back all the way to the original neural net paper, the very first one, from 1943, by McCulloch and Pitts. The point was that a sort of threshold-type unit
[00:07:48]
can do all of the logic gates; they showed this in the paper, at least the three-ish ones you need for convenient representations of knowledge: and, or, and implies. Implies is really just a kind of or gate (not-P or Q), and these gates can take more than two inputs.
[00:08:18]
Notice that it doesn't have weights exactly like modern neurons, and the activation function there is a step function. OK, but it shows that something neural-like can implement logic, a basic idea which, more or less, hasn't been directly built on since then, which we felt was odd.
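To make this concrete, here is a small sketch (my own illustration, not code from the talk) of a McCulloch-Pitts-style threshold unit implementing the three gates just mentioned; the implies gate uses an inhibitory (negative) connection:

```python
def threshold_unit(inputs, threshold):
    """McCulloch-Pitts neuron: fires (1) iff the input sum reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def AND(*xs):        # all inputs must fire
    return threshold_unit(xs, len(xs))

def OR(*xs):         # any single input suffices
    return threshold_unit(xs, 1)

def IMPLIES(p, q):   # p -> q  ==  (not p) or q, via an inhibitory (-1) weight on p
    return 1 if (-1 * p + 1 * q) >= 0 else 0

print(AND(1, 1, 1))   # 1
print(OR(0, 0, 1))    # 1
print(IMPLIES(1, 0))  # 0 (the only false case of implication)
```

Note that, as in the original paper, there are no learned weights here; the behavior is fixed entirely by the threshold.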
[00:08:50]
Next in the historical evolution, we could say, weights appear; you still have the step function: this is the perceptron. To interpret this as logic, there are still interpretations: there is a region of the weight space in which it acts like an and, and there is another region of the weight space in which it acts like an or, and so on.
[00:09:20]
An and is where you need everything to be true for the output to be true, and if even one input is not true, you don't get a one at the output; an or is the dual, set up with certain inequalities on the weights. All right, so so far we still have a connection between neurons and logic.
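The weight-space regions can be illustrated with a tiny sketch (my own, with arbitrary illustrative weights): for a perceptron with positive weights, a threshold below the smallest weight makes it behave as an or over the Boolean corners, while a threshold just under the total weight makes it behave as an and.

```python
def perceptron(x, w, theta):
    # classic perceptron: step function applied to the weighted sum
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else 0

w = [0.6, 0.9]                # arbitrary positive weights
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# theta no larger than the smallest weight -> acts like OR
print([perceptron(x, w, 0.5) for x in inputs])   # [0, 1, 1, 1]

# theta above sum(w) minus the smallest weight -> acts like AND
print([perceptron(x, w, 1.2) for x in inputs])   # [0, 0, 0, 1]
```

The same unit implements different connectives depending purely on where its parameters sit in weight space.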
[00:09:46]
Then, of course, the celebrated move is to go to multiple layers. To train them you need backprop, and for that you need differentiable neurons, and so the step function is replaced by something softer: the sigmoid, for example, or more recently the ReLU, the rectified linear unit, which can have derivatives.
[00:10:21]
However, from a logic standpoint, you have now lost something, which is that now the output values are somewhere between zero and one, but not exactly zero or exactly one. So you have to do some thinking now about how to proceed if you want to maintain the logic connection.
[00:10:46]
So the connection is sort of gone at this point. Now, what we can do is say: well, at least near the ends of the sigmoid, at least near zero and near one, we can enforce that the unit acts like a logic gate in those regions, regions defined by some distance alpha; then you can just enforce those kinds of constraints and make it act like classical logic.
[00:11:24]
So the new addition here is that now you have some constraints on the weights. But that's not quite enough: we still also have to think about what happens in between, in this middle region. OK, well, before we do that: as long as we're going to have constraints,
[00:11:58]
let's add some slack, which is typically what you do when you formulate a machine learning problem with constraints. So here's a version of the optimization problem with slack variables, and these govern the degree of adherence to logical behavior. If you set the slack parameter high, then you don't have to obey the logical constraints.
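A minimal sketch of the slack idea (my own illustrative formulation with hypothetical names, not the exact problem on the slide): each logical constraint g_i(w) &lt;= 0 is allowed to be violated by a slack amount, and the slack is penalized in the objective, so the penalty weight tunes the degree of adherence to logical behavior.

```python
def relaxed_objective(w, loss, constraints, penalty):
    # slack s_i = max(0, g_i(w)) measures how far constraint i is violated
    slack_cost = sum(max(0.0, g(w)) for g in constraints)
    return loss(w) + penalty * slack_cost

# Toy example: one parameter, one "logical" constraint w <= 1.
loss = lambda w: (w - 2.0) ** 2   # data fit pulls w toward 2
g    = lambda w: w - 1.0          # logic wants w <= 1

print(relaxed_objective(1.0, loss, [g], 10.0))  # 1.0  (constraint satisfied, only fit cost)
print(relaxed_objective(2.0, loss, [g], 10.0))  # 10.0 (perfect fit, pure slack penalty)
```

A large penalty effectively forbids violations (classical-logic behavior); a small one lets the net trade logical adherence for data fit.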
[00:12:34]
They also sort of act as a tuning knob for the behavior of the activation function. So that's part two of setting up this constrained version of a neural net. Now, about this region in between: some of you who've heard of real-valued logics may be thinking, hey, there are real-valued logics. And indeed, real-valued means the truth values are not just zero or one but can be anywhere between zero and one.
[00:13:16]
Many have heard of fuzzy logic, Zadeh's choice, from 1965 or so. But in fact this goes all the way back almost to the beginning of mathematical logic: even Boole had some version of combining probabilities with logic.
[00:13:46]
Here are some common real-valued logics. These all define the analogs, you could say, of and, or, and implies with some function. Now, they have different behaviors between zero and one, but all of them have the property that at exactly zero or exactly one they behave the same as classical logic.
[00:14:16]
So the good news is that there is already a rigorous way to talk about logic where truth values can be in between. Let's look at one of these, a very common one, Łukasiewicz logic. Here there are certain definitions for and, or, and implies; notice the circles around the and and or symbols, which are meant to show that they're not exactly classical and and or, but real-valued generalizations.
[00:14:51]
If you look at a plot, here for two input variables, the and happens to have the same shape as the ReLU activation function, and the or is its mirror image, incidentally. So we can already see some kind of correspondence between logic and a neuron. However, one thing that's still missing is that a neuron has weights.
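Concretely, the Łukasiewicz connectives can be written in a few lines (these are the standard definitions; the ReLU connection is visible in the implication, which equals 1 - relu(x - y)):

```python
def luk_and(x, y):      # t-norm: max(0, x + y - 1)
    return max(0.0, x + y - 1.0)

def luk_or(x, y):       # t-conorm: min(1, x + y)
    return min(1.0, x + y)

def luk_implies(x, y):  # residuum: min(1, 1 - x + y) = 1 - relu(x - y)
    return min(1.0, 1.0 - x + y)

# Agrees with classical logic at the {0, 1} corners:
print(luk_and(1.0, 1.0))      # 1.0
print(luk_implies(1.0, 0.0))  # 0.0
# In between, truth degrees interpolate:
print(luk_and(0.7, 0.6))      # ≈ 0.3
```

At the Boolean corners these reduce exactly to the classical truth tables, which is the property shared by all the logics on the slide.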
[00:15:26]
No problem: what we'll do is extend Łukasiewicz logic, though it can be done with any of these logics, in a sort of natural way, with weights on the inputs, which signify the importance of an input. And you can define a rigorous logical semantics this way.
[00:15:55]
And again, this can be done for a sigmoid; this can be done for any of those other real-valued logics. More or less any common activation function has a logical version this way, and any common real-valued logic has a weighted version.
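One way such a weighted extension can look (a sketch in the spirit of the LNN paper's weighted Łukasiewicz logic; beta is a bias/threshold term, the clamp keeps truth values in [0, 1], and the exact parameterization here is illustrative):

```python
def clamp01(v):
    return max(0.0, min(1.0, v))

def weighted_and(xs, ws, beta=1.0):
    # each input's "falseness" (1 - x) is scaled by its importance weight
    return clamp01(beta - sum(w * (1.0 - x) for w, x in zip(ws, xs)))

def weighted_or(xs, ws, beta=1.0):
    # dual form: each input's truth is scaled by its weight
    return clamp01(1.0 - beta + sum(w * x for w, x in zip(ws, xs)))

# With unit weights this reduces to ordinary Lukasiewicz logic:
print(weighted_and([0.7, 0.6], [1.0, 1.0]))  # ≈ 0.3
# Down-weighting the second input makes its low truth value matter less:
print(weighted_and([0.7, 0.2], [1.0, 0.2]))  # ≈ 0.54
```

The weights act as importances: a low-weight input can be quite false without dragging the conjunction down much, which is exactly the learnable degree of freedom a neuron has.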
[00:16:20]
Now, we need one more thing to finish the idea of a logic: we need inference rules. Not just how you represent statements and the components of statements, but what you do with statements to get the truth value of another statement.
[00:16:47]
In classical logic, you may dimly remember from your logic class things like modus ponens, sort of an obvious inference rule: it says if you know P is true, and you know P implies Q is true, then you can infer Q. OK, so that's one inference rule,
[00:17:18]
and there's a corresponding rule for each connective: whenever you know the premises are true, you can conclude the consequent. That's what we mean by inference rules, logical inference. OK, so great, that's classical logic. But what are the inference rules for real-valued logics? And when you start to think about inference, you want to know, for correct or rigorous inference, how do you characterize that? Well, the typical notions are soundness and completeness.
[00:17:57]
And it turns out this had not been fully established for real-valued logics; that was a hole, we realized, that needed to be filled mathematically. So what has to be established is the soundness and completeness of your logical system. A logical system, meaning the logic plus its inference rules, is sound if and only if the inference rules produce only valid conclusions:
[00:18:30]
by applying whatever the inference rules in your system are, you can never reach a statement that's incorrect. And completeness means that any statement that's correct can eventually be reached by some application of the rules. But it turns out that completeness had never been established.
[00:19:00]
It turns out, though, that it can be. This slide is from an older talk, but you can see the arXiv reference: Ron Fagin and colleagues showed this recently. So this is great; it gives us a rigorous foundation for using real-valued logic.
[00:19:34]
This was a bit of a sidetrack to do something very fundamental, and it applies well beyond this goal; it covers a large class of real-valued logics, essentially the ones we went through. Great. Now, if you look at the paper, it shows a decision procedure, which is how you establish whether
[00:20:07]
a statement is true or not within a logical system, and that, at least for the common logics, can be cast as a mathematical program, which is great news because we know how to solve those; but it's expensive. Further, we want something that's neural-net-like. So our hypothesis was that there's some version of logical inference that maps to a message-passing-type
[00:20:41]
algorithm, like you have in neural net infrastructure such as PyTorch. OK, so we can imagine that; but let's look at what that would take. So here is a neuron; let's pretend this neuron is an implies, some sort of binary connective. And let's think about the inference rule modus ponens, remember that one. There are two inputs at the bottom,
[00:21:17]
P and Q, and the output at the top is P implies Q. What modus ponens says is: if we know P implies Q is true, meaning the top variable, and we know one of the inputs, P, then we want to figure out the other input, Q. That's actually the equivalent of going backwards through this neuron, you could say, because
[00:21:47]
neurons normally go forward: it's asking about the value of Q given P-implies-Q and P. OK, so we can do that, because you can see that if your activation function is invertible, then you can just figure out an input given the output and the other input. Now, assuming your activation function lets you do that, you can define what it means to go backwards.
[00:22:29]
And then you can define a propagation scheme, shown here: it goes up and down through the neurons, and those passes are the equivalents of logical inference. OK. Now there's a caveat here: depending on your activation function, you may not get a unique X given Y.
[00:23:00]
But you can always get bounds. That seemed like a messy thing at first, but then we realized it's actually kind of a plus, because it introduces the idea of bounds on truth values, and that turns out to be important for real life. You don't always have full knowledge of truth values; in fact, arguably, in most situations you only know a few truth values, and for most of the variables that represent the world, you don't know their truth values.
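A minimal sketch of bounds-based backward inference (my own simplification using Łukasiewicz connectives; the actual LNN update rules are more general): given lower bounds on P and on P-implies-Q, modus ponens tightens the lower bound on Q, while unknown variables simply keep the vacuous bounds [0, 1].

```python
def modus_ponens_lower(p_low, imp_low):
    # Lukasiewicz: truth(P->Q) = min(1, 1 - P + Q) >= imp_low, together with
    # P >= p_low, forces Q >= p_low + imp_low - 1 (clamped at 0)
    return max(0.0, p_low + imp_low - 1.0)

def tighten(bounds, var, new_low):
    # bounds: dict mapping variable -> [lower, upper]; keep the tightest lower bound
    lo, hi = bounds[var]
    bounds[var] = [max(lo, new_low), hi]
    return bounds

# Unknown Q starts at the vacuous bounds [0, 1] (open world: not assumed false).
bounds = {"Q": [0.0, 1.0]}
q_low = modus_ponens_lower(p_low=0.9, imp_low=0.95)
tighten(bounds, "Q", q_low)
print(bounds["Q"])   # lower bound on Q rises from 0.0 to about 0.85
```

If neither premise is strongly true, the derived lower bound stays at 0 and Q simply remains unknown, which is the open-world behavior discussed next.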
[00:23:46]
So this is very important. Some people talk about this in terms of the open-world assumption, meaning that there are a bunch of variables in the world and you don't have to know what they all are; you keep bounds on them. It turns out most neuro-symbolic approaches, in their formulation, have to assume the closed-world assumption, which says that if a statement is missing,
[00:24:24]
it's assumed to be false, and that's of course incorrect in general. Now, a side note here: you may be wondering about probabilities. There is a mapping to probabilities; I'll come back to that later. Now, to give a better picture of what this looks like as a method:
[00:25:05]
you have a bunch of logic statements; some of them are rules, for example, which we can talk about. The top part is basically saying that a neural network can represent the syntax trees of those statements, and the facts are at the bottom.
[00:25:33]
Take one of these logical rules, say the middle one; it's saying something like: if X is born in a place A, where X is a person and A is a place, and the place A is part of another place B, for example a city being part of a country, then
[00:25:57]
X is also born in B; so if the city is part of the country, the person is also born in that country. That's a logical rule. And the facts that you might observe are various instances, say from DBpedia, which is a knowledge graph, of various people born in various places.
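As a toy illustration (my own sketch with made-up triples, not actual DBpedia data), forward application of that rule amounts to repeatedly joining the two relations until no new facts appear:

```python
# Rule: bornIn(x, a) & partOf(a, b) -> bornIn(x, b), applied to a fixed point.
born_in = {("RogerFederer", "Basel")}
part_of = {("Basel", "Switzerland"), ("Switzerland", "Europe")}

def apply_rule(born_in, part_of):
    facts = set(born_in)
    changed = True
    while changed:
        # join: for each bornIn(x, a) and partOf(a, b), derive bornIn(x, b)
        new = {(x, b) for (x, a) in facts for (a2, b) in part_of if a == a2}
        changed = not new <= facts   # stop once nothing new is derived
        facts |= new
    return facts

print(sorted(apply_rule(born_in, part_of)))
# [('RogerFederer', 'Basel'), ('RogerFederer', 'Europe'), ('RogerFederer', 'Switzerland')]
```

In the LNN the same rule is a small network over predicates rather than an explicit loop, but the derived facts are the same.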
[00:26:27]
And so the input to this model is relational data, like you would have natively in a relational database, or knowledge graph data. And by the way, regular supervised learning over tabular data is just a special case. Now, because we have this ability to do inference in any direction, it's not
[00:27:07]
necessarily constrained to be like a normal feed-forward net, with a definite output node and some definite input nodes. Any node can be the output node, given any subset of nodes as inputs; you could call this sort of omnidirectional inference, from whatever current snapshot of the world you have. And all of this works
[00:27:37]
for learning as well: whatever truth values you happen to observe play the role of the training set, your normal training data. So, roughly speaking, this is a generalization of standard supervised learning, and once you've trained, you can predict anything from anything. So what does the optimization problem look like?
[00:28:10]
Here we write a normal loss function, which is just the leftmost term, whatever that is, say a squared loss, some measure of the gap between the target values and the predicted values. Then we have this other term, which we add, called the contradiction loss, and it's there to ensure that all of the variables are consistent with each other and with our logic statements; remember, in logic, statements must not be inconsistent with each other. I could have a fact somewhere in my
[00:28:50]
knowledge base, say that Barack Obama was born in the United States, and then have, elsewhere in my knowledge base, a statement that implies the opposite. In classical logic, a contradiction like that would grind your whole system to a halt; you can't proceed with logical inference at all. Here, we simply minimize the amount of contradiction. Another way of looking at this: it's what glues together all of the variables across tasks; this term tries to make sure they're all consistent with each other.
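A minimal sketch of the combined objective (my own simplification; variable truth values carry [lower, upper] bounds, and a contradiction shows up as a lower bound above an upper bound):

```python
def contradiction_loss(bounds):
    # a variable is contradictory when its lower truth bound exceeds its upper
    return sum(max(0.0, lo - hi) for lo, hi in bounds.values())

def total_loss(task_loss, bounds, lam=1.0):
    # total objective = ordinary task loss + weighted contradiction loss
    return task_loss + lam * contradiction_loss(bounds)

# Consistent knowledge: no penalty beyond the task loss.
ok = {"bornIn(Obama, US)": (1.0, 1.0), "person(Obama)": (0.8, 1.0)}
print(total_loss(0.2, ok))   # 0.2

# Conflicting evidence pushes a lower bound above an upper bound:
# penalized and minimized, rather than halting inference as in classical logic.
bad = {"bornIn(Obama, US)": (0.9, 0.4)}
print(total_loss(0.2, bad))  # ≈ 0.7
```

Because contradiction is a soft penalty rather than a hard failure, learning can proceed even from an inconsistent knowledge base, gradually reducing the inconsistency.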
[00:29:37]
And then, of course, you also have the constraints from before, so this is some kind of constrained non-convex optimization. We have a number of different ways to solve it; the very basic vanilla version is in the original write-up.
[00:30:13]
We also have a better version of the descent method, described in the paper, and I'll say a little bit more about that one later. OK, so that's learning. So we've got something that, viewed one way, can learn weights
[00:30:46]
like a neural net; but viewed as logic, it's logic that is weighted, and you're learning the weights on the logic. So it's like learnable logic. So here's the punchline, in this box here: these are not funny activation functions; they are the normal activation functions used today in modern, standard deep learning models,
[00:31:29]
recurrent ones even, with the extra modification of the constraints, although that's just to make it match classical logic. Such a net is equivalent to a set of logic statements in a weighted real-valued logic. That's the punchline. And then inference, the other piece of the puzzle: standard neural net forward inference, plus the addition of invertibility, so backward inference,
[00:32:12]
is the equivalent of logical inference in a weighted real-valued logic. So it might be surprising, but we're basically saying that you can interpret a neural net, at any moment, as already doing logic. But then you can constrain it to act a little bit more like classical logic if you wish,
[00:32:41]
with the logic constraints. So that's why I say classical logic is a special case of this: a classical theorem prover is one special case, and another special case is a regular neural net with no constraints on anything. And you can vary between the two:
[00:33:03]
parts of the network can be a full black box with no constraints, for example the lower levels, and you can have parts of it, for example the upper, conceptual parts, that obey logical semantics. OK. We actually haven't played much yet with that hybrid; so far we've just been doing everything
[00:33:35]
in logic, as that's where our interest lies. OK. And you can mix precise and imprecise knowledge. Now let me go back and compare to other common neuro-symbolic ideas. The two most common ones are the ones with the most citations. One is, arguably, Markov logic networks,
[00:34:11]
from the early 2000s, around 2006, which still gets a lot of citations. It's based on the idea of Markov random fields, and the logic statements are encoded in the Markov random field in the sense that each logic statement becomes a clique, a set of terms in the Markov network.
[00:34:47]
Now what happens there (you can see these three pictures showing the same example of how this works) is that in the actual MRF you've lost the logical connectives; they're not in the model anymore. You maintain them, however, somewhere off to the side, so that you can instantiate the variables
[00:35:17]
and, when you look at a new data point, evaluate whether each statement is true in that data. That's what I meant earlier: you maintain two representations, and the learnable representation does not actually carry the full information of the logic.
[00:35:42]
And it's not compositional any more: a single clique can correspond to ten different logic statements that all have the same variables in them. The other major idea, you could say the most common, most favored idea, is embeddings: the basic representation not having a one-to-one mapping between symbols and units, but being a distributed representation,
[00:36:27]
sometimes described as entangled rather than disentangled. We're talking about entangled versus disentangled representations: are the variables, or concepts, like "grandmother," represented somehow diffusely, or by a single unit for "grandmother"? Some of that intuition comes from neuroscience and so on. But the idea is that a logic statement becomes a point in an embedding space.
[00:37:07]
Now what happens, though, is that logical inference becomes the result of training: it only becomes sound, you could say, in the limit as the number of examples grows. So that's why I say it's not quite
[00:37:36]
rigorous; it's sort of logic-like, reasoning-like. And this should be borne out, say, in empirical evaluation. There is a fundamental problem with this approach, which is that there is no direct translation between the logic statements and the entangled representation's
[00:38:09]
truth values. OK. So that's the end of that story; that was to tell you what an LNN is, why it's different, and its advantageous properties. Now, a side note: the embedding idea, I think, is kind of interesting; it has some properties that we do actually want in a model. So one of the threads we're exploring is adding embeddings in as well, capturing the soft intuitions, you could say,
[00:38:59]
and filling in the in-between cases where you don't have enough knowledge, while still doing your logic where it's solid, as opposed to only relying on these very fuzzy concepts. And we've found multiple ways to do that. So let me tell you about the next big thing here, the big
[00:39:28]
thing we're excited about, and the way we show the value of this. So far, how have we shown the value of this? One way is to take a hard problem, one that today's deep learning by default really does not handle satisfactorily, to show why you need this extra power
[00:39:54]
of knowledge and reasoning. And so here is something that is actually very simple for humans. It's an NLP task; you can of course find harder and harder problems. This is question answering, for sort of simple factual questions; obviously NLP can be much more difficult than that. But perhaps surprisingly, for this simple question, "Was Roger Federer born in the United States?":
[00:40:31]
take a data set of questions like that, and standard deep learning type approaches don't do terribly well; the best results are something like 32 percent. So we like this kind of thing as a challenge. And even though it may seem like an easy question, this particular question requires reasoning,
[00:41:06]
because, let's say you're given the ability to look at a knowledge base; in this case we use DBpedia, which is encoded as a knowledge graph, as triples, the facts. And we can see that somewhere in there it says Roger Federer was born in Basel.
[00:41:31]
Somewhere else it says Basel is part of Switzerland. But the question is whether he was born in the United States. So you have to put together a few facts in the right way to answer "no," because you have to know that the United States is also a country, and that if you're born in one country, you're not born in another country,
[00:42:03]
and that if you're born in Basel, you're born in Switzerland, and so on; and therefore the answer is no. OK, so now look at deep learning: what does it do for this kind of problem? It basically selects candidate answers from phrases in existing sentences; it can't actually extrapolate to answers that don't appear in the training
[00:42:33]
set at all, or it does so very poorly. And generally, systems are demonstrated on only one data set; they're fit, basically, to one data set. And there's generally no reasoning, and you could say there's no understanding, because there's no attempt to model it. I'd define, at least intuitively, understanding as having some abstract concepts that are referenced; there's no attempt at that. These systems generally just map from the input to the output, from the words of the question to the words of the answer.
[00:43:14]
Further, you need to rely on a lot of labeled examples, and in these data sets we have as few as 400 questions and answers in the training set. And then, of course, we make it even harder and ask the system to explain why it gives the answer it gives; there's no way to do that in the systems I know of today.
[00:43:48]
Note, by the way, that there are different kinds of reasoning, you could say, represented by different kinds of questions, even simple questions of these first kinds. So what do we do? We take a different approach. Instead of going directly from the form of the input to the form of the output, we require an intermediate step, which is to translate the input, which is made of words (and by the way, you can imagine an analog of this for vision or any other modality), so we first
[00:44:26]
map it to logic, to the underlying concepts; that is often called semantic parsing. Then, once you have the question in logical form and the knowledge in logical form, you can apply reasoning, using all the facts you have, to answer the question. The approach we actually take
[00:44:51]
is modular: we use some intermediate representations, AMR (Abstract Meaning Representation) and SPARQL, which queries the knowledge graph. And this system, being modular, lets us use state-of-the-art methods for the different parts of the problem; we have our own state-of-the-art
[00:45:19]
methods for some of those parts, for example our AMR parser, and reuse others where they're strongest. If you're interested, you can see here that we achieve state of the art on two of these QA benchmarks, and, by the way, with the same system at the same time, so that in itself shows that one system can actually generalize
[00:45:46]
to more than one data set. Arguably, and this is a big claim, I just haven't seen yet a neuro-symbolic system that has a win over the state of the art on any such benchmark, so I want to say, hesitantly, that this is maybe a first.
[00:46:11]
Part of the hope for change here is that we achieve something that's understandable by humans, as opposed to today's systems, which aren't. So here, this is just flashing up some of the steps; you could make them read more comfortably for a human, but this is a way of showing the steps of reasoning, showing where knowledge is incorporated and relied upon, and which reasoning steps are used
[00:46:54]
to answer the question. And this is not the same as saying we're done; we have a lot more work to do here. This is just a first cut at explaining the model and its reasoning process to a regular human being, which is sort of inconceivable, I think, for today's machine learning. Now,
[00:47:28]
a few more things here. A key aspect of this is the inference process, which, as you know, for first-order logic is expensive; people worry it will never scale. However, we're showing here that we can scale it, and, perhaps surprisingly, this is done by learning, via reinforcement learning. On theorem-proving benchmarks,
[00:48:01]
we show that we beat even the mature, hand-crafted theorem provers, for the first time, we believe. So this is the way we'll be approaching the inference problem. And what you're learning here is, basically, which surface features to use to steer the inference, just as a human learns patterns of how to prove things
[00:48:35]
more efficiently than just doing an exhaustive search. OK. Now, another aspect: so far, all I've shown you assumes you come with a knowledge graph, or whatever knowledge you put into the system. But of course, you next want a way to obtain knowledge, to add new nodes automatically to your network, and that becomes the field of logical rule induction.
[00:49:12]
Another name for that is inductive logic programming. We have a way to do this, and we compare it to traditional ILP methods, but also to more modern methods, most of them based on embeddings, and we show that with those, the rules you get are very difficult to interpret. These examples show what the ground-truth rule is in a certain problem, and what you get is a big pile of other rules
[00:49:43]
with those methods, often. We show here that our approach does learn the rule: what it learns is actually very close to the ground truth, give or take some small weights on irrelevant terms, and those are quite small. So you actually can learn high-quality rules, at state-of-the-art accuracy, versus both the neural and the symbolic methods. Now, just a little bit about the learning, the optimization problem, because it's actually a little bit harder than
[00:50:20]
some other learning problems: the objective is non-convex, which is common these days, but the bounds are non-smooth, and the constraints actually contain nonlinear functions. But here's a fancy approach to that, using a variation of a standard constrained-optimization method, which, by the way, has the side effect of making it easy to distribute.
[00:50:54]
We show here the convergence rate and how it scales with the number of constraints, compared to other optimization approaches for this kind of setup. OK, one more thing I want to show you, just quickly: we've just started on this idea. If you think of what I've described as a
[00:51:28]
If you think of what we've done so far as beefing up deep learning with knowledge, what would be the deep-RL version of that? We call it logical optimal actions, and we're testing it out on the hard problems of text-based games, where you have a huge number of possible actions. We want to see whether we can use knowledge to dramatically reduce the very high sample cost of reinforcement learning.
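A rule-based policy of the sort being described — actions licensed by logical conditions — can be sketched in a few lines. All the fact and action names here are made up for illustration; the point is only how rules prune the action space:

```python
# Toy "policy as rules" for a text-game agent: each rule maps a set of
# facts that must hold to an action. Instead of scoring every possible
# command, the agent only considers actions licensed by some rule,
# which shrinks the effective action space dramatically.
RULES = [
    ({"at(kitchen)", "sees(apple)"},        "take apple"),
    ({"carrying(apple)", "hungry"},         "eat apple"),
    ({"at(kitchen)", "door(north,closed)"}, "open north door"),
]

def admissible_actions(facts):
    """Return actions whose rule bodies are satisfied by the current facts."""
    return [action for body, action in RULES if body <= facts]

state = {"at(kitchen)", "sees(apple)", "hungry"}
print(admissible_actions(state))   # only "take apple" fires
```

Learning such rules from experience, rather than hand-writing them, is exactly the rule-induction problem from the previous slides.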
[00:51:56]
And here are some initial results on a simple toy world to show it works. Here we're phrasing the policy as rules, and we use the rule induction I just showed you. Now let me close with a little bit of philosophy. You may or may not have seen the debate between
[00:52:31]
Yoshua Bengio and Gary Marcus not long ago, last year, where they basically went back and forth about what the properties of an AI system should be. I put together a sort of distillation of the positions each of them took — it's not rigorous, it's all qualitative and subjective, but it roughly represents the hardcore neural-nets view and the hardcore logic view.
[00:53:05]
So classically, if you put together this list — and again, these are qualitative, arguable, debatable assessments — of symbolic AI alone and statistical AI alone, those are the two major kinds of approaches; and then there's LNN. And, you know, of course I'm biased, but
[00:53:33]
one reason I'm excited about LNN is that it offers a path to almost all of these properties. You can either check them off or at least see a research path for all of them, except for the very last ones, which I would say are still open — but those are arguably the Holy Grail,
[00:53:59]
which is true one-shot or few-shot learning, and generalizing from one situation to very different, interesting situations — that's still out of reach, it seems. Ok. And here are some ongoing directions — we'll come back to this for anyone interested; we welcome collaborators. It's a pretty large effort, so we have people working on all of these things.
[00:54:35]
We're looking for help on all of them: HPC optimization, and a huge number of things we want to do on that side. We think there's a way to generalize Bayesian graphical models — that remains to be proved. I mentioned incorporating embeddings in various ways. There are many extensions toward higher-order logic, including all of the different modal logics — temporal logic and things like that.
[00:55:07]
And of course, to acquire knowledge automatically, the best way, arguably, is by reading documents — by doing semantic parsing. And this theme, we think, speaks to the last items I mentioned: learning that's much less sample-hungry, compositional, more transferable. Ok, so there's a philosophical shift here: in the sort of standard black-box approach, the human really puts his or her effort into labeling data, which is a relatively simple task.
[00:55:52]
Now the model, the way I've drawn it, has a lot of human-defined concepts in it — maybe some pieces of it are little black boxes for certain mappings back and forth — but the structure comes from an ontology, from a human. And in this paradigm the human goes back and forth with the model: looks at it, adds knowledge where more knowledge is needed, and so on. The human can play a bigger role here.
[00:56:25]
That's generally not how it's thought of in the current paradigm, but I think it's good, because we're not achieving everything we can or want to achieve with AI, and having humans in the loop is maybe not such a bad idea. It just changes the whole relationship — all the talk about replacing humans goes away.
[00:56:51]
So that's it. I failed to leave as much time as I wanted for questions, but hopefully people can stay a few more minutes. Here's a summary of the basic ideas and where to find out more. With that, I'll stop here, and I'm looking at the chat for questions.
[00:57:22]
How are we doing — we just had one question posted. Ok. Hey, David. Yes, good question. The question is: how well will all the existing architectures and ideas from deep learning transfer? That remains to be seen, and it will be very interesting, but I anticipate two kinds of things. One is that you just wholesale take
[00:57:50]
any existing thing, like a transformer architecture or something, and use it as a sub-network. What you need then is a training scheme that allows you to train the whole thing, including the parts that need to obey constraints and the parts that don't. We have an optimization scheme for that, which we'll be showing.
[00:58:15]
I can share a pointer for anyone interested in that. But the other way is to make a true LNN version of a given architecture — for example, the LSTM. You know, it uses a magic gating structure to make the whole thing work.
[00:58:42]
And maybe some of those gates, you could say, are really acting as conjunctions — now, that's just an idea — but maybe they can be recast as real-valued truth functions, which would give an actual logical interpretation to those gating mechanisms. So there might be logical versions of the same architectures.
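To make the gating idea concrete: an LSTM gate multiplies a sigmoid value in [0, 1] elementwise into a candidate value, and that elementwise product is exactly the product t-norm conjunction from real-valued logic. This is one illustrative reading, not a construction from the talk:

```python
import numpy as np

def and_product(a, b):
    # Product t-norm: exactly the elementwise multiply used by LSTM gates.
    return a * b

def and_lukasiewicz(a, b):
    # Another real-valued conjunction: both inputs must be high for a high output.
    return np.maximum(0.0, a + b - 1.0)

# An LSTM input gate computes g * c_tilde, with g = sigmoid(...) in [0, 1].
# Reading g and c_tilde as truth values, the gate is a fuzzy AND:
g, c_tilde = np.array([0.9, 0.1]), np.array([0.8, 0.8])
print(and_product(g, c_tilde))      # gate "vetoes" the second slot
print(and_lukasiewicz(g, c_tilde))  # a sharper conjunction zeroes it outright
```

Under this reading, replacing the product with another conjunction (Łukasiewicz, as in LNN's weighted real-valued logic) would yield a gate with an explicit logical semantics.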
[00:59:11]
Whether that works as well — whether the full capability of deep learning survives when a sub-network is constrained in that way — still needs to be seen. So, I thank everyone for your time and attention; I'm looking forward to hearing from anyone who finds any of this interesting or would like to chat. Thanks, everyone. — Thank you, Alex, and everyone stay tuned for more updates on the seminar series this semester. Thanks, everyone.