[00:00:05] OK, welcome everyone. First of all, thank you, Richard, for the introduction, and we'll get started. The title is "Faster Approximation Algorithms and Complexity for the Design of Networked Systems." First I will introduce what networked control systems are and what the design problems are. This line of work was done with my former colleagues at the university and with my Ph.D. advisor, and parts of it with my postdoc advisor and also Richard. The talk consists of three parts. First I will introduce what a networked control system is, which statistics we care about for such systems, and how the problems are formulated. Then I will talk about two of our works, which deal with two different sets of problems: one is leader selection and the other is edge addition. The first work minimizes coherence, which is essentially the sum of the variances of all nodes, through leader selection; the other maximizes entropy through edge addition. Finally I will conclude by talking about some future directions.
[00:02:06] So first of all, networked systems are systems composed of dynamic units that interact over a network; by that I mean they have states that change over time. For example, here is a sensor network, here is a formation network, and here is a robot composed of different components. These networks can be large-scale and spatially distributed, so we usually need some kind of decentralized mechanism to spread each node's information to everybody else. Average consensus algorithms are the most commonly used protocols for spreading one's information. Here we have the simplest form of it: vertex i has a state and updates that state according to the states of its neighbors and itself. This is easy to understand: if a node's value is lower than its neighbors', it increases its state. Here w_ij is just the edge weight, and this epsilon is the step size. Usually we also study the continuous version, the differential equation. Why do we care about the continuous version? Because it is simpler.
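The update rule just described can be sketched in a few lines; the path graph, unit weights, and step size below are my own illustrative choices, not from the talk.

```python
import numpy as np

# Discrete-time average consensus on a 4-node path graph (illustrative sketch).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0   # unit edge weights w_ij

x = np.array([0.0, 1.0, 2.0, 7.0])   # initial states
avg = x.mean()
eps = 0.1                             # step size; must be small enough for stability

for _ in range(2000):
    # x_i <- x_i + eps * sum_j w_ij (x_j - x_i): each node moves toward its neighbors
    x = x + eps * (W @ x - W.sum(axis=1) * x)

assert np.allclose(x, avg, atol=1e-6)   # all states reach the initial average
```

Note that the update is exactly `x <- x - eps * L x` for the graph Laplacian L, which is the discrete counterpart of the continuous dynamics discussed next.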
[00:04:00] There are some minor differences between them, but most of the time it is enough to study the continuous version, so from now on I will talk about the continuous formulation. But in reality, systems are always subject to noise, disturbances, or some unmodeled dynamics, so we add a noise process here; usually this is modeled by white noise, which is a mathematical abstraction used in engineering. In this way we can write the dynamics compactly: the time derivative of the vector x equals minus the Laplacian matrix times x, plus the process-noise vector. The Laplacian is defined as the degree matrix minus the adjacency matrix: on the diagonal it has the degrees, and the off-diagonal entries are the negated adjacency entries. This matrix can also be written as the sum of the contributions of the edges, one term per edge; for example, here it can be written as the sum of these three matrices. Each vector b_e has only two nonzero entries, corresponding to the two end vertices of the edge, to which we assign +1 and -1 arbitrarily. From this we see that the Laplacian is positive semidefinite, so we can define its pseudoinverse by inverting only the nonzero eigenvalues.
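The Laplacian construction just described, L as a sum of rank-one edge terms, can be checked numerically; the small example graph is my own.

```python
import numpy as np

# The Laplacian L = D - A equals the sum of b_e b_e^T over edges, where b_e has
# a +1 and a -1 at the edge's two endpoints (illustrative 4-node graph).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
L = np.zeros((n, n))
for u, v in edges:
    b = np.zeros(n)
    b[u], b[v] = 1.0, -1.0
    L += np.outer(b, b)          # each term is PSD, so L is PSD

eigvals = np.linalg.eigvalsh(L)
assert eigvals.min() > -1e-9     # PSD: all eigenvalues nonnegative
assert abs(eigvals[0]) < 1e-9    # connected graph: exactly one zero eigenvalue
assert eigvals[1] > 1e-9

# Pseudoinverse: invert the nonzero eigenvalues only
Ldag = np.linalg.pinv(L)
assert np.allclose(L @ Ldag @ L, L)
```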
[00:06:04] So how does this system evolve? Here is a simulation. We can see that the average of the whole system drifts over time, like a random walk in one-dimensional space. The different colors indicate different nodes; all the nodes bunch near the current average. We want to study this formally, so here again is the equation. It has been shown that, under some conditions, x(t) is a Gaussian process; the condition is that the initial state is a constant or is sampled from a Gaussian distribution. The expectation of the state vector and the covariance matrix are defined in the usual way, and it has been shown that these statistics follow deterministic differential equations, so it is enough to study those equations to characterize the behavior. For this system we find that the expectation has a steady state: it converges to the average value of the initial state. But from this equation we can see that the covariance matrix is unbounded in time; by unbounded I mean that some quadratic form of it grows without bound. We can consider the quadratic form corresponding to the all-ones direction and find that its growth is always positive.
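A minimal numerical check of the unbounded-covariance claim, assuming unit-intensity noise and a path-graph Laplacian of my own choosing: integrate the covariance ODE and watch the variance along the all-ones "consensus" direction grow linearly.

```python
import numpy as np

# Integrate dSigma/dt = -L Sigma - Sigma L + I (unit-intensity white noise,
# Sigma(0) = 0) with forward Euler, and check that the variance along the
# all-ones direction grows as n * t, i.e. the average behaves like a random walk.
n = 4
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])   # path-graph Laplacian (illustrative)
ones = np.ones(n)

Sigma = np.zeros((n, n))
dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    Sigma = Sigma + dt * (-L @ Sigma - Sigma @ L + np.eye(n))

# Along the consensus direction, L contributes nothing (L @ 1 = 0), so
# 1^T Sigma(t) 1 = n * t exactly: unbounded growth in time.
assert abs(ones @ Sigma @ ones - n * T) < 1e-2
```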
[00:08:02] So, having seen that behavior of the system, we can define another process y(t) as the deviation of every node's state from the average of the system. Again this is a Gaussian process; we define its expectation and covariance matrix, which follow these equations. This time we do get a steady state for both the expectation and the covariance matrix: the steady state of the covariance matrix satisfies this equation, which is called a Lyapunov equation in control theory. Given these equations, what do we care about when we optimize the system? First of all, we care about the convergence rate, which is governed by the algebraic connectivity of the graph, as we can see from this equation. And we care about the steady-state variance, which is defined as the sum of the variances of each node; this matrix is just the solution of that equation, it is the covariance. [Audience question, partially inaudible.] I will get there: we can design the nodes to control it. Also, from this covariance matrix, since we know it is a Gaussian process, at any given time we have a distribution, and the determinant of the covariance gives a volume.
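A sketch of the Lyapunov equation for the deviation process, under the standard assumptions (unit-intensity noise, symmetric Laplacian); the 4-cycle graph is my own example. The closed form S = (1/2) L^+ is a known consequence, so the coherence is half the trace of the pseudoinverse.

```python
import numpy as np

# The steady-state covariance S of the deviation-from-average process solves
# the Lyapunov equation L S + S L = P, where P = I - (1/n) 1 1^T is the
# projection off the consensus direction. Closed form: S = (1/2) L^+.
n = 4
L = np.array([[ 2., -1., -1.,  0.],
              [-1.,  2.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1., -1.,  2.]])   # 4-cycle Laplacian (illustrative)
P = np.eye(n) - np.ones((n, n)) / n
S = 0.5 * np.linalg.pinv(L)

assert np.allclose(L @ S + S @ L, P)   # S solves the Lyapunov equation

# Network coherence: sum of steady-state variances = (1/2) tr(L^+).
# The 4-cycle has Laplacian eigenvalues 0, 2, 2, 4, so this is (1/2)(1/2+1/2+1/4).
coherence = np.trace(S)
assert np.isclose(coherence, 0.625)
```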
[00:10:21] This is an uncertainty volume, and it is given by the determinant. By the Kirchhoff matrix-tree theorem, we know this determinant is related to the number of spanning trees of the graph. Given the number of nodes, the other factors are fixed, so the network structure affects only this part. This is the quantity we want to optimize; sometimes one defines the network entropy as the negative log of the number of spanning trees. Here is another kind of system, called a leader-follower system. In such a system some nodes are controlled, so they have some assigned value. In applications this has different meanings: for example, in opinion networks these nodes can be opinion leaders with some fixed value, 0 or 1, meaning they hold polarizing opinions. The followers just update their states according to their neighbors, whether leader or follower. This is the equation for such a system. The simplest case is that all the leaders have the same state; we can discuss leaders with different states, but it starts from here. The followers update their states according to the states of the followers and the leaders; since here the leaders have a value of 0, the system satisfies this equation.
[00:12:28] The whole equation is given here; again we have the noise vector. So we obtain the dynamics for the follower nodes, given by this equation, and again we study the convergence rate, the total energy, which is the sum of the variances, and the uncertainty volume of the covariance. OK, that was the background, so now I will give the definition of the problems and our approaches. The first set of problems is leader selection: given a graph G, we want to choose a subset of at most k vertices as leaders such that, for the rest of the nodes, we form the submatrix of L whose rows and columns correspond to the remaining nodes, and we optimize quantities of that matrix: its smallest eigenvalue, the trace of its inverse, and the determinant of its inverse. [Audience question about the determinant objective, partially inaudible.] It is the graph in which you combine all the leaders. The submatrix consists of all columns and rows corresponding to the rest of the nodes; you delete those of the leaders.
[00:14:43] In the determinant case, it is the number of spanning trees in the graph in which you combine all the leaders into a single vertex. [Audience comment.] Yes, this is related to some column and row selection problems; I will discuss those near the end of the talk. Here I will talk about why and how to minimize the trace of the inverse. The problem is defined as: given G, choose k nodes as leaders such that, for the rest of the nodes, the trace of the inverse of the submatrix is minimized. When the leader set has only one vertex, this is actually the sum of effective resistances from all other nodes to the leader vertex; when it has many nodes, it is the sum of effective resistances to the combined leader. We don't really use this interpretation in the algorithm, but it is a good way to think about the problem. The effective resistance between s and t is defined as the voltage difference between s and t when a unit current is injected from s to t, and we have this expression in terms of the Laplacian pseudoinverse. This has been known for a long time, and it is also related to the commute time, by a factor of 2m. The algorithms I talk about today all rely on these properties.
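The single-leader interpretation above can be verified numerically: with one leader s, the trace of the inverse of the grounded submatrix equals the sum of effective resistances from every follower to s. The small graph is my own example.

```python
import numpy as np

# Check: tr(L_FF^{-1}) = sum over followers t of R(s, t), for a single leader s.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

Ldag = np.linalg.pinv(L)

def eff_res(s, t):
    # R(s, t) = (e_s - e_t)^T L^+ (e_s - e_t)
    b = np.zeros(n); b[s], b[t] = 1.0, -1.0
    return b @ Ldag @ b

s = 0
followers = [i for i in range(n) if i != s]
L_FF = L[np.ix_(followers, followers)]        # grounded (follower) submatrix
lhs = np.trace(np.linalg.inv(L_FF))
rhs = sum(eff_res(s, t) for t in followers)
assert np.isclose(lhs, rhs)
```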
[00:17:06] Namely, monotonicity and supermodularity: these properties give the greedy algorithm a 1 - 1/e approximation ratio, which is a classic result. We first show that the trace of the inverse of the submatrix is monotone and supermodular. [Audience: do you prove that?] Yes, we prove it. The monotonicity is fairly straightforward intuitively, but we were aiming for a different approach here. The supermodularity is also known from the commute-time perspective, but here we give our own approach, which has a stronger result: we actually show that the inverse of the submatrix is entrywise supermodular. Why is this useful? Because it implies that the trace is also supermodular, and the entrywise statement can be used for other objectives as a function of the leader set S as well. [Audience discussion, partially inaudible, about how each entry of the inverse is a set function of S: for a fixed S we take the submatrix, invert it, and every entry of the inverse is a function of S.]
[00:19:54] [Continued audience discussion, partially inaudible: given the set S, we fix the submatrix, take the inverse, and every entry of it is a function of S; for example the (1,1) entry, where that index corresponds to a follower, not a leader.] OK, so how do we do this? We first define this matrix-valued function of an indicator vector e. On the diagonal it is the same as the Laplacian matrix, and the off-diagonal entries are defined with factors of one minus the entries of e, where e is a vector with one entry per node. This matrix has the following form: if e_i equals 1, we keep only the diagonal entry here, which is given by the Laplacian.
[00:22:35] And if not, we have zeros here, because these entries are multiplied by zero. If this e is the indicator vector of who the leaders are and who the followers are, then this block is the same as the follower submatrix. We then consider the change of the inverse of this matrix. [Audience question; clarification: this is defined for the whole graph, and e is an n-dimensional vector.] The change of the inverse matrix can be written as this integral, and after expanding it we find that each of these matrices is entrywise nonnegative and monotone; this gives the monotonicity. For the supermodularity of the matrix function, similarly we can find that for each of these parts the marginal change is smaller for a set T that includes S; and this part is not related to S, it just holds for all pairs of vertices. This gives the entrywise supermodularity that we need. [Audience questions about the notation, partially inaudible: this part is L_FF, and each row and column of it is indexed by a node.]
[00:25:32] [Continued audience discussion about indexing, partially inaudible: every row and column is assigned a node, and the diagonal is just the degrees.] OK. This problem was proved NP-hard in a prior paper; the proof follows from a reduction on a particular family of graphs, so I won't explain it here. Once we have the monotonicity and supermodularity, we have this deterministic greedy algorithm. For the first step we don't have those properties, but we still choose the best choice, the node that minimizes the sum of effective resistances to it; remember I said these quantities are equal. In every following step we choose the node that gives the biggest marginal gain and include it in the leader set. Once we know the best choice, we update the inverse of the submatrix using the block-wise inverse formula, and the marginal gain is derived directly from that equation. This gives a deterministic greedy algorithm.
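A plain version of the exact greedy just described, as a sketch: pick the node minimizing tr(L_FF^{-1}), then repeatedly add the node with the biggest marginal gain. For simplicity this recomputes the inverse each round, whereas the talk's version maintains it with block-inverse updates; the path graph is my own example.

```python
import numpy as np

def greedy_leaders(L, k):
    """Exact greedy leader selection minimizing tr(L_FF^{-1}) (illustrative)."""
    n = L.shape[0]
    S = set()

    def cost(S):
        F = [i for i in range(n) if i not in S]
        return np.trace(np.linalg.inv(L[np.ix_(F, F)]))

    for _ in range(k):
        # choose the node whose addition gives the biggest marginal gain
        best = min((i for i in range(n) if i not in S),
                   key=lambda i: cost(S | {i}))
        S.add(best)
    return S

# 5-node path graph 0-1-2-3-4
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# On a path, the center minimizes the sum of effective resistances to it
assert greedy_leaders(L, 1) == {2}
```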
[00:29:06] By the monotonicity and supermodularity, we know it has an approximation ratio which is essentially 1 - 1/e, with these extra factors. In order to accelerate this, we want to compute the marginal gains, and in the first step the sums of effective resistances, fast. The algorithmic ingredients we use are fast SDDM (Laplacian) solvers and the Johnson-Lindenstrauss transform, which maps the problem to a lower-dimensional space so that we can solve fewer systems of equations; the result is again a greedy algorithm, now approximate. The solver we use takes an SDDM matrix M and a vector b and returns an x that is close to M^{-1} b, with this error parameter; in terms of complexity it is nearly linear, up to log factors. And this is the random projection we use: instead of solving on the order of n or m systems of equations, we only need to solve this logarithmic number of them to approximate the desired quantities. This is the lemma we use to estimate the marginal gains: originally we would compute these quantities for every candidate node; now we approximate them by approximately solving for the rows of these matrices.
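A sketch of the Johnson-Lindenstrauss trick for effective resistances in the Spielman-Srivastava style: since R(u, v) is a squared norm of a vector depending on L^+, a random sign projection with few rows preserves all pairs up to 1 +/- epsilon. The talk pairs this with a fast SDDM solver; here I use a dense pseudoinverse and a deliberately large projection dimension just to illustrate the estimator, on a graph of my own choosing.

```python
import numpy as np

# R(u, v) = || B L^+ (e_u - e_v) ||^2 for the edge-vertex incidence matrix B
# (unit weights), so a random +/-1 projection Q approximately preserves it.
rng = np.random.default_rng(0)
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
m = len(edges)
B = np.zeros((m, n))                  # edge-vertex incidence matrix
for idx, (u, v) in enumerate(edges):
    B[idx, u], B[idx, v] = 1.0, -1.0
L = B.T @ B
Ldag = np.linalg.pinv(L)

k = 500                               # projection dimension (large, for a tight check)
Q = rng.choice([-1.0, 1.0], size=(k, m)) / np.sqrt(k)
Z = Q @ B @ Ldag                      # k x n; the fast version gets this via k solver calls

def exact(u, v):
    b = np.zeros(n); b[u], b[v] = 1.0, -1.0
    return b @ Ldag @ b

def approx(u, v):
    return np.sum((Z[:, u] - Z[:, v]) ** 2)

for (u, v) in [(0, 3), (1, 4), (2, 5)]:
    assert abs(approx(u, v) - exact(u, v)) <= 0.3 * exact(u, v)
```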
[00:31:07] So we solve three times this number of systems, with this error parameter, and in a similar way we can compute the sums of effective resistances. This gives us the approximate greedy algorithm: in the first line we evaluate, for every node, the sum of effective resistances to it, and choose the best one as the leader; then we repeat the following steps, each time choosing the node that gives the biggest marginal gain. In the end we obtain a nearly-linear-time algorithm with an approximation ratio that is essentially 1 - 1/e, up to these factors depending on k and the error parameter. [Audience question.] You start with the best choice for the first node because in the first step you don't have monotonicity and supermodularity; adding the first leader actually brings the value of the trace down from infinity. We studied this in terms of statistics of control systems, but the same quantity is also studied in social networks, where it is called current-flow closeness centrality, or information centrality, so I won't go through the definition; it is essentially defined as n over the quantity we study, so minimizing ours is equivalent to maximizing that centrality. So we did some experiments.
[00:33:26] In the experiments on maximizing this centrality, we see that the greedy algorithms, both the exact and the approximate one, perform well against the random baseline; this is the optimum, and sometimes they overlap with it, while both clearly outperform the random strategy. Here are some experiments on real-world networks, and these are experiments on larger cases where we don't have the optimum, but the algorithms clearly outperform the other heuristics. This is the running time of the algorithms: these graphs have several thousand nodes and edges, so the exact greedy can still deal with them, while these graphs have millions of nodes and edges, so only the approximate greedy can solve them efficiently. We ran all these experiments on a single thread to compare the speeds, and you can see that the approximate greedy actually runs faster, and the results stay close to the exact ones on the small graphs. [Audience question: is that the exact greedy?] It is the exact greedy; it is not a unique optimum. And that is not the approximation ratio, that is just the ratio between the two. OK, so now, this is the second set of problems we study. It is called edge addition: given a connected graph G, we want to add a set of edges to it, an additional set of edges Q that we add to the graph G.
[00:35:34] We add the edges such that the performance measures of the system are optimized; now it is not just a submatrix, it is the whole matrix, the system without leaders. First of all we study the convergence rate, which is given by the second eigenvalue of the Laplacian; the total energy, which is the sum of the variances of all the nodes and is given by the trace of the pseudoinverse; and the uncertainty volume, which is given by the determinant of this matrix. Recall that this determinant is related to the number of spanning trees. [Audience question: is Q bounded by k?] Yes, we add at most k edges; that is one of the constraints. So for a graph G we add a set of edges Q, subject to a cardinality constraint, and we want to maximize the number of spanning trees of G + Q, which is defined as the graph obtained by adding the edges of Q to G. We give an algorithm that runs in time nearly linear in n plus the number of candidate edges, up to factors of 1/epsilon, and it gives an approximation ratio of (1 - 1/e)(1 - epsilon), so close to 1 - 1/e. We also give a hardness result for this, which says there exists a constant c
such that you cannot approximate within that absolute constant. [00:37:54] [Audience: is the constant given explicitly?] It is in the paper; well, we did not give it explicitly, we give it by bounding a few different parameters we rely on, but it is a constant. Now, the weighted number of spanning trees is defined as the sum of the weights of all possible spanning trees of the graph, where the weight of every spanning tree is the product of its edge weights. From Kirchhoff's matrix-tree theorem we know it is the determinant of any (n-1) times (n-1) principal submatrix of the whole Laplacian, so we just fix one arbitrarily. From the matrix determinant lemma, which is the rank-one update formula for the determinant, we know that the number of spanning trees in the new graph after adding an edge is the number of spanning trees in the original graph times one plus the weight of the edge times the effective resistance between its endpoints u and v before you add that edge. Taking logs, this is the relation we use.
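The matrix-tree theorem and the determinant-lemma update above can be checked together on a small example of my own, a 5-cycle plus one chord.

```python
import numpy as np

def spanning_trees(L):
    # Kirchhoff matrix-tree theorem: delete any one row and column, take the determinant
    return np.linalg.det(L[1:, 1:])

# Adding edge (u, v) with weight w multiplies the (weighted) number of spanning
# trees by 1 + w * R(u, v), with R the effective resistance BEFORE the addition:
# the matrix determinant lemma applied to L + w b b^T.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle: 5 spanning trees
L = np.zeros((n, n))
for a, c in edges:
    L[a, a] += 1; L[c, c] += 1
    L[a, c] -= 1; L[c, a] -= 1

u, v, w = 0, 2, 1.0
b = np.zeros(n); b[u], b[v] = 1.0, -1.0
R = b @ np.linalg.pinv(L) @ b        # effective resistance before adding (u, v)

L_new = L + w * np.outer(b, b)
assert np.isclose(spanning_trees(L), 5.0)
assert np.isclose(spanning_trees(L_new), spanning_trees(L) * (1 + w * R))
```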
[00:39:56.05] It is monotone because this term is always non-negative, and submodular because if we add more edges to the graph, this effective-resistance term can only decrease.

[00:40:12.15] In order to get a fast algorithm we want to approximate these quantities, so we first show that it is enough to approximate the effective resistances in order to approximate the whole objective; for this we need these two results.

[00:40:39.21] This is the fast submodular greedy routine that we use. First of all we find the entry that has the largest marginal gain, and we define the threshold as shown; then in every iteration we decrease this threshold and add all the entries whose marginal gain is larger than the threshold. This algorithm uses this number of queries, where a query asks for the marginal gain of one candidate edge, so this is the number of effective-resistance queries needed to get this approximation ratio. If we can answer these queries fast enough, then we get an algorithm faster than running the standard greedy for k rounds.

[00:42:02.08] In order to do this we need the notion of the Schur complement. What we really need here is that the Schur complement is a graph supported on a subset of the vertices.
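The threshold routine can be sketched as below — a generic decreasing-threshold greedy for a monotone objective, not the talk's exact pseudocode; the `gain` oracle and the toy weights are hypothetical stand-ins for the marginal-gain queries:

```python
def threshold_greedy(candidates, gain, k, eps):
    """Pick up to k elements by sweeping a geometrically decreasing
    threshold; add every element whose marginal gain clears it."""
    S = []
    d = max(gain(S, e) for e in candidates)  # largest single-element gain
    tau = d
    while tau > (eps / len(candidates)) * d and len(S) < k:
        for e in candidates:
            if e not in S and len(S) < k and gain(S, e) >= tau:
                S.append(e)
        tau *= (1 - eps)  # lower the threshold for the next round
    return S

# Toy modular objective: the gain of an element is a fixed weight,
# so the routine should simply pick the k heaviest elements.
weights = {"a": 5.0, "b": 3.0, "c": 1.0, "d": 4.0}
picked = threshold_greedy(list(weights), lambda S, e: weights[e], k=2, eps=0.1)
print(sorted(picked))  # → ['a', 'd']
```

The point of the threshold sweep is that each round makes one pass over all candidates instead of re-ranking them k times, which is what makes batching the effective-resistance queries per round possible.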
[00:42:16.15] So the Schur complement is supported on a subset C of the vertex set: it is like a graph on this subset that maintains the effective resistances between all pairs in that vertex set exactly.

[00:42:45.18] But the Schur complement tends to be a dense graph, so we use an approximate Schur complement to make it sparse. It has this number of nonzero entries, which are the edges of the graph, and it gives us this approximation — here SC is the Schur complement of the Laplacian onto the vertex set C — and it can be computed in nearly linear time.

[00:43:22.23] So this is the algorithm: in every round, that is, for every threshold, we check all q candidate edges to decide whether to add each of them to the network in this round. We divide all the candidate edges evenly into two sets F1 and F2, so that each one contains q/2 edges. We find the vertex sets V1 and V2 of the endpoints of the edges in F1 and F2, and if one of them satisfies this size bound, we take the Schur complement onto that vertex set to get the approximate Schur complement. Then we recurse on these edge sets...
[00:44:42.16] ...such as F1, and the recursion returns the edges that we need to add; we add these edges to the graph and then do the same thing for the second part, the rest of the edges. Finally we return all the edges that we need to add to the graph. We keep recursing until q equals 1, and then we can again compute an approximate Schur complement to get, essentially, a 2 × 2 matrix, which gives us the effective resistance between the two endpoints of this edge. In this way we get all the effective resistances computed.

[00:45:46.21] So this is the recursion tree: here we have q edges, and every time we divide by half. We set the error parameter on every node of the recursion tree to ε over log q, because the depth of the tree is log q, so that on every leaf vertex we can guarantee that the error is at most ε; and the running time is this for every round.

[00:46:21.23] So this algorithm succeeds with high probability, by a union bound. Here is the algorithm: again, this is a greedy where we use a threshold to decide which edges to add.
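The key fact used in this recursion — that the Schur complement onto a vertex subset is again a graph Laplacian and preserves effective resistances between the kept vertices exactly — can be checked numerically. The graph and subset below are chosen for illustration, and this is the dense exact Schur complement; the talk's algorithm uses sparse approximate Schur complements instead:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0; L[v, v] += 1.0
        L[u, v] -= 1.0; L[v, u] -= 1.0
    return L

def eff_resistance(L, i, j):
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

n = 6
L = laplacian(n, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)])

C = [0, 2, 4]   # vertices to keep
F = [1, 3, 5]   # vertices to eliminate
# Exact Schur complement onto C: L_CC - L_CF L_FF^{-1} L_FC
Ls = (L[np.ix_(C, C)]
      - L[np.ix_(C, F)] @ np.linalg.inv(L[np.ix_(F, F)]) @ L[np.ix_(F, C)])

r_full = eff_resistance(L, 0, 4)                      # in the whole graph
r_schur = eff_resistance(Ls, C.index(0), C.index(4))  # in the Schur complement
print(r_full, r_schur)  # equal up to floating-point error
```

Because elimination preserves resistances exactly, the only error in the algorithm comes from sparsifying the complements, which is why the per-level error budget ε/log q suffices.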
[00:46:41.23] The greedy runs for roughly log(1/ε)/ε rounds, and in every round we use this subroutine to decide whether to add each edge to the graph in this round.

[00:47:04.18] This gives us an algorithm that runs in nearly linear time in the number of edges, where m is the number of edges in the original graph and q is the number of candidate edges in the candidate set, and it achieves this approximation ratio.

[00:47:18.19] We also have a hardness proof. This is the instance that we use for the reduction: for a graph that consists of a star graph plus a clique supported on its leaves, we know that the number of spanning trees is maximized if we add a Hamiltonian path here, but obviously that is hard. So we can reduce the Hamiltonian path problem to our problem.

[00:48:02.14] Actually, we were able to get a somewhat stronger result: we get a reduction from (1,2)-TSP to the minimum path cover problem, and from that to maximizing the number of spanning trees. Do you want me to expand this a bit, or can I skip this part to save time? I think so. So the result is that there is a constant such that we cannot approximate within this constant.

[00:48:46.01] [Audience question, inaudible.]
[00:49:06.03] No — this c is some small number. Ok. Yeah, I think that's interesting.

[00:49:26.17] Ok, so I will conclude by comparing with some related works. This is the setting for leader selection: given a graph G, we choose a subset of at most k vertices such that, for the rest of the nodes — for the corresponding submatrix of the Laplacian — we want to maximize the smallest eigenvalue or minimize the trace. I am not aware of any algorithm with guarantees for the smallest-eigenvalue problem. For the trace, this is the previous work that we improve upon, and this is our result. This is actually the largest-simplex problem in some sense — in the sense that you are choosing the followers — so I have a slightly different matrix here. It gives this strange bound, but I think with some more analysis it may be possible to do better; I don't want to promise, though.

[00:50:54.22] The edge-addition problem obviously has more related work, from experimental design and so on. So far ours is the only algorithm that we know of for an arbitrary k; it gives this bound, and this problem has been shown to be NP-hard. This is our result, and these are results for the general experimental-design problem. I am sure there are more recent results that I did not include here, but that is roughly the context.

[00:51:44.06] So, conclusion: we give an algorithm and a hardness result for minimizing the trace, and a nearly-linear-time algorithm and a hardness result for maximizing the determinant, the weighted number of spanning trees. As for future directions, here are some of the things we can think about. First of all, to improve this — whether we can maintain these quantities for every candidate efficiently — and we also want to improve the approximation ratio. Also, the hardness results are not tight for now. And these techniques can find applications in social-network optimization and sparse control design, where every problem has some different things to consider. Ok, thank you.

[00:52:56.19] [Applause; audience question, inaudible.]

[00:53:29.16] Well, let me just follow up on that. Yes, yes, I think...
[00:53:52.17] All of these works are based on convex optimization and then rounding — the differences are mostly in how the rounding is done to get these solutions — so these results are all based on convex optimization.

[00:54:17.01] So, ok, this result says that you can get a (1 + ε) approximation ratio if the number of edges that you add is bigger than this number, so it is not for arbitrary k. And this one is for any k, but... [Audience exchange, partly inaudible.]

[00:55:03.14] Yeah — you can figure that out. So with noise you cannot really say... well, if we do not consider noise, then we have a steady state for the whole system.

[00:55:35.04] [Audience question:] Is it possible to define something... So I have not thought that far. Yes — yes, for leader-follower systems: the state x is arbitrarily assigned for all the leaders, so we have these equations for the expectation and the covariance matrix. For the expectation we can see that it is not related to this; for the covariance we have this equation, and the convergence rate is given by this. So yeah, it is different.
[00:56:24.00] [Audience discussion, inaudible.] I haven't thought about that.