tag:blogger.com,1999:blog-3722233.comments2017-01-21T01:31:48.493-05:00Computational ComplexityLance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.comBlogger21282125tag:blogger.com,1999:blog-3722233.post-57373862019290686312017-01-19T09:42:03.115-05:002017-01-19T09:42:03.115-05:00Dear Lance,
The fact you're referring to is ...Dear Lance, <br /><br />The fact you're referring to is proved as Prop 1.14 in Levin, Peres and Wilmer's book "Markov Chains and Mixing Times" (2nd edition). They show the slightly more general fact that if <br /><br />\pi(y) = E_z (number of visits to y before returning to z),<br /><br />where E_z denotes the expectation when the chain begins from z, then \pi(y) is a stationary measure for the transition matrix P (assuming it is irreducible and the state space is finite).<br /><br />To make a probability distribution out of this they divide by the expected return time from z to z. In particular, \pi(z) = 1 (the chain visits z exactly once, at time 0, before its first return), so the normalized stationary probability of z is 1/E_z(first return time to z), which proves the result.Amitabhahttp://www.blogger.com/profile/09342604891643593133noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-40396996469903976172017-01-18T07:52:35.739-05:002017-01-18T07:52:35.739-05:00The issue is simply this: anytime a person of colo...The issue is simply this: anytime a person of color is attributed with any major contribution, comments diminishing it soon follow. Exhibit A: the so-called Pythagorean Theorem, named after a Greek mathematician who learned it from the Egyptians. Start there and go to Benjamin Banneker, a free Black mathematician who helped lay out Washington DC from memory after the French surveyor left. What subway station is named after him? "L'Enfant Plaza," not Banneker. Hmmm. Racism is a cancer that America in its bipolar DNA can't cure. Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-33056326110977579872017-01-17T19:25:32.631-05:002017-01-17T19:25:32.631-05:00I find it hard to believe an official transcript w...I find it hard to believe an official transcript wouldn't have the name of the college.
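The stationary-measure fact quoted in the first comment above can be checked numerically. A minimal sketch, assuming NumPy is available; the 3-state transition matrix is an illustrative choice, not from the comment:

```python
# Numerical check of the fact quoted above:
# pi(y) = E_z(# visits to y before returning to z) is stationary for P,
# and after normalization pi(z) = 1 / E_z(first return time to z).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])  # an irreducible transition matrix (made up)

z = 0
others = [i for i in range(len(P)) if i != z]

# Fundamental matrix of the chain killed upon hitting z:
Q = P[np.ix_(others, others)]
N = np.linalg.inv(np.eye(len(others)) - Q)

mu = np.zeros(len(P))
mu[z] = 1.0                    # z is visited once, at time 0, before returning
mu[others] = P[z, others] @ N  # expected visits to y != z before the return

# Stationary distribution, solved directly from pi P = pi, sum(pi) = 1:
A = np.vstack([P.T - np.eye(len(P)), np.ones(len(P))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

expected_return = mu.sum()     # E_z(first return time to z)
print(np.allclose(mu / expected_return, pi))     # True
print(np.isclose(pi[z], 1.0 / expected_return))  # True
```

Here mu is the vector of expected visit counts before returning to z; dividing by its total (the expected return time) recovers the stationary distribution, matching the comment's conclusion.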
Can you name some examples?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-87497124614436565882017-01-17T03:11:20.382-05:002017-01-17T03:11:20.382-05:00The engineers in the movie were looking for theore...The engineers in the movie were looking for theoretical solutions rather than iterative numerical approximations - the reference to old math rather than new math. They wanted a human computer who could handle analytic geometry. When she made the suggestion, light bulbs went off for the other engineers. She made the intuitive leap. Unlike students who studied Runge-Kutta and wrote programs in the early 60s, the IBM 7090 was not online at NASA until 1961, and Fortran programming on the IBM machines only became available in 1960. In the time frame of this movie, writing Fortran code on IBM machines was not the widespread common practice it would soon become. A year or two makes a big difference in the transition from innovative application to common practice. Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-9445410404285280862017-01-16T12:36:34.341-05:002017-01-16T12:36:34.341-05:00I would think having their project decided and eve...I would think having their project decided and even started a little before they show up would make sense, but my question is whether you see students taking too long to get started on their eventual project and then ending up getting less done. If you do, then set the expectation that they all show up ready to start, and have everyone make some contact before coming to make that happen.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-10872981385764403642017-01-15T20:29:18.960-05:002017-01-15T20:29:18.960-05:00In the movie it wasn't that she was the only o...In the movie it wasn't that she was the only one who knew about it; she was the one who thought outside the box to use it on this problem. She actually went and got a book off the shelf about it to check herself and help solve the problem.
I am sure everyone knew about it; they just didn't think to use it to calculate the coordinates.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-83177329708958491632017-01-14T18:50:41.459-05:002017-01-14T18:50:41.459-05:00Are you saying that while it's good for people...Are you saying that while it's good for people to know how an African American woman was working for NASA doing math, that if racism or sexism or any number of other things had stood in the way of her having that job that someone else with a math background would have gotten it and also been able to do what she did?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-24649506091065926702017-01-11T20:07:36.148-05:002017-01-11T20:07:36.148-05:00There is no problem in having interacting machiner...There is no problem in having interacting machinery. For example, our "Turing machine" (or computer, if you like) can call an arbitrarily complex function by pasting the name of the function into a cell of the tape, and waiting for the answer to arrive in another cell. The processing will be provided by other machinery using these two cells of the tape as shared memory.<br /><br />There has been intense interest in concurrent processing since the late 1960's (Dijkstra, Wirth, for example). In 1964, three men spent a weekend at a motel developing a concurrent operating system for a computer with 4K bytes of memory. They included the ex-Xerox and Microsoft luminary, Charles Simonyi (then 16 years old), and Per Brinch Hansen, famous in the 1970's for his work on concurrent programming.<br /><br />For this reason, it seems odd to me that Wegner has more recently been promoting interactive computing as a new paradigm. Wegner is another luminary from the early 1970's and is extremely well informed on the Turing-completeness of computing devices.<br /><br />For me, Wegner's comments serve as a coded attack on managerialism.
A coded cry against the horrors of bureaucracy.<br /><br />Richard Mullins<br />Brisbane, Australia<br />portal1943@gmail.com<br />wayzata1944@gmail.com<br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-47130789404552734592017-01-11T00:45:36.992-05:002017-01-11T00:45:36.992-05:00The unexpected power of interaction...The unexpected power of interaction...Unknownhttp://www.blogger.com/profile/16095579069590680426noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-56464348992097144582017-01-09T21:04:48.242-05:002017-01-09T21:04:48.242-05:00I saw the movie yesterday. As a white Southerner ...I saw the movie yesterday. As a white Southerner I appreciated the struggle that blacks had to endure in the South, which was doubly difficult for black women. That said, the only unbelievable part of the movie was the claim that only Katherine Johnson knew anything about the Runge-Kutta Method, or more specifically, Euler's Method, which is the reference in the movie. When I heard that in the movie, memory bells went off.<br /><br />If you have two functions and they intersect somewhere, the iterative Runge-Kutta Method is a simple way of getting increasingly accurate approximations of the coordinates where they intersect. The intersection in the movie would have been the re-entry point for the space capsule, which was the intersection of the elliptical orbit with the parabolic descent path. Basically I wrote the same code as an undergraduate Chemistry student in the early 1960s on the vacuum tube IBM computer at Florida State University, using the Fortran language that was used by the women in the movie. Ironically I was coding it at about the same time as the time frame of the movie.
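For reference, the method named in the comment above, the classical fourth-order Runge-Kutta scheme, fits in a few lines (Python here rather than the Fortran of the era; the test equation y' = y is an illustrative choice, not from the comment):

```python
# Classical fourth-order Runge-Kutta (RK4) integration of y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1 gives y(1) = e, accurate to about h^4:
approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(abs(approx - 2.718281828459045) < 1e-8)  # True
```

The global error shrinks like the fourth power of the step size, which is why the method was (and remains) a workhorse for trajectory calculations.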
Everyone studying applied mathematics at that time would have known the Runge-Kutta Method.Mason Kelseynoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-85248923307130912452017-01-06T17:18:29.821-05:002017-01-06T17:18:29.821-05:00Re "how much of an art machine learning still...Re "how much of an art machine learning still is" - a Deep Neural Network has a lot of engineering choices to make, not just gradient descent methods and threshold functions but the large-scale architecture of the connections among layers, whether there's a CNN, an RNN, an LSTM, etc. In comparison, SVMs, Random Forests, and regression have just a handful of hyper-parameters to tweak.Mitchhttp://www.blogger.com/profile/06352106235527027461noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-35322944953089123012017-01-06T16:08:51.129-05:002017-01-06T16:08:51.129-05:00"Where were all these cool tools when I was a..."Where were all these cool tools when I was a kid?" - When you were a kid, the disgruntled gray hairs were envious of your tools like a keyboard and screen. <br /><br />When 'kids these days' are old they'll be marveling at the youngsters that seem to just twitch their eyelids to pilot their interstellar spacecraft.<br /><br />I'm not saying you have gray hair.Mitchhttp://www.blogger.com/profile/06352106235527027461noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-42777173142937252402017-01-06T13:08:38.333-05:002017-01-06T13:08:38.333-05:00As a follow-up, the above concrete computational c...As a follow-up, the above concrete computational considerations in regard to rank-jumping in tensor network representations are surveyed abstractly in <a href="http://www.scottaaronson.com/blog/?p=3095#comment-1725988" rel="nofollow">"Yellow Book" comment #91</a> on Scott Aaronson's <i>Shtetl Optimized</i> essay "My 116-page survey article on P vs.
NP" (of Jan 03 2017).<br /><br />As a followup, I will attempt to compose these two perspectives — abstract-with-concrete and Yellow Book-with-pragmatic — in a <i>MathOverflow</i> and/or <i>TCS StackExchange</i> question regarding "Yellow Book" descriptions of rank-jumping in practical computational simulations. Such attempted compositions — from me or anyone — can rightly be appreciated as tributes to a small-yet-vital community, namely the proprietors of mathematical weblogs.<br /><br />Math weblogs require of their proprietors a sustained personal commitment that (as it seems to me and many) crucially nourishes the vitality of the 21st century's diverse STEAM enterprises. In particular, math weblogs crucially nourish the hopeful enthusiasm of Yellow Book Era STEAM-students — hundreds of millions of 21st century STEAM-students, YIKES! :) — who will inherit and, as we can hope and even reasonably foresee, apply fresh Yellow Book understandings in marvelously extending our 21st century's great STEAM-enterprises. <br /><br />This New Year's appreciation of math weblogs, and heartfelt gratefulness for the sustained efforts of their oft-underappreciated proprietors, is therefore extended.John Sidleshttp://www.blogger.com/profile/16286860374431298556noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-89913104935874971582017-01-06T06:20:11.893-05:002017-01-06T06:20:11.893-05:00------
Lance asks "How many nodes should you...------<br /><b>Lance</b> asks "How many nodes should you have in your network? How many levels? Too many may take too long to train and could cause overfitting. Too few and you don't have enough parameters to create the function you need."<br />------<br />Algorithmic answers to these questions center upon the notion of "rank-jumping" (as at least some portions of the literature call it). <br /><br />Specifically in regard to the rank-jumping literature, a notably student-friendly multi-reference multi-example survey is Vin de Silva and Lek-Heng Lim's "Tensor rank and the ill-posedness of the best low-rank approximation problem" (<i>SIAM Journal on Matrix Analysis and Applications</i>, 2008).<br /><br />The de Silva/Lim survey has been concretely helpful (to me) in upgrading quantum simulation codes that, dynamically and adaptively, raise-and-lower the ranks of tensor representations. Algorithms that once were <i>ad hoc</i> evolve to be more nearly universal and natural (and stable too). <br /><br />Sweet! Hoorah for "Team Yellow Book"! :)<br /><br />Further suggestions in regard to this "Yellow Book" literature — whether in the language of "rank jumps" or "topological closure" or any other <i>GAGA</i>-esque terminology — would be welcome to me and many. It's been plenty challenging (for me at least) to reduce this literature's beautiful insights to concrete algorithmic practice.<br />John Sidleshttp://www.blogger.com/profile/16286860374431298556noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-82611918315635238932017-01-06T02:41:10.665-05:002017-01-06T02:41:10.665-05:00The idea behind convolution net is as follows: thi...The idea behind a convolution net is as follows: think of images and a box. Let's define a feature over the pixels in the box, like the existence of a vertical line, and have a neural network for it.
Now the location of this box doesn't matter for the feature, so if you are looking for vertical lines in an image, you can just use the same network for all locations: you can share the weights between the networks for the feature. This saves a lot of weights and makes training and inference practical.<br /><br />Many important papers in machine learning are about intelligent ways of saving computation time. You really don't want the number of computation steps to grow superlinearly with respect to network depth, input size, ...; that would make training and inference infeasible in practice. CNNs are the reason deep learning worked in practice and beat all previous algorithms in image recognition by a large margin. Machine learning requires a good deal of engineering to have practical algorithms that you can actually run and test; even constant factors matter.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-43069924497994512412017-01-06T02:12:46.103-05:002017-01-06T02:12:46.103-05:00https://arxiv.org/abs/1611.01578
https://arxiv.or...https://arxiv.org/abs/1611.01578<br /><br />https://arxiv.org/abs/1505.00521<br /><br />https://media.nips.cc/Conferences/2015/tutorialslides/wood-nips-probabilistic-programming-tutorial-2015.pdfAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-39317540753339580962017-01-05T23:21:51.586-05:002017-01-05T23:21:51.586-05:00Have you tried TF's playground?Have you tried <a href="http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.20483&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" rel="nofollow">TF's playground</a>?Yuvalhttp://yuvalpinter.comnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-1563602768827240062017-01-05T15:00:38.165-05:002017-01-05T15:00:38.165-05:00"Convolution nets has a special first layer t..."Convolution nets has a special first layer that captures features of pieces of the image."<br /><br />This is not correct. Convolutional neural networks have many convolutional layers, anywhere, not necessarily at the first layer. These layers exploit locality and translation invariance, two important properties of image-like data. Here is a stylized example showing how convolutional neurons can recognize higher-and-higher level abstractions, from edges through noses to faces:<br /><br />https://i.stack.imgur.com/Hl2H6.pngDániel Vargahttp://www.blogger.com/profile/10626372642275036283noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-70954635466084591932017-01-05T11:55:05.605-05:002017-01-05T11:55:05.605-05:00Small correction: recurrent nets represent time de...Small correction: recurrent nets represent time dependencies (or more generally, dependencies along any DAG), not feedback loops. 
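The correction above can be made concrete: running a recurrent cell over a sequence computes exactly the same function as a feed-forward composition of copies of that cell, one per input position. A minimal sketch with made-up scalar weights (not from any comment):

```python
# Illustration that a recurrent computation unfolds into a feed-forward one:
# looping a cell over T inputs equals composing T copies with shared weights.
import math

def rnn_cell(w_h, w_x, h, x):
    # one recurrent step: h' = tanh(w_h * h + w_x * x), scalars for simplicity
    return math.tanh(w_h * h + w_x * x)

def run_recurrent(w_h, w_x, xs):
    h = 0.0
    for x in xs:                 # the "recurrent" view: a loop over time
        h = rnn_cell(w_h, w_x, h, x)
    return h

def run_unrolled(w_h, w_x, xs):
    # the "unfolded" view: a depth-3 feed-forward net with shared weights
    h0 = rnn_cell(w_h, w_x, 0.0, xs[0])
    h1 = rnn_cell(w_h, w_x, h0, xs[1])
    h2 = rnn_cell(w_h, w_x, h1, xs[2])
    return h2

xs = [0.5, -1.0, 2.0]
print(run_recurrent(0.8, 0.3, xs) == run_unrolled(0.8, 0.3, xs))  # True
```

The unrolled net's depth grows with the input length, which is the O(input size) depth the correction mentions.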
Each computation of a recurrent net can be unfolded into a feed-forward net of depth O(input size).Fernando Pereirahttp://www.blogger.com/profile/05849361902113771573noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-3758496188441460342017-01-04T18:31:13.919-05:002017-01-04T18:31:13.919-05:00The simpsons did it: https://www.youtube.com/watch...The Simpsons did it: https://www.youtube.com/watch?v=w4zqR7GhrqQBilly Tehttp://www.blogger.com/profile/11164345728714720283noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-33937077260040506612017-01-04T10:04:06.715-05:002017-01-04T10:04:06.715-05:00AH- in terms of being testable, no, its not a pred...AH- in terms of being testable, no, it's not a prediction.<br />To make it into a testable prediction, pick parameters you care about (unemployment rate, inflation, global temp, number of mass shootings) and see if they go in a bad direction. <br /><br />If there were some measure of conflict of interest or corruption, then I would use that for my prediction. I think the economy will also go bad, so some parameter there also.<br /><br />Sorry- this is still not really a prediction.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-75677130666149765872017-01-03T14:14:00.282-05:002017-01-03T14:14:00.282-05:00I said it would get worse and worse. I did not say...I said it would get worse and worse. I did not say it should be banned. I don't know what the answer is.<br /><br />As for the New York Times vs Pizzagate:<br />The New York Times (and many other mainstream media) helped lead us into the Iraq War, which was pointless and led to ISIS. The Pizzagate story has only led (so far) to one deranged lunatic almost shooting up a pizza place.
Gee, if you want to give a provable example of how the NYT and others are fake news, the Iraq War is a far better example than anything about Russia.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-54168723021246039862017-01-03T13:50:19.494-05:002017-01-03T13:50:19.494-05:00Number 1 isn't really much of a prediction eit...Number 1 isn't really much of a prediction either....Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-16007078939583006452017-01-03T08:50:14.806-05:002017-01-03T08:50:14.806-05:00"Fake News will become worse and worse" ..."Fake News will become worse and worse" - wasn't it you who warned that the "fake news" argument might be used to silence (US) dissidents? Did you change your mind?<br /><br />Among the worst "fake news" are those coming from US propaganda machines such as the NYT, CNN and the like, which smear Russia.<br />Why are you silent about this? American patriotism?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-72735113778380151032017-01-02T16:01:34.195-05:002017-01-02T16:01:34.195-05:00So sad to see Aaronson's blog (and thinking) o...So sad to see Aaronson's blog (and thinking) overtaken by politics this past year. Now this. It's a helluva drug.carlhttp://www.blogger.com/profile/09598085988718342314noreply@blogger.com