
Post a Comment On: Backreaction

"The number-crunchers. How we learned to stop worrying and love to code."

23 Comments

Blogger Phillip Helbig said...

" I too recall life without Google, when “viral” meant getting a thermometer stuffed between your lips"

Or elsewhere.

4:49 AM, July 20, 2015

Blogger Phillip Helbig said...

At which observatory did she work?

4:49 AM, July 20, 2015

Blogger Phillip Helbig said...

Myers is a self-described “media ecologist”, which makes you think he’d have heard of search-engine optimization. Unfortunately, when queried for “gap generation”, it takes Google 0.31 seconds to helpfully bring up 268,000,000 hits for “generation gap.”

An even worse example (but not from media ecologists) is a band (actually a duo) called BOY. They pointed out in an interview that "Boy" is not googlable. Refining the search to "boy band" doesn't help much either. (Actually, the band BOY consists of two girls.)

Ah yes, search-engine optimization. I remember spam emails claiming "we have helped to place thousands of sites in the top 10".

5:53 AM, July 20, 2015

Blogger Sabine Hossenfelder said...

I believe it was Göttingen; at least that's where she lived later. But she moved a lot during the war and I might be confusing the story. (Will have to ask my mom.)

7:26 AM, July 20, 2015

Blogger Sabine Hossenfelder said...

Yep, my mom confirms it was Göttingen.

7:52 AM, July 20, 2015

Blogger Giotis said...

"If a computer came up with just the right string theory vacuum to explain the standard model and offered you the explanation that the world is made of strings to within an exactly quantified precision, what difference would it make whether the headline was made by a machine rather than a human? Wouldn’t you gain the exact same insight?"

It's the journey that is rewarding, not the actual destination. Imagine Ulysses travelling non-stop from Troy to Ithaka :-)

"As you set out for Ithaka
hope the voyage is a long one,
full of adventure, full of discovery.
[...]

And if you find her poor, Ithaka won’t have fooled you.
Wise as you will have become, so full of experience,
you will have understood by then what these Ithakas mean. "

http://www.cavafy.com/poems/content.asp?cat=1&id=74

9:43 AM, July 20, 2015

Blogger JimV said...

Elsewhere on the Internet I was having an argument in which one of the responses was that the (gravitational) N-body problem can't be solved by analytical functions but must be computed numerically. I had a similar thought to yours about the difference between analytical functions and numerically-calculated ones. I recalled that when I studied Trigonometry in high school I was given a pamphlet which contained tables of the sine, cosine, and tangent functions (and maybe a few others). We would look up values we needed in the tables and interpolate linearly to get intermediate values. In the computer-code implementations of sine and cosine that I have seen, there are polynomial curve fits for different regions of the curves, perhaps based on those same tabulated values in my pamphlet, which I think were calculated numerically (for the most part).
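
Something like this minimal Python sketch, my own toy illustration (the pamphlet's real tables were surely finer-grained):

    import math

    # Tabulate sine once per degree, like a printed trig table, then
    # interpolate linearly for values in between.
    STEP = math.radians(1.0)                          # one table entry per degree
    TABLE = [math.sin(k * STEP) for k in range(92)]   # 0..91 degrees; one spare entry

    def table_sin(x):
        """Sine on [0, pi/2] by linear interpolation between tabulated values."""
        i, frac = divmod(x / STEP, 1.0)
        i = int(i)
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

    x = math.radians(33.7)
    print(table_sin(x), math.sin(x))  # agree to about four decimal places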

So perhaps one could say that the solution of the simple, second-order ordinary differential equation of the harmonic oscillator must also be calculated numerically.

"Analytic function" means that the function's derivatives and integrals are known in terms of other analytic functions (or as a convergent power series), but I wonder, how do we know that we could not do that for the solutions of the N-body problem, given enough study of them? Isn't that how Bessel Functions and Trigonometric functions were developed and categorized? So I don't see a big difference between analytical functions and numerical calculations, except that we have studied the former more and know more about them.

The other thing your essay reminded me of was that my first boss at General Electric, Doris Clarke, began her career as a "computer". In her day, computers were people who sat in front of a mechanical calculator (a sort of large, noisy typewriter with number keys and function keys, that did mainly additions, subtractions, multiplications, and (slowly and reluctantly) divisions), received numbers from the person at a similar desk behind her, did further calculations with them based on the "program" that was running, and passed her results on to the next desk.

10:37 AM, July 20, 2015

Blogger Sabine Hossenfelder said...

JimV: yes, this is exactly what I mean!

10:43 AM, July 20, 2015

Blogger Uncle Al said...

1) HyperChem Lite, costing less than a textbook, calculates exact 3-D molecular structures in seconds with MM+. Terrifically wrong structures for carbonyls and methylidenes can occur. Verify hardware and software with wetware.

2) "If a computer came up with just the right string theory vacuum to explain the standard model and offered you the explanation that the world is made of strings to within an exactly quantified precision" physics and chemistry should still challenge exact vacuum mirror symmetry toward hadronic matter with an extreme chiral probe. Euclid is one of Thurston's eight primary geometries of 3-space. The other seven matter.

11:20 AM, July 20, 2015

Blogger nicolas poupart said...

Even stranger is when mathematics itself requires a computer to validate a theorem, as with the four-color theorem, the first of its kind. Such a proof widely divided the community of mathematicians: is it still mathematics when the demonstration itself is not verifiable by a human? The validation moves from verifying the theorem to validating the verification software.

As for thinking machines: in 2011, when IBM Watson became Jeopardy world champion, I realized it was the end of man's domination of thought. Though I know the theory, I never expected to see it in my lifetime. As soon as the machine can read and do math, good luck, humanity; how can 1 kHz compete with GHz?

11:44 AM, July 20, 2015

Blogger Chris said...

Isn't there a more fundamental difference, in terms of "complexity class" or "hardness", between problems (e.g. differential equations) having analytical solutions vs. those having none? For example, if the solution can be expressed as y = f(x) with f a polynomial function, we can evaluate it at any point with finite accuracy in polynomial time. Would this be true if we had to perform a numerical integration of the equation instead (requiring the same accuracy in the solution)?
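
For illustration, here is a rough sketch of that contrast on the harmonic oscillator (my own example, with a textbook fixed-step RK4 integrator standing in for "numerical integration"): the closed-form solution costs one evaluation at any t, while the integrator has to march through t/h steps to get there.

    import math

    def closed_form(t):
        return math.cos(t)  # solution of y'' = -y, y(0) = 1, y'(0) = 0

    def rk4(t_end, h=1e-3):
        """Integrate y'' = -y, rewritten as (y, v)' = (v, -y), with fixed-step RK4."""
        y, v = 1.0, 0.0
        for _ in range(round(t_end / h)):   # cost grows linearly in t_end / h
            k1y, k1v = v, -y
            k2y, k2v = v + 0.5 * h * k1v, -(y + 0.5 * h * k1y)
            k3y, k3v = v + 0.5 * h * k2v, -(y + 0.5 * h * k2y)
            k4y, k4v = v + h * k3v, -(y + h * k3y)
            y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
            v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        return y

    print(closed_form(10.0), rk4(10.0))  # same number, very different cost profiles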

The concept of algorithmic complexity does not seem to differentiate the two cases, since in both cases the unknown number is expressed as a very compact string of characters (the equation itself or its analytical solution). Is there any other relevant measure of complexity applicable here?

12:28 PM, July 20, 2015

Blogger nicolas poupart said...

Chris, you don't have to go very far in algebra: Matiyasevich's theorem (1970) negatively solves Hilbert's tenth problem.

1:35 PM, July 20, 2015

Blogger Arun said...

You need to be able to prove error bounds on your numerical computation, which is presumably well-understood in the case of standard functions.
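
A toy example of such a provable bound (my own, just to make the point concrete): the composite trapezoidal rule has error at most (b - a) * h^2 * max|f''| / 12, which a numerical result can be checked against directly.

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n uniform panels."""
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    a, b, n = 0.0, math.pi, 1000
    h = (b - a) / n
    error = abs(trapezoid(math.sin, a, b, n) - 2.0)  # exact integral of sin over [0, pi] is 2
    bound = (b - a) * h ** 2 / 12                    # max|sin''| = 1 on [0, pi]
    print(error <= bound)                            # True: the error is provably bounded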

I'm assuming that the existence proof (that the computation is meaningful) already exists.

Numerical computations also need to give insight into limiting cases, asymptotic forms, and how input parameters or boundary conditions change the output.

Also, how would you arrive at the notion of the renormalization group from purely numerical calculations?

2:35 PM, July 20, 2015

Blogger Arun said...

Facial recognition is something your eyes and brain do automagically. But trying to replicate that with a computer and camera requires both computation and analytic understanding, e.g., eigenfaces.
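
A minimal sketch of the eigenfaces idea, with random arrays standing in for a real face dataset:

    import numpy as np

    rng = np.random.default_rng(0)
    faces = rng.random((100, 64 * 64))   # 100 fake "images" of 64x64 pixels, flattened

    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # The leading right-singular vectors of the centered data are the
    # principal directions in pixel space -- the "eigenfaces".
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:10]

    # Each face is then summarized by ten projection coefficients, and
    # recognition compares these compact vectors instead of raw pixels.
    weights = centered @ eigenfaces.T
    print(weights.shape)  # (100, 10)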

2:37 PM, July 20, 2015

Blogger Igor Khavkine said...

A very interesting story about your grandmother. A few years ago I read the fascinating book When Computers Were Human by David Alan Grier. It came out of him looking into an aspect of his own grandmother's past: her casually mentioning studying mathematics as a college student. A highly recommended read!

2:48 PM, July 20, 2015

Blogger Robert L. Oldershaw said...

Slightly off-topic, but not entirely, the movie Ex Machina is highly recommended for artistic elegance and thought-provoking subject matter.

8:25 PM, July 20, 2015

Blogger David Schroeder said...

Quite an interesting story about your grandmother. She would have been about the same generation as my parents. I well remember our dad meticulously balancing his checkbook by hand, back in the 1950s. Even attending high school between '62 and '65 we did all calculations by hand. When my older brother bought a pocket calculator in the early 70s, for 200 dollars, it was an absolute marvel.

I just finished reading (online) the very nicely written article in Scientific American, "A Geometric Theory of Everything" by A. Garrett Lisi and James Owen Weatherall, that covers the evolution of the Standard Model to its present plateau, SU(3)×SU(2)×U(1). This is followed by an elucidation of how the E8 Lie group is utilized to embrace both the Standard Model and gravity in a single, unified geometric structure.

The article just skims the surface of how Lie groups apply to the physics of particles and fields. But the full mathematical complexity of this field of endeavor is quite mind-boggling, and I could well imagine the need for AI supercomputers to assist, or even completely take over, the model-building process.

5:52 AM, July 21, 2015

Blogger regretacles said...

"Correlation supersedes causation"
Someone needs to tell all those machine-learning researchers working on causal graphical models that they are wasting their time; or, perhaps, now that causation has become a going concern within wheelhouses proximate to Chris Anderson's own, it isn't such a useless thing to study after all. I'm not trying to put him down for it: deciding that subjects which don't put food on your own plate are uninteresting is the mark of a professional. A working scientist. A survivor.

We should take that into account when human creatures tell us what is and isn't interesting. They might just be saying: there's no low-hanging fruit on that tree for me to eat. That doesn't mean the view from the top of that tree isn't important.

The most obvious subject for Machine Learning to eat would be experiment.

1:32 PM, July 21, 2015

Blogger Michael Musson said...

I read a story years ago (unfortunately I cannot remember the title) that imagined a future where numerical solving programs grew in power to the point where they answered fundamental questions. However, just like Chris Anderson's quote above "... and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.", the numerical results based upon numerical results based upon... led to answers without any understanding or explanation for where it came from. And worse, there were further results appearing to be answers to questions that no one even knew to ask. It was an interesting cautionary tale about tools outpacing understanding and the interplay between answering a question and understanding a question and understanding an answer.

2:19 PM, July 21, 2015

Blogger Wes Hansen said...

Believe it or not I wrote this comment before I read Arun's comment, it's purely coincidental . . .

I thought this issue was settled long ago!?! Numerical methods have really been indispensable to science since the '70s, with the parallel advent of more efficient computation and dynamical chaos. How did Feigenbaum discover universality in phase transitions? And in my opinion it was Feigenbaum's discovery which put the renormalization group on a solid foundation, at least philosophically. And even when Lanford, a pure mathematician, developed a proof, which the community deemed rigorous, the proof depended to a large degree on numerical methods.
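
To give a taste of it, here is my own toy version (not Feigenbaum's actual derivation): his constant can be estimated by bisecting for the "superstable" parameters of the logistic map, where the orbit of x = 0.5 closes after 2^n steps, and taking ratios of successive gaps.

    def g(r, n):
        """f_r^(2^n)(0.5) - 0.5 for the logistic map f_r(x) = r*x*(1-x);
        its sign flips across the superstable parameter R_n."""
        x = 0.5
        for _ in range(2 ** n):
            x = r * x * (1.0 - x)
        return x - 0.5

    def superstable(n, lo, hi, iters=60):
        """Bisect for the root of g(., n) in a bracket known to contain it."""
        flo = g(lo, n)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if g(mid, n) * flo > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Brackets chosen so each contains exactly one superstable parameter.
    R = [superstable(0, 1.9, 2.1),      # period 1: R_0 = 2
         superstable(1, 3.0, 3.4),      # period 2: R_1 = 1 + sqrt(5) ~ 3.2361
         superstable(2, 3.4, 3.54),     # period 4: R_2 ~ 3.4986
         superstable(3, 3.54, 3.56)]    # period 8: R_3 ~ 3.5546

    for k in range(1, 3):
        print((R[k] - R[k - 1]) / (R[k + 1] - R[k]))  # ~4.71, ~4.68, heading to 4.669...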

I think people who express disdain for numerical methods are just inherently dishonest, with themselves and others. All knowledge is provisional in that it rests on a foundation of induction and numerical methods bring this to the forefront. And that, to me, is a good thing; it dispels dogma! As the early chaos pioneers liked to say, numerical methods develop intuition. Dogmatists quite often see causation where only correlation exists anyway!

This is really what made me wonder if perhaps James Gates hadn't discovered the reason why Universality appears with his adinkras. If you're not familiar with Gates, he works with SUSY and his adinkras are Feynman diagram analogs which represent oftentimes complicated systems of super-differential equations. To evolve the system you fold the adinkra but this folding process can be quite complex and if you're not careful you can lose SUSY. So what Gates did was assign each node in the adinkra a binary word and he discovered, quite by "accident," that the folding process which maintains SUSY conforms to one of Hamming's error-correction codes! So perhaps one sees Universality in phase transitions due to some error-correction process?

http://arxiv.org/pdf/hep-th/0408004v1.pdf

http://www.bottomlayer.com/PWJun10gates.pdf

The last link is to an article which appeared in Physics World in June 2010.

3:08 PM, July 21, 2015

Blogger Zephir said...

This is an evolution which is not difficult to predict (see for example https://www.reddit.com/r/Physics_AWT/comments/2htmk5/science_graduates_are_not_that_hot_at_maths_but). It's the increasing complexity of formal models and the decreasing cost of computer time which will force physicists to orient themselves toward numerical calculations and even simulations, even though some of them are still proud of their analytical skills for now.

8:00 PM, July 21, 2015

Blogger Wes Hansen said...

Here's another weird coincidence! I wrote this comment before reading Zephir's:

You know, it was this whole train of thought which led me to the idea of bisimulation on non-well-founded sets, which I have unsuccessfully tried to convey to you and, through you, to Renate Loll. So, I'll express it here and then drop it forever!

As so eloquently expressed in Smolin's Three Roads to Quantum Gravity, the high degree of fine-tuning we witness defies probabilities; this, to me, strongly suggests retro-causation or, in other words, a distinct final condition. So why couldn't you use the adinkras of James Gates to develop a bisimilar model? I don't see why you couldn't, because essentially adinkras are analogs to graphs and the folding process establishes relations between nodes:

http://www.cs.indiana.edu/cmcs/bisimulation.pdf

So, you develop one adinkra which evolves from a distinct initial condition "forward" in time and another, bisimilar adinkra which evolves from a distinct final condition "backward" in time. These adinkras are not symmetrical; rather, they simulate one another, hence, bisimulation. At forward time step t = a the adinkra folding "forward" in time may have gone through y folds while the adinkra folding "backward" in time may have gone through xy folds, but both processes result in the same system state at forward time step t = a. They simulate one another. Would this not put an interesting constraint on the initial and final conditions? And what if what we think of as initial and final conditions are in actuality phase transitions? Could such a model perchance be illuminating?

Some scientists say non-well-founded sets are incompatible with quantum theory but Ben Goertzel, an expert on non-well-founded sets, dispenses with that myth in chapter seven of his book Chaotic Logic.

In case you missed the subtleties in my last comment, I'm not suggesting that the error-correction takes place in the world we perceive, what you call Minkowski space and Will Tiller calls D-space, rather, the error-correction occurs in Will Tiller's R-space. It occurs in the "electron-clock" described by the Zitter Model of David Hestenes and it's super-luminal. This is why the world we perceive appears coherent and consistent.

2:31 PM, July 22, 2015

Blogger Kaleberg said...

We've been well past the era of closed-form solutions for some time. I remember some popular physics book, I think by Steven Weinberg, saying that we can tell the sophistication of a gravitational theory by the N at which the N-body problem has no closed-form solution. Newton's fails at N=3, Einstein's at N=2, and many popular quantum gravity theories fail at N=1 or even N=0.

I think a lot of the discomfort is like the Pythagorean discomfort with an irrational square root of two, or the discomfort in JimV's discussion with the awkwardness of sine and cosine not having simple finite means of evaluation. (BTW, computers use Chebyshev series, which start with the Taylor expansion and optimize the coefficients for evaluation across a small region, e.g. from 0 to pi/4.) We've grown comfortable with things like sine and cosine and gamma and Bessel functions. We haven't grown comfortable with all the functions that we construct when we solve numerical problems. The problem is cultural. I remember a Czech scientist arguing with me that Monte Carlo integration wasn't real integration, it was American integration. Perhaps he was right.
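
A minimal sketch of the idea, using a least-squares Chebyshev fit in NumPy rather than a production libm routine:

    import numpy as np

    # Fit sine on [0, pi/4] with a degree-7 Chebyshev series and compare
    # against numpy's own sine on a dense grid.
    x = np.linspace(0.0, np.pi / 4, 500)
    coef = np.polynomial.chebyshev.chebfit(x, np.sin(x), deg=7)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coef) - np.sin(x)))
    print(err)  # around 1e-10, far better than the degree-7 Taylor polynomial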

Back in the 1980s, the computer scientist Gerald Sussman proposed, as part of his dynamicist's workbench project, that in the future physics would be done using high-powered computers. He and Jack Wisdom used a prototype machine to demonstrate that the orbit of Pluto was chaotic. There was some discomfort with their results, but it got swept under the rug of chaos theory.

I remember when the Four Color Theorem was proved in the 1970s. A lot of people were uncomfortable with the fact that a computer had to validate the hundreds of graph configurations. Since the 1980s, mathematicians have gotten more and more comfortable using computers for mathematical exploration. It took a month of computing time on a high-powered mainframe to prove the Four Color Theorem in the 70s, but now you can run the solution on your desktop in a minute or less.

There is nothing like a computer for banging numbers together at high energies. Insights come from strange observations. For example, some mathematician noticed that an algorithm for calculating the digits of pi seemed to converge more rapidly every few hundred digits rather than more evenly. This led to a formula for the N-th digit of pi, in base 16, but it was a formula that no one had suspected even existed.
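
That is presumably the Bailey-Borwein-Plouffe formula, and the digit-extraction trick is short enough to sketch (a plain Python rendition, reliable only for modest digit positions because of double-precision limits):

    def pi_hex_digit(n):
        """The n-th hex digit of pi after the point, via BBP digit extraction."""
        def series(j):
            # fractional part of the sum over k of 16^(n-1-k) / (8k + j)
            s = 0.0
            for k in range(n):                 # head terms: integer powers, done mod (8k+j)
                s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
            for k in range(n, n + 10):         # tail terms: already tiny fractions
                s += 16.0 ** (n - 1 - k) / (8 * k + j)
            return s
        frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
        return "0123456789abcdef"[int(16 * frac)]

    print("".join(pi_hex_digit(i) for i in range(1, 9)))  # 243f6a88 (pi = 3.243f6a88... in hex)

The striking part is that no earlier digits are needed along the way, which is exactly why nobody suspected such a formula existed.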

I think that in another generation or two physicists are going to think no more of numerical solutions as opposed to algebraic solutions than we do of sines and cosines or negative numbers, but I don't think humans are going to be taken out of the loop. Computers are called thinking machines not because they can think for themselves, but because they let us think better, faster and farther.

12:20 AM, August 02, 2015
