
Post a Comment On: Backreaction

"Cosmic rays hint at new physics just beyond the reach of the LHC"

23 Comments

Blogger Phillip Helbig said...

As Zeldovich pointed out, the universe is the poor man's particle accelerator. :-)

9:09 AM, December 16, 2016

Blogger Sabine Hossenfelder said...

Ah, it just means we're all rich in ways we don't appreciate.

10:27 AM, December 16, 2016

Blogger Uncle Al said...

"misunderstand chiral symmetry breaking" Postulate exact vacuum mirror symmetry toward fermion quarks then hadrons. Pierre Auger Observatory, baryogenesis (Sakharov parity violation), dark matter (Milgrom acceleration as Noetherian angular momentum leakage re vacuum chiral anisotropy), Chern-Simons repair of Einstein-Hilbert action.

Opposite shoes non-identically embed within trace left-footed vacuum. They vacuum free fall along non-identical minimum action trajectories. Vacuum is not exactly mirror-symmetric toward quarks then hadrons.

http://thewinnower.s3.amazonaws.com/papers/95/v1/sources/image004.png
Eötvös experiments: test space-time geometry with maximally divergent chiral mass geometries.

Single crystal opposite shoes are visually and chemically identical test masses in enantiomorphic space groups, doi:10.1107/S0108767303004161, Section 3ff.
Calculate mass distribution chiral divergence, CHI = 0 → 1, doi:10.1063/1.1484559
http://www.mazepath.com/uncleal/qzdense.png
http://www.mazepath.com/uncleal/glydense.png

10:51 AM, December 16, 2016

Blogger Alex Lumaghi said...

How does the Look Elsewhere Effect apply to this type of data as opposed to something like the LHC where you have more places to look? In my layperson's understanding, if you run 100 experiments, then you should not be surprised by a 2 sigma result in one or two of them. If you run one experiment and get a 2 sigma result, it may suggest you are on to something. Does that kind of reasoning come into play with this experiment?

12:16 PM, December 16, 2016

Blogger Sabine Hossenfelder said...

Hi Alex,

The sigmas have nothing to do with the number of experiments directly, but with the amount of data. Since more experiments usually means more data, these are indirectly related though.

The level of significance is a test against the expected random fluctuations that may look just like a signal. If you have less data, fluctuations stand out more strikingly. (There's a name for this which has escaped me.)

Yeah, sure, that kind of reasoning comes into play, which is why they calculate the sigmas. Otherwise I don't understand your question. As far as I can tell, their significance is global, hence no look-elsewhere effect. Best,

B.

12:41 PM, December 16, 2016

Blogger Sabine Hossenfelder said...

Ah, now I recall. I think it's called the "law of small numbers." In a nutshell, it's why Iceland seems to stand out in so many statistics. (Few people, large fluctuations.)
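That point can be made concrete with a few lines of code. The following is a minimal simulation (not from the comment; the population sizes and the rate are purely illustrative): two populations with exactly the same underlying per-capita rate, where the smaller one nevertheless shows much larger fluctuations in its observed rate.

import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.01                        # identical underlying per-person rate
small, large = 300_000, 80_000_000      # "Iceland-sized" vs. large-country population (illustrative)

# Observed per-capita rates over many simulated years
small_rates = rng.binomial(small, true_rate, size=10_000) / small
large_rates = rng.binomial(large, true_rate, size=10_000) / large

print(f"small population: std of observed rate = {small_rates.std():.2e}")
print(f"large population: std of observed rate = {large_rates.std():.2e}")

The small population's observed rate scatters far more widely, so it tops (or bottoms) cross-country rankings much more often even though nothing about it is actually unusual.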

12:43 PM, December 16, 2016

Blogger Alex Lumaghi said...

Yes, I was pretty much engaged in exactly the fallacy you describe, so this does clarify it for me. Thank you for the response.

3:52 PM, December 16, 2016

Blogger Unknown said...

"the charged pions create muons which make it into the ground-based detectors." The muons only make it to the ground because of special relativity time dilation. Could the discrepancy be explained by a failure of special relativity?

6:22 PM, December 16, 2016

Blogger andrew said...

"The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). This is approximately a one-in-100 probability to be due to random fluctuations."

From the phrasing it isn't as clear as it might be that 2.1 sigma is about 4%; it is 2.9 sigma that is about 1%.
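For readers who want to check those numbers, converting a significance in sigma to a Gaussian tail probability is a one-liner (a sketch using scipy; whether the quoted significances are meant one-sided or two-sided isn't stated here, so both are printed):

from scipy.stats import norm

for sigma in (2.1, 2.9):
    one_sided = norm.sf(sigma)       # P(Z > sigma)
    two_sided = 2 * norm.sf(sigma)   # P(|Z| > sigma)
    print(f"{sigma} sigma: one-sided p = {one_sided:.4f}, two-sided p = {two_sided:.4f}")

This prints roughly 0.018 / 0.036 for 2.1 sigma and 0.0019 / 0.0037 for 2.9 sigma.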

But it also bears noting that it isn't clear how the look-elsewhere effect works in this context, and that while those percentages are technically the case mathematically, in practice 2 sigma anomalies are generally considered consistent with the underlying theory (given the immense corroboration of the underlying Standard Model and GR). The rule of thumb is that three sigma anomalies in real life only end up amounting to anything about half the time.

The discrepancy between the math and the reality is partly a product of subtle look-elsewhere effects that are hard to define properly, since the definition of what constitutes a trial can be muddy, and partly a product of overoptimistic estimates of how low the systematic errors are (in part because the unknown unknowns that contribute to systematic error aren't accounted for, despite everyone's best efforts to identify them).

In a concept similar to, although not quite the same as, the look-elsewhere effect, the raw statistical anomaly percentage doesn't take into account the fact that there are thousands of confirmations of what would naively seem to be the same effects. When you have 1000 positive confirmations (some at 5 sigma plus) and 1 anomaly among roughly similar tests of the same law of physics, the likelihood that the result is a genuine anomaly rather than a fluke looks a lot different.

Bottom line: a "one in a hundred" chance that it is nothing vastly understates, in practice, the true likelihood that a 2.9 sigma result is a statistical fluke. A 2.9 sigma anomaly is interesting and might end up being something real, but a "one in a hundred" figure, while literally describing the math, conveys the misleading impression that the effect is almost surely real, when in fact 2.9 sigma results inconsistent with the SM and GR turn out to be nothing all the time.

6:44 PM, December 16, 2016

Blogger TheBigHenry said...

Sabine,

It sounds like the maximum energy at which the LHC operates needs to be scaled up by about an order of magnitude, which presumably implies a similar scaling of cost for the construction upgrade. Do you think there is much hope of getting enough money to enable such a massive venture?

8:02 PM, December 16, 2016

Blogger Beijixiong said...

There is still a "look elsewhere" effect. If the Auger group compared 100 different measurements to expectations, finding a "1 in 100 probability" discrepancy would be expected.

8:55 PM, December 16, 2016

Blogger Arun said...

It is a rather strange result, in my opinion. Why would a discrepancy in the ratio of π0 to π+/π− show up not at the LHC but at ten times the energy? Surely most of the interactions leading to the pion showers are QCD, and why would one expect QCD to go haywire at beyond-LHC energies?

11:05 PM, December 16, 2016

Blogger Sabine Hossenfelder said...

Unknown,

No. The same time dilation also applies to all other particles.

3:00 AM, December 17, 2016

Blogger Sabine Hossenfelder said...

Arun,

That's a rather complicated question that I'm afraid I can't fully answer. See, I often hear people say that the standard model explains all the LHC data, but then of course the LHC collides hadrons and most of the particles being measured are also hadrons, and that's all strongly coupled QCD which nobody can calculate from first principles.

The way this works is that on top of the standard model you use two functions that connect the composites to the quark/gluon content, which are a) the parton distribution functions and b) the fragmentation functions (the former for the input, the latter for the output). These aren't computed analytically; they're parameterizations combined with a scaling analysis, and the parameters are extracted from already existing data. You literally download them as tables (ask Google).
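To illustrate what "download them as tables" means in practice, here is a minimal sketch of reading a parton distribution function from one of those grids. It assumes the LHAPDF 6 Python bindings and the CT14nlo set are installed; the set name and the kinematic point are illustrative choices, not anything specific to this paper:

import lhapdf

pdf = lhapdf.mkPDF("CT14nlo", 0)   # central member of the CT14 NLO set
x, Q = 1e-3, 100.0                 # momentum fraction and scale in GeV
for pid, name in [(21, "gluon"), (2, "up"), (1, "down")]:
    # xfxQ returns x*f(x, Q) for the given parton ID (PDG convention)
    print(f"{name}: x*f(x,Q) = {pdf.xfxQ(pid, x, Q):.4f}")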

Having said that, the full energy of the collision is eventually redistributed to much lower energies. In the cosmic ray shower, there's more total energy in the collision, it's a different collision (on a nucleus), and you have to trace it for a longer period. It's very possible that this tests different regimes of the parameterization than does the LHC.

What they do in this paper (if I understood correctly) is basically to take the standard codes from the LHC and apply them to cosmic ray showers, and somewhere along the line something goes wrong. Putting the blame on strongly coupled QCD, hence on things that nobody can calculate with pen and paper, is kind of the first thing that comes to mind. Best,

B.

3:17 AM, December 17, 2016

Blogger Uncle Al said...

Samuel Ting's AMS-02 experiment infers dark matter mass, only lacking more data for more sigmas and contingent ground detection (XENON, LUX, CoGeNT; CUORE and Super-K indirectly).

Ephraim Fischbach’s Fifth Force data was spurious. Pioneer anomaly, exquisite measurement, multiple theories; unequal surface temperatures caused it. OPERA experiment, superluminal muon neutrinos to high sigma, multiple theories; a loose fiberoptic clock connection caused it. US War on Cancer, 50 years ever victorious; now requiring a Moon Shot for victory.

Sometimes a cigar is only a banana. Look elsewhere, too.

10:20 AM, December 17, 2016

Blogger Bill said...

"... just beyond the reach of the LHC."

Richtig? From 14 to 100+ TeV is a bit more than moving the goalposts, I would think.

6:14 PM, December 17, 2016

Blogger andrew said...

While the special-relativistic new physics suggested by "Unknown" seems unlikely, the notion that there is a problem with the amount of special-relativistic time dilation assumed in the model isn't a bad one.

One way to get more muons than expected is for the muons to last longer, which mostly goes to whether the model is accurately measuring the number of hadrons above the 100 GeV energy threshold necessary to prevent the muons from decaying. At the relativistic energies involved here, a difference in kinetic energy that greatly slows down the decay of the muons results in almost no discernible difference in muon speed for an observer at the observatory.
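A quick back-of-the-envelope check of the time-dilation side of this (not from the comment; the ~15 km production altitude is an assumed, typical number) shows why only sufficiently energetic muons survive to the ground:

c = 2.998e8      # speed of light, m/s
tau = 2.197e-6   # muon proper lifetime, s
m_mu = 0.1057    # muon mass, GeV

for E in (1.0, 10.0, 100.0):              # muon energy in GeV
    gamma = E / m_mu
    decay_length_km = gamma * c * tau / 1e3
    print(f"E = {E:6.1f} GeV: gamma = {gamma:7.1f}, mean decay length = {decay_length_km:7.1f} km")

Without time dilation the mean decay length would be c*tau, about 0.66 km, so essentially no muons produced ~15 km up would reach a ground detector; with it, a 10 GeV muon already has a mean decay length of roughly 60 km.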

After all, as you note in your answer to Arun, a systematic error in estimating the number of hadrons above the 100 GeV threshold in a never-before-tested alternative to measuring the total energy of the event (using a model trained on LHC events in which the total energy of the event was accurately measured) is exactly the kind of error one would expect to see with the new methodology for estimating energy scales. For example, maybe the new methodology dismissed a contribution as minor because it doesn't contribute a huge percentage to the total energy, but that ignored contribution is concentrated primarily in events near the 100 GeV threshold while having little impact on high-energy events.

And it is hard to treat the actual discrepancy as more than 30% at 2.1 sigma in any case. There are two LHC-trained models, and the fact that they provide very different predictions makes it certain that one is more incorrect than the other, and this test shows which is the more accurate (and keep in mind we have only about 600-ish events here in total, analyzed in multiple bins that are each smaller than that).

So, the case for new physics isn't huge.

Also, even the paper referenced for "New Physics" is really not so much arguing that Standard Model QCD is wrong as arguing that we've overlooked a subtle high-energy implication of Standard Model QCD as applied in operational models of it, by proposing a new high-energy chiral symmetry restoration phase analogous to other phase transitions in QCD, like the quark-gluon plasma and Bose-Einstein condensate phase transitions. This chiral symmetry restoration hypothesis still seems like a stretch, but even if it were right, it wouldn't really be "new physics" in the way we commonly use the term, as something that actually changes the underlying equations of the Standard Model rather than merely the way we do calculations with them as a practical matter, which, as you note, in QCD is far removed from a first-principles calculation and instead uses huge data tables to get things like PDFs.

7:57 PM, December 17, 2016

Blogger akidbelle said...

Andrew, Sabine,

if I understand correctly, the tables should be the same at 10 and 100 TeV (or maybe extrapolated). But nobody can compute them from first principles (the models are "trained"), so, formally speaking, we do not even know whether the tables agree with QCD. Right?

J.

6:09 AM, December 18, 2016

Blogger Sabine Hossenfelder said...

akidbelle,

Well, yes, the tables are extrapolated from 10 to 100 TeV, but that isn't what Arun alluded to, or at least I don't think so. QCD is difficult at low energies, not at high energies. If the extrapolation really does break down, that would be much more dramatic. Best,

B.

6:49 AM, December 18, 2016

Blogger Jeff said...

"It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons..." - while I know it's a convenient shorthand, it makes me sad when we speak as though theory causes physical phenomena. The masses of nucleon are whatever they are, while "chiral symmetry" is a construct of the human mind intended to describe our observations. Our mental constructs don't determine the behavior of nucleons.

12:08 AM, December 19, 2016

Blogger Sabine Hossenfelder said...

Here is another recent paper with a proposed explanation:

Strange fireball as an explanation of the muon excess in Auger data
Luis A. Anchordoqui, Haim Goldberg, Thomas J. Weiler
arXiv:1612.07328

9:24 AM, December 23, 2016

Blogger Federico vdP said...

Hi,
in the sentence
"Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons."
you certainly meant photons instead of muons.
Best,
Federico

4:23 PM, December 28, 2016

Blogger Sabine Hossenfelder said...

Federico, I meant muons.

1:25 AM, December 29, 2016
