Looking inside the standard model: A tourist guide to the standard model of particle physics, by Axel Maas<br /><br />Playing indirectly (2015-07-10)<br /><br />One of my research areas is <a href="http://axelmaas.blogspot.de/2013/03/un-dead-stars-and-particles.html">neutron stars</a>. To understand them requires understanding how the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong interactions</a> behave when matter is enormously densely packed. A new PhD student of mine has now started to work on this topic, and I would like to describe a little bit what we will be looking at.<br /><br />I have already written in the past that this type of situation is very hard to deal with, because we cannot just do <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">simulations</a>. This is unfortunate, since simulations have been very successful in uncovering what happened in the early universe. In that case, the system is hot rather than dense. Though the reason for the problem is 'just' technical, the chances of resolving it in the near future are not too bright.<br /><br />Hence, I decided quite some time ago that one possibility is to <a href="http://axelmaas.blogspot.de/2013/01/taking-detour-helps.html">play indirectly</a>. The basic idea is that there are <a href="http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html">other methods</a> which would work. The price we have to pay is that we need to make approximations in these methods. We would like to check these approximations, ideally against simulations. But we cannot, because there are none. So how to break this circle?<br /><br />To escape this problem, we can again use a detour. 
We did this once before, hoping to learn more about the qualitative features and to get some insight into this type of physics. Now, we take a much more quantitative approach. We use theories which are very similar to the strong interactions (also called <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">QCD</a>), but are not QCD, and which can be simulated. These we will use to break the circle.<br /><br />Why is it possible to perform simulations for this type of theory? The main reason is the difference between particles and <a href="http://axelmaas.blogspot.de/2011/10/always-opposite-anti-matter.html">anti-particles</a>. In QCD, a quark and an anti-quark are fundamentally very different objects. Hence, a large density can mean two things. It could mean having many more quarks than anti-quarks, but still plenty of both. Or it could mean just having many of one type. For a neutron star both situations are relevant. Thus, there may actually be many more particles present than we would think, just with many of them anti-particles. This is at the heart of the problem: there is much more going on than the superficial number of particles suggests.<br /><br />This problem is evaded by instead using a theory in which there are no anti-quarks. To be more precise, a theory in which anti-quarks are the same as quarks. A number of such theories exist. However, such a change is very drastic. It may happen that the modified theory is so radically different from QCD that any comparison becomes meaningless. Thus, it is necessary to ensure that the theory remains close enough to the original.<br /><br />Two candidates for such theories have been identified so far. One is the so-called G2QCD, which I talked about <a href="http://axelmaas.blogspot.de/2013/01/taking-detour-helps.html">previously</a>. 
Another one is very close to QCD, but instead of three <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">color charges</a> it has just two different ones. Both cases have their own merits. The first is closer to QCD; in this theory there are <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">protons and neutrons</a>. The latter does not have these, but it is very cheap to simulate. The two theories are hence quite different, but both also share many other traits with QCD.<br /><br />It therefore stands to reason that whatever approximation describes both well will also work for QCD. Thus, we will now use simulations of both theories to test the approximations made in the other methods. In particular, we will look at the properties of the <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">quarks and gluons</a>. We will then use the insights gained to improve the approximations, until we describe both theories well enough. Then we will translate the approximations back to QCD. And if everything works out, we will then have an acceptable description of a piece of neutron star matter.<br /><br />The nature of particles (2015-06-03)<br /><br />I have <a href="http://axelmaas.blogspot.de/2013/02/almost-nothing-is-forever-decays.html">written some time ago</a> that most of the particles we know decay, i.e. after some time they fall apart into other particles. Probably that is not too surprising. After all, essentially everything we know tends to fall apart after a while. Hence, we can think of these particles as being made out of the particles into which they decay. Such particles made up out of other particles are called bound states or composite particles. 
The particles into which a composite particle decays are called decay products, but here I will just call them particles. Actually, even the particles into which the composite particle decays may in turn decay further. But for the things I want to write about in this entry, this will not matter. Thus, I will just talk about a composite particle and the particles it decays into. <br /><br />But there is an important difference between usual things falling apart and particles falling apart.<br /><br />Think about a tower made from wood logs, like a child's toy. You build it from the logs, and after some time it will break down again into the logs. Especially when a child is around to kick it. But the logs themselves remain intact. So far, this is the same with composite particles. You start with a composite particle, and it then decays into other particles. You can rebuild your tower from the logs. This is also possible with the particles: the decay products can be fused back into the original composite particle.<br /><br />But now there is a difference. When you build the tower, the logs keep their identity. If you look close enough at the tower, you can still see the individual logs you used to build it. That is not so simple with particles. This is best seen with a specific example. Start with two particles, and fuse them into a new composite particle. So far, nothing new. But then it may happen that this composite particle decays into entirely different particles than the original ones, or it may decay into the original ones. The expression we use is that the composite particle has different decay channels. It is not that all the possible particles are stored in the original particle; it really changes its identity. It would be as if the wood logs turned into plastic ones while sitting in the tower.<br /><br />Describing such a spontaneous change is not simple. 
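The bookkeeping of decay channels can be sketched in a small toy simulation. This is purely illustrative: the channel names echo the pion example below, but the branching probabilities are hypothetical round numbers of my own choosing, not measured values, and the hard physics question in the text - what governs the change of identity dynamically - is exactly what such a sketch does not address.

```python
import random

# Purely illustrative branching probabilities for a pion-like
# composite particle (hypothetical round numbers, not measured values).
CHANNELS = [
    ("muon + neutrino", 0.999),
    ("electron + neutrino", 0.001),
]

def pick_channel(rng):
    """Select a decay channel according to its branching probability."""
    r = rng.random()
    acc = 0.0
    for products, prob in CHANNELS:
        acc += prob
        if r < acc:
            return products
    return CHANNELS[-1][0]  # guard against floating-point rounding

rng = random.Random(1)
counts = {name: 0 for name, _ in CHANNELS}
for _ in range(100_000):
    counts[pick_channel(rng)] += 1
print(counts)  # the dominant channel occurs far more often, but both appear
```

The point of the toy is only this: the very same initial state ends up in entirely different final states, chosen probabilistically, which is what having several decay channels means operationally.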
We have become quite expert at modeling the starting composite particle and then performing, at some point, an explicit change into the different particles. But that is a little bit like taking the tower and, very niftily, exchanging each log from wood to plastic while it is inside the tower. What we would like is to have this as a dynamical process: without our interference, the structure of the composite particle changes, and it thus decays differently than it was formed.<br /><br />We actually know <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">how to simulate</a> this. But there we can only observe that it happens. We would also like to know how this proceeds inside the structure of the composite particle itself, and what governs this process in detail.<br /><br />Learning this is another project which I now supervise as a PhD project. We will use the so-called <a href="http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html">equations of motion</a> to dissect the process. For this, we will be looking at a very simple particle, the so-called (charged) pion. It is composed of two <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">quarks</a>, but can also decay into <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">an electron and a neutrino</a>. Choosing this particular composite particle has a number of reasons. One is that it is very well studied, both experimentally and theoretically. We can therefore concentrate on the new aspects: the change of identity of the constituents. The decay is also rather slow, and therefore technically easier to control. And finally, quarks, electrons and neutrinos are very different particles. As theoreticians, we can exploit this fact by modifying their properties, and thereby switch various features of the process on and off. 
And finally, though the pion is (sometimes) made up of quarks, it cannot actually decay into them, due to <a href="http://axelmaas.blogspot.de/2012/04/why-colors-cannot-be-seen.html">confinement</a>. Therefore, we only need to consider the change inside the pion, not outside. This also reduces the technical challenges.<br /><br />Once this question is solved, we will continue on to more interesting composite particles, like <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">bound states of the Higgs</a>. But this project is an enormously important first step on this road.<br /><br />A model for a model (2015-05-08)<br /><br />One of the more disturbing facts of modern theoretical particle physics is complexity. We can formulate the standard model on, more or less, two pages of paper. But calculating most interesting quantities is so seriously challenging that even an approximate result takes many person-years, and often much, much more.<br /><br />Fortunately, the standard model is in one respect kind: for many interesting questions only a small part of it is relevant. This does not mean that the parts are really independent. But the influence of the other parts on the subject in question is so minor that the consequences are often much smaller than any reasonable theoretical or experimental accuracy could resolve. For example, many features of the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong interactions</a> can be determined without ever considering the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> explicitly. 
In fact, it is even possible to learn much about the strong interactions just from looking at the <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">gluons</a> alone, neglecting the <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">quarks</a>. This reduced theory is called Yang-Mills theory. It is a very reduced model for the core features of the strong interactions.<br /><br />Unfortunately, even this theory, which contains only one of the particles of the standard model, is very complex. One of our <a href="http://axelmaas.blogspot.de/2012/04/groundwork.html">lines of research</a> deals with the resulting problems. One of these problems has to do with the properties of the local symmetry of this model, the so-called <a href="http://axelmaas.blogspot.de/2010/10/electromagnetism-photons-and-symmetry.html">gauge symmetry</a>. This feature leads to certain, technically necessary, <a href="http://axelmaas.blogspot.de/2013/09/blessing-and-bane-redundancy.html">redundancies</a>. But when doing calculations, we need to make approximations. These may mess up the classification of what is redundant and what is not. Getting this straight is important, and this is the research topic I write about today.<br /><br />And it is here where the title comes into play. Even if the theory of only gluons is much simpler than the original theory, it is still so complicated that the redundancies get pretty messed up. Therefore, we have by now decided that it would be better to first understand a different case. A case in which the same redundancies appear, but everything else is simpler. A (simpler) model for a (more complicated) model.<br /><br />This strategy creates the bridge to my previous entry on <a href="http://axelmaas.blogspot.de/2015/04/a-partner-for-every-particle.html">supersymmetry</a>.<br /><br />Theories which have this supersymmetry are, in almost all cases, much simpler than theories without. 
As I wrote, there are different levels of supersymmetry. In its simplest form, supersymmetry relates the kinds of possible particles and constrains a few interactions. In the maximal version, essentially the whole structure of the theory, and almost all details, are constrained. These constraints are so rigid and powerful that we can solve the theory almost exactly. Nonetheless, this theory has the same kind of redundancies as Yang-Mills theory, and even the full standard model. Thus, we can study what approximations do to these redundancies. Especially, using the exact knowledge, we can reverse-engineer essentially everything we want.<br /><br />In fact, we perform a kind of theoretical experiment: We take the theory. We treat it with a method - in our case we use the so-called <a href="http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html">equations of motion</a>. We know the results. Now, we perform the same type of approximations we make in the more complicated models, or even the full theory. We see how this modifies the results. Well, actually we will see, since we are still working on this bit. From the change of the results, we will learn many things. One is which kinds of approximations make a qualitative change. Since any qualitative difference compared to the exact result will be a wrong result, we should not make such approximations - not in this theory, and especially not in more complicated theories. Small quantitative changes are probably fine, though there is no guarantee. And we can explicitly see whether the approximations start to mix redundant parts such that they are treated wrongly. 
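The logic of such a theoretical experiment can be illustrated with a deliberately simple toy model. This is my own hypothetical example, not the actual supersymmetric equations of motion: take a self-consistent equation whose exact solution we can compute, apply a truncation of the kind one is forced to make in harder theories, and compare the two answers.

```python
import math

# Toy 'theoretical experiment' (illustrative only): solve the
# self-consistent equation  m = sqrt(m0^2 + g/(1+m))  exactly,
# then with a crude truncation  1+m -> 1,  and compare.

def solve_exact(m0, g, tol=1e-12):
    """Fixed-point iteration of the full self-consistent equation."""
    m = m0
    for _ in range(10_000):
        m_new = math.sqrt(m0**2 + g / (1.0 + m))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def solve_truncated(m0, g):
    """The same equation after the truncation 1 + m -> 1."""
    return math.sqrt(m0**2 + g)

m0, g = 1.0, 0.5
exact = solve_exact(m0, g)
approx = solve_truncated(m0, g)
# The truncation overshoots the exact answer; the relative deviation
# quantifies how much damage this particular approximation does.
print(f"exact={exact:.6f} truncated={approx:.6f} "
      f"rel. deviation={(approx - exact) / exact:.2%}")
```

Here the truncation only shifts the answer quantitatively; in the field-theoretical setting the worry is precisely the cases where such a truncation changes the answer qualitatively.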
From this we will (hopefully) learn more about how to correctly treat the redundancies in the more complicated models.<br /><br />A partner for every particle? (2015-04-13)<br /><br />A master student has started a thesis with me on a new topic, one I have not worked on before. Therefore, before going into details about the thesis' topic itself, I would like to introduce the basic physics underlying it.<br /><br />The topic is the rather famous concept of supersymmetry. What this means I will explain in a minute. Supersymmetry is related to two general topics we are working on. One is the quest for what comes <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">after the standard model</a>. It is in this respect that it has become famous. There are many quite excellent introductions to why it is relevant, and why it could be within the LHC's reach to discover it. I will not just point to any of these, but nonetheless write a new text on it here. Why? Because of the relation to the second research area involved in the master thesis, the <a href="http://axelmaas.blogspot.de/2012/04/groundwork.html">ground work about theory</a>. This gives our investigation a quite different perspective on the topic, and requires a different kind of introduction.<br /><br />So what is supersymmetry all about? I have written about the fact that there are two very different types of particles we know of: <a href="http://axelmaas.blogspot.de/2012/01/bosons.html">Bosons</a> and <a href="http://axelmaas.blogspot.de/2012/01/fermions.html">fermions</a>. Both types have very distinct features. Any particle we know belongs to either of these two types. E.g. 
the famous <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> is a boson, while the <a href="http://axelmaas.blogspot.de/2009/12/forces-of-nature-ii-electromagnetism.html">electron</a> is a fermion.<br /><br />One question to pose is whether these two categories are really distinct, or whether they are just two sides of a single coin. Supersymmetry is what you get if you try to realize the latter option. Supersymmetry - or SUSY for short - introduces a relation between bosons and fermions. A consequence of SUSY is that for every boson there is a fermion partner, and for every fermion there is a boson partner.<br /><br />A quick counting in the standard model shows that it cannot be supersymmetric. Moreover, SUSY also dictates that all other properties of a boson and its fermion partner must be the same. This includes the mass and the electric charge. Hence, if SUSY were real, there should be a boson which otherwise acts like an electron. Experiments tell us that this is not the case. So is SUSY doomed? Well, not necessarily. There is a weaker version of SUSY where it is only approximately true - a so-called <a href="http://axelmaas.blogspot.de/2010/05/symmetries.html">broken symmetry</a>. This allows the partners to have different masses, and then they can escape detection. For now.<br /><br />SUSY, even in its approximate form, has many neat features. It is therefore a possibility that many desire to be true. But only experiment (and nature) will eventually tell.<br /><br />But the reason why we are interested in SUSY is quite different.<br /><br />As you see, SUSY puts tight constraints on what kind of particles are in a theory. But it does even more. It also restricts the way these particles can interact. The constraints on the interactions are a little more flexible than those on the kinds of particles. You can realize different amounts of SUSY by relaxing or enforcing relations between the interactions. What does 'more or less' SUSY mean? 
The details are somewhat subtle, but a hand-waving statement is that more SUSY not only relates bosons and fermions, but in addition relates the partner particles of different particles to each other more and more. There is an ultimate limit to the amount of SUSY you can have, essentially when everything and everyone is related and every interaction is essentially of the same strength. That is what is called a maximally SUSY theory. Its fancy name, for technical reasons, is N=4 SUSY, should you come across it somewhere on the web.<br /><br />And it is this theory which is interesting to us. Having such very tight constraints enforces a very predetermined behavior. Many things are fixed. Thus, calculations are simpler. At the same time, many of the more <a href="http://axelmaas.blogspot.de/2012/04/groundwork.html">subtle questions we are working on</a> are nonetheless still there. Using the additional constraints, we hope to understand this stuff better. With these insights, we may have a better chance of understanding the same stuff in a less rigid theory, like the standard model.<br /><br />Can we tell when unification works? (2015-03-05)<br /><br />Some time ago, I wrote about the idea that the three forces of the standard model, the <a href="http://axelmaas.blogspot.de/2009/12/forces-of-nature-ii-electromagnetism.html">electromagnetic force</a>, the <a href="http://axelmaas.blogspot.de/2010/02/forces-of-nature-iv-weak-force.html">weak force</a>, and the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong force</a>, could all be just different parts of one <a href="http://axelmaas.blogspot.de/2014/05/why-does-chemistry-work.html">unified force</a>. 
In the <a href="http://axelmaas.blogspot.de/2014/06/building-group.html">group I am building</a> I now have a PhD student working on such a theory, using <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">simulations</a>.<br /><br />Together, we would like to answer a number of questions. The most important one is whether such a theory is consistent with what we see around us. That is necessary for such a theory to be relevant.<br /><br />Now, there is an almost infinite number of versions of such unified theories. We could never hope to check each and every one of them. We could pick one. But hoping it would be the right one is somewhat too optimistic. We therefore take a different approach. We aim for a general criterion, such that we can check many of the candidate theories at the same time.<br /><br />For this reason, we ignore for the moment that we would like to reproduce experiments. Rather, we ask ourselves what the common traits of these theories are. We have done that. What we are currently doing is constructing the simplest possible theory which has as many of these traits as possible. We have almost completed that. This reduced theory will indeed become very simple. Of known physics, it contains the weak force and the Higgs. As with every unified theory, it also contains a number of additional particles. But they are not dangerous, as long as they are too heavy to be visible to us. At least, as long as we do not have more powerful experiments. The last ingredient is the interactions between the different particles. That is what we are working on now. Having the simplest possible theory has another benefit as well - it demands computer resources small enough to be manageable.<br /><br />After fixing the theory, what do the questions look like? One of the traits of such theories is that there are many new particles. What is their fate? How is it arranged that we cannot see them? 
If we think of the theory as describing only rather small changes to the standard model, we can use <a href="http://axelmaas.blogspot.de/2012/01/perturbation-theory.html">perturbation theory</a>. With this, we would just follow pretty old footsteps, and the answer can essentially be guessed from the experience of other people. The answer will be that all the surplus stuff is indeed very, very heavy. In fact, so heavy that our experiments will not be able to see it in any foreseeable future, except through very indirect effects. We get out what we put in.<br /><br />But here comes the new stuff. As I have <a href="http://axelmaas.blogspot.de/2015/01/what-is-so-important-to-me-about-higgs.html">described earlier</a>, there are many subtleties when it comes to the Higgs of the standard model. But in the end, everything collapses to a rather simple picture. Almost a miracle. Almost, but not quite. One reason is the structure of the standard model, which is very special in the number and properties of its particles. The other is that the parameters, things like masses, just fit.<br /><br />The natural question is hence: Does the miracle repeat itself for this type of unified theory? Is the new stuff really heavy? Is the known stuff light enough? If the almost-miracle repeats itself, the answer is yes. Should it repeat itself? Well, we will test under which conditions it repeats itself, by playing around with the number of particles, their structures, and the parameters. We assume right now that we can get it to work, but that we can also break it. And we would like to understand very precisely when it breaks and why it breaks. And finally, the most obvious question: do we want it to repeat itself? Probably the most obvious question, arguably the hardest to answer. If it does not repeat itself, a whole class of ideas becomes more problematic. Ideas which are conceptually pretty attractive. So, in principle, we would like to see it repeat itself. 
But then, would it not be more interesting if we needed to start afresh? Probably also true. But in the end, our preferences should not play a role. After all, nature decides, and we are just the spectators, trying to figure out what goes on. Our preferences have nothing to do with it, and therefore we should keep them out of the game.<br /><br />Take your theory seriously (2015-02-10)<br /><br />I have published a <a href="http://arxiv.org/abs/1502.02421">new paper</a>. This paper has a somewhat simple message, even if it is technical. The message is just: take your theory seriously. This may seem obvious, but it is not necessarily so. The reason is that if a theory works spectacularly well even when you do not take it seriously, why should you care? And when I say spectacular, I mean an agreement with experiment where any deviations are smaller than any background noise we could not yet eliminate. The theory which works so well is, of course, the standard model.<br /><br />The paper is a follow-up to the <a href="http://arxiv.org/abs/1410.2740">proceeding</a> I <a href="http://axelmaas.blogspot.de/2014/10/challenging-subtleties.html">discussed last year</a>. The upshot of this proceeding is that we use <a href="http://axelmaas.blogspot.de/2012/01/perturbation-theory.html">perturbation theory</a> to describe the physics we measure, e.g., at the LHC at CERN. However, perturbation theory does not take the theory too seriously, and yet it works very well. The reason for this can be understood: it is an almost miraculous coincidence that taking the theory seriously gives almost the same results. We have checked this in <a href="http://arxiv.org/abs/1312.4873">very great detail</a>.<br /><br />But understanding what is going on in the standard model is one thing. 
One of the big aims in modern particle physics is to understand <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">what else there could be</a>.<br /><br />Now, comfortable with the success within the standard model, we have for a very long time assumed that not taking the candidate theories for new physics seriously should also work out. Of course, there have been <a href="http://axelmaas.blogspot.de/2012/05/technicolor.html">some exceptions</a>, where we knew it would not work. But by and large, the success with the standard model made us comfortable with using the same techniques.<br /><br />In the proceeding, I already raised some doubts whether this would be justified if there were a <a href="http://axelmaas.blogspot.de/2014/04/news-about-state-of-art.html">second Higgs</a>. Under certain conditions, this may not be correct. In the full paper I now extend these doubts also to <a href="http://axelmaas.blogspot.de/2014/05/why-does-chemistry-work.html">other theories</a>, and even conclude it may be necessary to rethink the cases where we thought we were careful.<br /><br />What is the reason behind this departure? Why should it not work? Well, I do not state that it will not work, just that it might not work. In the standard model, we were in the comfortable situation that experiments told us that it does work. Now, in the absence of experimental results, we are left to theory to tell us where to look. So we do not know whether not taking the theory seriously works out, and may be misguided if it does not.<br /><br />But why should it fail this time, when it works so well for the standard model? A legitimate question. Answering it requires understanding why it does work for the standard model. Looking at it in detail shows that two conditions are required. One is that the relative masses of the particles lie within a certain range. That is satisfied by the standard model. 
The second is that the relative number of particles is just right. Both conditions may or may not be met by the theories we have for new physics. In the paper, I give particular examples for several theories, and formulate requirements which have to be met for things to work out.<br /><br />So is the paper now killing off models? No. At least not yet. I only formulate conditions and requirements. Whether a particular theory meets them is a question for the theory. I give some examples where the situation is very much on the borderline, as a starting point for where to check. But what actually happens requires a calculation. A calculation in which we do take the theory very seriously. This will be very complicated, and in the end we may just figure out that it was unnecessary. But then we can be sure to be right, and the theory gives us the leeway not to take it too seriously. And being sure is a basic requirement when one wants to explain the unknown.<br /><br />What is so important (to me) about the Higgs? (2015-01-09)<br /><br />In a few weeks, I will give my inaugural lecture at the University of Graz. I have entitled it "The Higgs as a touchstone of theoretical particle physics". Of course, with this title I could easily give a lecture on many of the problems the Higgs presents us with which are <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">begging for new physics</a>. I could write about how we have no idea why the Higgs has a mass of the size it has. How we do not understand why it does what it does. And so many other things.<br /><br />But this is not what I want to write about. Nor is it what I want to talk about. It is rather something more mundane but also much more subtle. It is something for which I do not need any new physics to worry about. 
I can very well worry just about the known physics.<br /><br />It has something to do with <a href="http://axelmaas.blogspot.de/2013/09/blessing-and-bane-redundancy.html">redundancy</a>. In theoretical physics it is often very convenient - well, often indispensable - to add something redundant to our description. A redundancy is like writing 2+0 instead of 2. Both statements have the same meaning. But adding zero to two is redundant. In this example, the zero is just redundant, but not useful. In particle physics, what we add is both redundant and useful.<br /><br />It is this redundancy which helps us disentangle complicated problems. Of course, we are not allowed to change anything by introducing the redundancy. But as long as we respect this requirement, we are pretty free in what kind of redundancy we add. In the previous example, we could just write 2+0+0, and have another version of redundancy.<br /><br />OK, so what has this to do with the Higgs? Well, if we measure something, it is of course independent of these redundancies - after all, they are man-made. And nothing made by us should influence what we measure. But if we look at the ordinary version of how we describe the Higgs, then there is a slight mismatch. In our theoretical description of the Higgs, some remainder of the redundancy still lingers. It is as if 2+0.001 pops up. Nonetheless, our theoretical description of the Higgs is spot on the experimental results. But how can this be if there is still redundancy polluting our result? It is here where the Higgs becomes a touchstone of understanding theoretical particle physics: in explaining why this is not correct and correct at the same time.<br /><br />As always, the answer appears to be in the fine print. In the standard way we approach the Higgs, we do not treat it exactly. Well, to be honest, we could not do it exactly. We make some approximations. 
The consequence of these approximations is the appearance of the residual redundancy. Since our calculations are so spot on, these approximations appear to be good. However, after performing these approximations, we have no way short of experiment to confirm them. That is highly unsatisfactory. We must be able to do it in such a way that we can predict whether the approximations work. And understand why they work. It is in this sense that the Higgs is a touchstone. If we are not even able to answer these questions, how can we expect to solve the many outstanding questions?<br /><br />This question bugged people already 35 years ago, and some understanding of why it works was achieved. But not of when it works, at least not in the form of numbers. We have made some progress with this, <a href="http://arxiv.org/abs/1412.6440">especially recently</a>. The amazing result was that it appears to work only in a very limited range of Higgs masses - with the observed Higgs mass essentially right in the middle of the possible range. This is even more surprising as already <a href="http://arxiv.org/abs/1410.2740">slight modifications</a> of <a href="http://axelmaas.blogspot.de/2014/10/challenging-subtleties.html">how many Higgs particles there are</a> seem to change this. So, why is this so? Why is the Higgs mass just there? And what would happen otherwise? Understanding these questions will be very important for going beyond what is known. Without understanding, we may easily be fooled by our approximations, if we are not as lucky next time. This is the reason why I think the Higgs is a touchstone for theoretical particle physics.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-64311562221420790272014-12-05T09:15:00.000-08:002014-12-05T09:15:01.187-08:00Support, structure, and studentsThis time it will again be a behind-the-scenes entry.
The reason is that we have just gotten our <a href="http://physik.uni-graz.at/itp/doktoratskolleg/">graduate school</a> prolonged. This is a great success. 'We' are in this case the professors doing particle physics here at the University of Graz, five in total. With this, we are now able to support nine new PhD students, i.e. give them a job during the time they are doing their PhD work, and give them the opportunity to travel to conferences, or to invite people for them to talk to.<br /><br />You may wonder what I mean by 'giving a job'. PhD students in physics are not only students. They are beginning researchers. Each and every PhD thesis contributes to our knowledge, and opens up new frontiers. In the course of doing this, the PhD students are guided and supported by us, their supervisors. The goal is, of course, that at the end of their thesis they have matured into equal partners in research. A goal which is satisfyingly often achieved. And hence, they are not only studying but indeed contributing, and thus they also do a job, and should get paid for the work they are doing. Hence having PhD positions is not only nice - it is required simply out of fairness. And therefore this success means that we can now accompany nine more young people on their way to becoming researchers.<br /><br />But this is not everything a graduate school provides. A graduate school also provides the infrastructure to offer the students advanced lectures by world-leading experts. But here one has to walk a fine line. What we do not want is that they just soak up knowledge, and then reproduce it. This can never be how a PhD education should work. The aim of the PhD studies must always be that the students learn how to create, how to be creative, and how to think in directions nobody else did before. Especially not their supervisors.
Providing an overly formalized education would quell much or all of this.<br /><br />On the other hand, it cannot work without some formal education. While creativity is important, (particle) physics has become a vast field. As a consequence, almost every simple idea has already been found decades ago by someone else. Knowing what is known is therefore important to avoid repeating the same things (and often the same mistakes) others did. At the same time, knowledge of general principles and structures is important so that one's own ideas can be embedded into the big picture. And, in the course of this, checked for technical consistency. Without knowing the technical details, this would be hard to achieve. One could then easily lose oneself in pursuing a chain of technical points, leading one far astray. It is especially here that it shows that theoretical particle physics is nowadays an enormous collaborative and worldwide effort. None of the problems we are dealing with can be solved by one person alone. It requires the combined knowledge of many people to make progress.<br /><br />Knowing what other people did - and do - is therefore of paramount importance. Here, the graduate school also helps in another way. It provides the PhD students with the possibility to travel themselves, meet people, and go to conferences. We can also make it possible for them to stay abroad for up to half a year at a different institution to work with different people on a different project. They can thereby substantially broaden their horizons, and learn how to cooperate with different people.<br /><br />So, are there any downsides? Well, not for the students. Except that they may at times have to go to a lecture or talk which they would otherwise not attend. Most of the downsides hit us supervisors, because there is a lot of additional administrative work involved.
However, this is easily outweighed by the possibility to have more PhD students to work with, and, with their ambition, to achieve something new.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-71459198648697966392014-11-04T09:45:00.001-08:002014-11-04T09:46:20.350-08:00More on big blobs and little blobs<a href="http://axelmaas.blogspot.co.at/2014/09/big-blobs.html">Two months ago</a>, I introduced you to what I called big blobs. In the end, just a big heap of particles which act in unison. As announced there, I have meanwhile produced <a href="http://arxiv.org/abs/1410.7954">new results</a> on this topic. So, what did I find?<br /><br />In this investigation I tried to disentangle what relevance big blobs of <a href="http://axelmaas.blogspot.co.at/2010/01/lthe-forces-of-nature-iii-strong-force.html">gluons</a> have for a single gluon. To do this, I somehow had to get my hands on blobs. For that purpose, I performed <a href="http://axelmaas.blogspot.co.at/2012/02/simulating-universe.html">computer simulations</a> of the strong force. Without any further modifications, this would deliver a mixture of small and large blobs, many, many individual gluons, and everything rather unorganized. This would not help much.<br /><br />Fortunately, clever people have found a way to isolate the blobs from this mixture. This is a method which is nowadays called smearing or cooling. The names are not quite accurate. What is actually done is to remove anything which even remotely resembles single gluons at high energies. This is really hand-waving, and nobody should take it too literally. But it gives a good idea, and avoids a lot of technicalities. In the end, the important thing is that this gave me a lot of blobs.<br /><br />But the blobs alone were not what interested me. Also, many people have studied them in the last forty years or so. I wanted something different.
So I took the blobs, and then injected a single gluon into this heap of blobs. Then, I checked how the gluon behaved.<br /><br />The first result was that at short distances, much shorter than the size of the blobs or the distance between them, the gluon did not feel anything. It just behaved as if it were traveling through empty space. This was not yet too surprising. After all, the big blobs are separated, and as long as the gluon does not crash into one, how should it know about them?<br /><br />The second result was also not too surprising. If I let the gluon travel very far, it behaved essentially as if I had not filtered out everything but the blobs. It was just plain normal. This also makes sense. If the distances become much longer than the blobs' size and their separation, the gluon just gets an average picture. And this picture should be, and seems to be, not too different from the real thing.<br /><br />But then came something which surprised me at first. Though I later learned that somebody else had anticipated it long ago. If the gluon travels distances roughly of the size of the blobs, it behaves substantially differently than normal. This behavior is actually what one would expect in the first place for something as <a href="http://axelmaas.blogspot.co.at/2010/01/lthe-forces-of-nature-iii-strong-force.html">strongly interacting</a> as gluons are. That would be quite reassuring, as it is exactly this behavior which has been looked for in gluons for a long time. This would mean that all the stuff which I filtered out to get at the blobs would normally obscure it.<br /><br />Since this sounds too good to be true, it probably is. Hence, a necessary next step must be to check this result in some way. In the manuscript, I have developed some ideas, but none of them will be easy. They are thus part of future research. But it must be checked. After all, it's science, and one should always check and try to falsify the results.
And this one certainly deserves to be checked.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-70269920873637262452014-10-15T10:03:00.001-07:002014-10-15T10:03:14.244-07:00Challenging subtletiesI have just published a <a href="http://arxiv.org/abs/1410.2740">conference proceeding</a> in which I return to an idea of how the <a href="http://axelmaas.blogspot.co.at/2009/10/let-me-introduce-players-in-standard.html">standard model of particle physics</a> could be extended. It is an idea I have already <a href="http://axelmaas.blogspot.co.at/2014/04/news-about-state-of-art.html">briefly written about</a>: The idea is concerned with the question of what would happen if there were twice as many <a href="http://axelmaas.blogspot.co.at/2010/03/higgs-effect.html">Higgs particles</a> as there are in nature. The model describing this idea is therefore called the 2-Higgs(-doublet) model, or 2HDM for short. The word doublet in the official name is rather technical. It has something to do with how the second Higgs connects to the <a href="http://axelmaas.blogspot.co.at/2010/02/forces-of-nature-iv-weak-force.html">weak interaction</a>.<br /><br />As fascinating as the model itself may be, I do not want to write about its general properties. Given its popularity, you will find many things about it already on the web. No, here I want to write about what I want to learn about this theory in particular. And this is a peculiar subtlety. It connects to <a href="http://axelmaas.blogspot.co.at/2013/05/what-could-higgs-be-made-of.html">the research I am doing</a> on the situation with just the single Higgs.<br /><br />To understand what is going on, I have to dig deep into the theory stuff, but I will try to keep it not too technical.<br /><br />The basic question is: What can we observe, and what can we not? One of the things a theoretician learns early on is that it may be quite helpful to have some dummies.
This means adding something to a calculation just for the sake of making it simpler. Of course, one has to make very sure that this does not affect the result. But if done properly, this can be of great help. The technical term for this trick is an auxiliary quantity.<br /><br />Now, when we talk about the weak interactions, something amazing happens. If we assume that everything is indeed very weak, we can calculate results using so-called <a href="http://axelmaas.blogspot.de/2012/01/perturbation-theory.html">perturbation theory</a>. And then it appears as if the auxiliary quantities were real, and we could observe them. It is, and can only be, some kind of illusion. This is indeed true, something I have been working on for a long time, and others before me. It just turns out that the true quantities and the auxiliary quantities have the same properties, and therefore it does not matter which we take for our calculation. This is far from obvious, and pretty hard to explain without a lot of technical stuff. But since this is not the point I would like to make in this entry, let me skip these details.<br /><br />That this is the case is actually a consequence of a number of 'lucky' coincidences in the standard model. Some particles have just the right mass. Some particles appear in just the right ratio of numbers. Some particles are just inert enough. Of course, as a theoretician, my experience is that there is no such thing as 'lucky'. But that is a different story (I know, I say this quite often this time).<br /><br />Now, I finally return to the starting point: The 2HDM. In this theory, one can do the same kind of tricks with auxiliary quantities and perturbation theory and so on. If you assume that everything is just like in the standard model, this is fine. But is this really so? In the proceedings, I look at this question.
In particular, I check <a href="http://axelmaas.blogspot.de/2013/12/knowing-limits.html">whether perturbation theory should work</a>. And what I find is: it may, but it is very unlikely to work in all the circumstances where one would like it to. Especially in several scenarios in which one would like to have this property, it could indeed fail. E.g., in some scenarios this theory could have twice as many weak gauge bosons, so-called <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">W and Z bosons</a>, as we see in experiment. That would be bad, as this would contradict experiment, and therefore invalidate these scenarios.<br /><br />This is not the final word, of course not - proceedings are just status reports, not final answers. But there may be, just may be, a difference. This is enough to require us (and, in this case, me) to figure out what is going on. That will be challenging. But this time such a subtlety may make a huge difference.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-82394817488706913642014-09-05T03:44:00.000-07:002014-09-05T03:44:32.490-07:00Big blobsOne of the things I have discussed in my blog is how <a href="http://axelmaas.blogspot.de/2011/01/fields-waves-particles-and-all-that.html">particles arise in quantum theories</a>. Putting it into one (hand-waving) sentence: particles are just isolated peaks in the quantum fields which fill up the universe. But is this all that is possible?<br /><br />The answer is no, and I have dealt with the alternatives several times in my own research. But what are these alternatives?<br /><br />Particles, I said, are isolated peaks. They are what we call localized - existing at a single place. A single, and very slender, peak on a background of (nearly) nothing else.
Of course, there are also bound states, like the hydrogen atom, and <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">other such objects</a>. These are two or more particles which are close to each other and move in the same direction. However, in this case the individual particles are still, more or less, distinct.<br /><br />Here, I want to introduce another concept. It arises when one takes many particles and puts them very close together. Then the peaks start to overlap, until it is impossible to say where one starts and where another ends. In many cases such a bunch of particles is just unstable, and the particles fly apart pretty quickly. But several theories, most notably the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong interactions</a>, provide another option. When how the particles come together is carefully balanced, they form a super-particle, and the whole bunch behaves almost like one big particle. This is different from the bound states, because the particles are no longer individually detectable inside; it is just one big blob. Of course, it is possible to disassemble this blob, and the original particles come out. Hence, such blobs are not called particles, but pseudo-particles. A more fancy name for them is 'topological excitations'. This name has been given to them because of certain properties linked to the mathematical field of 'topology'. One of the particularly important features of these blobs is that they are, without external disturbance, extremely stable. The reason is, pictorially speaking, that the way the particles are interwoven makes knots which do not open.<br /><br />So aside from the fascinating fact that these things exist, what is their use for physics? They play a role especially in theories where everything interacts strongly, like the strong force.
It is hypothesized that in such theories blobs emerge easily, and may even play the most important role. This would mean that effectively not the original particles, but the blobs are the usually encountered objects. And how they interact makes up the phenomena we see in experiments. Single particles are then just some minor disturbance to the game of the big blobs. The blobs become what physicists call the 'effective degrees of freedom', meaning the important players.<br /><br />Is this true, especially in the strong interactions? It depends. We do not have an equivalent formulation of the theory in terms of blobs instead of particles, so we do not know for sure. We do know that several features, like <a href="http://axelmaas.blogspot.de/2011/10/mass-from-strong-force.html">mass generation</a>, can be explained very simply just by using the blobs. There, they helped us a lot in understanding what is going on. Other features, like the famous <a href="http://axelmaas.blogspot.de/2012/04/why-colors-cannot-be-seen.html">confinement</a>, turn out to be a much harder nut to crack. We are still not sure whether it is really possible.<br /><br />Finally, what are my stakes in the blobs? One of the questions to be posed is whether the properties of the remaining individual particles are determined by what the blobs do. Is their movement constrained by the blobs? Do their interactions mainly involve a blob, rather than occurring directly between the particles? I am trying to answer these questions with <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">simulations</a>.
Some preliminary findings are already <a href="http://arxiv.org/abs/0811.2730">available</a>, but there will be more to come.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-80604248806839027902014-08-13T07:30:00.006-07:002014-08-13T07:30:59.329-07:00Triviality is not trivialOK, starting with a pun is probably not the wisest course of action, but there is truth in it as well.<br /><br />If you have followed the various public discussions on the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a>, then you will probably have noticed the following: Though we have found it, most physicists are not really satisfied with it. Some are even repelled by it. In fact, most of us are convinced that the Higgs is only a first step towards <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">something bigger</a>. Why is this so? Well, there are a number of reasons, from purely aesthetic ones to deeply troubling ones. As the latter also affect my own research, I will write about a particularly annoying nuisance: The triviality referred to in the title.<br /><br />To really understand this problem, I have to paint a somewhat bigger picture, before coming back to the Higgs. Let me start: As a theoretician, I can (artificially) distinguish between something I call classical physics, and something I call quantum physics.<br /><br />Classical physics is any kind of physics which is fully predictive: If I know the starting conditions with sufficient precision, I can predict the outcome as precisely as desired. Newton's law of gravity, and even the famous general theory of relativity, belong to this class of classical physics.<br /><br />Quantum physics is different. Quantum phenomena introduce a fundamental element of chance into physics. We do not know why this is so, but it is very well established experimentally. In fact, the computer you use to read this would not work without it.
As a consequence, in quantum physics we cannot predict what will happen, even if we know the starting conditions as well as possible. The only thing we can do is make very reliable statements about how probable a certain outcome is.<br /><br />All kinds of known particle physics are quantum physics, and have this element of chance. This is also experimentally very well established.<br /><br />The connection between classical physics and quantum physics is the following: I can turn any kind of classical system into a quantum system by adding the element of chance, which we also call quantum fluctuations. This does not necessarily go the other way around. We know theories where quantum effects are so deeply ingrained that we cannot remove them without destroying the theory entirely.<br /><br />Let me return to the Higgs. For the Higgs part in the <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">standard model</a>, we can write down a classical system. When we then want to analyze what happens at a particle physics experiment, we have to add the quantum fluctuations. And here enters the concept of triviality.<br /><br />Adding quantum fluctuations is not necessarily a small effect. Indeed, quantum fluctuations can profoundly and completely alter the nature of a theory. One possible outcome of adding quantum fluctuations is that the theory becomes trivial. This technical term means the following: If I add quantum fluctuations to a theory, the resulting theory will describe particles which do not interact, no matter how intricately they interact in the classical version. Hence, a trivial quantum theory describes nothing interesting. What is really driving this phenomenon depends on the theory at hand. The important thing is that it can happen.<br /><br />For the Higgs part of the standard model, there is the strong suspicion that it is trivial, though we do not have a full proof for (or against) it.
Since we cannot solve the theory entirely, we cannot (yet) be sure. The only thing we can say is that if we add only a part of the quantum fluctuations, only a part of the so-called <a href="http://axelmaas.blogspot.de/2012/09/what-means-radiative-correction.html">radiative corrections</a>, the theory still makes sense. Hence it is not trivial to decide whether the theory is trivial, to reiterate the pun.<br /><br />Assuming that the theory is trivial, can we escape it? Yes, this is possible: Adding something to a trivial theory can make it non-trivial. So, if we knew for sure that the Higgs theory is trivial, we would know for sure that there is something else. On the other hand, trivial theories are annoying for a theoretician, because you either have nothing or have to artificially remove part of the quantum fluctuations. This is what annoys me right now with the Higgs. Especially as I have to deal with it in my own research.<br /><br />Thus, this is one of the many reasons people would prefer to soon discover more than 'just' the Higgs.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-2452002644033803222014-07-17T08:18:00.004-07:002014-07-17T08:18:44.060-07:00Why continue into the beyond?I have just returned from the very excellent <a href="http://ichep2014.es/">37th International Conference on High-Energy Physics</a>. However, as splendid as the event itself was, it was in a sense bad news: No results which hint at anything substantial beyond the standard model, except for the <a href="http://axelmaas.blogspot.de/2012/12/enthusiasm-vs-statistics.html">usual suspects, statistical fluctuations</a>. This does not mean that there is nothing - we know there is more for <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">many reasons</a>.
But in an increasingly frustrating sequence of years, all our observational and experimental results keep pushing it beyond our reach. Even for me as a theorist there is just not enough substantial information to do more than vaguely speculate about what could be.<br /><br />Nonetheless, I just wrote that I want to <a href="http://axelmaas.blogspot.de/2014/05/why-does-chemistry-work.html">venture into this unknown beyond</a>, and <a href="http://axelmaas.blogspot.de/2014/06/building-group.html">in force</a>. Hence it is reasonable - in fact necessary - to pose the question: Why? If I do not know, and have too little information, is there any chance of hitting the right answer? The answer to this: Probably not. But...<br /><br />Actually, there are two buts. One is simply curiosity. I am a theorist, and I can always pose the question of how something works, even without having a special application or situation in mind. Though this may just end up as nothing, it would not be the first time that the answer to a question has been discovered long before the question. In fact, the single most important building block of the standard model, so-called Yang-Mills theory, was discovered by theorists almost a decade before it was recognized to be the key to explaining the experimental results.<br /><br />But this is not the main reason for me to venture into this direction. The main reason has to do with the <a href="http://axelmaas.blogspot.de/2014/01/for-each-yes-and-no-there-is-perhaps.html">experience</a> I have had with <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs physics</a> - that, despite appearances, there is often a second layer to the theory. Such a second layer has in this case shifted the perception of how the things we describe in theory correlate with the things we see in experiment.
Many proposed theories beyond the standard model, especially those that have <a href="http://axelmaas.blogspot.de/2014/04/news-about-state-of-art.html">caught</a> my <a href="http://axelmaas.blogspot.de/2014/05/why-does-chemistry-work.html">interest</a>, are extensions of the Higgs sector of the standard model. It thus stands to reason that similar statements hold true in their cases. However, whether they hold true, and how they work, cannot be fathomed without looking at these theories. And that is what I want to do.<br /><br />Why should one do this? Such subtle questions seem at first not really related to experiment. But understanding how a theory really works should also give us a better idea of what kind of observations such a theory can actually deliver. And here it becomes very interesting for experiment. Since we do not at the current time know what to expect, we need to think about what we could expect. This is especially important as looking in every corner requires far more resources than are available to us in the foreseeable future. Hence, any insight into what kind of experimental results a theory can yield is very important for selecting where to focus.<br /><br />Of course, my research alone will not be sufficient to do this. Since it can easily be that I am looking at the 'wrong' theory, it would not be a good idea to put too much effort into it. But when there are many theoreticians working on many theories, and many theories all say that it is a good idea to look in a particular direction: then we have guidance for where to look. Then there seems to be something special in this direction. And if not, then we have excluded a lot of theories in one go.<br /><br />As one person in a discussion session (I could not figure out who precisely) aptly put it at the conference: "The time of guaranteed discoveries is over." This means that now that we have all the pieces of the standard model, we cannot expect to find a new piece any time soon.
All our indirect results even tell us that the next piece will be much harder to find. Hence, we are facing a situation last seen in physics in the second half of the 19th century and the beginning of the 20th century: There are only some hints that something does not fit. And now we have to go looking, without knowing in advance how far we will have to walk. Or in which direction. This is probably more of an adventure than the last decades, where things were essentially happening on schedule. But it also requires more courage, since there will be many more dead ends (or worse) along the way.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-15395053270894403162014-06-11T04:51:00.005-07:002014-06-11T04:51:44.915-07:00Building a groupThose who follow my <a href="https://twitter.com/axelmaas">twitter feed</a> have already seen that I will become a full professor at the Institute of Physics at the University of Graz in Austria from October on. This also implies that I will be building up a group to continue my research on the standard model of particle physics and beyond.<br /><br />I would like to use this opportunity to write a little bit about what happens behind the scenes right now, rather than something about the outcome of the research we are doing. Such a 'behind-the-scenes' look is also interesting, I think, since it shows how research is done, not only what it finds. Hence, I may write such entries more often in the future. If you have any comments or thoughts on this, I would be happy to read your opinion. <br /><br />One of the major tasks for me right now is to decide what the research focus of this group will be, and how I will organize the resources I will have for this purpose. These are the most important steps, as I have to decide what kind of positions I will open (especially for PhD and master theses), as well as what kind of computers I have to arrange for.
Since I will now have the resources to work on more projects than before, this means structuring my activities such that I do not get lost. It would not do to think that, now that I have more possibilities, I should just jump into many new fields, putting each and every member of the group on a separate topic. In the long run, I will have to take care that <a href="http://axelmaas.blogspot.de/2012/01/tools-of-trade.html">methods and techniques</a> developed and used in my group are handed down from one generation of students to the next. This will only be possible if the topics they are applied to are sufficiently close to each other. That is particularly true for my <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">numerical</a> and technically more involved <a href="http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html">analytical</a> tools.<br /><br />As a consequence, I decided to establish two main directions. One will be concerned with <a href="http://axelmaas.blogspot.de/2013/03/un-dead-stars-and-particles.html">neutron stars</a>. The other will be looking at <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs physics</a>, continuing my research on <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">combinations of Higgs particles</a>.<br /><br />But, of course, just staying with what I already do will not be sufficient. Especially as there are so many interesting problems offering themselves, like the one of electric charge about which I wrote <a href="http://axelmaas.blogspot.de/2014/05/why-does-chemistry-work.html">last time</a>. Since the technology I have accumulated so far is more than sufficient to start working on it, without needing to first invent a new approach, it is a natural way to expand.
The same is true for the so-called <a href="http://axelmaas.blogspot.de/2012/05/technicolor.html">technicolor theories</a>, on which I did some exploratory work in the <a href="http://axelmaas.blogspot.de/2013/05/what-could-higgs-be-made-of.html">past</a>. Hence I decided to make the first new additions to my research fields in these areas. Also, both can be done with the present infrastructure, so I do not need to wait for a new one.<br /><br />Now that I have decided what to do, I still have to make it happen. What are the steps I will have to take? The most important goal is to have some students with whom I can work on all these exciting research topics. This means opening positions, announcing them, and finding the right people for them. This includes not only PhD students, but also master and bachelor students. To reach them, I will have to set up a new web page and other structures to show what I am working on. Just as important will be giving good lectures in general, and also special lectures about interesting topics. I am very much looking forward to this part. I have already started to prepare the first lecture I will give in the winter term. It will focus on supersymmetry, another candidate for something <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">beyond the standard model</a>.<br /><br />In the long run, I will have to acquire third-party funding to enlarge the group beyond what I will have when I start. That, and the accompanying work, is worth a blog entry of its own, so I will not write about it now. I will return to it at a later time.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-5185379849690060362014-05-27T03:39:00.003-07:002014-05-27T03:39:37.861-07:00Why does chemistry work?This seems to be an odd question to ask in a blog about particle physics. But, as you will see, it actually makes a connection to a very deep problem of particle physics. 
A problem to which I am currently turning my attention. Hence, in preparation for things to come, I write this blog entry.<br /><br />So, where is the connection? Well, chemistry is actually all about the <a href="http://axelmaas.blogspot.de/2009/12/forces-of-nature-ii-electromagnetism.html">electromagnetic interaction</a>. One of its most important features is that atoms are electrically neutral. This is only possible if the positive charge of the atomic nucleus exactly balances the negative charge of its surrounding electrons. The electrons are elementary particles, as far as we know. The atomic nucleus, however, is ultimately made up of <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">quarks</a>. So the last statement boils down to the fact that the total electric charge of the quarks in the nucleus has to compensate that of the electrons. Sounds simple enough. And this is in fact something which has been established very precisely in experiment. The compensation is much better than one part in a billion - within our best efforts, the cancellation appears perfect.<br /><br />The problem is that it is not necessary, according to our current knowledge. Quarks and electrons are very different objects in particle physics. So, why should they carry electric charge such that this balancing is possible? The answer to this is, as so often: We do not know. Yet.<br /><br />When we are just looking at electromagnetism, there is actually no theoretical reason why they should have balanced electric charges. Electromagnetism would work in exactly the same way if they did not, if they had arbitrarily different charges. Of course, atoms would then no longer be neutral. And chemistry would then work quite differently.<br /><br />If there is no simple explanation in the details, one should look at the big picture. Perhaps it helps. In this case it does, but this time not in a very useful way.<br /><br />Electromagnetism does not stand on its own. 
It is part of the <a href="http://axelmaas.blogspot.de/2009/10/let-me-introduce-players-in-standard.html">standard model of particle physics</a>. And here things start to become seriously bizarre.<br /><br />I am a theorist. Hence, the internal consistency of a theory is something quite important to me. The standard model of particle physics as a theory turns out to be consistent only if very precise relations exist between the various particles in it - and the charges they carry. The exact cancellation of electric charges in the atoms we observe is one of the very few ways the standard model can work theoretically.<br /><br />So, did we explain it now? Unfortunately, no. "The theory should work" is not an adequate requirement for a description of nature. The game goes the other way around. Nature dictates, and our theory must describe it. Experiment rules theory in physics.<br /><br />So the fact that we need this cancellation is troublesome: We only know that we need it. It is just there; we cannot explain it with what we know.<br /><br />So this is the point where speculation enters. We know theories in which the electromagnetic charge cancellation is not 'just there', but follows immediately from the structure of the theory. The best-known examples of such theories are the so-called grand-unified theories. In these, there is a super-force, and the known forces of the standard model are just different facets of this super-force. The fact that electrons and quarks have canceling charges in such a theory just stems from the fact that everything originates in this one super-force.<br /><br />It is possible to write down a theory of such a super-force which is compatible with our current experiments. But so is the standard model. Hence, only if we find an experimental result in which a theory of such a super-force shows behavior distinct from the standard model can we be sure that it exists. This is not (yet?) 
the case.<br /><br />At the same time, we so far know relatively little about many aspects of such a theory. This is the reason I am starting to get interested in it. Especially, there are still conceptual questions we need to answer. I will write about them in future entries, because it will be quite interesting and challenging to understand these things. Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-32596051606067241822014-04-11T04:57:00.004-07:002014-04-11T04:57:38.775-07:00News about the state of the artRight now, I am at a <a href="http://www.benasque.org/2014higgs/">workshop in Benasque, Spain</a>. This workshop is called 'After the Discovery: Hunting for a non-standard Higgs Sector'. The topic is essentially this: We now have a Higgs. How can we find what else is out there? Or at least ascertain that it is currently out of our reach? That there is something more is beyond doubt. <a href="http://axelmaas.blogspot.com.es/2012/05/above-and-beyond.html">We know too many cases where our current knowledge is certainly limited</a>.<br /><br />I will not go on describing everything presented at this workshop. That would be too much, and there are certainly other places on the web where this is done. In this entry I will therefore just describe how what is discussed at the workshop relates to my own research.<br /><br />One point is certainly what the experiments find. At such specialized workshops, you can get many more details of what they actually do. Since any theoretical investigation is to some extent approximative, it is always good to know what is known experimentally. Hence, if I get a result in disagreement with experiment, I know that there is something wrong. Usually, it is the theory, or the calculations performed. Some assumption being too optimistic, some approximation being too drastic.<br /><br />Fortunately, so far nothing is at odds with what I have. That is encouraging. 
Though no reason for becoming overly confident.<br /><br />The second aspect is to see what other people do. To see which other ideas still hold up against experiment, and which have failed. Since different people do different things, combining the knowledge, successes and failures of the different approaches helps you. It helps not only in avoiding too optimistic assumptions or other errors. Other people's successes also provide new input.<br /><br />One particular example at this workshop is for me the so-called 2-Higgs-doublet models. Such models assume that there exists, besides the known Higgs, another set of Higgs particles. Though this is not obvious from the name, the doublet indicates that such models have four more Higgs particles, one of them being just a heavier copy of the one we know. I have recently considered looking into such models as well, though for quite different reasons. Here, I learned how they can be motivated for entirely different reasons, and especially why they are so interesting for ongoing experiments. I also learned much about their properties, and what is known (and not known) about them. This gives me quite some new perspectives, and some new ideas.<br /><br />Ideas I will certainly pursue, once I am back.<br /><br />Finally, collecting all the talks together, they draw the big picture. They tell me where we are now. What we know about the Higgs, what we do not know, and where there is room (left) for much more than just the 'ordinary' Higgs. It is an update for my own knowledge about particle physics. And it finally delivers the list of what will be looked at in the next couple of months and years. 
I now know better where to look for the next result relevant for my research, and relevant for the big picture.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-50848426133684329882014-03-12T08:00:00.000-07:002014-03-12T08:00:00.295-07:00Precision may matterThe <a href="http://arxiv.org/abs/1402.5050">latest paper</a> I have produced is an example of an often overlooked part of scientific research: It is not enough to get a qualitative picture. Sometimes the quantitative details modify or even alter the picture. Or, put more bluntly, sometimes precision matters.<br /><br />When we encounter a new problem, we usually first try to get a rough idea of what is going on. It starts with a first rough calculation. Such an approach is often not very precise. Still, it creates a first qualitative picture of what is going on. This may be rough around the edges, and often does not perfectly fit the bill. But it usually gets the basic features right. Performing such a first estimate is often not too serious a challenge.<br /><br />But once this rough picture is there, the real work begins. Almost fitting is not quite the same as fitting. This is the time where we need to get quantitative. This implies that we need to use more precise, probably different, but almost certainly more tedious methods. These calculations are usually not as simple, and a lot of work is involved. Furthermore, we usually cannot solve the problem perfectly in the first round of improvement. We get things a bit rounder at the edges, and the picture normally starts to fit better. Still not everywhere, but better. Often, a second, and sometimes many more, rounds are necessary.<br /><br />Fine, you may say. If things are improving, why bother doing even better? Is almost fitting not as good as fitting? It is not quite the same. The best-known examples are found in history. 
At the beginning of the 20th century, the picture of physics seemed to fit the real world almost perfectly. There were just some small corners where it seemed to still require a bit of polishing. These small problems actually led to one of the greatest changes in our understanding of the world, giving birth to both quantum physics and the theory of relativity. Actually, today we are again in a similar situation. Most of what we know, especially the standard model, fits the bill very nicely. But we still have some rough patches. This time, we have learned our lesson, and keep digging into these rough patches. Our secret hope is, of course, that a similar disruption will occur, and that our view of the world will be fundamentally changed. Whether this will be the case, or whether we just have to slightly augment things, we do not yet know. But it will surely be a great experience to figure it out.<br /><br />Returning to my own research, it is precisely this situation which I am looking at. However, rather than looking at the whole world, I have been looking at just a very simplified theory. One that involves only the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">gluons</a>. This is a much simpler theory than the standard model. Still, it is so complicated that we have not (yet) been able to solve it completely. We made great progress, though, and it seems that we almost got it right. Still, also here, some rough edges remain. In this paper, I am looking precisely at these edges, and just checking how rough they really are. I am not even trying to round them further. I am not the first to do it, and many other people have looked at them in one or the other way. However, doing it more than once, and especially from slightly different angles, is important. It is part of a system of checks and balances, to avoid errors. It is also true in science: Nobody is perfect. 
And though there are many calculations which are correct, even the greatest mind may sometimes fail. And therefore it is very important to cross-check any result.<br /><br />In this particular case, everything is correct. But, by looking more precisely, I found some slight deviations. These were previously not found, as precision is almost always also a question of the amount of resources invested. In this case, the resources are mostly computing time, and I have just poured a lot of it into the problem. These slight deviations do not require a completely new view of the whole theory. But they change some slight aspects. This may sound like not much. But if they should be confirmed, they provide closure in the following sense: Previously, some conclusions remained dangling, and seemed not to be at ease with each other. There were some ways out, but the previously known results rather suggested a more fundamental problem. My new contribution shifts these old results slightly, and makes them more precise. The new interpretation now fits much better with the suspected ways out than with a fundamental problem. Hence, looking closer has in this case improved our understanding.<br /><br />In this sense, theoretical physics often has much in common with detective work. We start with a suspicion. But then tedious work on the details is required to uncover more and more of the whole picture, until either the original suspicion is confirmed, or it shifts to a different suspect, one which may even have been completely overlooked in the beginning. 
However, at least normally, nobody tries to kill us if we come too close to the truth.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-34527021693276277652014-02-03T07:52:00.000-08:002014-02-03T07:52:04.457-08:00The trouble with new toysYou may remember that one of the projects I am <a href="http://axelmaas.blogspot.de/2012/04/what-strong-interactions-temperature.html">working on</a> is understanding so-called <a href="http://axelmaas.blogspot.de/2013/01/taking-detour-helps.html">neutron stars</a>. These are the remnants of heavy stars, which die in a gigantic explosion called a supernova. One of the main problems with understanding these neutron stars is that it is far too expensive to <a href="http://axelmaas.blogspot.de/2013/03/un-dead-stars-and-particles.html">simulate them in detail</a> using computers. We try in our research to circumvent this problem by using not the original theory describing neutron stars, but a <a href="http://axelmaas.blogspot.de/2013/03/un-dead-stars-and-particles.html">slightly modified version</a>. For this modified theory, we actually can do simulations. So is everything shiny now? No, unfortunately not. And about these problems we have recently published a <a href="http://arxiv.org/abs/1312.5579">new paper</a>. Today, I will outline what we did in this paper.<br /><br />So what is actually the problem? The problem is that some of our theories are not linear. What does linear mean? Well, a theory is called linear if, when we apply an external input to it, the effect it has on the theory is of (roughly) the same size as whatever we applied. In contrast, for anything which is non-linear, the response can be much larger, or much smaller, than whatever we applied. Unfortunately, the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong interaction</a>, which is responsible for neutron stars, is non-linear. 
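To make the distinction concrete, here is a minimal toy sketch of linear versus non-linear response. The two functions are purely illustrative inventions for this comparison, not quantities from the actual theory:

```python
# Toy illustration of linear vs. non-linear response to a small input change.
# Both functions are made up for this sketch; neither comes from QCD.

def linear_response(x):
    return 3.0 * x   # output always changes in proportion to the input

def nonlinear_response(x):
    return x ** 3    # output can change much faster than the input

base, perturbed = 1.0, 1.1  # perturb the input by 10%

lin_change = (linear_response(perturbed) - linear_response(base)) / linear_response(base)
nonlin_change = (nonlinear_response(perturbed) - nonlinear_response(base)) / nonlinear_response(base)

print(f"linear response change:     {lin_change:.1%}")
print(f"non-linear response change: {nonlin_change:.1%}")
```

For the linear system the output shifts by the same 10% as the input; for the cubic toy it shifts by roughly 33%, three times as much. In a strongly non-linear theory like the strong interaction, the mismatch can be far more dramatic.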
Hence, even though we modified it just a little bit, we can potentially have very strong changes. Therefore, we have to make sure that whatever we did does not have unplanned, strong effects. This task led to the mentioned paper.<br /><br />The main question we have to answer is: If the theory is so sensitive to modifications, were the effects of our modifications still harmless enough? Can we still learn something?<br /><br />The answer is, as always: it depends. To judge the similarities, we have looked at the <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">hadrons</a>, the particles built up from <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">quarks and gluons</a>. In the strong force, the masses of these hadrons follow a very special pattern. Especially, there are some unusually light ones, then a few intermediate ones, and then, already quite heavy, the first one which plays an important role in everyday life: The <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">proton</a>, the nucleus of a hydrogen atom. We found that in our modified theory this pattern repeats itself. This is already a good sign. However, we also found some indications that not all is well. Some of the lighter particles differ in a number of details from their counterparts in nature, especially the lightest ones.<br /><br />Since we are mostly interested in neutron stars, we also did the calculations at large densities. There, we saw that indeed the slightly different properties of the lightest particles play a role. At quite small densities, we observe a behavior which we are reasonably sure will not occur in nature. So is all lost, then? It does not seem so. While at these densities the behavior is different, this will probably not play an important role for the densities we are really interested in. 
And indeed, at higher densities the theory behaved similarly to our expectations: It seems to behave in a way which we would guess based on the observations of real neutron stars, and on general arguments. This is quite encouraging. Still, we also encountered two more challenges. One is that to make a definite statement, we will need much more precision: Some of what we see is sensitive to details. We need to understand this better. And this will require many more calculations.<br /><br />The other one is that we are still not quite sure whether some special kind of different particle plays too important a role. This special kind of particle is similar to the proton, but not present in nature. It is only a feature of the modified theory. This is a so-called hybrid. In contrast to the proton, which consists of three quarks and no gluons, it is made of one quark and three gluons. There are certain technical reasons why this particle could be a problem when trying to understand neutron stars. So far, it has escaped detection in our calculations. We have to find it, to make really sure what is going on. This will be a challenge.<br /><br />Fortunately, even in the worst-case scenario of both problems, what we did will not be irrelevant. On the one hand, it was a genuinely new theory we looked at, and we have already learned very much about how theories in general work. On the other hand, what we created will also serve as a benchmark for other methods. If someone creates a new method to get to the neutron star's core, she or he can test it against our simulations, to build confidence in it.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-76797485568786349752014-01-15T06:31:00.002-08:002014-01-15T06:31:25.455-08:00For each yes and no there is a perhapsThe <a href="http://axelmaas.blogspot.de/2013/12/knowing-limits.html">last time</a>, I was writing about my research on the Higgs. 
Especially, I was writing about how we tested <a href="http://axelmaas.blogspot.de/2012/01/perturbation-theory.html">perturbation theory</a> using <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">numerical simulations</a>. I was quite optimistic back then that I would have results by now, which could be of either of two types: Either perturbation theory is a good description, or it is not.<br /><br />By now we have finished this project, and you can <a href="http://arxiv.org/abs/1312.4873">download</a> the results from the arxiv. The <a href="http://www.arxiv.org/">arxiv</a> is a server where you can find essentially every published result in particle physics of the last twenty years, legally and free of charge. But let's get back to our results. As I should have expected, things turned out differently. Instead of a clear yes or no answer I got a perhaps.<br /><br />The <a href="http://axelmaas.blogspot.de/2013/12/knowing-limits.html">original question</a> was under which circumstances perturbation theory can be applied. It appears to be a simple enough question. Originally, it looked like this would depend on the relative sizes of the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> mass and the <a href="http://axelmaas.blogspot.de/2010/02/forces-of-nature-iv-weak-force.html">W and Z</a> masses. And yes, it does. But. We found more.<br /><br />We found different regimes. One is where the Higgs is lighter than the W and Z. Of course, this is not a situation we encounter in nature, where it is about 50% heavier. But as theoreticians, we are allowed to play this kind of game. Anyway, in this case, we confirmed what was already indirectly known from other investigations: Perturbation theory seems not to work. Ever. While the first statement is not too surprising, the second statement is. 
Naively, one expected that if the interactions between the Higgs and the W and Z are of certain relative sizes, perturbation theory would still work. We did not find any hint of that. Is this already a no, then? Unfortunately not. As I have <a href="http://axelmaas.blogspot.de/2013/08/picking-right-theory.html">described earlier</a>, it is not so easy to relate a simulation to reality. Even if it is only a fantasy version of reality, as in this case. Hence, we cannot be sure that we have exhausted all possibilities. The only thing we can say for sure is that there are cases where perturbation theory does not work. Perhaps there is something more, some other cases. And thus there is the first perhaps.<br /><br />The situation gets even more interesting when the Higgs is heavier than the W and Z, but lighter than twice their mass. In this regime, perturbation theory is expected to be pretty good. At least here, we find a rather clear answer: Perturbation theory does indeed do well. Wherever we looked, we did not find anything to the contrary. Of course, again we cannot exclude that there is somewhere else a different case. But so far, everything seems to be fine.<br /><br />When the Higgs finally hits the magic limit of twice the W and Z masses, something unexpected happens. This limit is particularly interesting, because above it, the Higgs can <a href="http://axelmaas.blogspot.de/2013/02/almost-nothing-is-forever-decays.html">decay</a> into W and Z. The expectation was that perturbation theory is still valid, at least until reaching several times the W and Z mass. But here, we found something odd. We found both possibilities, depending on the relative interaction strengths. In one case, perturbation theory keeps working well beyond this mass. In the other, perturbation theory starts to fail already a little bit above this critical mass. We do not yet really understand what is going on there, and what really characterizes the two different cases. 
We are working on this right now. But whatever it is, it is different from what we expected. And this once more teaches us to always expect that our naive expectations will not be fulfilled. Things remain full of surprises, even if you think you understood them.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-21534404510678580542013-12-02T06:43:00.000-08:002013-12-02T06:43:53.141-08:00Knowing the limitsSome time ago, I presented one of the methods I am using: The so-called <a href="http://axelmaas.blogspot.de/2012/01/perturbation-theory.html">perturbation theory</a>. This fancy name signifies the following idea: If we know something, and we add just a little disturbance (a perturbation) to it, then this will not change things too much. If this is the case, then we can systematically give the consequences of the perturbation. Mathematically, this is done by first calculating the direct impact of the perturbation (the leading order). Then we look at the first correction, which involves not only the direct effect, but also the simplest indirect effect, and so on.<br /><br />Back then, I already wrote that, nice as the idea sounds, it is not possible to describe everything by it, although it works in many cases very beautifully. But this leaves us with the question of when it does not work. We cannot know this exactly. That would require knowing the theory perfectly, and then there would be no need to do perturbation theory in the first place. So how can we then know what we are doing?<br /><br />The second problem is that in many cases anything but perturbation theory is technically extremely demanding. Thus the first thing one checks is the simplest: whether perturbation theory makes sense by itself. Indeed, it turns out that usually perturbation theory starts to produce nonsense if we increase the strength of the perturbation too far. 
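The general mechanism - a series of ever smaller corrections that works for a weak perturbation but falls apart for a strong one - can be sketched with a deliberately simple toy that has nothing to do with the actual Higgs calculation: expanding 1/(1-g) as the series 1 + g + g&#178; + ..., where g plays the role of the perturbation strength.

```python
# Toy model of a perturbative expansion: 1/(1 - g) = 1 + g + g^2 + ...
# The leading order is 1; each further term is a smaller correction,
# as long as the "perturbation strength" g stays small.
# Purely illustrative - not the actual Higgs-mass expansion.

def exact(g):
    return 1.0 / (1.0 - g)

def perturbative(g, order):
    # truncated series, summed up to the given order
    return sum(g ** n for n in range(order + 1))

for g in (0.1, 0.5, 0.9):
    approx = perturbative(g, order=3)
    rel_err = abs(approx - exact(g)) / exact(g)
    print(f"g={g}: exact={exact(g):.3f}  series={approx:.3f}  error={rel_err:.1%}")
```

At g=0.1 the truncated series is accurate to about a hundredth of a percent; at g=0.9 it is off by more than half, and for g above 1 the series stops converging entirely. This is the kind of self-diagnosed breakdown described above - with the caveat, also discussed above, that the approximation may already be quietly wrong before it becomes obviously nonsensical.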
This indicates clearly the breakdown of our assumptions, and thus the breakdown of perturbation theory. However, this is a best-case scenario. Hence, one wonders whether this approach could be fooling us. Indeed, it could be that the approximation breaks down long before it gets critical, so that it first produces bad (or even wrong) answers before it produces nonsensical ones.<br /><br />This seems like serious trouble. What can be done to avoid it? There is no way inside perturbation theory to deal with it. One way is, of course, to compare to experiment. However, this is not always the best choice. On the one hand, it is always possible that our underlying theory actually fails. Then we would misinterpret the failure of our ideas of nature as the failure of our methods. One would therefore like to have a more controllable way. In addition, we often reduce complex problems to simpler ones, to make them tractable. But the simpler problems often do not have a direct realization in nature, and thus we have no experimental access to them. Then this way is also not possible.<br /><br />Currently, I find myself in such a situation. I want to understand, in the context of my <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">research</a>, to what extent perturbation theory can be used. In this context, the perturbation is usually the mass of the Higgs. The question then becomes: Up to which Higgs mass is perturbation theory still reliable? Perturbation theory itself predicts its failure at not more than eight times the mass of the observed Higgs particle. The question is whether this is adequate, or whether this is too optimistic.<br /><br />How can I answer this question? Well, here enters <a href="http://axelmaas.blogspot.de/2012/03/methods-united-they-are-strong.html">my approach not to rely only on a single method</a>. 
It is true that we are not able to calculate as much with methods other than perturbation theory, just because anything else is too complicated. But if we concentrate on a few questions, enough resources are available to calculate things otherwise. The important task is then to make a wise choice. That is, a choice from which one can read off the desired answer, in the present case whether perturbation theory applies or not. And at the same time to do something one can afford to calculate.<br /><br />My present choice is to look at the relation between the <a href="http://axelmaas.blogspot.de/2010/02/forces-of-nature-iv-weak-force.html">W boson</a> mass and the Higgs mass. If perturbation theory works, there is a close relation between both, if everything else is adjusted in a suitable way. The perturbative result can already be found in textbooks for physics students. To check it, I am using <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">numerical simulations</a> of both particles and their interactions. Even this simple question is an expensive endeavor, and several ten-thousand days of computing time (we always calculate how much time it would take a single computer to do all the work all by itself) have been invested. The results I found so far are intriguing, but not yet conclusive. However, in just a few more weeks, it seems, the fog will finally lift, and at least something can be said. I am looking forward to this date with great anticipation, since either of two things will happen: Something unexpected, or something reassuring.Axel Maashttps://plus.google.com/108226118876999381332noreply@blogger.com0tag:blogger.com,1999:blog-3289825502161718378.post-10464130610883446532013-11-04T07:51:00.000-08:002013-11-04T07:51:46.946-08:00How to mix ingredientsI have <a href="http://axelmaas.blogspot.de/2012/03/methods-united-they-are-strong.html">written earlier</a> that one particularly powerful way to do calculations is to combine different methods. 
In this entry, I will be a bit more specific. The reason is that we just published a <a href="http://arxiv.org/abs/1310.8166">proceeding</a> in which we describe our progress in preparing for such a combination. A proceeding is, by the way, just a fancy name for a write-up of a talk given at a conference.<br /><br />How to combine different methods always depends on the methods. In this case, I would like to combine <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">simulations</a> and the so-called <a href="http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html">equations of motion</a>. The latter describe how particles move and how they interact. Because there is usually an infinite number of them - a particle can interact with another one, or two, or three, or... - you usually cannot solve them exactly. That is unfortunate. You may then ask how one can be sure that the results after any approximation are still useful. The answer is that this is not always clear. It is here where the combination comes in.<br /><br />The solutions to the equations of motion are not numbers, but mathematical functions. These functions describe, for example, how a particle moves from one place to another. Since this travel depends on how far these places are apart, it must be described by a function. Similarly, there are results which describe how two, three, four... particles interact with each other, depending on how far they are apart from each other. Since all of these functions describe how two or more things are related to each other - a relation which is called a correlation in theoretical physics - these functions are called correlation functions.<br /><br />If we were able to solve the equations of motion exactly, we would get the exact answer for all these correlation functions. We are not, and thus we will not get the exact ones, but different ones. The important question is how different they are from the correct ones. 
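As a toy illustration of what a correlation function is, the following sketch estimates a two-point correlation from randomly generated 'configurations' of a one-dimensional chain of sites. The smoothed-noise field here is purely made up as a stand-in; a real lattice simulation produces its correlation functions in a conceptually similar way, by averaging products of field values over many configurations:

```python
import numpy as np

# Toy sketch: estimate a two-point correlation function C(r) = <phi(x) phi(x+r)>
# by averaging over many randomly generated "configurations" of a periodic chain.
# The field below is just smoothed white noise - illustrative only, not lattice QCD.

rng = np.random.default_rng(0)
n_sites, n_configs, max_sep = 64, 2000, 16

def make_configuration():
    # correlate neighbouring sites by averaging each site with its neighbours
    phi = rng.normal(size=n_sites)
    return (phi + np.roll(phi, 1) + np.roll(phi, -1)) / 3.0

corr = np.zeros(max_sep)
for _ in range(n_configs):
    phi = make_configuration()
    for r in range(max_sep):
        # average of phi(x) * phi(x + r) over all sites x (periodic boundary)
        corr[r] += np.mean(phi * np.roll(phi, -r))
corr /= n_configs

print(corr[:4])  # largest at zero separation, dying off with distance
```

The estimated function is largest at zero separation and falls off as the two points move apart - exactly the kind of distance-dependent object described above, and the kind of data a simulation can hand over to the equations of motion.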
One possibility is, of course, to compare to experiment. But it is in all cases a very long way from the start of the calculation to results which can be compared to experiment. There is therefore a great risk that a lot of effort is invested in vain. In addition, it is often not easy to identify where the problem is, if there is a large disagreement.<br /><br />It is therefore much better if there is a possibility to check this much earlier. This is the point where the numerical simulations come in. Numerical simulations are a very powerful tool. Up to some subtle, but likely practically irrelevant, fundamental questions, they essentially simulate a system exactly. However, the closer one moves to reality, the more expensive the calculations become. Most expensive is to have very different scales in a simulation, e.g. large distances and small distances simultaneously, or large and small masses. There are also some properties of the standard model, like <a href="http://axelmaas.blogspot.de/2011/11/chiral-or-why-left-and-right-is-not.html">parity violation</a>, which so far cannot be simulated in any reasonable way at all. But within these limitations, it is possible to calculate pretty accurate correlation functions.<br /><br />And this is how the methods are combined. The simulations provide the correlation functions. They can then be used with the equations of motion in two ways. Either they serve as a benchmark, to check whether the approximations make sense. Or they can be fed into the equations as a starting point for solving them. Of course, the aim is then not to reproduce them, as otherwise nothing would have been gained.
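To make the idea of a correlation function slightly more concrete, here is a minimal sketch - my own illustration, not taken from the proceeding; the functional form and the numbers are purely illustrative. It shows the typical behavior: the correlation of a massive particle between two points falls off exponentially with distance, and from values of such a function (as a simulation would provide them) one can read off an effective mass.

```python
import math

def correlator(r, mass):
    """Toy two-point correlation function of a particle with a given
    mass: it falls off exponentially with the distance r, so the mass
    sets how quickly the correlation is lost (illustrative form only)."""
    return math.exp(-mass * r) / r

def effective_mass(r, dr, corr):
    """Extract an 'effective mass' from how fast the correlator decays
    between distances r and r + dr, as one would from simulation data."""
    return math.log(corr(r) / corr(r + dr)) / dr

# Pretend these values came from a numerical simulation:
mass = 0.5
data = lambda r: correlator(r, mass)

# At large distances the extracted effective mass approaches the true mass:
m_eff = effective_mass(100.0, 1.0, data)
print(round(m_eff, 1))   # prints 0.5
```

Real lattice correlators are of course noisy numbers at discrete separations rather than a clean formula, but the extraction of physics from their decay works in essentially this way.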
Both possibilities have been used very successfully in the past, especially for <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">hadrons made from quarks</a> and to understand how the <a href="http://axelmaas.blogspot.de/2012/04/what-strong-interactions-temperature.html">strong force has influenced the early universe or plays a role in neutron stars</a>.<br /><br />Our aim is to use this combination for <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">Higgs physics</a>. What we did, and have shown in the proceeding, is to calculate the correlation functions of a theory including only the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> and the <a href="http://axelmaas.blogspot.de/2010/02/forces-of-nature-iv-weak-force.html">W and Z</a>. This will now form the starting point for bringing the <a href="http://axelmaas.blogspot.de/2009/11/in-previous-post-particles-appeared.html">leptons and quarks</a> into the game using the equations of motion. We do it this way to avoid the problem with parity, and to be able to include the very different masses of the particles. This will be the next step.<br /><br />Axel Maas<br /><br />Looking for something out of the ordinary (2013-10-14)<br /><br />A few days ago, I was at a quite interesting workshop on "<a href="https://indico.desy.de/conferenceDisplay.py?ovw=True&confId=7512">Anomalous quartic gauge couplings</a>". Since the topic itself is quite interesting and has significant relevance to my own research on <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">Higgs physics</a>, I would like to spend this entry on it.
Let me start by describing what the workshop was about, and what is behind its rather obscure title.<br /><br />At the experiments at the LHC, what we do is smash two <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">protons</a> into each other. Occasionally, one of two things may happen. The first possibility is that in the process each proton emits a W or Z boson, the carriers of the <a href="http://axelmaas.blogspot.de/2010/02/forces-of-nature-iv-weak-force.html">weak force</a>. These two particles may collide with each other, but emerge without change. That is what we call an elastic scattering. Afterwards, these bosons are detected. Or rather, they are detected indirectly through their <a href="http://axelmaas.blogspot.de/2013/02/almost-nothing-is-forever-decays.html">decay products</a>. The second possibility is that one of the protons emits a very energetic W or Z, which then splits into three of these. These are then, again indirectly, observed. This is called an inelastic effect. Of course, much more usually goes on, but I will concentrate here on the most important part of the collision.<br /><br />At first sight, both types of events may not seem to have a lot to do with each other. But they have something important in common: in all cases four particles are involved. They may be any combination of Ws and Zs. To avoid further excessive use of 'W or Z', I will just call them all Ws. Which is which is not really important right now.<br /><br />The fact that four are involved already explains the 'quartic' in the name of the workshop. In the standard model of particle physics, what happens can occur mainly in either of two ways. One is that two Ws form another particle for some time. This may be another W or Z, or also a Higgs. Or all four can interact with each other at the same time.
The latter is called a quartic gauge coupling, which explains the complete second half of the name of the workshop.<br /><br />Now, what is not right with this, since we talk about something anomalous? Actually, everything is all right with it in the standard model. But we are not only interested in what there is in the standard model, but also in whether there is something else. If there is something else, what we observe should not fit with what we calculate in the standard model alone. Such a difference would thus be anomalous. Hence, if we measure that the quartic gauge coupling is different from the one we calculate in the standard model, we call it an anomalous quartic gauge coupling. And then we are happy, since we have found something new. Hence, the workshop was about looking for such anomalous quartic gauge couplings. So far, we have not found anything, but we have not yet seen many such situations - far too few to make any statement. But we will record many more in the future. Then we can really make a statement. At this workshop we were thus essentially preparing for the new data, which we expected to come in once the LHC is switched on again in early 2014.<br /><br />What does this have to do with my research? Well, I try to figure out whether two Higgs particles, or Ws, can form a <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">bound state</a>. If such bound states exist, they would form in exactly such processes, for a very short time. Afterwards, they decay again into the final Ws. If they form, they contribute to what we observe. They are part of the standard model. So to see something new, we have to subtract their contributions, if they are there. Otherwise, they could be mistaken for a new, anomalous effect. Most important for me at the workshop was to understand what is measured, and how they could contribute.
It was also important to learn what can be measured at all, since any experimental constraint I can get helps me improve my calculations. This kind of <a href="http://axelmaas.blogspot.de/2013/08/picking-right-theory.html">connecting</a> <a href="http://axelmaas.blogspot.de/2012/08/two-worlds-theory-and-experiment.html">experiment and theory</a> is very important for us, as we are still far from the point where our results are perfect. The developments in this area therefore remain very important to my own research project.<br /><br />Axel Maas<br /><br />Blessing and bane: Redundancy (2013-09-11)<br /><br />We have just published a <a href="http://arxiv.org/abs/1309.1957">new paper</a>. It is part of my research on the <a href="http://axelmaas.blogspot.de/2012/04/groundwork.html">foundations of theoretical particle physics</a>. To fully appreciate its topic, it is necessary to say a few words about an important technical tool: redundancy.<br /><br />Most people have heard the term when it comes to technology. If you have a redundant system, you have two or more copies of the same system. If the first one fails, the second takes over, and you have time to do repairs. Redundancy in theoretical physics is a little bit different. But it serves the same end: to make life easier.<br /><br />When one thinks about a theory in particle physics, one thinks about the particles it describes. But if we wrote down our theories using only the particles which we can observe in experiment, these theories would very quickly become very complicated. Too complicated, in fact, in most cases. Thus people found a trick very early on. If you artificially add something more to the theory, it becomes simpler. Of course, we cannot really just add it, because otherwise we would have a different theory.
What we really do is start with the original theory. Then we add something additional. We make our calculations. And from the final result we then remove what we added. In this sense, we have added a redundancy to our theory. It is a mathematical trick, nothing more. We imagine a theory with more particles, and by removing everything superfluous in the end, we end up with the result for our original problem.<br /><br />Modern particle physics would not be imaginable without such tricks. The power of redundancy is one of the first things we learn when we start particle physics. A particularly powerful case is to add additional particles. Another is to add something external to the system, like opening a door. It is the latter kind with which we had to deal.<br /><br />Now, what does this have to do with our work? Well, redundancies are a powerful tool, but one has to be careful with them nonetheless. As I have written, we remove at the end everything we added. The question is: can this always be done? Or does everything become so entwined that it is no longer possible? It was such a question that we looked at.<br /><br />To do this, we considered a theory of only <a href="http://axelmaas.blogspot.de/2009/11/in-previous-post-particles-appeared.html">gluons</a>, the carriers of the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong force</a>. There has been a rather long debate in the scientific community about how such gluons move from one place to another. A consensus has only recently started to emerge. One of the puzzling things was that certain properties of their movement could be proven mathematically. Surprisingly, <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">numerical simulations</a> did not agree with this proof. So what was wrong?<br /><br />It was an example of not reading the fine print carefully enough. The proof made some assumptions. Making assumptions is not bad.
It is often the only way of making progress: make an assumption, and see whether everything fits together. Here it did not. When studying the assumptions, it turned out that one had to do with such redundancies.<br /><br />What was done was essentially to add an artificial sea of such gluons to the theory. At the end, this sea was made to vanish, to recover the original result. The assumption was that the sea could be removed without affecting how the gluons move. What we found in our research was that this is not correct. When removing the sea, the gluons cling to it in such a way that for any sea, no matter how small, they still move differently. Thus, removing the sea little by little is not the same as starting without the sea in the first place. Thus, the introduction of the sea was not permissible, and this explains the discrepancy. There have been a number of further results along the way, from which we learned a lot more about the theory, and about gluons, but this was the essential result.<br /><br />This may seem a bit strange. Why should an extremely tiny sea have such a strong influence? I am talking here about a difference of principle, not just a number.<br /><br />The reason for this can be found in a very strange property of the strong force, which is called <a href="http://axelmaas.blogspot.de/2012/04/why-colors-cannot-be-seen.html">confinement</a>: a gluon cannot be observed individually. When the sea is introduced, it offers the gluons the possibility to escape into the sea - a loophole in confinement. It is then a question of principle: any sea, no matter how small, provides such a loophole. Thus, there is always an escape for the gluons, and they can therefore move differently. At the same time, if there is no sea to begin with, the gluons remain confined.
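The mathematical heart of such a situation is a limit that does not commute: shrinking the sea step by step is not the same as never having it. A minimal toy sketch (my own illustration, with a made-up function, not the actual gluon propagator) shows how this can happen:

```python
def response(x, eps):
    """Toy stand-in for a quantity in a theory with an artificial
    'sea' of strength eps; eps = 0 means no sea at all.
    (Purely illustrative, not the actual gluon calculation.)"""
    if eps == 0.0:
        return 0.0          # no sea: the quantity vanishes at x = 0
    return eps**2 / (x**2 + eps**2)

# Shrinking the sea step by step: the value at x = 0 never changes...
for eps in (1.0, 0.1, 0.001):
    print(response(0.0, eps))   # prints 1.0 each time

# ...yet starting without any sea gives a different answer:
print(response(0.0, 0.0))       # prints 0.0
```

However small `eps` is made, the value at the origin stays at 1, while at `eps = 0` exactly it is 0 - the same kind of "any sea, no matter how small, provides a loophole" behavior described above.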
Unfortunately, this loophole was buried deep in the mathematical formalism, and we first had to find it.<br /><br />This taught us an important lesson: while redundancies are a great tool, one has to be careful with them. If you do not introduce your redundancies carefully enough, you may alter the system in a way too substantial to be undone. We now know what to avoid, and can go on making further progress.<br /><br />Axel Maas<br /><br />Picking the right theory (2013-08-06)<br /><br />One of the more annoying facts of modern particle physics is that our theories cannot stand purely on their own. All of them have some parameters which we are not (yet) able to predict. The standard model of particle physics has some thirty-odd parameters which we cannot fathom by theoretical investigations alone. An example is the masses of the <a href="http://axelmaas.blogspot.de/2009/11/in-previous-post-particles-appeared.html">particles</a>. We need to measure them in an experiment.<br /><br />Well, you may say that this is not too bad. If we just have to make a couple of measurements, and can then predict all the rest, this is acceptable. We just need to find as many independent quantities as there are parameters, and we are done. In principle, this is correct. But, once more, it is the gap between <a href="http://axelmaas.blogspot.de/2012/08/two-worlds-theory-and-experiment.html">experiment and theory</a> which makes life complicated. Usually, what we can easily measure is something which is very hard to compute theoretically. And, of course, something we can calculate easily is hard to measure, if it is possible at all.<br /><br />The only thing left is therefore to do our best, and get as close to each other as possible. As a consequence, we usually do not know the parameters exactly, but only within a certain error.
This may still not seem too bad. But here enters something which we call theory space. This is an imaginary space in which every point corresponds to one particular set of parameters for a given theory.<br /><br />In principle, when we change the parameters of our theory, we do not just change some quantitative values. We are really changing our theory. Even a very small change may produce a large effect. This is not only a hypothetical situation - we know explicit examples where this happens. Why is this so?<br /><br />There are two main reasons for this.<br /><br />One is that the theory depends very sensitively on its parameters. Thus, a slight change of the parameters will induce a significant change in everything we can measure. Finding precisely the parameters which reproduce a certain set of measurements then requires a very precise determination of those parameters, possibly with many digits of precision. This is a so-called fine-tuning problem. An example of this is the standard model itself. The mass of the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs</a> is fine-tuned in the standard model; it depends strongly on the parameters. This is the reason why we were not able to fully predict its mass before we measured it, although we had many, many more measurements than the standard model has parameters. We could just not pinpoint it precisely enough, but only within a rough factor of ten.<br /><br />A related problem is the so-called hierarchy problem of modern particle physics. It is the question why the parameters are fine-tuned to just the value we measure in experiment, and not slightly off, which would give a completely different result. Today this especially takes the form of "Why is the Higgs particle rather light?"<br /><br />But the standard model is not as bad as it can get. In so-called chaotic models the situation is far worse, and any de-tuning of the parameters leads to an exponential effect.
That is the reason why these models are called chaotic: because everything is so sensitive, there is no structure at first sight. Of course, one can, at least in principle, calculate this dependence exactly. But every ever-so-slight imprecision is punished by an exponentially large effect. This makes chaotic models a persistent challenge for theoreticians. Fortunately, particle physics is not of this kind, but such models are nonetheless known to be realized in nature. For example, some fluids behave like this under certain conditions.<br /><br />The other reason is that theory space can have separate regions. Inside these regions, so-called phases, the theory shows quite distinct behaviors. A simple example is water. If you take the temperature and pressure as parameters (though they are not really fundamental parameters of the theory of water - this is just for the sake of the argument), then two such phases could be solid and liquid. In a similar manner, theories in particle physics can show different phases.<br /><br />If the parameters of a theory are very close to the boundary between two phases, a slight change will push the theory from one phase to the other. Thus, here too it is necessary to determine the parameters very precisely.<br /><br />In my research on <a href="http://axelmaas.blogspot.de/2012/11/a-higgs-and-higgs-make-what.html">Higgs physics</a>, I am currently facing such a challenge. The problem is that the set of basic parameters is not unique - you can always exchange one set for another, as long as the number is kept the same and there is no trivial connection between two of the new parameters. Especially, depending on the <a href="http://axelmaas.blogspot.de/2012/01/tools-of-trade.html">methods used</a>, particular sets are more convenient. However, this implies that for every method it is necessary to redo the comparison to experiment. Since I work on Higgs physics, the dependence is strong, as I said above.
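The exponential sensitivity of chaotic models mentioned above can be made tangible with a textbook toy system, the doubling map - my own illustration, not a particle-physics model. Using exact rational arithmetic, one can watch a tiny mismatch in the starting value grow by a factor of two at every single step:

```python
from fractions import Fraction

def doubling(x, steps):
    """Iterate the doubling map x -> 2x mod 1, one of the simplest
    chaotic systems: any small offset is doubled at every step.
    (A standard textbook illustration, not a particle-physics model.)"""
    for _ in range(steps):
        x = (2 * x) % 1
    return x

# Two 'theories' whose starting value differs by one part in 2**40:
delta = Fraction(1, 2**40)
a = doubling(Fraction(1, 3), 30)
b = doubling(Fraction(1, 3) + delta, 30)

# After only 30 steps the mismatch has grown by a factor of 2**30:
print(b - a == delta * 2**30)   # prints True
```

An imprecision of one part in a trillion has been amplified a billionfold after thirty steps; a few more steps and the two "theories" would have nothing in common anymore. This is exactly the kind of punishment for imprecision that makes chaotic models so hard to pin down.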
It is thus highly challenging to make the right pick. And currently some of my results depend significantly on this determination.<br /><br />So what can I do? Instead of pressing on, I have to take a step back and improve my connection to experiment. Such a cycle is quite normal. First you make your calculation, getting some new and exciting result, but with large errors. To improve your errors, you have to go back and do some groundwork, to improve the basis. Then you return to the original result, and get an improved one. Then you do the next step, and repeat.<br /><br />Right now, I am in such a process not only for Higgs physics. Our project on a <a href="http://axelmaas.blogspot.de/2013/01/taking-detour-helps.html">neutron star's interior</a> is also currently undergoing such an improvement. This work is rarely very exciting, and rarely yields unexpected results. But it is very necessary, and one cannot get away without it if one wants reliable results. This, too, is part of a researcher's (everyday) life.<br /><br />Axel Maas<br /><br />What could the Higgs be made of? (2013-05-17)<br /><br />One of the topics I am working on is how the standard model of particle physics can be extended. The reason is that it is intrinsically, though not practically, <a href="http://axelmaas.blogspot.de/2012/05/above-and-beyond.html">flawed</a>. Therefore, we know that there must be more. However, right now we have only very vague hints from experiments and astronomical observations as to how we have to improve our theories. Therefore, many possibilities are currently being explored. The one I am working on is called <a href="http://axelmaas.blogspot.de/2012/05/technicolor.html">technicolor</a>.<br /><br />A few weeks ago, my master's student and I published a <a href="http://arxiv.org/abs/1304.4423">preprint</a>.
By the way, a preprint is a paper which is still in the process of being reviewed by the scientific community as to whether it is sound. Preprints play an important role in science, as they contain the most recent results. Anyway, in this preprint we have worked on technicolor. I will not repeat too much about technicolor here; that can be found in an earlier <a href="http://axelmaas.blogspot.de/2012/05/technicolor.html">blog entry</a>. The only important ingredient is that in a technicolor scenario one assumes that the <a href="http://axelmaas.blogspot.de/2010/03/higgs-effect.html">Higgs particle</a> is not an elementary particle. Instead, just like an atom, it is made from other particles. In analogy to the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">quarks</a>, which build up the <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">protons and other hadrons</a>, these constituents of the Higgs are called techniquarks. Of course, something has to hold them together. This must be a new, unknown force, called the techniforce. It is imagined to be again similar, in a very rough way, to the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">strong force</a>. Consequently, the carriers of this force are called technigluons, in analogy to the <a href="http://axelmaas.blogspot.de/2010/01/lthe-forces-of-nature-iii-strong-force.html">gluons</a> of the strong force.<br /><br />In our research we wanted to understand the properties of these techniquarks. Since we do not yet know whether technicolor really exists, we can also not be sure how it would eventually look. In fact, there are many possibilities for how technicolor could look. So many that it is not even simple to enumerate them all, much less to calculate with all of them simultaneously. But since we are anyhow not sure which is the right one, we are not yet in a position where it makes sense to be overly precise.
In fact, what we wanted to understand is how techniquarks work in principle. Therefore, we selected just one out of the many possibilities.<br /><br />Now, as I said, techniquarks are imagined to be similar to quarks. But they cannot be the same, because we know that the Higgs behaves very differently from, say, a <a href="http://axelmaas.blogspot.de/2010/01/forces-of-nature-iii-strong-force-part.html">proton or a pion</a>. It is not possible to get this effect without making the techniquarks profoundly different from the quarks. One of the possibilities to do so is by making them something in between a gluon and a quark, which is called an adjoint quark. The term 'adjoint' refers to a certain mathematical property, but those details are not so important here. So that is what we did: we assumed our techniquarks to be adjoint quarks.<br /><br />The major difference shows up when we make these techniquarks lighter and lighter. For the strong force, we know what happens: we cannot make the quarks arbitrarily light, because they <a href="http://axelmaas.blogspot.de/2011/10/mass-from-strong-force.html">gain mass from the strong force</a>. This appears to be different for the theory we studied. There you can make them arbitrarily light. This has been suspected for a long time from indirect observations. What we did was, for the first time, to directly investigate the techniquarks. What we saw was that when they are rather heavy, there is a similar effect as for the strong force: the techniquarks gain mass from the force. But once they get light enough, this effect ceases. Thus, it should be possible to make them massless. This possibility is necessary to make a Higgs out of them.<br /><br />Unfortunately, because we used <a href="http://axelmaas.blogspot.de/2012/02/simulating-universe.html">computer simulations</a>, we could not really go all the way to massless techniquarks.
This is far too expensive in terms of the time needed for the computer simulations (and actually, part of the simulations were already provided by other people, to whom we are very grateful). Thus, we could not make completely sure that this is the case. But our results point strongly in this direction.<br /><br />So is this a viable new theory? Well, we have shown that a necessary condition is fulfilled. But there is a big difference between necessary and sufficient. For a technicolor theory to be useful, it should not only have a Higgs made from techniquarks and no mass generation from the techniforce. It must also have further properties to be compatible with what we know from experiment. The major requirement concerns how strong the techniforce is over how long a distance. There existed some indirect earlier evidence that for this theory the techniforce does not stay strong enough over sufficiently large distances. Our calculations provide a more direct way of determining this strength. And unfortunately, it appears that we have to agree with these earlier calculations.<br /><br />Is this the end of technicolor? Certainly not. As I said above, technicolor is foremost an idea. There are many possibilities for how to implement this idea, and we have just checked one. Is it then the end of this version? We have to agree with the earlier investigations that it appears so in this pure form. But, in fact, in this purest form we have neglected a lot, like the rest of the standard model. There is still a significant chance that a more complete version could work. After all, the qualitative features are there; it is just that the numbers are not perfectly right. Or perhaps a minor alteration may already do the job. And this is something that people are continuing to work on.<br /><br />Axel Maas