Matt's Particle Physics Column, Part 3b

By manobes in Science
Thu Aug 22, 2002 at 12:56:51 AM EST
Tags: Science

This is the second part of the third instalment of Matt's Particle Physics Column. The focus of this instalment is the unified electroweak theory, which is a major component of the standard model of particle physics.

Last time, we reviewed the history of the weak interactions, ending in 1967 with the unified electroweak theory proposed by Glashow, Weinberg and Salam (the GWS theory from now on). This instalment begins with a brief summary of the major points of the GWS theory, then discusses developments in the early Seventies which cemented belief in the GWS model. Following that, three major experimental results from the Eighties and Nineties are discussed: the direct observation of the W and Z bosons, the discovery of the top quark, and the determination of the number of light neutrino flavours. We conclude with a discussion of the Higgs boson, and what a direct observation of it would indicate about the future direction of particle physics.

Readers new to the column are urged to consult the past instalments.


Review of the GWS Theory

Building on pioneering work by Sheldon Glashow, Steven Weinberg and Abdus Salam independently constructed unified theories of the electromagnetic and weak interactions (Weinberg in 1967, Salam in 1968). These models are essentially the same as those used today; developments in the 1970s merely added to them. The models of 1967 contained all the elementary particles known at the time.

The matter particles in the original GWS model are the electron and the muon, along with their neutrinos. For each matter particle there is also a corresponding antimatter particle. The forces between these matter particles are mediated by four force carriers. At low energies the weak force is carried by the electrically charged W+ and W- bosons and the neutral Z boson, while the electromagnetic force is carried by the photon. The W and Z bosons are very massive -- about 160 000 times more massive than the electron -- while the photon is massless.

The low energy properties of the theory are controlled by something known as the Higgs mechanism. Roughly speaking, at low energy the effect of the Higgs mechanism is to create a sort of "mud" through which the other particles move. This "mud", called the Higgs vacuum, causes various particles to acquire masses, depending on how strongly they interact with it. This mass gain happens for the W and Z bosons, as well as for the muon and electron. The issue of neutrino masses is more subtle, and will be covered in a future column.

At high energies, the GWS theory looks very different. The masses generated by the Higgs vacuum field become negligible; instead, at these energies, Higgs particles can be created. Higgs particles have no quantum spin and are roughly 240 000 times the mass of the electron. A major task in experimental particle physics over the next 15 years will be determining the properties of the Higgs particle directly.

't Hooft, Veltman, the GIM Mechanism, the November Revolution, and Neutral Currents

The period from 1967 until 1972 saw very little activity on the GWS model; for example, Weinberg cited his own paper on the subject only once during this period. While the GWS model reproduced what was known about the weak force at the time, it had several serious problems. The most serious was that nobody had proved the theory was renormalizable. We discussed renormalization in the context of Quantum Chromodynamics in part 2 of this column. Briefly, if a theory is not renormalizable, then it has much less predictive power.

In 1967, it was not known whether theories of the GWS type were renormalizable. Many people suspected that they should be; however, all efforts to prove this had stalled by the mid-Sixties, despite efforts from some of the field's best theorists, notably Richard Feynman. Without a convincing proof, people explored alternative avenues, but none of the alternatives offered the same simple structure that the GWS theory had.

Just as for Quantum Chromodynamics, the situation for the GWS theory changed drastically in 1972. This was the year that Gerard 't Hooft and Martinus Veltman proved that the class of theories to which the GWS model belonged (Yang-Mills theories) was renormalizable. This was a huge development in particle physics because it showed how truly precise predictions could be extracted from the GWS theory.

Renormalizability wasn't the only problem with the GWS theory. As formulated by Weinberg and Salam, the theory had another significant difficulty -- flavour-changing neutral weak interactions. Recall the radioactive decay of the neutron into a proton, an electron and an anti-neutrino. According to the GWS theory, what happens is that one of the down quarks inside the neutron (which is made up of two down quarks and an up quark) emits a W- boson and changes into an up quark, which combines with the remaining up and down quarks to make a proton. The W boson quickly disintegrates into an electron and an anti-neutrino. This type of process, where a quark of one flavour changes into another, is known as a flavour-changing interaction. In this case, it's a charged interaction, since a charged W boson is involved.

The trouble with the GWS theory came with the strange quark. The strange quark had a charged flavour-changing interaction, in which it changed into an up quark by emitting a W boson. This interaction was observed in the decays of charged K mesons, for example when a K+ decayed into a neutral pion, plus a positron and a neutrino. However, the GWS theory as it stood in 1967 also permitted a second type of interaction for the strange quark, in which the strange quark changed into a down quark and emitted a neutral Z boson. This would lead to decays in which a K- decayed into a negatively charged pion along with an electron-positron pair. The problem, of course, was that no such decay was observed, and the rate predicted by the GWS theory was far above the measured upper limits.

This problem was solved in 1970 by Glashow, Iliopoulos and Maiani. Their solution was to add a fourth quark to the theory, the charm quark, proposed on two grounds. The first was that while there were four non-quark matter particles, there were only three quarks. The second was that the new charm-to-up neutral interaction naturally cancelled the strange-to-down neutral interaction. This cancellation is known as the GIM mechanism, after its three authors.

Of course, this could be regarded as jury-rigging a theory that didn't agree with data. The proposed solution certainly worked, but apart from aesthetics there wasn't much justification for it. It's worth pointing out that the charm quark did fit very nicely into the framework of the theory; in fact, the theory with the charm quark seemed to hang together better. The other nice thing about this solution was that it provided a new prediction, one that ought to be easy to test and could quickly rule the idea out.

With the addition of a new quark, Glashow and company predicted, would come new bound states: new mesons and baryons containing charm quarks. The simplest such state would be a meson made up of a charm and an anti-charm quark, dubbed charmonium, in analogy to positronium, the bound state of an electron and a positron. The nice thing about charmonium was that, at the correct energy, it ought to be copiously produced in electron-positron collisions. Glashow gave several talks at conferences encouraging experimentalists to look for these new states.

It took four years for the energies of particle accelerators to reach the point where charmonium could be produced. In November 1974 the discovery of charmonium was announced by two teams working at separate accelerators: Burton Richter's team at the Stanford Linear Accelerator and Samuel Ting's team at Brookhaven National Laboratory. Experimentally, the electron-positron collider at Stanford was an excellent tool for studying these new bound states, because the events involving them were very clean. This image is a very good example: it is a reconstruction of the particle tracks from the Stanford detectors, in which the incoming electron and positron are clearly visible, and easily distinguished from the outgoing positive and negative pions.

The discovery of charmonium was perhaps the biggest event in particle physics since the discovery of parity violation; among physicists it is known as the November Revolution. However, rather than opening up new fields, it had the opposite effect of singling out one model above all others. Of all the competing models available in 1974, only Quantum Chromodynamics and the GWS electroweak theory, with the addition of the charm quark, accounted for all the data. The discovery of charmonium was the key that tied together the proton scattering experiments, which favoured quarks and Quantum Chromodynamics, and the large body of weak interaction phenomenology that was explained by the GWS model. When combined with the still-recent 't Hooft and Veltman proof of renormalizability, it is clear why this event turned the vast majority of particle physicists into believers in the standard model of particle physics. Another gauge of the importance of the charmonium discovery was the rapidity with which Richter and Ting were awarded the Nobel Prize: they announced their discovery in 1974 and were awarded the prize in 1976, which is extremely fast by Nobel standards.

There was one more major piece of evidence found in the mid-Seventies to support the standard model of particle physics: the observation of weak neutral currents. We've seen how the introduction of the charm quark eliminated an effect of the neutral Z boson that had not been observed. However, the Z boson was predicted to have many observable effects as well. For example, when an electron and a positron collide, they annihilate. At low energy the result of this annihilation is a photon. As the collision energy is raised, there is a growing chance that they will annihilate into a Z boson instead. Now, unlike photons, Z bosons interact with neutrinos, so the Z boson can decay into a neutrino anti-neutrino pair. Since neutrinos are very hard to detect, what is observed in a particle detector is an electron-positron collision, then nothing.

It is also possible to make a beam of neutrinos and try to have them hit something in a detector. This is the experiment that was carried out at CERN, which culminated in the discovery of weak neutral currents in 1973. In the CERN experiment, a huge bubble chamber, Gargamelle -- the last of the bubble chambers -- was bombarded by a beam of neutrinos. If the weak neutral current was present, occasionally one of the neutrinos would hit one of the atoms in the bubble chamber, causing a reaction. Such an event is illustrated in this photograph: notice the spray of particles on the left-hand side, which appear to emerge from a single point.

There were many developments in the late Seventies relating to the electroweak theory. The most important was the discovery of the third generation of matter. The third generation, or family, of matter consists of the tau lepton -- a heavy copy of the muon, itself a heavy copy of the electron -- its associated neutrino, and two new quarks, named the top and bottom quarks. Both the tau lepton and the bottom quark were discovered rather quickly in the late Seventies. The top quark and the tau neutrino weren't directly observed until much later, but because of the success of the GWS model, most physicists assumed that they existed.

This time the postulation of these particles was strongly motivated by more than aesthetics. Theoretical work in the mid-Seventies on something known as the axial anomaly showed that the standard model of particle physics would only be renormalizable if the matter particles it contained came in the observed family structure. That is, the standard model gives no way to predict the total number of matter particles, but it does say that they must come in groups of four: a lepton, its neutrino, and two quarks.

The Direct Observation of the W and Z Bosons

With the standard model theoretically established and observationally confirmed, people set about verifying one of its most important features. Although their effects had been indirectly observed, no actual W and Z bosons had been directly produced in the laboratory. The standard model made precise predictions about the conditions in which these particles should be producible, so accelerator designers set about building a particle accelerator which could reach the required energy.

Thanks to a technical achievement by Simon van der Meer, known as stochastic cooling, it was possible to turn the Super Proton Synchrotron into a collider. This machine collided protons and anti-protons with enough energy to produce the elusive W and Z bosons. In 1983 a team led by Carlo Rubbia observed the first W and Z events. The observed masses of the W and Z were right in the range predicted by the standard model. This work had two primary impacts: the first was the confirmation of the standard model, which had been expected, and the second was the technical triumph itself. The SPS was the first proton anti-proton collider, and it paved the way for the Fermilab Tevatron.

The Number of Light Neutrinos

Throughout the 1990s the main centre for experimental work on electroweak physics was the Large Electron Positron collider, or LEP. Like all particle physics experiments, LEP was very simple in principle: the idea was to smash electrons and positrons together, right at the energy needed to make Z bosons. This proved to be very fruitful.

Until its shutdown in 2000, LEP produced vast numbers of Z bosons and measured their properties very precisely. The electroweak theory makes many predictions about the Z boson, and LEP confirmed a great number of them to very high precision. Most of these predictions had to do with the decays of the Z. A typical Z event at LEP went something like this: the electron and positron smash into each other at an extremely high energy. Because the energy is so high, it is possible to produce a Z boson; in fact, the chance of producing a Z boson is much higher than the chance of producing a photon. This Z boson zips along for a (very) short distance, then decays in about 10^-25 seconds. Being very heavy, the Z can decay into lots of different stuff; exactly what depends on how much energy it has.
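
To see where a number like 10^-25 seconds comes from: quantum mechanics relates a particle's lifetime to its total decay rate (its "width") by lifetime = hbar/width. A minimal sketch, using roughly the measured Z width:

```python
# Rough estimate of the Z lifetime: tau = hbar / Gamma.
# The width is roughly the measured value; this is an
# order-of-magnitude sketch, not a precision calculation.
hbar = 6.582e-25   # GeV * s, the reduced Planck constant
gamma_z = 2.5      # GeV, approximate total decay width of the Z boson

tau = hbar / gamma_z
print(f"Z lifetime: about {tau:.1e} seconds")   # about 2.6e-25 s
```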

At the original LEP energy the Z primarily decayed in three different ways. The first was into a quark-anti-quark pair, where the quark type could be anything but the top, which is too heavy. As we discussed in the second instalment, these quarks quickly turn into jets of hadrons, which is what the LEP teams observed. Another possibility was that the quarks were created in a bound state; in this case, you might see some specific types of hadrons in your detector. If you lump all of these decays together, you find that they account for about 70% of all Z decays.

The Z could also decay into a lepton-anti-lepton pair. That is, it could decay to an electron and a positron, a muon and an anti-muon, or a tau and an anti-tau. These types of decays are really easy to see, because the lepton and anti-lepton emerge back to back and you can easily track their charges. Each of these decays accounts for roughly 3.4% of all Z decays, about 10% in total.

We've already discussed the final way that the Z can decay: into a neutrino-anti-neutrino pair. You don't actually see the neutrinos in your detectors, so what you see is a collision event with no final products. This is referred to as an invisible event. These types of events account for about 20% of all Z decays, and they proved to be very useful in making a remarkable observation.

You can use the electroweak theory to compute the decay rate of the Z into neutrinos. Since the neutrinos are nearly massless, to a (very good) first approximation you can treat them all as the same. Then you get some decay rate, A, for each neutrino type. Since each type is being treated the same, the total decay rate is just A+A+A=3*A, since there are three types.

But what if there were four types? Or six? Well, then we'd have 4*A or 6*A. Rather than guessing, let's just say that there are X types of neutrinos, so that the decay rate is X*A. By measuring the total invisible decay rate of the Z it is possible to determine X to very high precision. This then tells you how many different types of neutrinos there are. Recall the discussion above about the structure of the matter families in the standard model: matter comes in "generations" of four, a lepton, its neutrino, and two quarks. So a measurement of the number of neutrino types is, within the context of the standard model, a measurement of the number of matter families there are.
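
To make the counting concrete, here is a minimal sketch of the arithmetic. The two widths below are illustrative stand-ins, roughly the measured invisible width of the Z and the theoretical per-type rate A; the real analysis involves a much more careful treatment of errors:

```python
# Sketch of the neutrino-counting argument: X = (invisible width) / A.
# Both widths are illustrative values in MeV; A is input from theory.
gamma_invisible = 498.0   # measured invisible decay width of the Z
gamma_per_type = 166.0    # A: predicted decay width per neutrino type

x = gamma_invisible / gamma_per_type
print(f"Inferred number of light neutrino types: {x:.2f}")   # close to 3
```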

Data for this measurement was taken over the entire ten-year LEP run. As of 1998 the best value, with no extra input from theory, was X = 3.00 plus or minus 0.06. This is strong confirmation that, within the standard model, the three generations of matter we've discovered are all there are. Put in a different and more exciting way, this measurement indicates that any new form of matter we discover will likely not fit within the framework of the standard model.

The Top Quark

While LEP may have been the centre for precision electroweak physics, for sheer power the Fermilab Tevatron had it beat. Like the SPS before it, the Tevatron collides protons and anti-protons, but at much higher energy. By the early 1990s it was clear that the Tevatron was the machine needed to find the top quark.

With the discovery of the bottom quark, the existence of a sixth quark was postulated in order to preserve the structure of the standard model. This quark was named "top". Unfortunately, the mass of the top quark was not predicted by the standard model, so it was difficult to know where to look for it.

Through the Eighties and early Nineties, observations of other electroweak processes constrained the mass of the top (the top quark intrudes indirectly into the calculations of other processes), but these constraints were not strong. The other major constraint came from the ever-increasing energies being probed by particle accelerators.

By the early Nineties, the lower, direct limit from accelerators and the upper, indirect limit had almost converged on a value. It was almost certain that the top was very massive, and that meant that the only machine that stood a chance of finding it directly was the Tevatron. After a number of hints, the CDF collaboration announced the discovery of the top quark in 1995. Their result was confirmed shortly thereafter by the other major Fermilab group, D0. The observed mass was consistent with theoretical expectations.

The Final Mystery of the Standard Model

With the observation of the top quark, all three generations of matter particles in the standard model have now been directly seen in experiments. In addition, all the force carriers have been observed. The only remaining particle to be observed is the Higgs boson.

Unfortunately, it's not easy to detect Higgs bosons. The major problem is that they are very massive: more massive than the Z boson, which is 180 000 times as massive as the electron, but somewhat less massive than the top quark. Incidentally, these limits are empirical; the standard model itself does not predict the mass of the Higgs boson. Direct searches at the LEP collider, just before it shut down, seemed to indicate that the Higgs boson is around 220 000 times the mass of the electron.
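
For readers who prefer conventional units, the electron-mass multiples used in this column translate into GeV as in this small sketch (the multiples are the ones quoted above):

```python
# Converting the electron-mass multiples quoted above into GeV.
m_electron = 0.000511   # GeV, mass of the electron

for name, multiple in [("Z boson", 180_000), ("LEP Higgs hint", 220_000)]:
    print(f"{name}: about {multiple * m_electron:.0f} GeV")
# Z boson: about 92 GeV; LEP Higgs hint: about 112 GeV
```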

The other trouble with the Higgs boson is that it's not entirely clear what it's going to be. The standard model only requires a massive spin-zero particle, which at low energy creates the Higgs vacuum field. At higher energies there could be a multitude of things. For example, people have suggested triplets of Higgs bosons, or Higgs bosons which are bound states of new particles, much as mesons are bound states of quarks.

The reason for these proposals is that the properties of the Higgs boson may depend strongly on whatever theory of particle physics lies beyond the standard model. All of the current proposals, most notably supersymmetry, affect what will be seen in the region where the Higgs boson is expected.

In 2000, right at the end of its run, the LEP accelerator saw what might have been the first events involving the Higgs boson. The teams at LEP observed several events which seemed to be in line with a Higgs boson decay; however, there were not enough of them to be considered statistically significant. After some discussion about extending the run, LEP was shut down for good in late 2000, with the Higgs particle left undiscovered.

Conclusion

This instalment of the column has reviewed the modern history of the unified electroweak force, from 1967 until the present. With the discovery of the charm quark, the proof of renormalizability, and the observation of weak neutral currents, it was the early Seventies that saw the true rise to prominence of the standard model of particle physics. The data amassed during those few years was enough to rule out any competing theory and focus theoretical work on the standard model.

A crucial, if expected, observation occurred in 1983 with the direct observation of the W and Z bosons. This experiment was also a technical triumph in particle accelerator design, paving the way for the Fermilab Tevatron and the future LHC. The Tevatron, of course, was the accelerator with energies sufficient to observe the top quark in 1995.

Among the thousands of precision measurements performed at LEP over its run, the most remarkable was the measurement of the total number of standard model families of matter. The LEP experiments measured the number of neutrino types present; using the theoretical constraints of the standard model, one can then conclude that there are only three families of matter. This remarkable measurement gives hope to the idea that the next set of major discoveries in particle physics will not be simply extensions of the standard model.

Finally, among a number of open problems, the mystery of the Higgs boson has yet to be solved. With the shutdown of LEP, this mystery will likely remain unsolved until the LHC starts its run (around 2009). With the discovery of the Higgs an almost certain first result, the LHC promises to usher in a brand new era in particle physics.

Acknowledgments

Thanks to all the people who have commented on the other instalments and participated in the discussions. I would also like to thank Peter Whysall for extensive editorial assistance with this instalment.

Matthew Nobes is a PhD student in theoretical particle physics at Simon Fraser University, in British Columbia, Canada. He has been working on his PhD for about three years; prior to that, he spent two years doing a master's degree. He has a web page here where you can find some links relating to particle physics.

Related Links
o Part One
o Part Two
o Part Three A
o Sheldon Glashow
o Steven Weinberg
o Abdus Salam
o Higgs mechanism
o renormalizable
o Richard Feynman
o Gerard 't Hooft
o Martinus Veltman
o Yang-Mills theories
o Iliopoulos
o Maiani
o Stanford Linear Accelerator
o Richter's
o Brookhaven National Laboratory
o Ting's
o This
o CERN
o Gargamelle
o this
o Simon van der Meer
o Super Proton Synchrotron
o Rubbia
o Fermilab
o Tevatron
o Large Electron Positron collider
o CDF
o D0
o supersymmetry
o LHC
o here


Matt's Particle Physics Column, Part 3b | 22 comments (11 topical, 11 editorial, 0 hidden)
Confirmation of number of neutrino families (none / 0) (#12)
by BlaisePascal on Thu Aug 22, 2002 at 11:03:18 AM EST

The article described the experiments by the LEP to determine the number of lepton family generations there are. Basically, it said that by measuring the rates of the "invisible events" (e.g., neutrino-antineutrino creation events), they could directly measure the number of families. If neutrinos are nearly massless (the presented argument goes), then each generation should be created with equal probability A, so the total observed rate for invisible events should be X*A, where X is the number of generations. So we have X*A = O, where O is the observed rate. If we know A and O, we can compute X, obviously. We can measure O directly, but I must have missed the part about how we can measure A. It seems that if there were, say, 10 generations of neutrinos, then the creation rate of tau neutrinos would be 2%. If there are 3, the creation rate of tau neutrinos would be 7%[1]. How do we measure A, to know it's 7%, not 5% (X=4), 4% (X=5), 3% (X=6, 7, or 8), etc?

Well, (none / 0) (#13)
by manobes on Thu Aug 22, 2002 at 11:39:49 AM EST

We can measure O directly, but I must have missed the part about how we can measure A. It seems that if there were, say, 10 generations of neutrinos, then the creation rate of tau neutrinos would be 2%. If there are 3, the creation rate of tau neutrinos would be 7%[1]. How do we measure A, to know it's 7%, not 5% (X=4), 4% (X=5), 3% (X=6, 7, or 8), etc?

You didn't miss anything. A is input from theory. That's why I used phrases like "within the standard model". However, the chance that the computation of A is off is pretty slim. In order for that to be the case, the neutrino-Z interaction would have to be different in this case than in other cases where it has been measured (such as the bubble chamber experiment discussed above).


No one can defend creationism against the overwhelming scientific evidence of creationism. -- Big Sexxy Joe


[ Parent ]
calculations (5.00 / 2) (#14)
by sarunas on Thu Aug 22, 2002 at 04:08:43 PM EST

Something I've been interested in (and don't know a thing about) is the calculation involved for these things - how do you predict this and that. Why are the simulations so computationally expensive? etc., etc. What does it look like to have this or that revision to the equations?

It'd probably be a nice separate article to have.  Examples and explanations of all these simplifications for people who understand math, but haven't had exposure to the more interesting parts.

Wait for part four (5.00 / 1) (#16)
by manobes on Fri Aug 23, 2002 at 12:02:55 PM EST

Something I've been interested in (and don't know a thing about) is the calculation involved for these things - how do you predict this and that.

This is a rather simple idea actually, just difficult to do in practice. I'll give a short answer here, and refer you to the upcoming part four for a bit more detail.

The short answer is this. You cannot exactly solve most of these types of theories (and none of the ones that actually make up the standard model). What you can solve exactly is the much simpler case where there are no interactions. In electrodynamics, this means that the photons zip about, and the electrons and positrons zip about, but they never interact with each other, or with others of their kind.

Like I said, you can solve that theory exactly, but, of course, it's basically worthless because it predicts that nothing happens. However, one fact saves the day. In the theory with interactions, the strength of the interaction is quite small. So you can treat the interaction approximately, as a correction to the theory with no interaction.

This is a systematic procedure, called perturbation theory, which builds up these corrections, order by order in the number of interactions which take place.

This also works for the GWS theory, though it's a touch more complicated since there are a lot more particles.

How does it work? Well, say you wanted to compute the quantity A I discussed above. Okay, the first and most obvious thing that could happen is that the Z boson simply disintegrates into the neutrino anti-neutrino pair. That's one interaction. That sort of computation typically takes about half an hour at most. The number you get for A will be close to the correct number, perhaps a couple of percent off.

Okay, so what's next? Well, another thing that could happen is that the Z boson could decay into an electron and positron pair. The pair can then recombine back into a Z boson, which then decays into the neutrino anti-neutrino pair. Now we've got three interactions happening, so this is a much smaller contribution to the whole. You can go through and enumerate all the ways to get two or three interactions to happen and compute them all. This is a very difficult calculation, perhaps requiring a week. Typically this will get you within a fraction of a percent of the exact answer.

The pattern continues, but quickly gets very hard to do, and the returns are very small. (Note that the above description is just a picture to put into your head as a placeholder. It's not an entirely correct way of thinking about this, but not entirely wrong either. Stay tuned for part 4.)
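
To put a toy version of that pattern in code (the coupling and the order-by-order coefficients below are made up purely for illustration; this is not a real GWS calculation):

```python
# Toy version of perturbation theory: each extra interaction is
# suppressed by a power of a small coupling, so successive orders
# matter less and less. All numbers here are invented for illustration.
g = 0.03                        # a small, illustrative coupling strength
coefficients = [1.0, 0.7, 0.5]  # invented order-by-order coefficients

total = 0.0
for order, c in enumerate(coefficients):
    total += c * g**order
    print(f"Through order {order}: {total:.5f}")
# The leading term lands within a couple of percent of the final value;
# the next order shifts it by only a fraction of a percent.
```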

Why are the simulations so computationally expensive?

Well, that's a different issue. The entire explanation I just gave relied crucially on the fact that the interaction was weak. If you go back to part two, you'll recall that for the strong interactions at low energy the interaction is in fact strong. In this case the method I outlined above is basically worthless.

What you have to do instead is approximate the theory in a whole different way. The idea is to replace continuous space and time with a discrete grid of points (a 4-dimensional hypercube) and to only consider a small bit of space and time (typically about twice the radius of a proton in space, and a very short amount of time).

By doing this, you reduce the problem to one which can be formulated algorithmically on a computer. Then you simulate the theory at a number of different values of the grid spacing and total volume, and extrapolate to zero spacing and infinite volume, at which point, presumably, you have the exact answer.

It turns out that the extrapolation in volume typically isn't much of a problem; two proton radii are quite enough. But getting reliable extrapolations in the grid spacing is a real pain. The trouble is that the simulations take longer as you reduce the spacing (at a fixed volume). In fact, the time it takes grows roughly as the sixth power of the inverse spacing. That's why big computers are needed: to push the spacing lower, in order to get better extrapolations.
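
To make that scaling concrete, a quick sketch (the spacings and cost units are arbitrary; the sixth-power law is the one described above):

```python
# Sketch of the cost scaling: at fixed volume, simulation time grows
# roughly as the sixth power of the inverse lattice spacing.
base_spacing = 0.1   # an illustrative lattice spacing (arbitrary units)
base_cost = 1.0      # computer time at the base spacing (arbitrary units)

for spacing in [0.1, 0.05, 0.025]:
    cost = base_cost * (base_spacing / spacing) ** 6
    print(f"spacing {spacing}: relative cost {cost:.0f}x")
# Halving the spacing costs 64 times more; halving again, 4096 times.
```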

Examples and explanations of all these simplifications for people who understand math, but haven't had exposure to the more interesting parts.

Well, I hope that cleared some stuff up, but I'm sure I wasn't 100% clear in all spots. Please keep asking questions until it's clear; this is valuable practice for when I write part 4.


No one can defend creationism against the overwhelming scientific evidence of creationism. -- Big Sexxy Joe


[ Parent ]
re: wait for part four (none / 0) (#19)
by sarunas on Fri Aug 30, 2002 at 01:39:00 AM EST

excellent. thanks for the explanation, i'll stay tuned.

[ Parent ]
Massive neutrino? (none / 0) (#15)
by galibert on Thu Aug 22, 2002 at 11:04:41 PM EST

I'm wondering (not trying to defend a theory here) if it could be possible that a neutrino exists that would be massive enough that it couldn't appear in a Z decay, within the bounds of currently accepted theories? I wonder in particular whether what seems to go on with neutrinos automagically changing kind would apply.

  OG.


A bit of a gloss (none / 0) (#17)
by manobes on Fri Aug 23, 2002 at 12:06:29 PM EST

I'm wondering (not trying to defend a theory here) if it could be possible that a neutrino exists that would be massive enough that it couldn't appear in a Z decay, within the bounds of currently accepted theories?

Short answer, yes. If there was a fourth family of matter, with the neutrino's mass greater than half of the Z boson mass, then you could simply wheel that into the standard model. Most people, myself included, don't think that's very likely, given that the three other neutrinos have extremely small masses.

I wonder in particular whether what seems to go on with neutrinos automagically changing kind would apply.

Well, I'll cover neutrino oscillations some time in the future, but to answer your question, no. The oscillation results indicate that the neutrinos have very, very small masses, far too small to alter the conclusions of the measurement I discussed.


No one can defend creationism against the overwhelming scientific evidence of creationism. -- Big Sexxy Joe


[ Parent ]
Possible but improbable then (none / 0) (#18)
by galibert on Fri Aug 23, 2002 at 06:40:36 PM EST

That's more or less my point of view too.

About the neutrino oscillations, I was in fact wondering if the current working theories (if there are any) require all the neutrinos to be of the same or similar mass? OTOH, now that I think of it, it would probably involve the usual Heisenberg fuzziness, and the neutrino could tunnel to the higher mass with a probability depending on the mass difference, i.e. low enough to be currently undetectable.

Of course, Occam's razor is currently against it. We don't need a 4th generation yet :-)

  OG.


[ Parent ]

How sad! (none / 0) (#20)
by epepke on Fri Aug 30, 2002 at 10:20:18 PM EST

I've been out of the fizix biz for about 5 years. I hadn't heard that LEP had been shut down for good. How sad!


The truth may be out there, but lies are inside your head.--Terry Pratchett


November Miracle? (1.00 / 1) (#21)
by losang on Sat Aug 31, 2002 at 02:00:14 AM EST

Is the November miracle when you realize in three months that physics is nothing more than speculation and not based on pure logic?

Respecting your religion (none / 0) (#22)
by drum on Mon Mar 31, 2003 at 12:27:31 PM EST

Hi Losang,
I think you're doing Buddhism a disservice here. I'll go out on a limb and guess that Matt doesn't comment on Buddhist discussions from the viewpoint of a physicist, though you've made it clear that some connections (or contradictions) may exist.
I've no doubt you feel you're making a point, or some points, and a lot of sense, and that you know people who feel the same. But I get the feeling that people reading these threads don't agree that you're making a lot of sense. I have some good impressions of Buddhist thought, but honestly, I find it hard to recall them after encountering your brand of Buddhist logic.
Anyway, it seems you're going after the scientific method using sophist techniques. It's your prerogative not to subscribe to the kind of thinking scientists refer to as "rational" or "logical", preferring that which you feel supports your religion. But what's the relevance to Matt's excellent and brave attempt to explain QFT and the SM to a general audience? It only serves to annoy, and puts a (very slight) smear on Buddhism.


[ Parent ]
