The standard model: its successes and limitations
Dr Mika Vesterinen
Dr. Vesterinen is a Senior Research Fellow at the University of Warwick. He completed his degree (2007) and his PhD (2011) at the University of Manchester. His thesis was on a measurement of the Z boson transverse momentum distribution and of the ZZ and WZ production cross-sections using 7.3–8.6 inverse femtobarns of proton–antiproton collisions at a centre-of-mass energy of 1.96 TeV at Fermilab.
His research highlights include studies of CP violation with beauty hadrons and ongoing measurements of the W boson mass. He is convenor of the “semileptonic” physics group and project leader for the trigger system of LHCb.
The quest to understand the elementary building blocks of matter and their interactions has led us to the standard model. This contains 12 matter fermions, and their corresponding antiparticles, and describes the electromagnetic force and the strong and weak nuclear forces in terms of three fundamental symmetries.
Dr Vesterinen described how the standard model was developed, including the theoretical developments and key experimental tests culminating with the discovery of the Higgs boson at the LHC in 2012. He also discussed the limitations of the standard model and the prospects for discovering physics beyond it.
My notes from the lecture (if they don’t make sense then it is entirely my fault)
What are the elementary constituents of matter and how do they act?
Our ideas of what matter is have changed considerably over time. Some ancient Greeks, such as Aristotle, thought it was made up of various combinations of earth, air, fire and water, although a few, such as Leucippus of Miletus and his student Democritus of Abdera, believed that all matter is made up of tiny, indivisible particles, or atoms. Because Roman Catholic theologians were heavily influenced by Aristotle’s ideas, atomic philosophy was largely dismissed for centuries.
Aristotle (384–322 BC) was a Greek philosopher during the Classical period in Ancient Greece.
Leucippus (5th cent. BCE) is reported in some ancient sources to have been a philosopher who was the earliest Greek to develop the theory of atomism—the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. Leucippus often appears as the master to his pupil Democritus, a philosopher also touted as the originator of the atomic theory.
Democritus (c. 460 – c. 370 BC) was an Ancient Greek pre-Socratic philosopher primarily remembered today for his formulation of an atomic theory of the universe.
From the time of the ancient Greeks to the late 18th century, alchemists did experiments to try to turn materials such as lead and sulphur into gold (we now know that gold, lead and sulphur are elements in their own right). During this time the importance of common salt (sodium chloride) was also realised.
Modern atomic theory is generally said to begin with John Dalton, an English chemist and meteorologist who in 1808 published a book on the atmosphere and the behaviour of gases that was entitled A New System of Chemical Philosophy. Dalton’s theory of atoms rested on four basic ideas: chemical elements were composed of atoms; the atoms of an element were identical in weight; the atoms of different elements had different weights; and atoms combined only in small whole-number ratios, such as 1:1, 1:2, 2:1, 2:3, to form compounds.
John Dalton FRS (6 September 1766 – 27 July 1844) was an English chemist, physicist, and meteorologist. He is best known for introducing the atomic theory into chemistry.
Dmitri Ivanovich Mendeleev (8 February 1834 – 2 February 1907 [OS 27 January 1834 – 20 January 1907]) was a Russian chemist and inventor.
By 1863, there were 56 known elements, with a new element being discovered at a rate of approximately one per year. Mendeleev built on Dalton’s and others’ work by arranging the elements by their atomic mass. This wasn’t quite right, as the modern periodic table is based on the atomic number, not the mass.
The atomic mass (mₐ) is the mass of an atom. Its unit is the unified atomic mass unit (symbol: u), where 1 u is defined as 1⁄12 of the mass of a single carbon-12 atom at rest. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the atomic mass measured in u has nearly the same value as the mass number.
The atomic number or proton number (symbol Z ) of a chemical element is the number of protons found in the nucleus of every atom of that element. The atomic number uniquely identifies a chemical element. It is identical to the charge number of the nucleus. In an uncharged atom, the atomic number is also equal to the number of electrons.
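Since the atomic mass in u is nearly the mass number, a quick numerical sketch (my own illustration, using rounded constituent masses) shows both the agreement and the small discrepancy that comes from nuclear binding energy:

```python
# Rough constituent masses in unified atomic mass units (rounded values,
# illustrative rather than CODATA-precision).
M_PROTON = 1.007276   # u
M_NEUTRON = 1.008665  # u
M_ELECTRON = 0.000549 # u

def approx_atomic_mass(Z, N):
    """Sum of free constituent masses in u, ignoring nuclear binding energy."""
    return Z * M_PROTON + N * M_NEUTRON + Z * M_ELECTRON

# Carbon-12: Z = 6, N = 6. The true atomic mass is exactly 12 u by
# definition; the sum of free constituents is slightly larger, and the
# difference is the binding energy (the "mass defect").
m = approx_atomic_mass(6, 6)
print(round(m, 4))       # ~12.0989 u
print(round(m - 12, 3))  # mass defect ~0.099 u
```

The point of the sketch is that the mass number A = Z + N tracks the atomic mass to within about 1%, exactly as the passage above states.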
Of course, at the time, Mendeleev had no idea about electrons, protons and neutrons.
Modern theories about the physical structure of atoms did not begin until 1897, with J. J. Thomson’s discovery of the electron.
Sir Joseph John Thomson OM PRS (18 December 1856 – 30 August 1940) was an English physicist and Nobel Laureate in Physics, credited with the discovery and identification of the electron, the first subatomic particle to be discovered.
Several scientists had suggested that atoms were built up from a more fundamental unit, but they imagined this unit to be the size of the smallest atom, hydrogen. Thomson suggested in 1897 that one of the fundamental units was more than 1,000 times smaller than an atom, suggesting the subatomic particle now known as the electron. He discovered this through his explorations of the properties of cathode rays (at the time known as Lenard rays), which could travel much further through the air than expected for an atom-sized particle. He claimed that these particles, which he called “corpuscles,” were the things that atoms were made from. The term “electron” predated Thomson’s discovery—a few years earlier Irish physicist G. J. Stoney had proposed that electricity was made of negative particles called “electrons,” and scientists had adopted the word to refer to anything with an electric charge. However, Thomson, who was a physicist at Cambridge University, was the first to suggest that these particles were a building block of the atom.
Thomson constructed a Crookes tube with an electrometer set to one side, out of the direct path of the cathode rays. Thomson could trace the path of the ray by observing the phosphorescent patch it created where it hit the surface of the tube. Thomson observed that the electrometer registered a charge only when he deflected the cathode ray to it with a magnet. He concluded that the negative charge and the rays were one and the same.
The cathode ray tube by which J. J. Thomson demonstrated that cathode rays could be deflected by a magnetic field and that their negative charge was not a separate phenomenon.
Thomson’s illustration of the Crookes tube by which he observed the deflection of cathode rays by an electric field (and later measured their mass-to-charge ratio). Cathode rays were emitted from the cathode C, passed through slits A (the anode) and B (grounded), then through the electric field generated between plates D and E, finally impacting the surface at the far end.
“As the cathode rays carry a charge of negative electricity, are deflected by an electrostatic force as if they were negatively electrified, and are acted on by a magnetic force in just the way in which this force would act on a negatively electrified body moving along the path of these rays, I can see no escape from the conclusion that they are charges of negative electricity carried by particles of matter”. — J. J. Thomson
“If, in the very intense electric field in the neighbourhood of the cathode, the molecules of the gas are dissociated and are split up, not into the ordinary chemical atoms, but into these primordial atoms, which we shall for brevity call corpuscles; and if these corpuscles are charged with electricity and projected from the cathode by the electric field, they would behave exactly like the cathode rays”. — J. J. Thomson
Thomson also tried to show how the electrons were situated in the atom. Since atoms were known to be electrically neutral, Thomson proposed (1904) a model in which the atom was a positively charged sphere studded with negatively charged electrons. It was called the “plum-pudding” model since the electrons in the atom resembled the raisins in a plum pudding.
This model did not survive unchallenged for long. In 1911, Ernest Rutherford’s experiments with alpha rays led him to describe the atom as a small, heavy nucleus with electrons in orbit around it. This nuclear model of the atom became the basis for the one that is still accepted today.
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, HFRSE, LLD (30 August 1871 – 19 October 1937), was a New Zealand physicist who came to be known as the father of nuclear physics.
Rutherford performed his most famous work along with Hans Geiger and Ernest Marsden in 1909, known as the Geiger–Marsden experiment, which demonstrated the nuclear nature of atoms by deflecting alpha particles passing through a thin gold foil. Rutherford was inspired to ask Geiger and Marsden in this experiment to look for alpha particles with very high deflection angles, of a type not expected from any theory of matter at that time. Such deflections, though rare, were found, and proved to be a smooth but high-order function of the deflection angle. It was Rutherford’s interpretation of this data that led him to formulate the Rutherford model of the atom in 1911 – that a very small charged nucleus, containing much of the atom’s mass, was orbited by low-mass electrons.
Left shows the expected results: alpha particles passing through the plum pudding model of the atom undisturbed. Right shows the observed results: a small portion of the particles were deflected, indicating a small, concentrated charge. Note that the image is not to scale; in reality, the nucleus is vastly smaller than the electron shell.
Had Thomson’s model been correct, all the alpha particles should have passed through the foil with minimal scattering. What Geiger and Marsden observed was that a small fraction of the alpha particles experienced strong deflection.
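The angular dependence that Geiger and Marsden measured is captured by Rutherford's scattering formula; the lecture did not derive it, but for reference, for a projectile of kinetic energy $E$ and charge $Z_1 e$ scattering off a nucleus of charge $Z_2 e$:

```latex
\frac{d\sigma}{d\Omega}
  = \left( \frac{Z_1 Z_2 e^2}{16 \pi \varepsilon_0 E} \right)^{\!2}
    \frac{1}{\sin^4(\theta/2)}
```

The $1/\sin^4(\theta/2)$ factor is small but non-zero at large angles $\theta$, which is exactly the rare backscattering that ruled out the plum-pudding model.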
https://en.wikipedia.org/wiki/Hans_Geiger below left
https://en.wikipedia.org/wiki/Ernest_Marsden above right
In 1913, Danish physicist Niels Bohr, who had studied under both Thomson and Rutherford, further refined the nuclear model by proposing that electrons moved only in restricted, successive orbital shells and that the outer, higher-energy orbits determined the chemical properties of the different elements. Furthermore, Bohr was able to explain the spectral lines of the different elements by suggesting that as electrons jumped from higher to lower orbits, they emitted energy in the form of light. In the 1920s, Bohr’s theory became the basis for quantum mechanics, which explained in greater detail the complex structure and behaviour of atoms.
Niels Henrik David Bohr (7 October 1885 – 18 November 1962) was a Danish physicist who made foundational contributions to understanding the atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922.
Since Thomson’s discovery of the electron in 1897, scientists had realized that an atom must contain a positive charge to counterbalance the electrons’ negative charge. In 1919, as a by-product of his experiments on the splitting of atomic nuclei, Rutherford discovered the proton, which constitutes the nucleus of a hydrogen atom. A proton carries a single positive electrical charge, and every atomic nucleus contains one or more protons. Although Rutherford proposed the existence of a neutral subatomic particle, the neutron, in 1920, the actual discovery was made by English physicist James Chadwick, a former student of Rutherford, in 1932.
Sir James Chadwick, CH, FRS (20 October 1891 – 24 July 1974) was a British physicist who was awarded the 1935 Nobel Prize in Physics for his discovery of the neutron in 1932.
Chadwick devised a simple apparatus that consisted of a cylinder containing a polonium source and a beryllium target.
A schematic diagram of the experiment used to discover the neutron in 1932. At left, a polonium source was used to irradiate beryllium with alpha particles, which induced an uncharged radiation. When this radiation struck paraffin wax, protons were ejected. The protons were observed using a small ionization chamber. Adapted from Chadwick (1932).
Paraffin wax is a hydrocarbon high in hydrogen content, hence offers a target dense with protons; since neutrons and protons have almost equal mass, protons scatter energetically from neutrons. Chadwick measured the range of these protons and also measured how the new radiation impacted the atoms of various gases. He found that the new radiation consisted of not gamma rays, but uncharged particles with about the same mass as the proton. These particles were neutrons. Chadwick won the Nobel Prize in Physics in 1935 for this discovery.
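Chadwick's reasoning rests on elementary collision kinematics: in a head-on non-relativistic elastic collision, the fraction of kinetic energy transferred to the target is 4mM/(m+M)², which is essentially 1 for equal masses. A sketch (my own illustration; masses in u, rounded):

```python
def recoil_fraction(m_projectile, m_target):
    """Maximum fraction of kinetic energy transferred in a head-on
    non-relativistic elastic collision: 4 m M / (m + M)^2."""
    return 4 * m_projectile * m_target / (m_projectile + m_target) ** 2

# Neutron on proton (nearly equal masses): almost all energy transferred,
# which is why the protons knocked out of the paraffin were so energetic.
print(recoil_fraction(1.0087, 1.0073))  # ~1.0

# For comparison, neutron on a much heavier nitrogen nucleus (~14 u):
print(round(recoil_fraction(1.0087, 14.003), 3))  # ~0.25
```

Comparing recoils in hydrogen-rich and nitrogen targets is how Chadwick pinned the new particle's mass to roughly that of the proton.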
So by 1932, we had a model of the atom that looked like this:
Left: Models depicting the nucleus and electron energy levels in hydrogen, helium, lithium, and neon atoms. In reality, the diameter of the nucleus is about 100,000 times smaller than the diameter of the atom.
How did physicist Murray Gell-Mann discover the existence of quarks?
The nuclear model was further developed, but during the 1940s the most important research involved nuclear fission and fusion (unfortunately the emphasis was on warfare).
In the 1950s, with the development of particle accelerators and studies of cosmic rays, inelastic scattering experiments on protons (and other atomic nuclei) at energies of hundreds of MeV became feasible. These experiments created some short-lived resonance “particles”, but also hyperons and K-mesons with unusually long lifetimes. The cause of the latter was found in a new quasi-conserved quantity, named strangeness, which is conserved in all circumstances except in the weak interaction. The strangeness of heavy particles and the μ-lepton were the first two signs of what is now known as the second generation of fundamental particles.
The weak interaction soon revealed yet another mystery. In 1957 it was found that it does not conserve parity. In other words, mirror symmetry was disproved as a fundamental symmetry law.
Throughout the 1950s and 1960s, improvements in particle accelerators and particle detectors (as well as work on cosmic rays) led to a bewildering variety of particles found in high-energy experiments. The term elementary particle came to refer to dozens of particles, most of them unstable. It prompted Wolfgang Pauli’s remark: “Had I foreseen this, I would have gone into botany”. The entire collection was nicknamed the “particle zoo”. It became evident that some smaller, as yet unseen, constituents formed the mesons and baryons that accounted for most of the then-known particles.
Wolfgang Ernst Pauli (25 April 1900 – 15 December 1958) was an Austrian-born Swiss and American theoretical physicist and one of the pioneers of quantum physics.
The interaction of these particles by scattering and decay provided a key to new fundamental quantum theories. Murray Gell-Mann and Yuval Ne’eman brought some order to mesons and baryons, the most numerous classes of particles, by classifying them according to certain qualities. It began with what Gell-Mann referred to as the “Eightfold Way”, which was extended into several different “octets” and “decuplets” that could predict new particles.
https://en.wikipedia.org/wiki/Murray_Gell-Mann (below left)
https://en.wikipedia.org/wiki/Yuval_Ne%27eman (above right)
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei.
At the time of the quark theory’s inception, the “particle zoo” included, among other particles, a multitude of hadrons. Gell-Mann and Zweig posited that they were not elementary particles, but were instead composed of combinations of quarks and antiquarks. Their model involved three flavours of quarks, up, down, and strange, to which they ascribed properties such as spin and electric charge. The initial reaction of the physics community to the proposal was mixed. There was particular contention about whether the quark was a physical entity or a mere abstraction used to explain concepts that were not fully understood at the time.
In 1968, deep inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) showed that the proton contained much smaller, point-like objects and was therefore not an elementary particle. The objects that were observed at SLAC would later be identified as up and down quarks as the other flavours were discovered. SLAC experiments also provided evidence for the strange quark.
In 1974 the charm quark was discovered. In 1977, the bottom quark was observed by a team at Fermilab, and in 1995 the top quark was finally observed, also at Fermilab.
Electrons have not been found to be anything but fundamental, but high-energy experiments have shown they are not alone. There are others, and they are collectively known as leptons. As with quarks, there are six types of leptons, known as flavours, grouped in three generations.
The first lepton identified was the electron, discovered by J.J. Thomson and his team of British physicists in 1897. In 1930 Wolfgang Pauli postulated the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay.
So there are 12 matter particles: 6 quarks and 6 leptons. It was realised that, to enable them to fit and stay together, “forces” were required.
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (the electromagnetic, weak, and strong interactions, and not including the gravitational force) in the universe, as well as classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists around the world, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, confirmation of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles. To date, physics has reduced the laws governing the behaviour and interaction of all known forms of matter and energy to a small set of fundamental laws and theories. A major goal of physics is to find the “common ground” that would unite all of these theories into one integrated theory of everything, of which all the other known laws would be special cases, and from which the behaviour of all matter and energy could be derived (at least in principle).
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as colour charge.
The Standard Model includes 12 elementary particles of spin 1⁄2, known as fermions. According to the spin-statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle.
The fermions of the Standard Model are classified according to how they interact (or equivalently, by what charges they carry). There are six quarks (up, down, charm, strange, top, bottom), and six leptons (electron, electron neutrino, muon, muon neutrino, tau, tau neutrino). Pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behaviour.
The defining property of the quarks is that they carry colour charge, and hence interact via the strong interaction. A phenomenon called colour confinement results in quarks being very strongly bound to one another, forming colour-neutral composite particles (hadrons) containing either a quark and an antiquark (mesons) or three quarks (baryons). The familiar proton and neutron are the two baryons having the smallest mass. Quarks also carry an electric charge and weak isospin. Hence they interact with other fermions both electromagnetically and via the weak interaction. The remaining six fermions do not carry colour charge and are called leptons. The three neutrinos do not carry electric charge either, so their motion is directly influenced only by the weak nuclear force, which makes them notoriously difficult to detect. However, by virtue of carrying an electric charge, the electron, muon, and tau all interact electromagnetically.
Each member of a generation has greater mass than the corresponding particles of lower generations. The first-generation charged particles do not decay, hence all ordinary (baryonic) matter is made of such particles. Specifically, all atoms consist of electrons orbiting around atomic nuclei, ultimately constituted of up and down quarks. Second- and third-generation charged particles, on the other hand, decay with very short half-lives and are observed only in very high-energy environments. Neutrinos of all generations also do not decay and pervade the universe, but rarely interact with baryonic matter.
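The three-generation pattern described above can be summarised in a small table (my own sketch, not from the lecture):

```python
# The twelve matter fermions of the Standard Model, grouped by generation.
GENERATIONS = [
    {"quarks": ["up", "down"],       "leptons": ["electron", "electron neutrino"]},
    {"quarks": ["charm", "strange"], "leptons": ["muon", "muon neutrino"]},
    {"quarks": ["top", "bottom"],    "leptons": ["tau", "tau neutrino"]},
]

n_quarks = sum(len(g["quarks"]) for g in GENERATIONS)
n_leptons = sum(len(g["leptons"]) for g in GENERATIONS)

# 6 quarks + 6 leptons = the 12 matter particles mentioned earlier.
print(n_quarks, n_leptons, n_quarks + n_leptons)  # 6 6 12
```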
In the Standard Model, gauge bosons are defined as force carriers that mediate the strong, weak, and electromagnetic fundamental interactions.
Interactions in physics are the ways that particles influence other particles. At a macroscopic level, electromagnetism allows particles to interact with one another via electric and magnetic fields, and gravitation allows particles with mass to attract one another in accordance with Einstein’s theory of general relativity. The Standard Model explains such forces as resulting from matter particles exchanging other particles, generally referred to as force mediating particles. When a force-mediating particle is exchanged, at a macroscopic level the effect is equivalent to a force influencing both of them, and the particle is therefore said to have mediated (i.e., been the agent of) that force.
Summary of interactions between particles described by the Standard Model
Photons mediate the electromagnetic force between electrically charged particles. The photon is massless and is well-described by the theory of quantum electrodynamics.
The W+, W−, and Z gauge bosons mediate the weak interactions between particles of different flavours (all quarks and leptons). They are massive, with the Z being more massive than the W±. The weak interactions involving the W± act exclusively on left-handed particles and right-handed antiparticles. Furthermore, the W+ and W− carry electric charges of +1 and −1 respectively and couple to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and antiparticles. These three gauge bosons, along with the photon, are grouped together as collectively mediating the electroweak interaction.

The eight gluons mediate the strong interactions between colour-charged particles (the quarks). Gluons are massless. The eightfold multiplicity of gluons is labelled by a combination of colour and anticolour charge (e.g. red–antigreen). Because the gluons have an effective colour charge, they can also interact among themselves. The gluons and their interactions are described by the theory of quantum chromodynamics.
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs in 1964.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
The Standard Model Lagrangian – or formula – in all its glory, written up by Italian mathematician and physicist Matilde Marcolli
A somewhat shorter explanation comes with some help from the nice people over at CERN.
The top line describes the forces in the Universe: electricity, magnetism, and the strong and weak nuclear forces.
The second line describes how these forces act on the fundamental particles of matter, namely the quarks and leptons.
Below that, the formula describes how these particles obtain their masses from the Higgs boson.
“The fourth line enables the Higgs boson to do the job,” the CERN media team explains.
As the structure of the atom was being investigated during the late 19th and early 20th centuries, physicists were also grappling with the problems that arise from treating electromagnetic radiation purely as waves.
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th/early 20th-century classical physics that an ideal black body (also blackbody) at thermal equilibrium will emit radiation in all frequency ranges, emitting more energy as the frequency increases. By calculating the total amount of radiated energy (i.e., the sum of emissions in all frequency ranges), it can be shown that a blackbody would release an arbitrarily high amount of energy. This would cause all matter to instantaneously radiate all of its energy until it is near absolute zero – indicating that a new model for the behaviour of blackbodies was needed.
The term “ultraviolet catastrophe” was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the Rayleigh–Jeans law accurately predicts experimental results at radiative frequencies below 10⁵ GHz, but begins to diverge from empirical observations as these frequencies reach the ultraviolet region of the electromagnetic spectrum.
Max Planck derived the correct form for the intensity spectral distribution function by making some strange (for the time) assumptions. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy: E = hf = hc/λ, where h is Planck’s constant, f is the frequency of the electromagnetic radiation, c is the speed of the electromagnetic radiation in a vacuum and λ is its wavelength.
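As a numerical illustration of Planck's relation (the constants are the exact SI values; the example wavelength is my own choice):

```python
# Photon energy from Planck's relation E = h f = h c / lambda.
H = 6.62607015e-34   # Planck's constant, J s (exact by SI definition)
C = 299_792_458      # speed of light in vacuum, m/s (exact)

def photon_energy_joules(wavelength_m):
    """Energy of a single quantum of light with the given wavelength."""
    return H * C / wavelength_m

# Green light, lambda = 500 nm:
E = photon_energy_joules(500e-9)
print(f"{E:.3e} J")               # ~3.97e-19 J
print(f"{E / 1.602e-19:.2f} eV")  # ~2.48 eV
```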
Max Karl Ernst Ludwig Planck, ForMemRS (23 April 1858 – 4 October 1947) was a German theoretical physicist whose discovery of energy quanta won him the Nobel Prize in Physics in 1918.
Albert Einstein in 1905, in order to explain the photoelectric effect, postulated consistently with Max Planck’s quantum hypothesis that electromagnetic radiation itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal’s surface.
The emission of electrons from a metal plate caused by light quanta (photons) with energy greater than the work function of the metal
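Einstein's explanation amounts to a simple energy balance, KE_max = hf − φ, where φ is the work function. A sketch (the sodium work function of 2.28 eV is a commonly quoted textbook value, not from the lecture):

```python
# Einstein's photoelectric relation: KE_max = h f - phi.
H_EV = 4.135667696e-15  # Planck's constant in eV s

def max_kinetic_energy_eV(frequency_hz, work_function_eV):
    """Maximum kinetic energy of an ejected photoelectron, in eV."""
    ke = H_EV * frequency_hz - work_function_eV
    # Below the threshold frequency no electrons are ejected at all,
    # however intense the light -- the key quantum feature.
    return max(ke, 0.0)

# Sodium (work function ~2.28 eV) lit by 400 nm light (f ~ 7.5e14 Hz):
print(round(max_kinetic_energy_eV(7.5e14, 2.28), 2))  # ~0.82 eV
```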
Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist who developed the theory of relativity, one of the two pillars of modern physics (alongside quantum mechanics).
The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement; it effectively solved the problem of black-body radiation attaining infinite energy, which occurred in theory if light were to be explained only in terms of waves. In 1913, Niels Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules.
Niels Bohr’s 1913 quantum model of the atom, which incorporated an explanation of Max Planck’s 1900 quantum hypothesis, i.e. that atomic energy radiators have discrete energy values (E = hf), J. J. Thomson’s 1904 plum pudding model, Albert Einstein’s 1905 light quanta postulate, and Ernest Rutherford’s 1911 discovery of the atomic nucleus. Note that the electron does not travel along the black line when emitting a photon. It jumps, disappearing from the outer orbit and appearing in the inner one, and cannot exist in the space between orbits 2 and 3.
In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. This theory was for a single particle and was derived from special relativity. Building on de Broglie’s approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation to the generalised case of de Broglie’s theory (once upon a time I could derive these equations). Schrödinger subsequently showed that the two approaches were equivalent.
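De Broglie's relation is λ = h/p. A quick numerical sketch (my own example; non-relativistic momentum, so only a rough value at this speed):

```python
# De Broglie's matter-wave relation: lambda = h / p.
H = 6.62607015e-34      # Planck's constant, J s
M_E = 9.1093837015e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Matter wavelength, using non-relativistic momentum p = m v."""
    return H / (mass_kg * speed_m_s)

# An electron moving at 1% of the speed of light:
lam = de_broglie_wavelength(M_E, 0.01 * 299_792_458)
print(f"{lam:.2e} m")  # ~2.43e-10 m, comparable to atomic sizes
```

That the wavelength comes out comparable to atomic dimensions is why wave effects dominate inside atoms but are invisible for everyday objects.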
https://en.wikipedia.org/wiki/Werner_Heisenberg above right
https://en.wikipedia.org/wiki/Max_Born below left
https://en.wikipedia.org/wiki/Pascual_Jordan above right
Heisenberg formulated his uncertainty principle in 1927 stating that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.
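In symbols (standard modern notation, not written out in my notes), with ħ the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar = \frac{h}{2\pi}
```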
Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron.
The Dirac equation achieves the relativistic description of the wavefunction of an electron. It predicts electron spin and led Dirac to predict the existence of the positron.
Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles, resulting in quantum field theories. This area of research culminated in the formulation of quantum electrodynamics by physicists including Richard Feynman, Julian Schwinger, Sin-Itiro Tomonaga, and Freeman Dyson during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent quantum field theories.
https://en.wikipedia.org/wiki/Freeman_Dyson above right
A gauge theory is a type of theory in physics. The word gauge means a measurement, a thickness, an in-between distance (as in railway tracks), or a resulting number of units per certain parameter (a number of loops in a cm of fabric or a number of lead balls in a kg of ammunition). Modern theories describe physical forces in terms of fields, e.g., the electromagnetic field, the gravitational field, and fields that describe forces between the elementary particles. A general feature of these field theories is that the fundamental fields cannot be directly measured; however, some associated quantities can be measured, such as charges, energies, and velocities.
In field theories, different configurations of the unobservable fields can result in identical observable quantities. A transformation from one such field configuration to another is called a gauge transformation; the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance.
Since any kind of invariance under a field transformation is considered a symmetry, gauge invariance is sometimes called gauge symmetry. Generally, any theory that has the property of gauge invariance is considered a gauge theory. For example, in electromagnetism the electric and magnetic fields, E and B, are observable, while the potentials V (“voltage”) and A (the vector potential) are not. Under a gauge transformation in which a constant is added to V, no observable change occurs in E or B.
With the advent of quantum mechanics in the 1920s, and with successive advances in quantum field theory, the importance of gauge transformations has steadily grown. Gauge theories constrain the laws of physics because all the changes induced by a gauge transformation have to cancel each other out when written in terms of observable quantities. Over the course of the 20th century, physicists gradually realized that all forces (fundamental interactions) arise from the constraints imposed by local gauge symmetries, in which case the transformations vary from point to point in space and time. Perturbative quantum field theory (usually employed for scattering theory) describes forces in terms of force-mediating particles called gauge bosons. The nature of these particles is determined by the nature of the gauge transformations. The culmination of these efforts is the Standard Model, a quantum field theory that accurately predicts all of the fundamental interactions except gravity.
The earliest field theory having a gauge symmetry was Maxwell’s formulation, in 1864–65, of electrodynamics (“A Dynamical Theory of the Electromagnetic Field”).
The importance of gauge theories for physics stems from their tremendous success in providing a unified framework to describe the quantum-mechanical behaviour of electromagnetism, the weak force and the strong force. This gauge theory, known as the Standard Model, accurately describes experimental observations of three of the four fundamental forces of nature.
Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson.
Physical quantities like a charge density or a current are all invariant if we add a local phase to the field (this is called a local U(1) gauge transformation).
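The invariance of the charge density under a local U(1) phase change can be checked directly. Below is a minimal numerical sketch (the wave packet and phase function are illustrative choices, not taken from the lecture):

```python
import numpy as np

# Toy 1-D wavefunction on a grid (an arbitrary Gaussian wave packet).
x = np.linspace(-5, 5, 1000)
psi = np.exp(-x**2) * np.exp(1j * 2.0 * x)

# A *local* U(1) gauge transformation multiplies psi by a
# position-dependent phase theta(x).
theta = 0.7 * np.sin(3 * x)
psi_transformed = np.exp(1j * theta) * psi

# The charge density |psi|^2 is unchanged by the transformation.
rho_before = np.abs(psi) ** 2
rho_after = np.abs(psi_transformed) ** 2
print(np.allclose(rho_before, rho_after))  # True
```

Quantities involving derivatives of the field (such as currents) only stay invariant once a compensating gauge field is introduced, which is exactly why the photon field appears in the theory.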
Relativistic quantum field theory is a mathematical scheme generally recognized to form the adequate theoretical frame for subatomic physics and forces, with the Standard Model of Particle Physics as a major achievement. It grew from relativistic field theory, such as Maxwell’s theory of electromagnetism and quantization rules used in nonrelativistic quantum mechanics. Electromagnetism is a gauge theory, associated with the group U(1)em. Its basic starting point is that the axioms of Special Relativity on the one hand and those of Quantum Mechanics on the other should be combined into one theory.
So quantum mechanics describes nature at the smallest scales of energy levels of atoms and subatomic particles.
Waves have phases. What happens if we change the phase at each point in space and time? That is a local gauge transformation.
Gauge theory is a type of theory of elementary particles designed to explain the strong, weak, and electromagnetic interactions in terms of exchange of virtual particles.
A gauge transformation says that we are allowed to transform the coordinate system. For example gravitational potential energy = mgh, and we could take h to be the height above a table, or from the centre of the earth, but we normally choose a base location that simplifies the problem we are trying to solve. We have considerable freedom in selecting the origin of our coordinate system. This is known as gauge freedom.

If we choose the floor to be the origin of our system (which we have the freedom to do), but would like to measure the potential energy of a mass with respect to a table, 1 metre tall, we could just as easily move the origin from the ground to the table through a gauge transformation. In this case we are transforming the potential energy so long as the equations of motion remain unchanged. Adding a constant potential to shift the origin off the ground and onto the table is an example of a gauge transformation. As long as this transformation is carried out throughout the problem, the physics of the problem will remain unchanged. This is known as gauge invariance.

Another simple analogy to a gauge transformation can be seen in introductory electronics. In electronics, the absolute value of a potential has little meaning. Instead we are often concerned with the potential difference between two points. For example, a battery has a potential difference of 1.5 V. A gauge transformation allows us to set the potential at one side of the battery to any arbitrary value, say 100 V. However, we must also add this 100 V potential to the other side of the battery, giving 101.5 V. In the end, the potential difference is still 1.5 V and the physical system will remain invariant.
In general, a gauge transformation will make a problem easier to solve by exploiting symmetries in a physical system. By changing coordinates, our example of gravitational potential energy became simpler to solve mathematically. Up until now, we have made transformations by adding a constant to the potential. This has allowed us to preserve the forces on a system.
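The gravitational-potential example above can be sketched in a few lines (the masses and heights are made-up numbers for illustration):

```python
# Shifting the zero of potential energy (a "gauge transformation") leaves
# the physics -- forces and energy differences -- unchanged.
g = 9.81   # gravitational acceleration, m/s^2
m = 2.0    # mass, kg

def potential(h, offset=0.0):
    """Gravitational PE with an arbitrary choice of zero point."""
    return m * g * h + offset

# Energy difference between h = 3 m and h = 1 m, measured from the floor,
# or with the origin shifted up to a 1 m table:
floor_gauge = potential(3.0) - potential(1.0)
table_gauge = potential(3.0, offset=-m * g * 1.0) - potential(1.0, offset=-m * g * 1.0)
print(floor_gauge, table_gauge)  # identical: the offset cancels
```

Only differences of potential are observable, so the constant offset drops out, exactly as the text describes.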
In quantum mechanics the phase of a wave function does not visibly affect the physics of a system. This means that we can arbitrarily pick a phase to start with and as long as we carry this phase through the problem, no physical effects will be observed (the particle will end up in the correct location). Here the gauge transformation is regarded as the change in phase of a wave function, while the potential will ”connect” the phase at one space-time to the phase at another location.
The phase of a wave function can be regarded as a local variable that can be changed under a gauge transformation. In the end, the same physical equations are obtained, so no observable change occurs by changing the phase of the wave function.
A gauge is nothing more than a “coordinate system” that varies depending on one’s “location” with respect to some “base space” or “parameter space”, a gauge transform is a change of coordinates applied to each such location, and a gauge theory is a model for some physical or mathematical system to which gauge transforms can be applied (and is typically gauge invariant, in that all physically meaningful quantities are left unchanged (or transform naturally) under gauge transformations). By fixing a gauge (thus breaking or spending the gauge symmetry), the model becomes something easier to analyse mathematically, such as a system of partial differential equations (in classical gauge theories) or a perturbative quantum field theory (in quantum gauge theories), though the tractability of the resulting problem can be heavily dependent on the choice of gauge that one fixed. Deciding exactly how to fix a gauge (or whether one should spend the gauge symmetry at all) is a key question in the analysis of gauge theories, and one that often requires the input of geometric ideas and intuition into that analysis.
An interaction between a photon and an electron links back to Maxwell’s equations. Both the electron and the photon are treated as real particles in space. The electron can absorb the energy of the photon and be promoted to a higher energy level; a photon of the same energy is released when the electron returns to its original energy level.
However, a photon can be scattered by an electron.
The Compton effect is the name given to the scattering of a photon by an electron. Energy and momentum are conserved, resulting in a reduction of both for the scattered photon. Studying this effect, Compton verified that photons have momentum.
Around 1923, Compton observed that x rays scattered from materials had a decreased energy and correctly analysed this as being due to the scattering of photons from electrons. This phenomenon could be handled as a collision between two particles—a photon and an electron at rest in the material. Energy and momentum are conserved in the collision. He won the Nobel Prize in Physics in 1927 for the discovery of this scattering, now called the Compton effect, because it helped prove that photon momentum is given by p = h/λ, where h is Planck’s constant and λ is the photon wavelength.
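The Compton wavelength shift, Δλ = (h/mₑc)(1 − cos θ), follows from exactly this two-particle energy–momentum conservation. A quick numerical check:

```python
import math

# Compton wavelength shift: d_lambda = (h / (m_e * c)) * (1 - cos(theta)).
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

def compton_shift(theta_rad):
    """Wavelength shift of a photon scattered by an electron at rest."""
    return (h / (m_e * c)) * (1 - math.cos(theta_rad))

# At 90 degrees the shift equals the Compton wavelength, about 2.43 pm:
print(compton_shift(math.pi / 2))  # ~2.43e-12 m
```

The shift is a few picometres, which is why the effect is visible for x rays but negligible for visible light.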
Feynman diagram of the interaction of an electron with the electromagnetic force. The basic vertex shows the emission of a photon (γ) by an electron (e−).
Feynman diagram of scattering between two electrons by the emission of a virtual photon
Feynman diagrams are pictorial representations of the mathematical expressions describing the behaviour of subatomic particles.
In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. It is an important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for operators of even moderate complexity.
Perturbation theory is applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a “small” term to the mathematical description of the exactly solvable problem.
Feynman diagrams can be used to explain perturbation theory in quantum mechanics. For example, in the process of electron-positron annihilation, the initial state is one electron and one positron, the final state: two photons.
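A minimal sketch of how a perturbation series approximates an exact answer, using an illustrative two-level system (the Hamiltonian and coupling strength below are made-up numbers, not from the lecture):

```python
import numpy as np

# H = H0 + lam * V: an exactly solvable H0 plus a "small" perturbation.
H0 = np.diag([0.0, 1.0])
V = np.array([[0.0, 1.0],
              [1.0, 0.0]])
lam = 0.05

# Exact ground-state energy from diagonalisation:
exact = np.linalg.eigvalsh(H0 + lam * V)[0]

# Perturbation series through second order:
# E ~ E0 + lam*<0|V|0> + lam^2 * |<1|V|0>|^2 / (E0 - E1)
e0, e1 = 0.0, 1.0
first = lam * V[0, 0]                     # vanishes for this V
second = lam**2 * V[1, 0]**2 / (e0 - e1)  # = -lam^2
approx = e0 + first + second

print(exact, approx)  # agree to O(lam^4)
```

Each order of the series corresponds, in field theory, to Feynman diagrams with more vertices; the diagrams are a bookkeeping device for exactly this kind of expansion.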
The diagrams not only describe scattering, but they also are a representation of the short-distance field theory correlations.
There is a problem in that some of the analysis results in infinities.
On the one side, some calculations of effects for cosmic rays clearly differed from measurements. On the other side and, from a theoretical point of view more threatening, calculations of higher orders of the perturbation series led to infinite results. The self-energy of the electron, as well as vacuum fluctuations of the electromagnetic field, seemed to be infinite. The perturbation expansions did not converge to a finite sum and even most individual terms were divergent.
Renormalization is a collection of techniques in quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures, that are used to treat nonsensical infinities arising in calculated quantities by altering values of quantities to compensate for effects of their self-interactions.
For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory, a cloud of virtual particles, such as photons, positrons, and others surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles like protons, exhibit precisely the same observed charge as the electron – even in the presence of much stronger interactions and more intense clouds of virtual particles.
In general, a QFT is called renormalizable, if all infinities can be absorbed into a redefinition of a finite number of coupling constants and masses. A consequence is that the physical charge and mass of the electron must be measured and cannot be computed from first principles.
Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory.
Renormalization in quantum electrodynamics: The simple electron/photon interaction that determines the electron’s charge at one renormalization point is revealed to consist of more complicated interactions at another.
The breakthrough in this work occurred during the 1940s and 1950s because more reliable and effective methods for dealing with infinities in quantum field theory (QFT) were developed, namely coherent and systematic rules for performing relativistic field theoretical calculations, and a general renormalization theory.
Freeman Dyson, Richard P. Feynman, Julian Schwinger and Sin-itiro Tomonaga became the inventors of renormalization theory.
In quantum electrodynamics, the anomalous magnetic moment of a particle is a contribution of effects of quantum mechanics, expressed by Feynman diagrams with loops, to the magnetic moment of that particle. (The magnetic moment, also called magnetic dipole moment, is a measure of the strength of a magnetic source.)
The “Dirac” magnetic moment, corresponding to tree-level Feynman diagrams (which can be thought of as the classical result), can be calculated from the Dirac equation. It is usually expressed in terms of the g-factor; the Dirac equation predicts g = 2.
The electron behaves like a little bar magnet
For particles such as the electron, this classical result differs from the observed value by a small fraction of a per cent. The difference is the anomalous magnetic moment, denoted a and defined as a = (g – 2)/2
The one-loop contribution to the anomalous magnetic moment—corresponding to the first and largest quantum mechanical correction—of the electron is found by calculating the vertex function shown in the adjacent diagram. The calculation is relatively straightforward and the one-loop result is:
ae = α/2π ≈ 0.0011614
where a is the fine structure constant. This result was first found by Julian Schwinger in 1948 and is engraved on his tombstone.
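Schwinger’s one-loop result can be evaluated in one line from the fine-structure constant:

```python
import math

# Schwinger's one-loop result for the electron anomalous magnetic moment:
# a_e = alpha / (2*pi), with alpha the fine-structure constant.
alpha = 1 / 137.035999084   # CODATA value
a_e_one_loop = alpha / (2 * math.pi)
print(a_e_one_loop)         # ~0.0011614, as quoted above
```

Comparing with the measured value 0.001159652... shows the one-loop term already accounts for over 99.8% of the anomaly; the higher-order α², α³, ... terms supply the rest.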
As of 2016, the coefficients of the QED formula for the anomalous magnetic moment of the electron are known analytically up to α³ and have been calculated up to order α⁵
ae = 0.001159652181643(764)
The QED prediction agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron the most accurately verified prediction in the history of physics.
The current experimental value and uncertainty is:
ae = 0.001159652188073(28)
According to this value, ae is known to an accuracy of around 1 part in 1 billion (10⁹). This required measuring g to an accuracy of around 1 part in 1 trillion (10¹²).
The effect of the weak force
In particle physics, the weak interaction, which is also often called the weak force or weak nuclear force, is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms. The weak interaction serves an essential role in nuclear fission, and the theory regarding it in terms of both its behaviour and effects is sometimes called quantum flavordynamics (QFD). However, the term QFD is rarely used because the weak force is better understood in terms of electroweak theory (EWT). In addition to this, QFD is related to quantum chromodynamics (QCD), which deals with the strong interaction, and quantum electrodynamics (QED), which deals with the electromagnetic force.
The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. It is one of the four known force-related fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation.
The radioactive beta decay is due to the weak interaction, which transforms a neutron into a proton, an electron, and an electron antineutrino.
In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi’s interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range.
However, it is better described as a non-contact force field having a finite range, albeit very short. In 1968, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force.
The existence of the W and Z bosons was not directly confirmed until 1983.
The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction.
In particle physics, the hypercharge Y of a particle is related to the strong interaction and is distinct from the similarly named weak hypercharge, which has an analogous role in the electroweak interaction. The concept of hypercharge combines and unifies isospin and flavour into a single charge operator. It is a quantum number relating the strong interactions of the SU(3) model. Isospin is defined in the SU(2) model while the SU(3) model defines hypercharge.
In particle physics, weak isospin is a quantum number relating to the weak interaction, and parallels the idea of isospin under the strong interaction. Weak isospin is usually given the symbol T or I with the third component written as Tz, T3, Iz or I3. It can be understood as the eigenvalue of a charge operator.
The W and Z bosons are together known as the weak or more generally as the intermediate vector bosons. The key problem is to understand how these have mass.
The weak force is short-ranged precisely because these mediating particles have large masses.
Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state ends up in an asymmetric state.
Graph of “Mexican hat” potential function.
Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the centre axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterwards, the ball has come to rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not.
In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory.
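The dome-and-ball picture corresponds to a potential of the schematic form V(φ) = −μ²φ² + λφ⁴, whose minimum sits away from the symmetric point φ = 0. A quick numerical sketch (the parameter values are illustrative, not physical):

```python
import numpy as np

# Toy "Mexican hat" potential V(phi) = -mu^2 * phi^2 + lam * phi^4.
# The symmetric point phi = 0 is a local maximum; the minima sit at
# phi = +/- sqrt(mu^2 / (2*lam)): a nonzero vacuum expectation value.
mu2, lam = 1.0, 0.25

phi = np.linspace(-3, 3, 100001)
V = -mu2 * phi**2 + lam * phi**4

numeric_vev = abs(phi[np.argmin(V)])
analytic_vev = np.sqrt(mu2 / (2 * lam))
print(numeric_vev, analytic_vev)  # both ~1.414
```

The ground state must pick one of the two (here) or a continuum (in the full complex-field case) of equivalent minima, and that choice is what spontaneously breaks the symmetry.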
For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant, and there is no symmetry breaking. Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value, which points in a certain direction (in the idealized situation where we have full equilibrium; otherwise, translational symmetry gets broken as well). The residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations which do not and are thus spontaneously broken.
Below the Curie temperature, neighbouring magnetic spins align parallel to each other in ferromagnet in the absence of an applied magnetic field
Above the Curie temperature, the magnetic spins are randomly aligned in a paramagnet unless a magnetic field is applied
For the electroweak model, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. Because we do not tend to notice broken symmetries at low energies, this suggests why it took so long for us to discover electroweak unification.
The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries. The Higgs mechanism, the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics.
In the standard model of particle physics, spontaneous symmetry breaking of the SU(2) × U(1) gauge symmetry associated with the electro-weak force generates masses for several particles and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge. In addition, fermions develop mass consistently.
In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property “mass” for gauge bosons. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z bosons actually have relatively large masses of around 80 GeV/c2. The Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field (the Higgs field) that permeates all space to the Standard Model. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase “Higgs mechanism” refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking. The Large Hadron Collider at CERN announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature.
In the standard model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin).
The Higgs field, through the interactions specified by its potential, induces spontaneous breaking of three out of the four generators (“directions”) of the gauge group SU(2) × U(1).
After symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W− and Z), and are only observable as components of these weak bosons, which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson.
So the Higgs gives the W and Z bosons mass because it combines with them. It does not combine with the photon which is why the photon is massless.
The W1 and W2 gauge fields combine to give the massive charged W± bosons: W± = (W1 ∓ iW2)/√2
A Model of Leptons Steven Weinberg
Phys. Rev. Lett. 19, 1264 – Published 20 November 1967
Leptons interact only with photons, and with the intermediate bosons that presumably mediate weak interactions.
Gerardus (Gerard) ‘t Hooft (born July 5, 1946) is a Dutch theoretical physicist and professor at Utrecht University, the Netherlands. He shared the 1999 Nobel Prize in Physics with his thesis advisor Martinus J. G. Veltman “for elucidating the quantum structure of electroweak interactions”.
Martinus Justinus Godefriedus “Tini” Veltman (born 27 June 1931) is a Dutch theoretical physicist. He shared the 1999 Nobel Prize in physics with his former student Gerardus ‘t Hooft for their work on particle theory.
Electroweak theory, in physics, the theory that describes both the electromagnetic force and the weak force. Superficially, these forces appear quite different. The weak force acts only across distances smaller than the atomic nucleus, while the electromagnetic force can extend for great distances (as observed in the light of stars reaching across entire galaxies), weakening only with the square of the distance. Moreover, comparison of the strength of these two fundamental interactions between two protons, for instance, reveals that the weak force is some 10 million times weaker than the electromagnetic force. Yet one of the major discoveries of the 20th century has been that these two forces are different facets of a single, more-fundamental electroweak force.
There are 3 fundamental parameters: g, g′ and v (the Higgs vacuum expectation value)
W and Z bosons have the following predicted masses:
MW = vg/2 and MZ = v√(g² + g′²)/2, where g is the SU(2) gauge coupling, g′ is the U(1) gauge coupling, and v is the Higgs vacuum expectation value.
v ≈ 250 GeV; MZ ≈ 90 GeV; MW ≈ 80 GeV
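Plugging representative coupling values into the tree-level formulas above reproduces the measured masses to within a GeV or so (the values of g and g′ below are approximate, not a precision fit):

```python
import math

# Tree-level electroweak boson masses: M_W = v*g/2, M_Z = v*sqrt(g^2 + g'^2)/2.
v = 246.0   # Higgs vacuum expectation value, GeV
g = 0.65    # SU(2) gauge coupling (approximate)
gp = 0.35   # U(1) gauge coupling (approximate)

M_W = v * g / 2
M_Z = v * math.sqrt(g**2 + gp**2) / 2
print(M_W, M_Z)  # ~80 GeV and ~91 GeV, close to the measured masses
```

The ratio MW/MZ = g/√(g² + g′²) = cos θW defines the weak mixing angle, one of the key tested predictions of the electroweak theory.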
Weak neutral current interactions are one of the ways in which subatomic particles can interact by means of the weak force. These interactions are mediated by the Z boson. The discovery of weak neutral currents was a significant step toward the unification of electromagnetism and the weak force into the electroweak force and led to the discovery of the W and Z bosons.
Gargamelle was a heavy liquid bubble chamber detector in operation at CERN between 1970 and 1979. It was designed to detect neutrinos and antineutrinos, which were produced with a beam from the Proton Synchrotron (PS) between 1970 and 1976 before the detector was moved to the Super Proton Synchrotron (SPS).
Gargamelle is famous for being the experiment where neutral currents were discovered. Presented in July 1973, neutral currents were the first experimental indication of the existence of the Z0 boson, and consequently a major step towards the verification of the electroweak theory.
An event in which the electron and neutrino change momentum and/or energy by exchange of the neutral Z0 boson. Flavours are unaffected: the neutrino scatters off the electron.
Above left: View of Gargamelle bubble chamber detector in the West Hall at CERN, February 1977. Above right: This event shows the real tracks produced in the Gargamelle bubble chamber that provided the first confirmation of a leptonic neutral current interaction. A neutrino interacts with an electron, the track of which is seen horizontally and emerges as a neutrino without producing a muon.
The Super Proton–Antiproton Synchrotron (or SppS, also known as the Proton–Antiproton Collider) was a particle accelerator that operated at CERN from 1981 to 1991. The main experiments at the accelerator were UA1 and UA2, where the W and Z bosons were discovered in 1983.
The interaction between charged particles takes place by the exchange of photons. This is depicted in the following diagram (Feynman diagram) of electron-muon scattering
In 1983, the UA1 team was the first to announce the discovery of the W boson. This was soon confirmed by the UA2 team.
The annihilation of an electron-positron pair into a μ⁺μ⁻ pair is shown in the following diagram:
Invariant mass of ℓ⁺ and ℓ⁻
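The invariant mass of a lepton pair, m² = (E₁+E₂)² − |p₁+p₂|², is how the Z boson shows up as a peak in collider data. A minimal sketch in natural units (the example momenta are made up to correspond to a Z at rest):

```python
import math

def invariant_mass(e1, p1, e2, p2):
    """Invariant mass of two particles from energies and momentum 3-vectors (GeV)."""
    E = e1 + e2
    px, py, pz = (p1[i] + p2[i] for i in range(3))
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two (effectively massless) leptons, each 45.6 GeV, emitted back to back:
m = invariant_mass(45.6, (45.6, 0.0, 0.0), 45.6, (-45.6, 0.0, 0.0))
print(m)  # 91.2 GeV: the Z boson mass peak
```

Real pairs produced via a virtual photon populate a continuum, while pairs from a real Z cluster near 91 GeV, which is how the resonance is identified.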
The strong force binds quarks together in a nucleon. It is a short-range force because the force-carrying bosons interact with each other.
It is one of the four known fundamental interactions, with the others being electromagnetism, the weak interaction, and gravitation. At the range of 10⁻¹⁵ m (1 femtometre), the strong force is approximately 137 times as strong as electromagnetism, a million times as strong as the weak interaction, and 10³⁸ times as strong as gravitation. The strong nuclear force holds most ordinary matter together because it confines quarks into hadron particles such as the proton and neutron. In addition, the strong force binds neutrons and protons to create atomic nuclei. Most of the mass of a common proton or neutron is the result of the strong force field energy; the individual quarks provide only about 1% of the mass of a proton.
It is observable at two ranges and mediated by two force carriers. On a larger scale (about 1 to 3 fm), it is the force (carried by mesons) that binds protons and neutrons (nucleons) together to form the nucleus of an atom. On the smaller scale (less than about 0.8 fm, the radius of a nucleon), it is the force (carried by gluons) that holds quarks together to form protons, neutrons, and other hadron particles. In the latter context, it is often known as the colour force. The strong force inherently has such a high strength that hadrons bound by the strong force can produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation (gluons). This property of the strong force is called colour confinement, and it prevents the free “emission” of the strong force: instead, in practice, jets of massive particles are produced.
In the context of atomic nuclei, the same strong interaction force (that binds quarks within a nucleon) also binds protons and neutrons together to form a nucleus. In this capacity, it is called the nuclear force (or residual strong force).
The strong interaction is mediated by the exchange of massless particles called gluons that act between quarks, antiquarks, and other gluons. Gluons are thought to interact with quarks and other gluons by way of a type of charge called colour charge. Colour charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, ±blue) rather than one, which results in a different type of force, with different rules of behaviour. These rules are detailed in the theory of quantum chromodynamics (QCD), which is the theory of quark-gluon interactions.
Quarks carry three types of colour charge; antiquarks carry three types of anticolour.
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei.
Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD.
A gluon is an elementary particle that acts as the exchange particle (or gauge boson) for the strong force between quarks. It is analogous to the exchange of photons in the electromagnetic force between two charged particles. In layman’s terms, they “glue” quarks together, forming hadrons such as protons and neutrons.
In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the colour charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons, therefore, participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyse than QED (quantum electrodynamics).
In theoretical physics, quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons, the fundamental particles that make up composite hadrons such as the proton, neutron and pion.
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved.
Gluons can interact with each other
Feynman diagram for an interaction between quarks generated by a gluon
Feynman diagrams of prompt photon production by a) quark-gluon Compton scattering, b) quark-antiquark annihilation
Feynman diagrams for gluon self-energy.
Due to a phenomenon known as colour confinement, quarks are never directly observed or found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons. For this reason, much of what is known about quarks has been drawn from observations of hadrons.
In particle physics, asymptotic freedom is a property of some gauge theories that causes interactions between particles to become asymptotically weaker as the energy scale increases and the corresponding length scale decreases.
Asymptotic freedom is a feature of quantum chromodynamics (QCD), the quantum field theory of the strong interaction between quarks and gluons, the fundamental constituents of nuclear matter. Quarks interact weakly at high energies, allowing perturbative calculations. At low energies the interaction becomes strong, leading to the confinement of quarks and gluons within composite hadrons.
Many quite different experiments, performed at different energies, have been successfully analysed by using QCD. Each fits a large quantity of data to a single parameter, the strong coupling αs. By comparing the values they report, we obtain direct confirmation that the coupling evolves as predicted. (Figure courtesy S. Bethke, ref. 8.)
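The running of the coupling described above can be sketched numerically with the leading-order (one-loop) QCD formula. The values of Λ_QCD and the flavour count n_f below are illustrative assumptions on my part, not figures from the lecture:

```python
import math

def alpha_s(Q, Lambda_QCD=0.2, n_f=5):
    """Leading-order (one-loop) running of the strong coupling.

    Q and Lambda_QCD are in GeV; n_f is the number of active quark
    flavours. alpha_s decreases as the energy scale Q increases
    (asymptotic freedom) and grows at low Q (confinement regime).
    """
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q ** 2 / Lambda_QCD ** 2))

# The coupling falls as the energy scale rises:
print(alpha_s(10.0), alpha_s(91.2), alpha_s(1000.0))
```

With these rough inputs the formula gives αs at the Z mass of about 0.13, in the same ballpark as the measured value near 0.118; precise determinations use higher-order running.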
In particle physics, a kaon, also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model, they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark).
Kaons were discovered in cosmic rays in 1947.
The long-lived neutral kaon, called the KL (“K-long”), decays primarily into three pions and has a mean lifetime of 5.18 x 10−8 s.
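That lifetime can be turned into a characteristic decay length, cτ, as a back-of-the-envelope check (for a kaon at rest; a moving kaon travels further on average by its Lorentz factor):

```python
c = 2.998e8        # speed of light, in m/s
tau_KL = 5.18e-8   # K_L mean lifetime quoted above, in s

# Characteristic decay length of a K_L at rest; a relativistic
# kaon travels gamma times further on average.
ctau = c * tau_KL
print(ctau)  # roughly 15.5 m
```

This is why the K-long is “long”: it typically travels metres before decaying, compared with centimetres for the short-lived neutral kaon.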
The experimental observation in 1964 that K-longs occasionally decay into two pions was the discovery of CP violation.
In particle physics, a pion (or a pi meson, denoted with the Greek letter pi: π) is any of three subatomic particles: π0, π+, and π−. Each pion consists of a quark and an antiquark and is, therefore, a meson. Pions are the lightest mesons and, more generally, the lightest hadrons.
The π+ is made up of an up quark and an anti-down quark.
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted (“mirror” or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers, James Cronin and Val Fitch.
It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe and in the study of weak interactions in particle physics.
James Cronin: https://en.wikipedia.org/wiki/James_Cronin
Val Fitch: https://en.wikipedia.org/wiki/Val_Logsdon_Fitch
The Tevatron was a circular particle accelerator (inactive since 2011) in the United States, at the Fermi National Accelerator Laboratory (also known as Fermilab), east of Batavia, Illinois. It held the title of the second-highest-energy particle collider in the world, after the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN) near Geneva, Switzerland. The Tevatron was a synchrotron that accelerated protons and antiprotons in a 6.28 km (3.90 mi) ring to energies of up to 1 TeV, hence its name. It was completed in 1983 at a cost of $120 million, and significant upgrade investments were made between 1983 and 2011.
The main achievement of the Tevatron was the discovery in 1995 of the top quark—the last fundamental fermion predicted by the standard model of particle physics.
The top quark, also known as the t quark (symbol: t) or truth quark, is the most massive of all observed elementary particles. Like all quarks, the top quark is a fermion with spin 1/2, and it experiences all four fundamental interactions: gravitation, electromagnetism, the weak interaction, and the strong interaction. It has an electric charge of +2/3 e and a mass of 173.0 ± 0.4 GeV/c², about the same as an atom of rhenium. The antiparticle of the top quark is the top antiquark (symbol: t̄, sometimes called the anti-top quark or simply anti-top), which differs from it only in that some of its properties have equal magnitude but opposite sign.
At least two independent experiments are required to confirm a particle physics discovery.
Until 2012 there was one piece missing from the standard model: the Higgs boson.
An estimate of the Higgs mass in the Standard Model from electroweak precision measurements. The excluded area labelled LHC has been updated with later results, courtesy M. Grunewald. Present search limits for the Higgs boson from LEP and LHC leave only a small mass gap, the white bar close to the minimum of the χ2 distribution.
Graph showing the events observed by CMS and ATLAS in the two-photon decay channel. The ATLAS and CMS detectors observed a peak in the events (the bump) due to the decay of the Higgs boson into photons, an excess signal above the background made by photons produced mainly by the proton-proton collisions. The peak is around 125 GeV. Source: http://cms.web.cern.ch and http://atlas.web.cern.ch
If the Higgs field is the origin of the particle masses, the particles’ coupling strengths to the field should be proportional to those masses. If the Higgs is not responsible for the masses, there is no reason to expect such a relation.
The Standard Model makes even more predictions for its Higgs field and the particle – and all measured properties of the particle agree with those predictions.
The heavier the particle, the more strongly it couples to the Higgs field.
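This proportionality can be made concrete with the Standard Model Yukawa coupling, y = √2·m/v, where v ≈ 246 GeV is the vacuum expectation value of the Higgs field. The masses below are illustrative round numbers:

```python
import math

V_HIGGS = 246.0  # Higgs vacuum expectation value, in GeV

def yukawa(mass_gev):
    """Standard Model Yukawa coupling of a fermion: y = sqrt(2) * m / v.

    The coupling is directly proportional to the fermion mass, which is
    the relation the measurements test.
    """
    return math.sqrt(2) * mass_gev / V_HIGGS

y_top = yukawa(173.0)          # top quark: coupling close to 1
y_electron = yukawa(0.000511)  # electron: a tiny coupling
```

The top quark’s coupling comes out strikingly close to 1, which is part of why the top plays such a central role in Higgs physics.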
Problems with the Standard Model
1) It doesn’t explain why there is more matter than antimatter
This asymmetry is quantified by the ratio of the overall number density difference between baryons and antibaryons to the number density of cosmic background radiation photons, n_γ.
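To give a rough sense of scale, using the commonly quoted measured values (a baryon-to-photon ratio η ≈ 6 × 10⁻¹⁰ from CMB fits, and about 411 CMB photons per cm³ today):

```python
n_gamma = 4.11e8   # CMB photon number density today, per cubic metre
eta = 6.1e-10      # measured baryon-to-photon ratio (approximate)

# Implied baryon number density: only a fraction of a baryon per
# cubic metre of the universe on average.
n_baryon = eta * n_gamma
print(n_baryon)  # about 0.25 baryons per cubic metre
```

The puzzle is that the Standard Model’s known sources of CP violation are far too small to generate even this tiny excess of matter over antimatter.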
The composition of the universe determined from analysis of the Planck mission’s cosmic microwave background data. Image via the University of Oxford/© ESA.
2) It doesn’t explain dark matter
3) It doesn’t explain dark energy
4) It can’t unify all the forces yet
5) Flavour and neutrino masses. Why are there three families of quarks and leptons?
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, to the point where one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and black hole event horizons).
Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one or at least the “best step” towards a Theory of Everything, can only be settled via experiments and is one of the most active areas of research in both theoretical and experimental physics.
In particle physics, quarkonium (from quark and -onium, pl. quarkonia) is a flavourless meson whose constituents are a heavy quark and its own antiquark, making it a neutral particle and the antiparticle of itself.
Quarkonium has been suggested as a diagnostic tool for the formation of the quark–gluon plasma: both disappearance and enhancement of its formation can occur, depending on the yield of heavy quarks in the plasma.
Both direct and indirect methods are used to search for new physics.
I have a physics degree but its emphasis was experimental. I avoided the advanced quantum mechanics module because I knew my maths wasn’t good enough.
In researching this topic I found a lot of maths which I did not understand so it may be that I have not interpreted the information correctly. For that, I apologise. Helen