The future of Particle Physics
Prof. Jon Butterworth
Professor Butterworth is a physics professor at UCL who works on the ATLAS experiment at CERN’s Large Hadron Collider. He previously worked on the HERA electron-proton collider in Hamburg. He won the Chadwick prize of the Institute of Physics in 2013 for his pioneering experimental and phenomenological work in high-energy particle physics, especially in the understanding of hadronic jets. He has written for the Guardian, New Scientist and others and is the author of two popular books on particle physics.
In his talk, Professor Butterworth reviewed our current status, and proposals for the future – including future colliders – being discussed in the context of the update of the European Strategy for Particle Physics, which is currently underway.
My notes from the lecture (if they don’t make sense then it is entirely my fault)
Particle physics concerns a huge range of phenomena.
A map of the invisible universe, as seen in Jon Butterworth’s book A Map of the Invisible: Journeys into Particle Physics.
An atom is the smallest constituent unit of ordinary matter that constitutes a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small; typical sizes are around 100 picometres (1 × 10⁻¹⁰ m, a ten-millionth of a millimetre). They are so small that accurately predicting their behaviour using classical physics – as if they were billiard balls, for example – is not possible. This is due to quantum effects. Current atomic models now use quantum principles to better explain and predict this behaviour.
The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece and India. The word atomos, meaning “uncuttable”, was coined by the ancient Greek philosophers Leucippus and his pupil Democritus (c. 460 – c. 370 BC). Democritus taught that atoms were infinite in number, uncreated, and eternal, and that the qualities of an object result from the kind of atoms that compose it. Democritus’s atomism was refined and elaborated by the later philosopher Epicurus (341–270 BC). During the Early Middle Ages, atomism was mostly forgotten in Western Europe (mainly due to the belief that Aristotle was correct in that everything was simply made up of earth, air, fire and water), but survived among some groups of Islamic philosophers.
The Catholic Church was the main reason why the idea of atoms was deemed unacceptable during the middle ages however a French Catholic priest Pierre Gassendi (1592–1655) revived atomism with modifications, arguing that atoms were created by God and, though extremely numerous, are not infinite.
In England the chemist Robert Boyle (1627–1691) and the physicist Isaac Newton (1642–1727) both defended atomism and, by the end of the seventeenth century, it had become accepted by portions of the scientific community.
In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions).
So up until the late 19th century, the atom was regarded as the smallest possible piece of matter, out of which everything was made. Then came J. J. Thomson’s cathode ray experiments.
Several scientists, such as William Prout and Norman Lockyer, had suggested that atoms were built up from a more fundamental unit, but they envisioned this unit to be the size of the smallest atom, hydrogen. Thomson in 1897 was the first to suggest that one of the fundamental units was more than 1,000 times smaller than an atom, suggesting the subatomic particle now known as the electron. Thomson discovered this through his explorations of the properties of cathode rays. He made his suggestion on 30 April 1897 following his discovery that cathode rays (at the time known as Lenard rays) could travel much further through air than expected for an atom-sized particle. He estimated the mass of cathode rays by measuring the heat generated when the rays hit a thermal junction and comparing this with the magnetic deflection of the rays. His experiments suggested not only that cathode rays were over 1,000 times lighter than the hydrogen atom, but also that their mass was the same whichever type of atom they came from. He concluded that the rays were composed of very light, negatively charged particles which were a universal building block of atoms. He called the particles “corpuscles”, but later scientists preferred the name electron, which had been suggested by George Johnstone Stoney in 1891, prior to Thomson’s actual discovery.
Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. In 1904, Thomson suggested a model of the atom, hypothesizing that it was a sphere of positive matter within which electrostatic forces determined the positioning of the corpuscles. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge. In this “plum pudding” model, the electrons were seen as embedded in the positive charge like plums in a plum pudding (although in Thomson’s model they were not stationary, but orbiting rapidly).
Sir Joseph John Thomson OM PRS (18 December 1856 – 30 August 1940) was an English physicist and Nobel Laureate in Physics, credited with the discovery and identification of the electron, the first subatomic particle to be discovered.
So J. J. Thomson discovered the first fundamental particle. We now know that the electron belongs to a group in the standard model called leptons. Ironically, Thomson thought that he had tied up all the loose ends of physics and there was nothing new to discover.
In particle physics, a lepton is an elementary particle of half-integer spin (spin ½) that does not undergo strong interactions. Two main classes of leptons exist, charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron.
There are six types of leptons, known as flavours, grouped in three generations. The first-generation leptons, also called electronic leptons, comprise the electron (e−) and the electron neutrino (νe); the second is the muonic leptons, comprising the muon (μ−) and the muon neutrino (νμ); and the third is the tauonic leptons, comprising the tau (τ−) and the tau neutrino (ντ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons and neutrinos through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high energy collisions (such as those involving cosmic rays and those carried out in particle accelerators).
Of course, through the early part of the 20th century, only the electron was known.
Some aspects of J. J. Thomson’s plum pudding model puzzled one of the physicists who had worked under him at Cambridge, Ernest Rutherford.
Rutherford had moved to Manchester to become Langsworthy Professor of Physics at the Victoria University of Manchester (now the University of Manchester).
An alpha particle is a sub-microscopic, positively charged particle of matter. According to Thomson’s model, if an alpha particle were to collide with an atom, it would just fly straight through, its path being deflected by at most a fraction of a degree. At the atomic scale, the concept of “solid matter” is meaningless, so the alpha particle would not bounce off the atom like a marble. It would be affected only by the atom’s electric fields, and Thomson’s model predicted that the electric fields in an atom are too weak to affect a passing alpha particle much (alpha particles tend to move very fast). Both the negative and positive charges within the Thomson atom are spread out over the atom’s entire volume. According to Coulomb’s Law, the less concentrated a sphere of electric charge is, the weaker its electric field at its surface will be.
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, HFRSE, LLD (30 August 1871 – 19 October 1937), was a New Zealand physicist who came to be known as the father of nuclear physics.
Rutherford instructed two physicists in his department, Geiger and Marsden, to perform a series of experiments. This involved pointing a beam of alpha particles at a thin foil of metal and measuring the scattering pattern by using a fluorescent screen. They spotted alpha particles bouncing off the metal foil in all directions, some right back at the source. This should have been impossible according to Thomson’s model; the alpha particles should have all gone straight through. Obviously, those particles had encountered an electrostatic force far greater than Thomson’s model suggested they would, which in turn implied that the atom’s positive charge was concentrated in a much tinier volume than Thomson imagined.
When Geiger and Marsden shot alpha particles at their metal foil, they noticed only a tiny fraction of the alpha particles were deflected by more than 90°. Most flew straight through the foil. This suggested that those tiny spheres of intense positive charge were separated by vast gulfs of empty space. Most particles passed through the empty space and experienced negligible deviation, while a handful passed close to the nuclei of the atoms and were deflected through large angles.
Rutherford thus rejected Thomson’s model of the atom, and instead proposed a model where the atom consisted of mostly empty space, with all of its positive charge concentrated in its centre in a very tiny volume, surrounded by a cloud of electrons.
In the 1911 Rutherford model, the atom consisted of a small positively charged massive nucleus surrounded by a much larger cloud of negatively charged electrons. In 1920, Rutherford suggested that the nucleus consisted of positive protons and neutrally charged particles, each suggested to be a proton and an electron bound in some way. Electrons were assumed to reside within the nucleus because it was known that beta radiation consisted of electrons emitted from the nucleus. Rutherford called these uncharged particles neutrons, from the Latin root neutralis (neutral) and the Greek suffix -on (a suffix used in the names of subatomic particles, e.g. electron and proton). References to the word neutron in connection with the atom can be found in the literature as early as 1899, however.
Throughout the 1920s, physicists assumed that the atomic nucleus was composed of protons and “nuclear electrons” but there were obvious problems. It was difficult to reconcile the proton–electron model for nuclei with the Heisenberg uncertainty relation of quantum mechanics.
Basically, Heisenberg’s uncertainty principle states that you can either know a particle’s position or momentum, but not both. The more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.
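The difficulty with “nuclear electrons” can be made concrete with a back-of-envelope estimate (my own illustration, not from the lecture): confining an electron to a nucleus-sized region forces on it a minimum momentum, via Δx·Δp ≥ ħ/2, far larger than beta-decay electrons actually carry. The 5 fm radius below is an assumed, rough nuclear size.

```python
# If an electron were confined inside a nucleus of radius ~5 fm,
# the uncertainty principle forces a large minimum momentum.
HBAR_C = 197.327  # MeV*fm (hbar times c, a convenient combination)

delta_x = 5.0  # fm, rough nuclear radius (assumed for illustration)
delta_p_c = HBAR_C / (2 * delta_x)  # MeV, from dx * dp >= hbar/2

# Such an electron would be ultra-relativistic, so its energy ~ p*c,
# i.e. tens of MeV - yet beta-decay electrons carry only ~1 MeV.
print(f"minimum momentum ~ {delta_p_c:.1f} MeV/c")
```

The ~20 MeV that falls out is wildly inconsistent with observed beta-decay energies, which is one reason the proton–electron model of the nucleus had to go.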
In 1917, Rutherford actually became the first person to create an artificial nuclear reaction, in laboratories at the University of Manchester. His discovery is now often described as ‘splitting the atom’ in popular accounts, but this should not be confused with the process of nuclear fission discovered later in the 1930s.
Many physicists worked on models for the atomic nucleus throughout the first three decades of the 20th century.
Rutherford had returned to Cambridge in 1919 to become the Director of the Cavendish Laboratory at the University of Cambridge and under his leadership, the neutron was discovered by James Chadwick in 1932.
A schematic diagram of the experiment used to discover the neutron in 1932. At left, a polonium source was used to irradiate beryllium with alpha particles, which induced an uncharged radiation. When this radiation struck paraffin wax, protons were ejected. The protons were observed using a small ionization chamber. Adapted from Chadwick (1932).
Sir James Chadwick, CH, FRS (20 October 1891 – 24 July 1974) was a British physicist who was awarded the 1935 Nobel Prize in Physics for his discovery of the neutron in 1932.
Chadwick produced radiation using beryllium and aimed the radiation at paraffin following experiments carried out in Paris. Paraffin wax is a hydrocarbon high in hydrogen content, hence offers a target dense with protons; since neutrons and protons have almost equal mass, protons scatter energetically from neutrons. Chadwick measured the range of these protons and also measured how the new radiation impacted the atoms of various gases. He found that the new radiation consisted of not gamma rays, but uncharged particles with about the same mass as the proton. These particles were neutrons. Chadwick won the Nobel Prize in Physics in 1935 for this discovery.
So the above image illustrates what Rutherford thought the atom looked like.
Niels Henrik David Bohr (7 October 1885 – 18 November 1962) was a Danish physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922.
Niels Bohr received an invitation from Rutherford to conduct post-doctoral work at Victoria University of Manchester. He tweaked Rutherford’s solar system like model by suggesting that electrons revolve around a nucleus in fixed or definite orbits. He claimed that in these orbits, the electrons wouldn’t lose any energy, therefore ensuring that they didn’t collapse into the nucleus.
Bohr called these fixed orbits “stationary orbits”. He claimed that the orbits weren’t randomly situated, but were instead at discrete distances from the nucleus in the centre, and that each of them was associated with fixed energies. Inspired by Planck’s theory, he denoted the orbits by n, and called it the quantum number. We often call these orbits shells.
Of course, things are never simple.
Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles exhibit a wave nature and vice versa. This phenomenon has been verified not only for elementary particles but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.
Bohr’s view that an atom had its electrons set in fixed orbits led to the idea that electrons can move between orbits or shells. If the atom gains the right quantity (quanta) of energy an electron will move up specific energy levels.
When an electron falls from a higher energy orbit to a lower energy orbit energy is released as a photon of light. The difference in energy between the orbits is the same as the energy of the photon, which can be calculated using Planck’s equation, E = hf. In this case, we think of light as a particle (the photon).
With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits.
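The arithmetic behind Bohr’s picture can be sketched in a few lines. A minimal illustration (my own, not from the lecture), using the standard hydrogen level formula E_n = −13.6 eV/n² and Planck’s relation to get the photon emitted in the n = 3 → n = 2 jump:

```python
# Bohr-model hydrogen energy levels and the photon from the
# n=3 -> n=2 transition (the red H-alpha line of the Balmer series).
RYDBERG_EV = 13.6   # eV, hydrogen ground-state binding energy
HC_EV_NM = 1239.84  # eV*nm, Planck constant times speed of light

def level(n):
    return -RYDBERG_EV / n**2  # energy of orbit n, in eV

delta_e = level(3) - level(2)    # energy released, ~1.89 eV
wavelength = HC_EV_NM / delta_e  # lambda = hc/E, ~656 nm
print(f"{delta_e:.2f} eV photon, wavelength {wavelength:.0f} nm")
```

The ~656 nm answer is the familiar red line in hydrogen’s emission spectrum, which is exactly the kind of discrete line the fixed-orbit model was built to explain.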
A continuous spectrum has all wavelengths over a comparatively wide range, and is usually characteristic of solids and other substances at high temperatures. It is caused when many electrons move down to many different lower energy levels. Long-wavelength EM waves have low energy, meaning the electrons have dropped a “short distance”; short-wavelength EM waves have high energy, meaning the electrons have dropped a “long distance”.
The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The photon energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, make up an emission spectrum. Each element’s emission spectrum is unique. Therefore, spectroscopy can be used to identify the elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.
A material’s absorption spectrum is the fraction of incident radiation absorbed by the material over a range of frequencies. The absorption spectrum is primarily determined by the atomic and molecular composition of the material. Radiation is more likely to be absorbed at frequencies that match the energy difference between two quantum mechanical states of the molecules. The absorption that occurs due to a transition between two states is referred to as an absorption line and a spectrum is typically composed of many lines. Simply put an absorption line is due to an electron absorbing energy and moving up an energy level.
The frequencies where absorption lines occur, as well as their relative intensities, primarily depend on the electronic and molecular structure of the sample. The frequencies will also depend on the interactions between molecules in the sample, the crystal structure in solids, and on several environmental factors (e.g., temperature, pressure, electromagnetic field). The lines will also have a width and shape that are primarily determined by the spectral density or the density of states of the system.
So if light could behave as a particle then perhaps a particle could behave like a wave.
Matter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. All matter can exhibit wave-like behaviour. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. The concept that matter behaves like a wave was proposed by Louis de Broglie in 1924. It is also referred to as the de Broglie hypothesis. Matter waves are referred to as de Broglie waves.
The de Broglie wavelength is the wavelength, λ, associated with a massive particle and is related to the Planck constant, h, through its momentum, p:
λ = h/p = h/(mv)
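To get a feel for the scale, here is a hedged worked example (my own numbers, not from the lecture): the de Broglie wavelength of a 54 eV electron, the energy used in the Davisson–Germer experiment described below.

```python
import math

# de Broglie wavelength, lambda = h/p, of a 54 eV electron.
H = 6.626e-34         # J*s, Planck constant
M_E = 9.109e-31       # kg, electron mass
E_J = 54 * 1.602e-19  # kinetic energy converted to joules

p = math.sqrt(2 * M_E * E_J)  # non-relativistic momentum, p = sqrt(2mE)
lam = H / p                   # de Broglie wavelength in metres
print(f"lambda = {lam * 1e9:.3f} nm")  # ~0.167 nm
```

That ~0.17 nm is comparable to the spacing between atoms in a nickel crystal, which is why a crystal surface acts as a diffraction grating for electrons.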
Wave-like behaviour of matter was first experimentally demonstrated by George Paget Thomson’s thin metal diffraction experiment, and independently in the Davisson–Germer experiment both using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules.
The Davisson–Germer experiment was a 1923–27 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis, advanced by Louis de Broglie in 1924, of wave-particle duality, and was an experimental milestone in the creation of quantum mechanics.
In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions.
Carl David Anderson discovered the positron on 2 August 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term positron but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive. He also discovered the muon in 1936.
So by 1935, we knew that the atom consisted of a tiny nucleus made up of positively charged protons and neutral neutrons with negatively charged electrons orbiting the nucleus. We knew these electrons could move up and down energy levels and when they moved down they emitted photons of electromagnetic radiation with energies equal to the energy difference of the energy levels. We knew that atoms and electrons were particles but could also behave like waves. We knew that electrons had antimatter partners called positrons. However, at this time protons and neutrons were considered to be fundamental particles like the electrons.
A more modern understanding of atoms, reflected in these representations of the electron in a hydrogen atom, is that electrons occupy regions of space about the nucleus; they are not in discrete orbits like planets around the sun. (a) The darker the colour, the higher the probability that an electron will be at that point. (b) In a two-dimensional cross-section of the electron in a hydrogen atom, the more crowded the dots, the higher the probability that an electron will be at that point. In both (a) and (b), the nucleus is in the centre of the diagram.
We also knew that atoms could be split up (ironically Rutherford didn’t think there was any future in this because splitting a single atom released a very small amount of energy).
By 1938 physicists realised that if atoms are split in a certain way a lot of energy can be released.
In nuclear physics and nuclear chemistry, nuclear fission is a nuclear reaction or a radioactive decay process in which the nucleus of an atom splits into smaller, lighter nuclei. The fission process often produces free neutrons and gamma photons and releases a very large amount of energy even by the energetic standards of radioactive decay.
Nuclear fission of heavy elements was discovered on December 17, 1938, by German Otto Hahn and his assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her nephew Otto Robert Frisch. Frisch named the process by analogy with biological fission of living cells. For heavy nuclides, it is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place). In order for fission to produce energy, the total binding energy of the resulting elements must be more negative (greater binding energy) than that of the starting element.
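The scale of the energy release follows from E = mc². A rough, hedged illustration (the mass defect below is an approximate, typical value for uranium-235 fission, not a number from the lecture):

```python
# Fission converts a small mass defect into energy via E = m c^2.
U_TO_MEV = 931.494  # MeV, energy equivalent of one atomic mass unit

mass_defect_u = 0.2  # u, roughly typical for a uranium-235 fission
energy_mev = mass_defect_u * U_TO_MEV  # ~190 MeV per fission

chemical_ev = 4.0  # eV, a typical chemical-bond energy, for comparison
ratio = energy_mev * 1e6 / chemical_ev
print(f"~{energy_mev:.0f} MeV per fission, ~{ratio:.0e}x a chemical bond")
```

Roughly a hundred million times more energy per reaction than chemistry: that factor is why the realisation of 1938 mattered so much, and why Rutherford’s earlier pessimism (a single split atom really does release very little in absolute terms) was nonetheless overturned once chain reactions were understood.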
Unfortunately, the Second World War meant that research into fission was harnessed for weapons.
The Manhattan Project was a research and development undertaking during World War II that produced the first nuclear weapons. It was led by the United States with the support of the United Kingdom and Canada. It had the advantage of being able to use the expertise of physicists who were forced to leave Nazi Germany.
After the war came the Cold War, so research into nuclear weapons carried on – but so did research into particle physics.
In the 1950s, with the development of particle accelerators and studies of cosmic rays, inelastic scattering experiments on protons (and other atomic nuclei) at energies of hundreds of MeV became feasible. They created some short-lived resonance “particles”, but also hyperons and K-mesons with unusually long lifetimes. The cause of the latter was found in a new quasi-conserved quantity, named strangeness, which is conserved in all circumstances except for the weak interaction. The strangeness of heavy particles and the μ-lepton were the first two signs of what is now known as the second generation of fundamental particles.
The weak interaction soon revealed yet another mystery. In 1957 it was found that it does not conserve parity. In other words, mirror symmetry was disproved as a fundamental symmetry law.
Throughout the 1950s and 1960s, improvements in particle accelerators and particle detectors led to a bewildering variety of particles found in high-energy experiments. The term elementary particle came to refer to dozens of particles, most of them unstable. It prompted Wolfgang Pauli’s remark: “Had I foreseen this, I would have gone into botany”. The entire collection was nicknamed the “particle zoo”. It became evident that mesons and baryons, which accounted for most of the then-known particles, must be formed from smaller constituents, as yet unseen.
It should be noted here that Pauli also came up with the idea of neutrinos in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin), although it took until well after the Second World War for the neutrino to be detected.
In particle physics, mesons are hadronic subatomic particles composed of one quark and one antiquark, bound together by strong interactions.
In particle physics, a baryon is a type of composite subatomic particle which contains an odd number of valence quarks (at least 3). Baryons belong to the hadron family of particles, which are the quark-based particles.
In particle physics, a hadron is a composite particle made of two or more quarks held together by the strong force in a similar way as molecules are held together by the electromagnetic force. Most of the mass of ordinary matter comes from two hadrons, the proton and the neutron. A proton is made up of two up quarks and one down quark, and a neutron is made up of two down quarks and one up quark.
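The quark assignments can be checked with simple charge arithmetic (up quarks carry +2/3 e, down quarks −1/3 e), a quick sanity check of my own rather than anything from the lecture:

```python
# Check that the quoted quark content reproduces the proton and
# neutron electric charges (in units of the elementary charge e).
UP, DOWN = 2 / 3, -1 / 3  # quark charges in units of e

proton = 2 * UP + DOWN   # uud: 2/3 + 2/3 - 1/3 = +1
neutron = UP + 2 * DOWN  # udd: 2/3 - 1/3 - 1/3 = 0
print(proton, neutron)
```

The same bookkeeping, extended with strangeness, is what let Gell-Mann’s classification scheme (described next) predict particles before they were seen.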
The interaction of these particles by scattering and decay provided a key to new fundamental quantum theories. Murray Gell-Mann and Yuval Ne’eman brought some order to mesons and baryons, the most numerous classes of particles, by classifying them according to certain qualities. It began with what Gell-Mann referred to as the “Eightfold Way”, but proceeded into several different “octets” and “decuplets” which could predict new particles.
Gell-Mann’s work in the 1950s involved recently discovered cosmic ray particles that came to be called kaons and hyperons. Classifying these particles led him to propose that a quantum number called strangeness would be conserved by the strong and the electromagnetic interactions, but not by the weak interactions.
He introduced a novel classification scheme, in 1961, for hadrons, particles that participate in the strong interaction, and in 1964 he went on to postulate the existence of quarks, particles of which the hadrons of this scheme are composed. The name was coined by Gell-Mann and is a reference to the novel Finnegans Wake, by James Joyce (“Three quarks for Muster Mark!” book 2, episode 4). Quarks, antiquarks, and gluons were soon established as the underlying elementary objects in the study of the structure of hadrons. He was awarded a Nobel Prize in Physics in 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions.
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. Due to a phenomenon known as colour confinement, quarks are never directly observed or found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons. For this reason, much of what is known about quarks has been drawn from observations of hadrons.
In the 1970s fundamental and exchange particles were established as was the fundamental strong interaction, experienced by quarks and mediated by gluons. These particles were proposed as a building material for hadrons. This theory (Quantum chromodynamics) is unusual because individual (free) quarks cannot be observed, unlike the situation with composite atoms where electrons and nuclei can be isolated by transferring ionization energy to the atom.
Then, the old, broad denotation of the term elementary particle was deprecated and a replacement term subatomic particle covered all the “zoo”, with its hyponym “hadron” referring to composite particles directly explained by the quark model. The designation of an “elementary” (or “fundamental”) particle was reserved for leptons, quarks, their antiparticles, and quanta of fundamental interactions only.
So by the 1970s, we knew that there were fundamental particles called quarks and leptons. Some of the quarks and leptons were still only theoretical, as there wasn’t enough energy to produce them. Neutral-current interactions – mediated by Z boson exchange – were discovered at CERN in 1973.
The W± and Z0 bosons were discovered experimentally in 1983, and the ratio of their masses was found to be as the Standard Model predicted.
Before 2012 the Standard Model consisted of:
Six “flavors” of quarks: up (1969), down (1969), strange (1969), charm (1974), bottom (1977), and top (1995);
Six types of leptons: electron (1897), electron neutrino (1956), muon (1936), muon neutrino (1962), tau (1975), tau neutrino (2000);
Twelve gauge bosons (force carriers): the photon of electromagnetism (1801, 1895, 1900), the three W and Z bosons of the weak force (1983), and the eight gluons of the strong force (1979).
The Standard Model (SM) predicted the existence of the W and Z bosons, gluon, and the top and charm quarks and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
HERA (German: Hadron-Elektron-Ringanlage, English: Hadron-Electron Ring Accelerator) was a particle accelerator at DESY in Hamburg. It began operating in 1992. At HERA, electrons or positrons were collided with protons at a centre of mass energy of 318 GeV. It was the only lepton-proton collider in the world while operating. Also, it was on the energy frontier in certain regions of the kinematic range. HERA was closed down on 30 June 2007.
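The quoted 318 GeV can be recovered from the beam energies. For head-on beams with energies far above the particle masses, √s ≈ √(4·E₁·E₂); the 27.5 GeV lepton and 920 GeV proton beam energies below are HERA’s later running values, assumed here for illustration:

```python
import math

# Centre-of-mass energy of an asymmetric head-on collider:
# sqrt(s) ~ sqrt(4 * E1 * E2) when beam energies >> particle masses.
E_lepton = 27.5  # GeV, electron/positron beam (assumed HERA value)
E_proton = 920.0  # GeV, proton beam (assumed HERA value)

sqrt_s = math.sqrt(4 * E_lepton * E_proton)
print(f"sqrt(s) = {sqrt_s:.0f} GeV")  # ~318 GeV
```

The same formula shows why colliding two beams is so much more effective than firing one beam at a stationary target, where √s grows only as the square root of the beam energy.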
The scattering of electrons and protons at HERA. The four-momenta of the particles are indicated in the parentheses. The exchanged gauge boson is a photon (γ) or Z0 boson in NC (neutral current) interactions and a W± boson in CC (charged current) interactions.
The Large Electron–Positron Collider (LEP) was one of the largest particle accelerators ever constructed.
It was built at CERN, a multi-national centre for research in nuclear and particle physics near Geneva, Switzerland. LEP collided electrons with positrons at energies that reached 209 GeV. It was a circular collider with a circumference of 27 kilometres built in a tunnel roughly 100 m underground and passing through Switzerland and France. LEP was used from 1989 until 2000. Around 2001 it was dismantled to make way for the LHC, which re-used the LEP tunnel. To date, LEP is the most powerful accelerator of leptons ever built.
Both direct and indirect methods were used at LEP and produced valuable measurements of the number of neutrino families. They showed definitively that there are only three light neutrinos.
The results of the LEP experiments allowed precise values of many quantities of the Standard Model — most importantly the mass of the Z boson and the W boson (which were discovered in 1983 at an earlier CERN collider) to be obtained — and so confirm the Model and put it on a solid basis of empirical data. Precision measurements of the shape of the Z boson mass peak constrained the number of light neutrinos in the standard model to exactly three.
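The neutrino-counting logic is simple division: every light neutrino species widens the Z resonance by a fixed, calculable amount, so the measured “invisible” width divided by the Standard Model width per neutrino gives the number of species. A sketch with PDG-style numbers (assumed here, not quoted in the lecture):

```python
# LEP neutrino counting: N_nu = (invisible Z width) / (width per neutrino).
gamma_invisible = 499.0  # MeV, measured invisible width of the Z (assumed value)
gamma_per_nu = 167.2     # MeV, Standard Model width per neutrino pair (assumed)

n_nu = gamma_invisible / gamma_per_nu
print(f"N_nu = {n_nu:.2f}")  # very close to 3
```

An answer consistent with exactly three is what closed the door on further light, weakly-interacting neutrino generations.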
Three generations of matter were observed, and many other precision results were produced.
The Higgs boson breaks electroweak symmetry spontaneously.
Electricity and magnetism are two manifestations of the same fundamental force. This is seen in Maxwell’s equations. Electroweak symmetry is, in a sense, the next step in this progression, by which the electromagnetic force is unified with the weak force. This unification into an ‘electroweak’ theory and the theory’s subsequent ‘breaking’ into separate electromagnetic and weak forces led to the 1979 Nobel Prize in Physics.
In everyday phenomena, we observe electricity and magnetism as distinct phenomena. The same thing happens for electromagnetism and the weak force: instead of seeing three massless Ws and a massless B, we see two massive charged weak bosons (W+ and W–), a massive neutral weak boson (Z) and a massless photon. We say that electroweak symmetry is broken down to electromagnetism.
Now that masses have come up it would seem that the Higgs has something to do with this. Now there are, in fact, four Higgs bosons: three of which are “eaten” by the weak gauge bosons to allow them to become massive. It turns out that this “eating” does more than that: it combines the ‘unified’ electroweak bosons into their ‘not-unified’ combinations!
The first two are easy; the W1 and W2 combine into the W+ and W- by “eating” the charged Higgs bosons. (Technically we should now call them “Goldstone” bosons.)
A similar story goes through for the W3, B, and H0 (this is not the same as the Higgs boson, which we write with a lowercase h). The W3 and B combine and eat the neutral Higgs/Goldstone to form the massive Z boson. Meanwhile, the photon is the leftover combination of the W3 and B. There are no more Higgses to eat, so the photon remains massless.
Electroweak symmetry breaking explains how the massless W and B bosons combine with the Higgses to form the usual W+, W–, Z (which have mass), and photon (which doesn’t have mass).
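The statement that the photon stays massless while the Z becomes heavy can be made concrete: after symmetry breaking, the neutral bosons (W3, B) share a 2×2 mass-squared matrix, and diagonalizing it produces one exactly massless eigenstate. A small sketch, with roughly SM-like couplings and Higgs vacuum expectation value chosen for illustration:

```python
import numpy as np

# After symmetry breaking, the neutral gauge bosons (W3, B) acquire the
# mass-squared matrix M2 = (v^2/4) * [[g^2, -g*g'], [-g*g', g'^2]].
# Its eigenvalues are 0 (the photon) and (g^2+g'^2)*v^2/4 (the Z).
g, gp, v = 0.65, 0.36, 246.0   # SU(2) coupling, U(1) coupling, Higgs vev (GeV)

M2 = (v**2 / 4) * np.array([[g**2, -g * gp],
                            [-g * gp, gp**2]])

eigs = np.clip(np.linalg.eigvalsh(M2), 0.0, None)  # clip tiny negative round-off
masses = np.sqrt(eigs)                             # sorted ascending
print(f"photon mass: {masses[0]:.2f} GeV, Z mass: {masses[1]:.2f} GeV")
```

The massless eigenstate is the combination of W3 and B that survives as the photon; the orthogonal, massive combination is the Z.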
In 2012 it was announced that the Higgs boson had been found.
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (the electromagnetic, weak, and strong interactions, and not including the gravitational force) in the universe, as well as classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists around the world, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, confirmation of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
The Higgs boson is an elementary particle in the Standard Model of particle physics, produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. It is named after physicist Peter Higgs, who in 1964, along with five other scientists, proposed the mechanism to explain why particles have mass, this Higgs mechanism also implies the existence of a new boson. Its existence was confirmed in 2012 by the ATLAS and CMS collaborations based on collisions in the LHC at CERN.
Elementary particles included in the Standard Model.
All the particles mentioned so far needed to be detected.
In experimental and applied particle physics, nuclear physics, and nuclear engineering, a particle detector, also known as a radiation detector, is a device used to detect, track, and/or identify ionizing particles, such as those produced by nuclear decay, cosmic radiation, or reactions in a particle accelerator. Detectors can measure the particle energy and other attributes such as momentum, spin, charge, particle type, in addition to merely registering the presence of the particle.
A cloud chamber, also known as a Wilson cloud chamber, is a particle detector used for visualizing the passage of ionizing radiation.
Charles Thomson Rees Wilson (1869–1959), a Scottish physicist, is credited with inventing the cloud chamber. Inspired by sightings of the Brocken spectre while working on the summit of Ben Nevis in 1894, he began to develop expansion chambers for studying cloud formation and optical phenomena in moist air. Very rapidly he discovered that ions could act as centres for water droplet formation in such chambers. He pursued the application of this discovery and perfected the first cloud chamber in 1911.
A cloud chamber consists of a sealed environment containing a supersaturated vapour of water or alcohol. An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture by knocking electrons off gas molecules via electrostatic forces during collisions, leaving a trail of ionized gas particles. The resulting ions act as condensation centres around which a mist-like trail of small droplets forms if the gas mixture is at the point of condensation. These droplets are visible as a “cloud” track that persists for several seconds while the droplets fall through the vapour. These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while an electron track is wispy and shows more evidence of deflections by collisions.
Cloud chambers played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber. In particular, the discoveries of the positron in 1932 and the muon in 1936, both by Carl Anderson (awarded a Nobel Prize in Physics in 1936), used cloud chambers. Discovery of the kaon by George Rochester and Clifford Charles Butler in 1947 also was made using a cloud chamber as the detector. In each case, cosmic rays were the source of ionizing radiation.
In particle physics, a kaon also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model, they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark).
In particle physics, strangeness (S) is a property of particles, expressed as a quantum number, for describing the decay of particles in strong and electromagnetic interactions which occur in a short period of time.
Cloud chamber photograph of the first positron ever observed by C. Anderson.
A bubble chamber is a vessel filled with a superheated transparent liquid (most often liquid hydrogen) used to detect electrically charged particles moving through it. It was invented in 1952 by Donald A. Glaser, for which he was awarded the 1960 Nobel Prize in Physics. Supposedly, Glaser was inspired by the bubbles in a glass of beer; however, in a 2006 talk, he refuted this story, although saying that while beer was not the inspiration for the bubble chamber, he did experiments using beer to fill early prototypes.
While bubble chambers were extensively used in the past, they have now mostly been supplanted by wire chambers and spark chambers. Notable bubble chambers include the Big European Bubble Chamber (BEBC) and Gargamelle.
The bubble chamber is similar to a cloud chamber, both in the application and in the basic principle. It is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle’s energy loss.
Bubbles grow in size as the chamber expands until they are large enough to be seen or photographed. Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured. Bubble chambers with resolutions down to a few micrometres (μm) have been operated.
The entire chamber is subject to a constant magnetic field, which causes charged particles to travel in helical paths whose radius is determined by their charge-to-mass ratios and their velocities. Since the magnitude of the charge of all known charged, long-lived subatomic particles is the same as that of an electron, their radius of curvature must be proportional to their momentum. Thus, by measuring their radius of curvature, their momentum can be determined.
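The proportionality between curvature and momentum reduces to a simple rule of thumb: for a singly charged particle, the momentum in GeV/c is about 0.3 times the field in tesla times the radius in metres. A minimal sketch (the track values are made up for illustration):

```python
# Momentum of a unit-charge particle from its radius of curvature in a
# magnetic field: p [GeV/c] ~= 0.3 * B [T] * r [m].
def momentum_gev(radius_m: float, b_field_t: float) -> float:
    """Transverse momentum (GeV/c) of a singly charged track."""
    return 0.3 * b_field_t * radius_m

# A track curving with r = 1.67 m in a 2 T field:
print(f"p = {momentum_gev(1.67, 2.0):.2f} GeV/c")   # ~1 GeV/c
```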
Notable discoveries made by bubble chambers include the discovery of weak neutral currents at Gargamelle in 1973, which established the soundness of the electroweak theory and led to the discovery of the W and Z bosons in 1983 (at the UA1 and UA2 experiments). More recently, bubble chambers have been used in research on weakly interacting massive particles (WIMPs), at SIMPLE, COUPP, PICASSO and, most recently, PICO.
Other detectors for particle and nuclear physics
In particle physics, a hermetic detector (also called a 4π detector) is a particle detector designed to observe all possible decay products of an interaction between subatomic particles in a collider, by covering as large an area around the interaction point as possible and by incorporating multiple types of sub-detectors. They are typically roughly cylindrical, with different types of detectors wrapped around each other in concentric layers; each detector type specializes in particular particles, so that almost any particle will be detected and identified. Such detectors are called “hermetic” because they are constructed so that as few particles as possible escape undetected; the name “4π detector” comes from the fact that they are designed to cover nearly all of the 4π steradians of solid angle around the interaction point (in the standard coordinate system used in collider physics, this coverage is usually expressed in terms of pseudorapidity and azimuthal angle).
The first such detector was the Mark I at the Stanford Linear Accelerator Center, and the basic design has been used for all subsequent collider detectors. Prior to the building of the Mark I, it was thought that most particle decay products would have relatively low transverse momentum (i.e. momentum perpendicular to the beamline) so that detectors could cover this area only. However, it was learned at the Mark I and subsequent experiments that most fundamental particle interactions at colliders involve very large exchanges of energy and therefore large transverse momenta are not uncommon; for this reason, large angular coverage is critical for modern particle physics.
More recent hermetic detectors include the CDF and DØ detectors at Fermilab’s Tevatron accelerator, as well as the ATLAS and CMS detectors at CERN’s LHC. These machines have a hermetic construction because they are general-purpose detectors, meaning that they are able to study a wide range of phenomena in high-energy physics. More specialised detectors do not necessarily have a hermetic construction; for example, LHCb covers only the forward (high-pseudorapidity) region, because this corresponds to the phase space region of greatest interest to its physics program.
There are three main components of a hermetic detector. From the inside out, the first is a tracker, which measures the momentum of charged particles as they curve in a magnetic field. Next are one or more calorimeters, which measure the energy of most charged and neutral particles by absorbing them in dense material, and then a muon system, which measures muons, the one type of detectable particle that is not stopped by the calorimeters. Each component may have several different specialized sub-components.
A schematic of the basic components of a hermetic detector; I.P. refers to the region containing the interaction point for the colliding particles. This is a cross-section of the typical cylindrical design.
Detectors designed for modern accelerators are huge, both in size and in cost. The term counter is often used instead of detector when the device counts particles but does not resolve their energy or ionization. Particle detectors can also usually track ionizing radiation (high-energy photons or even visible light). If their main purpose is radiation measurement, they are called radiation detectors, but as photons are also (massless) particles, the term particle detector is still correct.
A list of detectors at various particle physics experiments. Some are no longer active or have been superseded by other experiments. The LHC has absorbed LEP for instance.
Discoveries since 2012
Muon g−2 is a particle physics experiment at Fermilab to measure the anomalous magnetic dipole moment of a muon to a precision of 0.14 ppm, which will be a sensitive test of the Standard Model. It might also provide evidence of the existence of entirely new particles.
The muon, like its lighter sibling the electron, acts like a spinning magnet. The parameter known as the “g-factor” indicates how strong the magnet is and the rate of its gyration. The value of g is slightly larger than 2, hence the name of the experiment. This difference from 2 (the “anomalous” part) is caused by higher-order contributions from quantum field theory. In measuring g−2 with high precision and comparing its value to the theoretical prediction, physicists will discover whether the experiment agrees with theory. Any deviation would point to as yet undiscovered subatomic particles that exist in nature.
The first muon g−2 experiments were born at CERN in 1959 under the initiative of Leon Lederman. A group of six physicists formed the first experiment, using the Synchrocyclotron at CERN. The first results were published in 1961, with a 2% precision with respect to the theoretical value, and then the second ones with this time a 0.4% precision, hence validating the quantum electrodynamics theory.
Fermilab is continuing an experiment conducted at Brookhaven National Laboratory to measure the anomalous magnetic dipole moment of the muon. The Brookhaven experiment ended in 2001, but ten years later Fermilab acquired the equipment and is working to make a more accurate measurement (smaller σ) which will either eliminate the discrepancy or confirm it as an experimentally observable example of physics beyond the Standard Model. Data taking will run until 2020.
Pictorial representation (Feynman graph) of the fundamental process responsible for the leading order hadronic contribution to the anomalous magnetic moment of the muon. Copyright: CPT, University of Marseille
An extremely precise measurement of the anomalous magnetic moment of the muon. This quantity can be predicted with great precision by the Standard Model, but if the experiment measures a different value that will indicate that new physics (such as supersymmetry) is responsible for the difference.
The g-factor dictates the relationship between a particle’s spin and its magnetic moment, and tells us something fundamental about the particle itself (and about the particles interacting with it)
Elementary Dirac particles such as muons should give g = 2 at lowest order
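The leading correction to g = 2 is the famous Schwinger term from quantum electrodynamics: a = (g−2)/2 = α/2π. A quick sketch of the arithmetic:

```python
import math

# Leading QED (Schwinger) correction to g = 2: a = (g-2)/2 = alpha/(2*pi).
alpha = 1 / 137.035999    # fine-structure constant
a_schwinger = alpha / (2 * math.pi)
g = 2 * (1 + a_schwinger)

print(f"a = {a_schwinger:.6f}")   # ~0.001161
print(f"g = {g:.5f}")             # ~2.00232
```

Higher-order QED, hadronic and electroweak loops shift this value further, and it is those small contributions that the g−2 experiments probe.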
Quantum chromodynamics binding energy (QCD binding energy), gluon binding energy or chromodynamic binding energy is the energy binding quarks together into hadrons. It is the energy of the field of the strong force, which is mediated by gluons. Motion-energy and interaction-energy contribute most of the hadron’s mass.
Hadron spectroscopy is the subfield of particle physics that studies the masses and decays of hadrons. Hadron spectroscopy is also an important part of the new nuclear physics. The properties of hadrons are a consequence of a theory called quantum chromodynamics (QCD).
QCD predicts that quarks and antiquarks bind into particles called mesons. Another type of hadron is called a baryon, that is made of three quarks. There is good experimental evidence for both mesons and baryons. Potentially QCD also has bound states of just gluons called glueballs. One of the goals of the field of hadronic spectroscopy is to find experimental evidence for exotic mesons, tetraquarks, molecules of hadrons, and glueballs.
An important part of the field of hadronic spectroscopy are the attempts to solve QCD. The properties of hadrons require the solution of QCD in the strong coupling regime, where perturbative techniques based on Feynman diagrams do not work. There are several approaches to trying to solve QCD to compute the masses of hadrons:
Strongly coupled theory leads to confinement
The HPQCD Collaboration is an international collaboration focussed on achieving high-precision results for a wide range of lattice quantum chromodynamics (QCD) calculations, with an emphasis on systems containing heavy quarks. The goal of the collaboration is to reduce statistical and systematic errors on hadron masses and matrix elements to less than a few percent. These calculations are of vital importance to the determination of the fundamental parameters of the Standard Model (coupling constant, quark masses and CKM matrix elements). The collaboration currently involves groups of senior lattice theorists, postdocs, and graduate students, from the UK (Cambridge, Glasgow, Plymouth), the US (Cornell, Ohio State), Italy (Rome), Japan (KEK) and Spain (Zaragoza).
Quarks are the most fundamental constituents of matter discovered so far. Their interaction via the strong force is key to their behaviour because this force prevents them appearing as free particles and means that we must study them through experimental observation of their bound states known as hadrons. Examples of hadrons are the protons and neutrons that make up the atomic nucleus. The collision at high energy of particles (such as protons) then does not liberate the proton’s quark constituents but instead converts the energy of the collision into showers of further hadrons. The hadrons can be tracked in particle detectors and properties, such as their mass, determined. These properties reflect those of their quark constituents and their behaviour under strong force interactions and so can be predicted if we can solve the theory.
Quantum Chromodynamics (QCD) is the theory that describes the strong force, through the interaction of quarks carrying colour charge via force carriers called gluons. Gluon self-interactions cause an anti-screening of quark colour charge so that, for example, the effective strength of the quark-antiquark interaction (denoted by the strong coupling constant, αs) increases with their separation. This leads to the phenomenon of quark confinement. The strongly-coupled non-linear behaviour of QCD (necessary to produce confinement) means that it has to be solved numerically to determine hadron properties. This is recognised as one of the “Grand Challenge” problems of computational physics.
Lattice QCD provides a framework for the numerical calculation in which the theory is discretised onto a lattice of space-time points. Very fast supercomputers are required to perform the calculations.
The plot below shows the masses of “gold-plated” mesons, comparing HPQCD results to experiment (an update of an earlier published figure). Gold-plated mesons are those that are narrow with no strong two-body decay mode and so are well-characterised experimentally. Note that some of the results were predictions made ahead of experimental confirmation, and some are predictions awaiting experimental results.
A pentaquark is a subatomic particle consisting of four quarks and one antiquark bound together.
A five-quark “bag” (above left); A “meson-baryon molecule” (above right)
q indicates a quark, whereas qbar indicates an antiquark. The wavy lines are gluons, which mediate the strong interaction between the quarks. The colours correspond to the various colour charges of quarks. The colours red, green, and blue must each be present and the remaining quark and antiquark must share corresponding colour and anticolour, here chosen to be blue and antiblue (shown as yellow).
The name pentaquark was coined by Claude Gignoux et al. and Harry J. Lipkin in 1987; however, the possibility of five-quark particles was identified as early as 1964 when Murray Gell-Mann first postulated the existence of quarks.
The first claim of pentaquark discovery was recorded at LEPS in Japan in 2003, and several experiments in the mid-2000s also reported discoveries of other pentaquark states. Others were not able to replicate the LEPS results, however, and the other pentaquark discoveries were not accepted because of poor data and statistical analysis. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons (Λ0b). On 26 March 2019, the LHCb collaboration announced the discovery of a new pentaquark that had not been previously observed. The observations pass the 5-sigma threshold required to claim the discovery of new particles.
Other quark bound states are available
In theoretical physics, quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons, the fundamental particles that make up composite hadrons such as the proton, neutron and pion. The QCD analogue of electric charge is a property called colour. Gluons are the force carrier of the theory like photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A large body of experimental evidence for QCD has been gathered over the years.
Hard processes are those in which the momentum transfer, Q, is substantial with respect to the QCD scale.
Jets are produced in QCD hard-scattering processes that create high-transverse-momentum quarks or gluons (collectively called partons in the partonic picture).
Hit it hard to see what is inside
Hit the proton (with an electromagnetic/electroweak probe)
e+e− annihilation into hadrons: e+e− → qqbar → hadrons.
Deep Inelastic lepton-hadron Scattering (DIS) : e− p → e− + X.
Make two hadrons hit each other hard
Hadron–hadron collisions: production of massive “sterile” objects:
➥ lepton pairs (µ+µ−, the Drell–Yan process),
➥ electroweak vector bosons (Z0, W±),
➥ Higgs boson(s),
and of hadrons/photons with large transverse momenta with respect to the collision axis.
Momentum transfer = measure of “hardness”
Hard QCD Physics in ATLAS https://cds.cern.ch/record/1456523/files/ATL-PHYS-SLIDE-2012-398.pdf
testing the Standard Model at the shortest distance scales
Asymptotic freedom http://www.hep.wisc.edu/~sheaff/PASI2012/lectures/melnikov.pdf
Two related ideas justify the quark-parton model as a reasonable starting point for developing a systematic description of deep inelastic scattering. One of them is asymptotic freedom, which allows us to argue that a reasonable perturbative expansion is possible for short-distance observables, since the coupling constant is small at high energies.
In other words: at high energies the strong force is weak at short distances, becoming stronger with increasing distance
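Asymptotic freedom can be read straight off the one-loop running of the strong coupling: αs shrinks logarithmically as the momentum transfer Q grows. A sketch, using roughly the world-average value at the Z mass as the reference point and assuming five active quark flavours:

```python
import math

# One-loop running of the strong coupling alpha_s(Q): it decreases as the
# momentum transfer Q grows (asymptotic freedom). Reference value alpha_mz
# is roughly the world average; nf = 5 active flavours assumed.
def alpha_s(q_gev: float, alpha_mz: float = 0.118, mz: float = 91.19,
            nf: int = 5) -> float:
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + b0 * alpha_mz * math.log(q_gev**2 / mz**2))

for q in (10, 91.19, 1000):
    print(f"alpha_s({q} GeV) = {alpha_s(q):.4f}")
```

Running the loop shows the coupling falling from roughly 0.17 at 10 GeV to below 0.09 at 1 TeV, which is why perturbation theory works for hard processes while confinement takes over at large distances.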
When quarks and gluons make a break for it you get jets
To the TeV scale and beyond (or into the unknown)
Top Quark Production Cross Section Measurements https://reader.elsevier.com/reader/sd/pii/S2405601418300191?token=0C94E54B4B44C67C314AC66507688A08C7D41D3A490BDF542E4F98DA1BCF7360A21C2C5A3ADCB177B9A8FF0B3E64972C
The top quark – the heaviest known fundamental particle – plays a unique role in high-energy physics. Studies of its properties have opened new opportunities for furthering our knowledge of the Standard Model. In a new paper submitted to Physical Review D, the ATLAS collaboration presents a comprehensive measurement of high-momentum top-quark pair production at 13 TeV.
In physics, symmetry exists when a transformation can be applied without changing the laws of physics.
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted (“mirror” or P symmetry).
There are six quark-flavours: the so-called up-type (u, c, t) and down-type (d, s, b).
Transitions take place between the up-type and down-type quarks, with relative strengths which may be represented in a matrix of the form

      ( V_ud  V_us  V_ub )
V =   ( V_cd  V_cs  V_cb )
      ( V_td  V_ts  V_tb )
Within the Standard Model, such quark-flavor changing transitions are accomplished by the weak interactions, and the quark mixing matrix above is unitary and referred to as the CKM matrix.
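Unitarity of the CKM matrix can be checked numerically. A small sketch using the Wolfenstein parameterization truncated at order λ³ (the parameter values below are approximate fitted ones, for illustration); unitarity then holds only up to O(λ⁴) corrections, which the check makes visible:

```python
import numpy as np

# Wolfenstein parameterization of the CKM matrix to O(lambda^3).
# Parameter values are approximate, for illustration only.
lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35

V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1],
])

# V should satisfy V^dagger V = identity; truncation leaves O(lambda^4) residue.
deviation = np.abs(V.conj().T @ V - np.eye(3)).max()
print(f"max deviation from unitarity: {deviation:.4f}")   # a few times 1e-3
```

The imaginary part proportional to η is the single CP-violating phase of the three-generation Standard Model.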
It seems very likely that Nature is indeed a world of six quark flavours. From the theoretical point of view, it is not just a straightforward extension of the four quark world, because a new and important phenomenon arises: CP violation.
The spectrum of coloured fermions consists of 36 degrees of freedom divided into nine multiplets of three different kinds, namely a three-generation world.
For years it was assumed that elementary processes involving the electromagnetic, strong and weak forces exhibited symmetry with respect to both charge conjugation and parity, namely that these two properties were always conserved in particle interactions. The same was held true for a third operation, time reversal (T), which corresponds to a reversal of motion. Invariance under time reversal implies that whenever a motion is allowed by the laws of physics, the reversed motion is also allowed.
Experiments demonstrated conclusively that parity was not conserved in particle decays that occur via the weak force. These experiments also revealed that charge conjugation symmetry was broken during these decay processes as well.
The discovery that the weak force conserves neither charge conjugation nor parity separately, however, led to a quantitative theory establishing combined CP as a symmetry of nature. Physicists reasoned that if CP were invariant, time reversal T would have to remain so as well.
However, it was demonstrated that the electrically neutral K-meson—which normally decays via the weak force to give three pi-mesons—decayed a fraction of the time into only two such particles and thereby violated CP symmetry.
In detail, two of the decay paths of the K-long kaon are particularly striking. The products of the two decays are entirely CP-symmetric versions of one another. For example, one involved an electron antineutrino and the other an electron neutrino, and all the other corresponding particles of the two decays were likewise CP-conjugates of one another.
Despite the products of the two decays being CP-symmetric, they weren’t produced at equal rates. If CP symmetry were absolutely true, then K-long kaons would decay via those two paths in equal amounts. However, in experiments, the K-long was more likely to decay into the path involving the electron neutrino by a very small fraction. It is this fractional difference that demonstrates clear CP violation and so provides the undeniable proof that physics does distinguish between matter and antimatter.
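That fractional difference is the semileptonic charge asymmetry of the K-long, usually written δ_L. A minimal sketch of the arithmetic (the rates below are arbitrary units, chosen to reproduce the measured asymmetry of about 3.3 × 10⁻³; they are not experimental data):

```python
# Semileptonic charge asymmetry of the K_L: the tiny preference for the
# decay path producing an electron neutrino over the CP-conjugate path
# producing an electron antineutrino. Rates are illustrative only.
rate_nu = 1.0033      # K_L -> pi- e+ nu
rate_nubar = 0.9967   # K_L -> pi+ e- nubar

delta_L = (rate_nu - rate_nubar) / (rate_nu + rate_nubar)
print(f"delta_L = {delta_L:.2e}")   # ~3.3e-3
```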
The theoretical description of subatomic particles and forces known as the Standard Model contains an explanation of CP violation, but, as the effects of the phenomenon are small, it has proved difficult to show conclusively that this explanation is correct. The root of the effect lies in the weak force between quarks, the particles that make up K-mesons. The weak force appears to act not upon a pure quark state, as identified by the “flavour” or type of quark, but on a quantum mixture of two types of quarks.
It was realized that with six types of quarks, quantum mixing would allow very rare decays that would violate CP symmetry.
CP violation is expected to be more prominent in the decay of the particles known as B-mesons, which contain a bottom quark instead of the strange quark of the K-mesons. Experiments at facilities that can produce large numbers of the B-mesons (which are heavier than the K-mesons) are continuing to test these ideas. In 2010, scientists at the Fermi National Accelerator Laboratory in Batavia, Ill., finally detected a slight preference for B-mesons to decay into muons rather than anti-muons.
The High-Luminosity LHC (HL-LHC) is a major upgrade of the Large Hadron Collider (LHC). The LHC collides tiny particles of matter (protons) at an energy of 13 TeV in order to study the fundamental components of matter and the forces that bind them together. The High-Luminosity LHC will make it possible to study these in more detail by increasing the number of collisions by a factor of between five and seven.
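The gain from higher luminosity is simply more events: the expected count is the cross-section times the integrated luminosity. A rough sketch for Higgs production (the cross-section and luminosity figures are ballpark values for illustration, not official projections):

```python
# Expected event counts scale as N = sigma * integrated luminosity.
# Illustrative: total Higgs production at ~13-14 TeV is sigma ~ 55 pb;
# the LHC has delivered ~150 fb^-1, while the HL-LHC targets ~3000 fb^-1.
sigma_higgs_pb = 55.0
pb_per_fb = 1000.0   # 1 fb^-1 yields 1000 events per pb of cross-section

def n_events(sigma_pb: float, lumi_fb: float) -> float:
    return sigma_pb * lumi_fb * pb_per_fb

print(f"LHC so far:  {n_events(sigma_higgs_pb, 150):.2e} Higgs bosons produced")
print(f"HL-LHC goal: {n_events(sigma_higgs_pb, 3000):.2e} Higgs bosons produced")
```

An order of magnitude more Higgs bosons is what turns rare decay channels and precision coupling measurements into realistic targets.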
The Belle II experiment is a particle physics experiment designed to study the properties of B mesons (heavy particles containing a bottom quark). Belle II is the successor to the Belle experiment and is currently being commissioned at the SuperKEKB accelerator complex at KEK in Tsukuba, Ibaraki Prefecture, Japan. The Belle II detector was “rolled in” (moved into the collision point of SuperKEKB) in April 2017. Belle II started taking data in early 2018. Over its running period, Belle II is expected to collect around 50 times more data than its predecessor due mostly to a factor 40 increase in instantaneous luminosity provided by SuperKEKB over the original KEKB accelerator.
The LHCb experiment will undergo a metamorphosis over the coming two years, during CERN’s maintenance and upgrade period known as Long Shutdown 2 (LS2). When the Large Hadron Collider (LHC) restarts in 2021, the proton-proton collision rate at LHCb will be increased by a factor of five, and the collaboration is upgrading its detector to be ready for it.
The LHCb experiment is trying to solve the mystery of why nature prefers matter over antimatter: small asymmetries between the two could explain why matter emerged from the aftermath of the Big Bang while antimatter did not. In particular, LHCb is hunting for beauty or bottom (b) quarks, which were common at the infancy of the Universe and can be generated in their billions by the LHC, along with their antimatter counterparts, beauty antiquarks.
Since 2013, the LHCb collaboration has reported on the measurement of several observables associated with b → s transitions, finding various deviations from their predicted values in the Standard Model. These anomalies may be linked to dark matter.
It is an exciting moment in flavour physics, with several interesting anomalies in B-meson decays. Whether real or not, only time can tell. New LHCb analyses based on larger datasets are expected to appear in the near future, possibly shedding new light on these anomalies. In the longer term, fundamental contributions from the Belle II experiment will also be crucial to settle the issue. The possible connection to one of the central problems in current physics, the nature of the dark matter of the Universe, would definitely be a fascinating outcome of this endeavour.
The LHCb collaboration has presented several new measurements concerning particles containing beauty or charm quarks. Certain properties of these particles can be affected by the existence of new particles beyond the Standard Model. This allows LHCb to search for signs of new physics via a complementary, indirect route. One much-anticipated result, shown for the first time at the conference, is a measurement using data taken from 2011 to 2016 of the ratio of two related rare decays of a B+ particle. These decays are predicted in the Standard Model to occur at the same rate to within 1%; the data collected are consistent with this prediction but favour a lower value. This follows a pattern of intriguing hints in other, similar decay processes; while none of these results are significant enough to constitute evidence of new physics on their own, they have captured the interest of physicists and will be investigated further with the full LHCb data set. LHCb also presented the first observation of matter-antimatter asymmetry known as CP violation in charm particle decays.
This behaviour could be explained by introducing a new particle or interaction, perhaps a new Majorana mass term.
A Majorana fermion, also referred to as a Majorana particle, is a fermion that is its own antiparticle.
The neutrino is the only Standard Model fermion that could be a Majorana particle, since it is the only one with no electric charge.
The existence of neutrino mass implies physics beyond the Standard Model, either from a right-handed state needed for the standard mass mechanism, or a Higgs triplet, or a new mass mechanism.
What will we discover next? Plus talks to Professor Ben Allanach https://www.youtube.com/watch?v=EwsgUkxMBy0
A black hole is a region of spacetime exhibiting gravitational acceleration so strong that nothing—no particles or even electromagnetic radiation such as light—can escape from it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon.
The supermassive black hole at the core of supergiant elliptical galaxy Messier 87
A torrent of energetic particles appears to be spewing from the centre of our Milky Way Galaxy, coming from the gigantic black hole that lies at its heart, according to a new study.
Some theories that go beyond the Standard Model of particle physics predict the existence of new ultralight particles, with masses far below the lightest known particles in nature. These particles have such very weak interactions with ordinary matter that they are hard to detect via particle colliders and dark matter detectors. However, according to a new paper by physicists Daniel Baumann and Horng Sheng Chia from the University of Amsterdam (UvA), together with Rafael Porto from DESY (Hamburg), such particles could be detectable in gravitational-wave signals originating from merging black holes.
Two black holes orbiting one another at close distance, with one black hole carrying a cloud of ultralight bosons. As the new computations show, the presence of the boson cloud will lead to a distinct fingerprint in the gravitational wave signal emitted by the black hole pair. Credit: D. Baumann
Unfortunately, the Standard Model cannot explain gravity.
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the strong CP problem, neutrino oscillations, matter-antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with general relativity, to the point where one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and black hole event horizons).
Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the “best step” towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.
The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain:
B-meson decay etc.
Does the Higgs boson have all of the properties predicted by the Standard Model?
A few hadrons (i.e. composite particles made of quarks) predicted by the Standard Model are produced only rarely, and only at very high energies, and have not yet been definitively observed; “glueballs” (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have likewise not yet been definitively observed, because insufficient data are available to make a statistically significant observation.
Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown.
A Grand Unified Theory (GUT) is a model in particle physics in which, at high energy, the three gauge interactions of the Standard Model that define the electromagnetic, weak, and strong interactions, or forces, are merged into a single force. Although this unified force has not been directly observed, the many GUT models theorize its existence. If unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
In particle physics, supersymmetry (SUSY) is a principle that proposes a relationship between two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin. A type of spacetime symmetry, supersymmetry is a possible candidate for undiscovered particle physics and seen as an elegant solution to many current problems in particle physics if confirmed correct, which could resolve various areas where current theories are believed to be incomplete. A supersymmetrical extension to the Standard Model would resolve major hierarchy problems within gauge theory, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory.
In the standard model, neutrinos have exactly zero mass. This is a consequence of the standard model containing only left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. Measurements, however, indicate that neutrinos spontaneously change flavour, which implies that neutrinos have mass. These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of about 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small.
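The “five orders of magnitude” claim is easy to sanity-check against the electron, the lightest other fermion in the Standard Model. A minimal sketch (mass values from the Particle Data Group; the 2 eV limit is the tritium-decay bound quoted above):

```python
import math

# Compare the ~2 eV upper limit on the neutrino mass with the
# electron mass, the lightest other fermion in the Standard Model.
nu_limit_ev = 2.0            # tritium-decay upper limit, eV
m_electron_ev = 510_998.95   # electron mass, eV (PDG value)

ratio = m_electron_ev / nu_limit_ev
orders = math.log10(ratio)
print(f"neutrinos are at least {ratio:.1e} times lighter "
      f"(~{orders:.1f} orders of magnitude)")
```

Since 2 eV is only an upper limit, the true gap is even larger than this estimate.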
Several preon models have been proposed to address the unsolved problem concerning the fact that there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. To date, no preon model is widely accepted or fully verified.
There is ongoing research into quark–gluon plasma, an extreme state of matter. There is also some recent experimental evidence that tetraquarks, pentaquarks and glueballs exist.
Proton decay (or, more generally, non-conservation of baryon number) has not been observed, but is predicted by some theories extending beyond the Standard Model, so experiments continue to search for it.
Do we need new colliders?
Should we study the Higgs more carefully?
250 GeV Higgs factory
e+e– collider: linear collider or storage ring
250 GeV: mass, spin, CP nature; absolute measurement of HZZ coupling; Higgs BRs → qq, ll, VV
Compelling science motivates continuing this program with experiments at lepton colliders. Experiments at such colliders can reach sub-percent precision in Higgs boson properties in a unique, model-independent way, enabling discovery of percent-level deviations from the Standard Model predicted in many theories. They can improve the precision of our knowledge of the W, Z, and top quark well enough to allow the discovery of predicted new-physics effects. They search for new particles in a manner complementing new particle searches at the LHC.
A global effort has completed the technical design of the International Linear Collider (ILC) accelerator and detectors that will provide these capabilities in the latter part of the next decade. The Japanese particle physics community has declared this facility as its first priority for new initiatives.
Other precision measurements:
350 GeV: Top threshold: mass, width, anomalous couplings … (more stats on Higgs BRs)
500 GeV: HWW coupling → total width → absolute couplings; Higgs self-coupling; top Yukawa coupling
→ 1000 GeV: as motivated by physics
Precision is possible at the LHC
Aiming at a high-energy collider with a clean collision environment, CERN has for several years been developing an e+e– linear collider called CLIC. With an energy up to 3 TeV, CLIC would combine the precision of an e+e– collider with the high-energy reach of a hadron collider such as the LHC. But with the lack so far of any new particles at the LHC beyond the Higgs, evidence is mounting that even higher energies may be required to fully explore the next layer of phenomena beyond the SM. Prompted by the outcome of the 2013 European Strategy for Particle Physics, CERN has therefore undertaken a five-year study for a Future Circular Collider (FCC) facility built in a new 100 km-circumference tunnel.
Such a tunnel could host an e+e– collider (called FCC-ee) with an energy and intensity much higher than LEP, improving by orders of magnitude the precision of Higgs and other SM measurements. It could also house a 100 TeV proton–proton collider (FCC-hh) with a discovery potential more than five times greater than the 27 km-circumference LHC. An electron–proton collider (FCC-eh), furthermore, would allow the proton’s substructure to be measured with unmatchable precision.
The future proton–proton collider FCC-hh would operate at seven times the LHC energy, and collect about 10 times more data.
We don’t necessarily need to build an even bigger collider to probe new physics more deeply. If there is new physics up at a very high energy scale, we could probe it in depth with a potential “phase IV” for a Future Circular Collider: a muon–antimuon collider in the same tunnel. The muon is like an electron: it is a point particle with the same charge, but approximately 207 times heavier. This means some extremely good things:
it carries much higher energy at the same speed,
it provides a clean, energy-tunable signature,
and unlike electrons, its synchrotron radiation can be neglected, because at fixed energy the radiated power falls as the fourth power of the particle mass.
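That last point can be checked with a back-of-the-envelope calculation: at fixed energy and bending radius, synchrotron power scales as 1/m⁴, so a muon radiates roughly (m_e/m_μ)⁴ times less than an electron of the same energy. A minimal sketch (mass values from the Particle Data Group):

```python
# Synchrotron power at fixed energy and bending radius scales as
# 1/m^4, so a muon radiates far less than an electron.
m_e = 0.51099895      # electron mass, MeV (PDG value)
m_mu = 105.6583755    # muon mass, MeV (PDG value)

mass_ratio = m_mu / m_e            # the "~207 times heavier" factor
suppression = mass_ratio ** 4      # synchrotron power suppression

print(f"mass ratio: {mass_ratio:.1f}")
print(f"synchrotron power suppressed by a factor ~{suppression:.2e}")
```

The suppression factor comes out near 2 × 10⁹, which is why circular muon machines are not limited by synchrotron losses the way electron rings are.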
It’s a brilliant idea, but also a tremendous challenge. The drawback is singular but substantial: muons decay away with a mean lifetime of just 2.2 microseconds.
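The 2.2 microseconds is the lifetime in the muon’s rest frame; relativistic time dilation stretches it in the lab frame by γ = E/(mc²). A rough illustration of how much breathing room a high-energy beam buys (the beam energies here are illustrative examples, not a machine design):

```python
# Time dilation of the muon lifetime: a muon of energy E lives
# gamma * tau in the lab frame, with gamma = E / (m c^2).
TAU_MU = 2.2e-6          # muon mean lifetime at rest, seconds
M_MU = 0.1056583755      # muon mass, GeV (PDG value)
C = 299_792_458.0        # speed of light, m/s

for energy_gev in (50, 1000, 5000):      # illustrative beam energies
    gamma = energy_gev / M_MU
    tau_lab = gamma * TAU_MU             # dilated lifetime
    decay_length = tau_lab * C           # mean path at v ~ c
    print(f"{energy_gev:>5} GeV: gamma ~ {gamma:,.0f}, "
          f"lab lifetime ~ {tau_lab*1e3:.2f} ms, "
          f"mean decay length ~ {decay_length/1000:.0f} km")
```

Even at TeV energies the lab-frame lifetime is only tens of milliseconds, so the muons must be produced, cooled, accelerated and collided extremely quickly.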
European Strategy for Particle Physics 13-16 May 2019 – Granada, Spain https://indico.cern.ch/event/808335/timetable/#20190512.detailed
Is the standard model isolated?