A composite image of the Large Hadron Collider Credit: CERN
It’s 10 years since the first particles smashed into each other at the world’s biggest scientific experiment, the Large Hadron Collider. Since then physicists have discovered the Higgs boson, created forms of matter not seen since the Big Bang and ruled out a whole host of speculative theories about the subatomic world.
In this virtual event a panel of physicists from the four giant experiments at the Large Hadron Collider – ALICE, ATLAS, CMS and LHCb – reflected on what they’ve learned over the past decade and what they hope to discover in the next.
Sudarshan Paramesvaran is a Lecturer at the University of Bristol. He has worked on the CMS experiment at the LHC for 10 years, having received his PhD from Royal Holloway, University of London, in 2010 for work on the BaBar experiment at SLAC.
Jan Fiete Grosse-Oetringhaus is Section Leader of the CERN ALICE physics and performance group and the ALICE Analysis Coordinator. He received his PhD from the University of Muenster, Germany, in 2009. He has worked at CERN since 2006, and has been a staff member since 2012.
Barbara Sciascia (PhD, University of Rome, Sapienza, 2020) is a researcher at the Laboratori Nazionali di Frascati (LNF) of the National Institute for Nuclear Physics (INFN), Italy. Her scientific activity is in the field of experimental high-energy physics, mainly studying flavour physics through participation in the KLOE experiment at LNF (1998-2013) and the LHCb experiment at CERN (2011-present).
Monica D’Onofrio is the team leader of the Liverpool group at the ATLAS experiment at the LHC. She did her undergraduate degree at the University of Pisa, Italy, followed by a PhD at the University of Geneva, Switzerland, in 2005. Since 2010 she has worked at the University of Liverpool, after a post-doc at IFAE, Barcelona. She has been an ATLAS member since 2002, working on searches for new physics, in particular supersymmetry and dark matter.
The following are notes from the on-line lecture. Even though I could stop the video and go back over things, there are likely to be mistakes where I have misheard or misunderstood something. I hope the speakers and my readers will forgive any mistakes and let me know what I got wrong.
A tour of ALICE
Courtesy of CERN
What is the LHC?
Well, L stands for large because the LHC is big: a ring approximately 27 km in circumference, 100 m below ground.
H stands for hadron because the LHC accelerates protons, which belong to the group of particles called hadrons. A hadron is a composite particle made up of two or more quarks.
C stands for collider because the protons or ions, which form two beams travelling in opposite directions, are made to collide at four points around the machine.
The above picture shows the blue cylinders which contain the magnetic yoke and coil of the dipole magnets together with the liquid helium system required to cool the magnet so that it becomes superconducting. The beams are contained within the beam pipes. (Image: CERN)
The CERN accelerator complex is a succession of machines with increasingly higher energies. Each machine accelerates a beam of particles to a given energy before injecting the beam into the next machine in the chain. This next machine brings the beam to an even higher energy and so on. The LHC is the last element of this chain, in which the beams reach their highest energies.
As the particles are accelerated, their relativistic (inertial) mass increases as their speed approaches the speed of light. Because of this, particle physicists quote masses in energy units rather than the normal mass units (which would be inconveniently small anyway).
The energy of a tiny particle, like a proton, is measured in electronvolts. One electronvolt is the energy gained by an electron accelerated through a potential difference of one volt. As they race around the LHC, the protons acquire an energy of 6.5 million million electronvolts, known as 6.5 tera-electronvolts or TeV.
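To get a feel for this unit, the proton's energy can be converted to joules. A quick sketch (the electronvolt-to-joule factor is the standard SI value, not a figure from the talk):

```python
# Convert the LHC proton energy from electronvolts to joules.
EV_TO_J = 1.602176634e-19     # joules per electronvolt (SI definition)

proton_energy_ev = 6.5e12     # 6.5 TeV per proton
proton_energy_j = proton_energy_ev * EV_TO_J

print(f"{proton_energy_j:.2e} J per proton")   # ~1.04e-06 J
```

About a microjoule per proton, tiny on everyday scales, but a full beam of some 3 × 10¹⁴ protons stores energy of the order of hundreds of megajoules.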
For the acceleration of protons, hydrogen is the start of the process.
We can simplify the process as follows:
Electrons are removed from hydrogen molecules.
For the LHC beam, the following is needed:
2808 bunches × 1.15 × 10¹¹ protons per bunch ≈ 3 × 10¹⁴ protons per beam, or 6 × 10¹⁴ protons for the two beams
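This bunch arithmetic is easy to verify (the quoted totals are rounded):

```python
# Total protons per beam: number of bunches × protons per bunch.
bunches = 2808
protons_per_bunch = 1.15e11

per_beam = bunches * protons_per_bunch
print(f"{per_beam:.2e} protons per beam")        # ~3.23e+14
print(f"{2 * per_beam:.2e} in the two beams")    # ~6.46e+14
```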
The particles are accelerated by a 90 kV supply and leave the Duoplasmatron at 1.4% of the speed of light, i.e. ~4000 km/s.
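The quoted speed follows from the 90 kV accelerating voltage. A sketch using the relativistic relation (the proton rest energy of ~938.27 MeV is the standard value, assumed here, not quoted in the talk):

```python
import math

KE_MEV = 0.090        # kinetic energy gained across 90 kV, in MeV
REST_MEV = 938.272    # proton rest energy, MeV (standard value)
C_KM_S = 299_792.458  # speed of light, km/s

gamma = 1 + KE_MEV / REST_MEV          # Lorentz factor
beta = math.sqrt(1 - 1 / gamma**2)     # speed as a fraction of c
v_km_s = beta * C_KM_S

print(f"v = {beta:.3%} of c = {v_km_s:.0f} km/s")  # ~1.4% of c, ~4150 km/s
```

At this low energy the simple non-relativistic formula v = √(2·KE/m) would give nearly the same answer.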
Then they are sent to a radio frequency quadrupole (RFQ) – an accelerating component that both speeds up and focuses the particle beam. From the quadrupole, the particles are sent to the linear accelerator (LINAC2).
The protons are accelerated many times in both linear and circular accelerators before they reach the LHC, which is the last stage in the process.
Why is the LHC circular?
Linear accelerators would have to be extremely long to produce energies close to what the LHC can provide (of the order of TeV). A circular path allows a particle to gain a small amount of energy each time it completes a circuit.
It is also a question of economy. In a linear accelerator the acceleration has to happen during a very short time (the length of the accelerator divided by the particles’ average speed), and the accelerated bunches of particles collide only once before the whole process has to be repeated. In a circular accelerator, once the particles are accelerated they can be kept circulating for hours and collide millions of times before the bunches become too depleted to continue.
In the LHC the particles can travel around the circuit 11000 times per second.
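That revolution rate follows directly from the ring circumference, since the protons travel at essentially the speed of light:

```python
C_LIGHT = 299_792_458    # speed of light, m/s
CIRCUMFERENCE = 26_659   # LHC ring circumference, metres (approx. 27 km)

turns_per_second = C_LIGHT / CIRCUMFERENCE
print(f"{turns_per_second:.0f} turns per second")   # ~11245
```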
The large magnets keep the charged particles on the correct orbit as they circulate around the LHC.
There are 1232 dipole magnets. To the side of the blue cylinders there is room for people to move when they need to check and maintain the apparatus.
A dipole magnet is a magnet in which there are two poles (i.e., North and South poles) that form a closed field loop. The simplest example of a dipole magnet is a bar magnet.
In particle accelerators, a dipole magnet is an electromagnet used to create a uniform magnetic field over some distance. Particle motion in that field is circular in the plane perpendicular to the field and free (unaffected) along the direction of the field. Thus, a particle injected into a dipole magnet will travel on a circular or helical trajectory. By adding several dipole sections in the same plane, the bending effect on the beam increases.
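The circular motion comes from the bending relation p = qBr. As a rough sketch, assuming the LHC design dipole field of about 8.33 T (a published figure, used here only for illustration) and a 6.5 TeV proton:

```python
EV_TO_J = 1.602176634e-19   # joules per electronvolt
Q = 1.602176634e-19         # proton charge, coulombs
C = 299_792_458             # speed of light, m/s
B = 8.33                    # dipole field, tesla (LHC design value, assumed)

E_J = 6.5e12 * EV_TO_J      # proton energy in joules
p = E_J / C                 # ultrarelativistic approximation: momentum ≈ E/c
r = p / (Q * B)             # bending radius inside the dipoles, metres

print(f"r ≈ {r:.0f} m")     # roughly 2600 m
```

This bending radius applies only inside the dipoles; the geometric radius of the 27 km ring (~4.2 km) is larger because the dipoles occupy only part of the circumference.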
In accelerator physics, dipole magnets are used to realise bends in the design trajectory (or ‘orbit’) of the particles, as in circular accelerators. Other uses include:
Injection of particles into the accelerator
Ejection of particles from the accelerator
Correction of orbit errors
The magnets used in the LHC are superconducting electromagnets.
A superconducting magnet is an electromagnet made from coils of superconducting wire. They must be cooled to cryogenic temperatures during operation. In its superconducting state the wire has no electrical resistance and therefore can conduct much larger electric currents than ordinary wire, creating intense magnetic fields. Superconducting magnets can produce greater magnetic fields than all but the strongest non-superconducting electromagnets and can be cheaper to operate because no energy is dissipated as heat in the windings.
The LHC superconducting magnets are cooled to −271.3 °C (1.9 K), making them colder than outer space (the cosmic microwave background is at about 2.7 K).
The LHC is actually made up of two rings: one for particles moving clockwise and the other for particles moving anti-clockwise.
When the particles have reached the desired energy, they are allowed to collide at one of four experimental positions, ATLAS, CMS, ALICE and LHCb.
The energy at the moment of collision is twice the energy of a single beam (13 TeV for two 6.5 TeV beams). When the beams cross, millions of collisions occur every second.
A fill of protons can sustain collisions for about ten hours before the machine needs refilling.
The LHC runs every day, except during scheduled shutdowns for repairs and maintenance.
The machine is so sensitive that it can be affected by the Moon.
The orbits of protons in the 27-kilometre Large Hadron Collider (LHC) have to be adjusted regularly to account for the gravitational effect of the moon.
In the graph above, the two lower curves (in beige and green) show the instantaneous luminosity measured, when the moon was full, by the two largest detectors at the LHC, CMS and ATLAS. Instantaneous luminosity is a measure of how many collisions happen per second in each experiment between the two beams of protons circulating in opposite directions in the LHC tunnel.
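Instantaneous luminosity links to the raw collision rate through the interaction cross-section: rate = L × σ. An order-of-magnitude sketch with typical values (the design luminosity of 10³⁴ cm⁻²s⁻¹ and an inelastic proton-proton cross-section of roughly 80 mb are assumptions, not figures from the talk):

```python
lumi = 1.0e34           # instantaneous luminosity, cm^-2 s^-1 (design value)
sigma_inel = 8.0e-26    # inelastic pp cross-section, cm^2 (~80 millibarn)

rate = lumi * sigma_inel
print(f"~{rate:.0e} inelastic collisions per second")   # of order 1e9
```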
The LHC is so large that the gravitational force exerted by the moon is not the same at all points, which creates small distortions of the tunnel. And the machine is sensitive enough to detect minute deformations created by the small differences in gravitational force across its diameter.
As the moon rises in the sky, the force it exerts changes enough to require a periodic correction of the orbit of the proton beams in the accelerator to adapt to a deformed tunnel. The corrections appear as regular dips in luminosity (see graph above) as the LHC operator adjusts the orbits.
The circumference of the LHC can change by about 20 μm.
The large energies involved in the LHC allow many different particles to be produced that are not normally seen, which will hopefully answer some of the open questions in particle physics.
Some history about particles
The ancient Greeks and Indians were the first people to consider that everything was made up of small particles. This ancient idea was based on philosophical reasoning rather than scientific reasoning. The word atom is derived from the Greek word atomos, which means “uncuttable”.
Not much was done in this field until the 18th century. In 1789, French chemist Antoine Lavoisier wrote Traité Élémentaire de Chimie (Elementary Treatise of Chemistry), which is considered to be the first modern textbook about chemistry. Lavoisier defined an element as a substance that cannot be broken down into a simpler substance by a chemical reaction; however, he made no mention of elements being made up of a single type of atom.
Antoine-Laurent de Lavoisier (26 August 1743 – 8 May 1794), also Antoine Lavoisier after the French Revolution, was a French nobleman and chemist who was central to the 18th-century chemical revolution and who had a large influence on both the history of chemistry and the history of biology. He is widely considered in popular literature as the “father of modern chemistry”
In the early 1800s, John Dalton compiled experimental data gathered by himself and other scientists and discovered a pattern now known as the “law of multiple proportions”. He noticed that in chemical compounds which contain a particular chemical element, the content of that element in these compounds will differ by ratios of small whole numbers. This pattern suggested to Dalton that each chemical element combines with others by some basic and consistent unit of mass.
John Dalton FRS (6 September 1766 – 27 July 1844) was an English chemist, physicist, and meteorologist. He is best known for introducing the atomic theory into chemistry.
Scientists continued the work during the 19th century leading to Dmitri Mendeleev’s work on what we now consider the periodic table.
Dmitri Ivanovich Mendeleev (8 February 1834 – 2 February 1907 [OS 27 January 1834 – 20 January 1907]) was a Russian chemist and inventor. He is best remembered for formulating the Periodic Law and creating a farsighted version of the periodic table of elements. He used the Periodic Law not only to correct the then-accepted properties of some known elements, such as the valence and atomic weight of uranium, but also to predict the properties of eight elements that were yet to be discovered.
Mendeleev’s first periodic table. Dating from 1869, these notes show the early concept of a periodic table of the elements. This version has the elements listed using chemical symbols, and ordered by atomic weight, but the fully correct chemical arrangement had not yet emerged. Using the table, he predicted the properties of elements yet to be discovered. In 1863 there were 56 known elements with a new element being discovered at a rate of approximately one per year. After becoming a teacher, Mendeleev wrote the definitive textbook of his time: Principles of Chemistry (two volumes, 1868-70). As he attempted to classify the elements according to their chemical properties, he noticed patterns that led him to postulate his periodic table which described elements according to both atomic weight and valence. The drawing is in the possession of the Oesper Collection, University of Cincinnati.
Sorting elements by their atomic weights did produce problems in the table, and further work by other scientists established that it is the atomic number (the nuclear charge) that determines the placement of elements in the periodic table. It was Wolfgang Pauli who, investigating the lengths of the periods in 1924, found that the periodicity of the table reflects the order in which electron shells are filled.
In chemistry and atomic physics, an electron shell may be thought of as an orbit followed by electrons around an atom’s nucleus. The closest shell to the nucleus is called the “1 shell” (also called the “K shell”), followed by the “2 shell” (or “L shell”), then the “3 shell” (or “M shell”), and so on farther and farther from the nucleus.
Electron shells are sometimes called energy levels because the electrons in each shell have a characteristic energy. Electrons near the nucleus have less energy than those farther away. The concept of an electron shell is used to help visualize the behaviour of electrons. However, the location and energy of electrons are more accurately described by electron clouds.
The periodic table today.
Wolfgang Ernst Pauli (25 April 1900 – 15 December 1958) was an Austrian theoretical physicist and one of the pioneers of quantum physics.
But what does an atom look like?
In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as “Brownian motion”. This was thought to be caused by invisible water molecules knocking the grains about.
Robert Brown FRSE FRS FLS MWS (21 December 1773 – 10 June 1858) was a Scottish botanist and paleobotanist who made important contributions to botany largely through his pioneering use of the microscope.
Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron.
Throughout the 19th century the atom was recognised as the smallest constituent of matter, however in 1897 J. J. Thomson found that particles 1,800 times lighter than hydrogen could be emitted from atoms, the very atoms from the cathode in his instruments. He called them corpuscles but they were later renamed electrons. This meant that atoms are not indivisible as the name atomos suggested.
J. J. Thomson thought that the negatively-charged electrons were distributed throughout the atom in a sea of positive charge that was distributed across the whole volume of the atom. This model is sometimes known as the plum pudding model.
Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn’t think he’d run into this same problem because alpha particles are much heavier than electrons. According to Thomson’s model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully.
Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom’s volume as Thomson believed, but is concentrated in a tiny nucleus at the centre. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed.
The Geiger–Marsden experiment: Left: Expected results: alpha particles passing through the plum pudding model of the atom with negligible deflection. Right: Observed results: a small portion of the particles were deflected by the concentrated positive charge of the nucleus.
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, HonFRSE (30 August 1871 – 19 October 1937) was a New Zealand–born British physicist who came to be known as the father of nuclear physics
https://en.wikipedia.org/wiki/Hans_Geiger
Johannes Wilhelm “Hans” Geiger (30 September 1882 – 24 September 1945) was a German physicist.
https://en.wikipedia.org/wiki/Ernest_Marsden
Sir Ernest Marsden CMG CBE MC FRS (19 February 1889 – 15 December 1970) was an English-New Zealand physicist.
In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon.
The Bohr model of the hydrogen atom (Z = 1) or a hydrogen-like ion (Z > 1), where the negatively charged electron confined to an atomic shell encircles a small, positively charged atomic nucleus, and where an electron’s jump between orbits is accompanied by an emitted or absorbed amount of electromagnetic energy (hν). Z is the proton number (which equals the electron number in a neutral atom), h is Planck’s constant and ν is the frequency of the electromagnetic radiation.
The Bohr model of the atom, with an electron making instantaneous “quantum leaps” from one orbit to another with gain or loss of energy. This model of electrons in orbits is obsolete.
Niels Henrik David Bohr (7 October 1885 – 18 November 1962) was a Danish physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922.
In 1926 Erwin Schrödinger developed the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles.
Erwin Rudolf Josef Alexander Schrödinger (12 August 1887 – 4 January 1961), sometimes written as Erwin Schrodinger or Erwin Schroedinger, was a Nobel Prize-winning Austrian-Irish physicist who developed a number of fundamental results in quantum theory: the Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time.
A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behaviour that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favour of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.
Werner Karl Heisenberg (5 December 1901 – 1 February 1976) was a German theoretical physicist and one of the key pioneers of quantum mechanics.
Atomic orbitals of the electron in a hydrogen atom at different energy levels. The probability of finding the electron is given by the colour, as shown in the key at upper right.
The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, it was proposed that all atoms are composed of hydrogen atoms, based on a simplistic interpretation of early values of atomic weights, which was disproved when more accurate values were measured.
Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table (its atomic number) is equal to its nuclear charge. This was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra.
https://en.wikipedia.org/wiki/Antonius_van_den_Broek
Antonius Johannes van den Broek (4 May 1870, Zoetermeer – 25 October 1926, Bilthoven) was a Dutch amateur physicist notable for being the first who realized that the number of an element in the periodic table (now called atomic number) corresponds to the charge of its atomic nucleus.
https://en.wikipedia.org/wiki/Henry_Moseley
Henry Gwyn Jeffreys Moseley (23 November 1887 – 10 August 1915) was an English physicist, whose contribution to the science of physics was the justification from physical laws of the previous empirical and chemical concept of the atomic number.
In 1917 (in experiments reported in 1919 and 1925), Rutherford proved that the hydrogen nucleus is present in other nuclei, a result usually described as the discovery of protons.
Rutherford knew hydrogen to be the simplest and lightest element and perhaps it could be the building block of all elements. Discovery that the hydrogen nucleus is present in all other nuclei as an elementary particle led Rutherford to give the hydrogen nucleus a special name as a particle, since he suspected that hydrogen, the lightest element, contained only one of these particles. He named this new fundamental building block of the nucleus the proton, after the neuter singular of the Greek word for “first”. The first use of the word “proton” in the scientific literature appeared in 1920.
By 1920, the existence of electrons within the atomic nucleus was widely assumed. It was assumed the nucleus consisted of hydrogen nuclei in number equal to the atomic mass. But since each hydrogen nucleus had charge +1, the nucleus required a smaller number of “internal electrons” each of charge -1 to give the nucleus its correct total charge. The mass of protons is about 1800 times greater than that of electrons, so the mass of the electrons is incidental in this computation. Such a model was consistent with the scattering of alpha particles from heavy nuclei, as well as the charge and mass of the many isotopes that had been identified. There were other motivations for the proton–electron model. As noted by Rutherford at the time, “We have strong reason for believing that the nuclei of atoms contain electrons as well as positively charged bodies…”, namely, it was known that beta radiation was electrons emitted from the nucleus.
Rutherford conjectured the existence of new particles. The alpha particle was known to be very stable, and it was assumed to retain its identity within the nucleus. It was presumed to consist of four protons and two closely bound electrons to give it +2 charge and mass 4. However further work suggested the existence of a new particle with a mass similar to that of a proton but with no charge. Rutherford determined that such a zero-charge particle would be difficult to detect by available techniques.
By 1921 Rutherford had named the uncharged particle the neutron, while about that same time the word proton was adopted for the hydrogen nucleus. Neutron was apparently constructed from the Latin root for neutral and the Greek ending -on (by imitation of electron and proton). References to the word neutron in connection with the atom can be found in the literature as early as 1899, however.
In 1931, it was found that if alpha particle radiation from polonium fell on beryllium, boron, or lithium, an unusually penetrating radiation was produced. Because the radiation was not affected by an electric field it was thought that this radiation was gamma radiation (the most energetic part of the electromagnetic spectrum).
Neither Rutherford nor James Chadwick at the Cavendish Laboratory in Cambridge were convinced by the gamma ray interpretation. Chadwick quickly performed a series of experiments that showed that the new radiation consisted of uncharged particles with about the same mass as the proton. These particles were neutrons. Chadwick won the 1935 Nobel Prize in Physics for this discovery.
Sir James Chadwick, CH, FRS (20 October 1891 – 24 July 1974) was a British physicist who was awarded the 1935 Nobel Prize in Physics for his discovery of the neutron in 1932.
A schematic diagram of the experiment used to discover the neutron in 1932. At left, a polonium source was used to irradiate beryllium with alpha particles, which induced an uncharged radiation. When this radiation struck paraffin wax, protons were ejected. The protons were observed using a small ionization chamber. Adapted from Chadwick (1932)
The image below is an illustration of a helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case.
So, the modern basic view of an atom is a tiny nucleus consisting of protons and neutrons surrounded by a cloud of electrons.
So, by 1930 the atom was no longer the smallest particle of matter: it was made up of three particles, and it was believed that only these three particles existed.
The neutrino was first postulated by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin). However, it wasn’t directly detected until 1956.
Muons were discovered by Carl D. Anderson and Seth Neddermeyer at Caltech in 1936, while studying cosmic radiation. Anderson noticed particles that curved differently from electrons and other known particles when passed through a magnetic field. They were negatively charged but curved less sharply than electrons, but more sharply than protons, for particles of the same velocity. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that their mass was greater than an electron but smaller than a proton. Thus, Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for “mid-“. The existence of the muon was confirmed in 1937 with cloud chamber experiments.
The eventual recognition of the muon as a simple “heavy electron”, with no role at all in the nuclear interaction, seemed so incongruous and surprising at the time, that Nobel laureate I. I. Rabi famously quipped, “Who ordered that?”
https://en.wikipedia.org/wiki/Carl_David_Anderson
Carl David Anderson (September 3, 1905 – January 11, 1991) was an American physicist. He is best known for his discovery of the positron in 1932, an achievement for which he received the 1936 Nobel Prize in Physics, and of the muon in 1936.
https://en.wikipedia.org/wiki/Seth_Neddermeyer
Seth Henry Neddermeyer (September 16, 1907 – January 29, 1988) was an American physicist who co-discovered the muon,
https://en.wikipedia.org/wiki/Isidor_Isaac_Rabi
Isidor Isaac Rabi (July 29, 1898 – January 11, 1988) was an American physicist who won the Nobel Prize in Physics in 1944 for his discovery of nuclear magnetic resonance, which is used in magnetic resonance imaging.
The muon wasn’t needed for the physics at the time. It wasn’t involved in the periodic table and played no part in chemistry experiments.
By the 1950s many new particles had been discovered, in cosmic rays and through accelerator experiments; by 1963 there were around 60 of them.
Over the years, and through many stages, the standard model of particle physics was developed.
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (the electromagnetic, weak, and strong interactions, and not including the gravitational force) in the universe, as well as classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists around the world, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, confirmation of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Part of the standard model consists of six types, known as flavours, of quarks: up, down, strange, charm, bottom, and top.
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. All commonly observable matter is composed of up quarks, down quarks and electrons. Due to a phenomenon known as colour confinement, quarks are never found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons, or in quark–gluon plasmas. For this reason, much of what is known about quarks has been drawn from observations of hadrons.
The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. Quarks were introduced as parts of an ordering scheme for hadrons, and there was little evidence for their physical existence until deep inelastic scattering experiments at the Stanford Linear Accelerator Center in 1968. Accelerator experiments have provided evidence for all six flavours. The top quark, first observed at Fermilab in 1995, was the last to be discovered.
https://en.wikipedia.org/wiki/Murray_Gell-Mann
Murray Gell-Mann (September 15, 1929 – May 24, 2019) was an American physicist who received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.
https://en.wikipedia.org/wiki/George_Zweig
George Zweig (born May 30, 1937) is a Russian-American physicist. He was trained as a particle physicist under Richard Feynman. He introduced, independently of Murray Gell-Mann, the quark model (although he named it “aces”).
The standard model also consists of six types of leptons, known as flavours, grouped in three generations. The first-generation leptons, also called electronic leptons, comprise the electron and the electron neutrino; the second are the muonic leptons, comprising the muon and the muon neutrino; and the third are the tauonic leptons, comprising the tau and the tau neutrino.
The first lepton identified was the electron, discovered by J.J. Thomson and his team of British physicists in 1897. Then in 1930, Wolfgang Pauli postulated the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay.
Nearly 40 years after the discovery of the electron, the muon was discovered by Carl D. Anderson in 1936. By 1962 the muon neutrino had been identified as distinct from the electron neutrino.
The tau was first detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his colleagues at the SLAC LBL group. Like the electron and the muon, it too was expected to have an associated neutrino. The first evidence for tau neutrinos came from the observation of “missing” energy and momentum in tau decay, analogous to the “missing” energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab, making it the second-to-latest particle of the Standard Model to have been directly observed.
Martin Lewis Perl (June 24, 1927 – September 30, 2014) was an American chemical engineer and physicist who won the Nobel Prize in Physics in 1995 for his discovery of the tau lepton.
DONUT (Direct observation of the nu tau, E872) was an experiment at Fermilab dedicated to the search for tau neutrino interactions.
Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a United States Department of Energy national laboratory specializing in high-energy particle physics.
The up and down quarks and the electron and electron neutrino leptons are enough to describe most of the known Universe. The other quarks and leptons aren’t needed to explain the world around us.
The proton is made up of 2 up and 1 down quarks and the neutron is made up of 1 up and 2 down quarks.
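The quark charges make this add up: the up quark carries +2/3 e and the down quark −1/3 e, so the combinations give the familiar nucleon charges. A quick arithmetic check (illustrative only):

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

proton = 2 * UP + 1 * DOWN    # uud
neutron = 1 * UP + 2 * DOWN   # udd

print(proton)   # 1
print(neutron)  # 0
```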
The name boson was coined by Paul Dirac to commemorate the contribution of Satyendra Nath Bose, an Indian physicist and professor of physics at University of Calcutta and at University of Dhaka.
https://en.wikipedia.org/wiki/Paul_Dirac (below left)
Paul Adrien Maurice Dirac OM FRS (8 August 1902 – 20 October 1984) was an English theoretical physicist who is regarded as one of the most significant physicists of the 20th century.
https://en.wikipedia.org/wiki/Satyendra_Nath_Bose (above right)
Satyendra Nath Bose, FRS (1 January 1894 – 4 February 1974) was an Indian mathematician and physicist specialising in theoretical physics.
The photon is a type of elementary particle. It is the quantum of the electromagnetic field including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic field. Photons are massless, and they always move at the speed of light in vacuum, 299792458 m/s. They are the reason why positive and negative charges attract each other.
The W and Z bosons are the force carriers which mediate the weak force. They are the reason why stars keep burning.
Gluons are the fundamental force carriers underlying the strong force. They are the reason why protons can be kept together in the nucleus despite the fact that positive should repel positive.
The strong nuclear force holds most ordinary matter together because it confines quarks into hadron particles such as the proton and neutron. In addition, the strong force binds these neutrons and protons to create atomic nuclei. Most of the mass of a common proton or neutron is the result of the strong force field energy.
Now the photon, gluon, W+, W- and Z bosons should all be massless according to the theory, but the W and Z bosons are not. Why?
In 1964 three groups of physicists tackled the question of why some of these bosons have mass, and they came up with the idea of another boson being responsible. This boson was named after Peter Higgs, one of the physicists who proposed the mechanism.
The Higgs boson is an elementary particle in the Standard Model of particle physics, produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. It is named after physicist Peter Higgs, who in 1964, along with five other scientists, proposed the Higgs mechanism to explain why particles have mass. This mechanism implies the existence of the Higgs boson. The Higgs boson was initially discovered as a new particle in 2012 by the ATLAS and CMS collaborations based on collisions in the LHC at CERN, and the new particle was subsequently confirmed to match the expected properties of a Higgs boson over the following years.
On 10 December 2013, two of the physicists, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics for their theoretical predictions. Although Higgs’s name has come to be associated with this theory (the Higgs mechanism), several researchers between about 1960 and 1972 independently developed different parts of it.
https://en.wikipedia.org/wiki/Peter_Higgs (below left)
Peter Ware Higgs CH FRS FRSE FInstP (born 29 May 1929) is a British theoretical physicist, Emeritus Professor in the University of Edinburgh, and Nobel Prize laureate for his work on the mass of subatomic particles.
https://en.wikipedia.org/wiki/Fran%C3%A7ois_Englert (above right)
François, Baron Englert (born 6 November 1932) is a Belgian theoretical physicist and 2013 Nobel prize laureate.
The above chart shows the time interval between a particle being theorised and its discovery. The muon was the exception: it was discovered before anyone predicted it.
The standard model is successful but it isn’t enough. Research needs to continue.
The more we dig the more questions arise and the more we want to learn.
We want to understand how we got to this point in our knowledge. How did our Universe reach its current form?
The above image shows over 13 billion years in one plot and shows the evolution of the Universe.
Looking at stars allows us to work out the origin of the Universe to a certain point (the vertical blue line).
But if we want to understand how we got here from the Big Bang, we need to go back in time using particle physics.
The plot does refer to the LHC and protons and ions.
The Universe at the beginning of time was very hot and very energetic and this energy can be recreated/reproduced at the LHC.
The LHC can’t get to time zero, but it gets very close. It can produce particles that don’t occur in normal daily life, some of which we don’t know very much about.
We need to go further as the standard model isn’t enough to explain the history of the Universe. For instance, at the very beginning of the Universe there were equal quantities of matter and anti-matter. But some of the anti-matter disappeared somewhere leaving us, thankfully, with an excess of matter. Where did the anti-matter go?
Cosmological measurements have shown there is a type of matter that we can’t see (known as dark matter). We “know” it is there because of its gravitational effects, which can’t be explained by accepted theories of gravity.
Primary evidence for dark matter comes from calculations showing that many galaxies would fly apart, or that they would not have formed or would not move as they do, if they did not contain a large amount of unseen matter. Other lines of evidence include observations in gravitational lensing and in the cosmic microwave background, along with astronomical observations of the observable universe’s current structure, the formation and evolution of galaxies, mass location during galactic collisions, and the motion of galaxies within galaxy clusters.
Dark matter could account for roughly five times the mass of normal matter in the Universe.
Problems with the standard model:
Why is gravity so different in respect to other forces?
Why are there only three families of each of the matter particles?
Why is normal matter just based on up, down, electron and electron neutrino particles?
Why do neutrinos have such a small mass?
In fact, the standard model didn’t predict a mass for neutrinos.
In the last ten years theorists have been working on, amongst other things, whether there are new particles to be discovered.
Could these particles have been present in the early Universe?
Does supersymmetry exist?
In particle physics, supersymmetry (SUSY) is a conjectured relationship between two basic classes of elementary particles: bosons, and fermions (all quarks and leptons).
A type of spacetime symmetry, supersymmetry is a possible candidate for undiscovered particle physics, and seen by some physicists as an elegant solution to many current problems in particle physics if confirmed correct, which could resolve various areas where current theories are believed to be incomplete. A supersymmetrical extension to the Standard Model could resolve major hierarchy problems.
The Standard Model particles and their supersymmetric counterparts. Slightly under 50% of these particles have been discovered, and just over 50% have never shown a trace that they exist. Supersymmetry is an idea that hopes to improve on the Standard Model, but it has yet to make successful predictions about the Universe in attempting to supplant the prevailing theory. (CLAIRE DAVID / CERN)
Are there extra dimensions?
In our everyday lives, we experience three spatial dimensions, and a fourth dimension of time. How could there be more? Einstein’s general theory of relativity tells us that space can expand, contract, and bend. Now if one dimension were to contract to a size smaller than an atom, it would be hidden from our view. But if we could look on a small enough scale, that hidden dimension might become visible again. Imagine a person walking on a tightrope. She can only move backward and forward; but not left and right, nor up and down, so she only sees one dimension. Ants living on a much smaller scale could move around the cable, in what would appear like an extra dimension to the tightrope-walker.
How could we test for extra dimensions? One option would be to find evidence of particles that can exist only if extra dimensions are real. Theories that suggest extra dimensions predict that, in the same way as atoms have a low-energy ground state and excited high-energy states, there would be heavier versions of standard particles in other dimensions. These heavier versions of particles would have exactly the same properties as standard particles (and so be visible to our detectors) but with a greater mass. If CMS or ATLAS were to find a Z- or W-like particle (the Z and W bosons being carriers of the electroweak force) with a mass 100 times larger for instance, this might suggest the presence of extra dimensions. Such heavy particles can only be revealed at the high energies reached by the Large Hadron Collider (LHC).
Another way of revealing extra dimensions would be through the production of “microscopic black holes”. What exactly we would detect would depend on the number of extra dimensions, the mass of the black hole, the size of the dimensions and the energy at which the black hole occurs. If micro black holes do appear in the collisions created by the LHC, they would disintegrate rapidly, in around 10-27 seconds. They would decay into Standard Model or supersymmetric particles, creating events containing an exceptional number of tracks in our detectors, which we would easily spot. Finding more on any of these subjects would open the door to yet unknown possibilities.
What about the graviton?
Some theorists suggest that a particle called the “graviton” is associated with gravity in the same way as the photon is associated with the electromagnetic force. If gravitons exist, it should be possible to create them at the LHC, but they would rapidly disappear into extra dimensions. Collisions in particle accelerators always create balanced events – just like fireworks – with particles flying out in all directions. A graviton might escape the detectors, leaving an empty zone that we notice as an imbalance in momentum and energy in the event. We would need to carefully study the properties of the missing object to work out whether it is a graviton escaping to another dimension or something else. This method of searching for missing energy in events is also used to look for dark matter or supersymmetric particles.
So, the LHC is looking for new physics using the highest energies possible.
Why does the LHC have four experiments?
To provide unique pictures of what happens after the collisions. The detectors act in a similar way to digital cameras.
The four experiments are independent of each other, coming at the data from different directions and giving a range of ideas to test. Each is suited to different things.
There are two general purpose detectors, CMS and ATLAS. They study a wide range of physics, from the discovery of the Higgs particle to investigating dark matter. Also investigating further into particles we already know exist.
The Compact (small but heavy) Muon Solenoid (CMS) experiment is one of two large general-purpose particle physics detectors built on the Large Hadron Collider (LHC) at CERN in Switzerland and France. The goal of the CMS experiment is to investigate a wide range of physics, including the search for the Higgs boson, extra dimensions, and particles that could make up dark matter.
CMS is 21 metres long, 15 m in diameter, and weighs about 14,000 tonnes. Over 4,000 people, representing 206 scientific institutes and 47 countries, form the CMS collaboration who built and now operate the detector. It is located in a cavern at Cessy in France, just across the border from Geneva. In July 2012, along with ATLAS, CMS tentatively discovered the Higgs boson. By March 2013 its existence was confirmed.
View of the CMS endcap through the barrel sections. The ladder to the lower right gives an impression of scale.
ATLAS (A Toroidal LHC ApparatuS) is the largest, general-purpose particle detector experiment at the Large Hadron Collider (LHC), a particle accelerator at CERN (the European Organization for Nuclear Research) in Switzerland. The experiment is designed to take advantage of the unprecedented energy available at the LHC and observe phenomena that involve highly massive particles which were not observable using earlier lower-energy accelerators. ATLAS was one of the two LHC experiments involved in the discovery of the Higgs boson in July 2012. It was also designed to search for evidence of theories of particle physics beyond the Standard Model.
The experiment is a collaboration involving roughly 3,000 physicists from 183 institutions in 38 countries. The ATLAS detector is 46 metres long, 25 metres in diameter, and weighs about 7,000 tonnes; it contains some 3000 km of cable.
Computer generated cut-away view of the ATLAS detector showing its various components (1) Muon Detectors Magnet system: (2) Toroid Magnets (3) Solenoid Magnet Inner Detector: (4) Transition Radiation Tracker (5) Semi-Conductor Tracker (6) Pixel Detector Calorimeters: (7) Liquid Argon Calorimeter (8) Tile Calorimeter
The two more specialised detectors are LHCb and ALICE. Each is focused on certain types of physics.
The LHCb (Large Hadron Collider beauty) experiment is one of eight particle physics detector experiments collecting data at the Large Hadron Collider at CERN. LHCb is a specialized b-physics experiment, designed primarily to measure the parameters of CP violation in the interactions of b-hadrons (heavy particles containing a bottom quark). Such studies can help to explain the matter-antimatter asymmetry of the Universe.
It specializes in investigating the slight differences between matter and antimatter by studying a type of particle called the “beauty quark”, or “b quark”.
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted (“mirror” or P symmetry).
It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present universe, and in the study of weak interactions in particle physics.
Cut-away drawing of LHCb, which is shaped something like a wedge, with collisions occurring at one end.
ALICE (A Large Ion Collider Experiment) is one of eight detector experiments at the Large Hadron Collider at CERN.
Computer generated cut-away view of ALICE showing the 18 detectors of the experiment.
ALICE is optimized to study heavy-ion (Pb-Pb nuclei) collisions at a centre of mass energy up to 5.02 TeV per nucleon pair. The resulting temperature and energy density allow exploration of quark–gluon plasma, a fifth state of matter wherein quarks and gluons are freed. Similar conditions are believed to have existed a fraction of the second after the Big Bang before quarks and gluons bound together to form hadrons and heavier particles.
All four detectors work on similar principles. They have trackers which map the trajectory of particles. They have calorimeters designed to measure the energy of the particles. They all have very powerful magnets to bend the paths of charged particles. The amount of bending indicates the size and type of charge.
The magnetic force provides the centripetal force: Bqv = mv²/r
Cancelling one factor of v: Bq = mv/r, so r = mv/(Bq)
B = magnetic field strength; q is the charge on the particle; m = mass of the particle; v = velocity of the particle; r = radius of the path of the particle.
If B, m and v are constant then r is inversely proportional to q: the greater the charge, the smaller the radius and the tighter the bending.
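As a sketch of the numbers involved, the relativistic version of the formula uses momentum p in place of mv, giving r = p/(Bq). Assuming a 10 GeV/c singly charged particle in a 3.8 T field (the field strength quoted in public CMS material):

```python
# Bending radius r = p / (B * q) for a charged particle in a magnetic field.
# Illustrative numbers: a 10 GeV/c singly charged particle in a 3.8 T solenoid.

GEV_C_TO_SI = 5.344286e-19   # 1 GeV/c expressed in kg*m/s
E_CHARGE = 1.602176634e-19   # elementary charge in coulombs

def radius_m(p_gev_c: float, b_tesla: float, charge_e: float = 1.0) -> float:
    """Bending radius in metres for momentum p (GeV/c) in field B (tesla)."""
    p_si = p_gev_c * GEV_C_TO_SI
    return p_si / (b_tesla * charge_e * E_CHARGE)

print(round(radius_m(10.0, 3.8), 2))  # ~8.78 m: higher momentum -> larger radius
```

This matches the usual rule of thumb r[m] ≈ p[GeV/c] / (0.3 B[T]).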
The above image is a transverse section through a detector (CMS)
The red curved line shows the path an electron would take. It is curved because of the applied magnetic field. The electron stops in the electromagnetic calorimeter and its energy is measured.
Faint blue dashed line represents the path of a photon. The path is not curved because photons are not charged.
Each section of the detector builds up a picture of what happens during and after the collision.
It is impossible to store all the data from collisions: it can’t physically be read off the detector and analysed, and not all of it is interesting.
All the experiments implement a trigger system, like a filter system, to save the interesting bits of data.
In particle physics, a trigger is a system that uses simple criteria to rapidly decide which events in a particle detector to keep when only a small fraction of the total can be recorded. Trigger systems are necessary due to real-world limitations in data storage capacity and rates.
The system takes about three millionths of a second to decide whether to store an event’s information.
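A toy illustration of the filtering idea, with invented numbers and a single made-up threshold criterion (real triggers apply many layered criteria in dedicated hardware and software):

```python
import random

# Toy trigger: keep only events whose summed transverse energy exceeds a
# threshold. The exponential "energy" model and the threshold are invented
# purely for illustration.

random.seed(1)

def trigger(event_et: float, threshold: float = 100.0) -> bool:
    """Keep the event only if its energy passes the threshold."""
    return event_et >= threshold

events = [random.expovariate(1 / 20.0) for _ in range(1_000_000)]
kept = [e for e in events if trigger(e)]
print(f"kept {len(kept)} of {len(events)}")  # only a tiny fraction survives
```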
Very reliable electronics is required.
Particles are made to collide at the centre of the detectors. The innermost layers are very thin – to measure and track charged particles.
Calorimeters typically consist of layers of “passive” or “absorbing” high-density material – for example, lead – interleaved with layers of an “active” medium such as solid lead-glass or liquid argon.
Muons can move easily through the detectors because they interact only weakly with the detector material.
The detectors are arranged in cylinders. Their ends are capped to keep in as many particles as possible.
It is ten years since the LHC started working.
The discovery of the Higgs boson is its most famous result.
On 4 July 2012, the ATLAS and CMS experiments at CERN’s Large Hadron Collider announced they had each observed a new particle in the mass region around 125 GeV. This could be the Higgs boson. A momentous day.
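For a sense of scale, the ~125 GeV mass can be converted to kilograms via E = mc²:

```python
# Convert the ~125 GeV Higgs boson mass to kilograms via E = m * c^2.
EV_TO_J = 1.602176634e-19    # 1 electronvolt in joules
C = 299_792_458.0            # speed of light in m/s

m_higgs_kg = 125e9 * EV_TO_J / C**2
print(f"{m_higgs_kg:.2e} kg")  # ~2.2e-25 kg, roughly 133 proton masses
```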
As mentioned earlier the standard model was great except it couldn’t give a mass to some of the particles, which we knew had mass.
In the 1960s a group of theoreticians came up with a theory to solve this problem. The theory predicted a mechanism called the Higgs field and a manifestation of this field would be the Higgs boson.
The Higgs field is like the ocean and the Higgs boson is like a wave on that ocean.
The famous graphs.
Plots showing data from the ATLAS and CMS collaborations of signals that produced a decay pattern of two photons, one of the possible decay patterns of a Higgs boson. The bumps in the curves represent signals attributed to the decay of Higgs bosons. Image source: ATLAS and CMS public data
The Higgs boson can be produced, and can decay, in numerous ways.
First look for it by its decay into two photons. So, look for two photons and work out where they came from.
Any deviations from the curves implied something new. The black points are actual data. The red bumps show the energy/mass of the Higgs boson decaying into two photons.
Is there enough deviation to say we have something new?
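A common back-of-the-envelope measure of “enough deviation” is the excess over the expected background divided by the statistical fluctuation of that background, roughly (N − B)/√B; particle physics conventionally requires about 5 sigma to claim a discovery. The counts below are invented for illustration, not the 2012 data:

```python
import math

# Toy significance estimate for a bump: with B expected background events
# under the peak and N observed, a rough significance is (N - B) / sqrt(B).

def significance(observed: float, background: float) -> float:
    """Naive counting significance in units of sigma."""
    return (observed - background) / math.sqrt(background)

print(round(significance(10_300, 10_000), 1))  # 3.0 sigma: interesting, not a discovery
```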
The discovery of the Higgs boson won the Nobel Prize for physics in 2013. Its discovery was the start of a new era of physics, to understand the Higgs in much greater detail.
In August 2020 there was evidence of a Higgs particle decaying into two muons.
There have been many papers published since 2012 on other things besides the Higgs, such as re-discovering and re-measuring the standard model.
Summary of the cross-section measurements of standard model processes (called a dino-plot because of its shape). The horizontal axis shows the different processes/events. The left side involves the W and Z bosons, the carriers of the weak force. The right side shows the presence of top quarks. The measurements form a pattern that looks like the outline of a dinosaur. On the vertical axis, the points close to the “dinosaur head” are the frequent processes; those close to the “tail” are extremely rare.
The frequent processes allow the origin of the particles to be determined.
The top quarks are the heaviest. They were first discovered in 1995 and were very rare at the time. Now the LHC can produce 10 top quarks every second and their mass has been measured.
Unlike the leptons, quarks are confined inside hadrons and are not observed as physical particles. Quark masses therefore cannot be measured directly, but must be determined indirectly through their influence on hadronic properties (how they influence the particles they are in). Although we describe the mass of a quark in the same way as we describe the mass of a lepton, any quantitative statement about the value of a quark mass must make careful reference to the particular theoretical framework that is used to define it. It is important to keep this scheme dependence in mind when using the quark mass values listed.
The same approach was used with the W boson, discovered in 1983. The LHC is still trying to dig into its characteristics.
The tail of the dino-plot shows very rare events.
The LHC is studying the behaviour of the Higgs.
New results from ATLAS and CMS experiments reveal how strongly the Higgs boson interacts with the heaviest known elementary particle, the top quark. Observing this rare process is a significant milestone for the field of high-energy physics; it allows physicists to test critical parameters of the Higgs mechanism in the Standard Model.
Visualization of a data event from the ttH(γγ). The event contains two photon candidates, with a diphoton mass of 125.4 GeV; in addition, six jets are reconstructed, including one jet that is b-tagged using a 77% efficiency working point; the photons correspond to the green towers in the electromagnetic calorimeter, while the jets (b-jets) are shown as yellow (blue) cones. Image credit: ATLAS Collaboration.
In 2018 ATLAS and CMS announced the observation of a Higgs boson produced together with a top quark pair (ttH production), including events in which the Higgs decayed into two photons.
There are deviations from predictions which could be understood if there is new physics. Supersymmetry, for instance.
In a review that was published in Nature (“A new era in the search for dark matter”), physicists Gianfranco Bertone (UvA) and Tim Tait (UvA and UC Irvine) argue that the time has come to broaden and diversify the experimental effort, and to incorporate astronomical surveys and gravitational wave observations in the quest for the nature of dark matter.
Over the past three decades, the search for dark matter has focused mostly on a class of particle candidates known as weakly interacting massive particles (or WIMPs). WIMPs appeared for a long time as a perfect dark matter candidate as they would be produced in the right amount in the early universe to explain dark matter, while at the same time they might alleviate some of the most fundamental problems in the physics of elementary particles, such as the large discrepancy between the energy scale of weak interactions and that of gravitational interactions.
While such a natural solution sounds like a very good idea, none of the many experimental strategies performed to search for WIMPs has found convincing evidence for their existence.
In their paper, Bertone and Tait argue that it is therefore time to enter a new era in the quest for dark matter – an era in which physicists broaden and diversify the experimental effort, leaving as they say “no stone left unturned”.
What makes the current time ripe for such a widened search is that several suitable search methods already exist or are in the process of being completed.
Bertone and Tait in particular point towards astronomical surveys, where tiny effects in the shapes of galaxies, of the dark matter halos around them, and of the gravitationally bent light coming around them, can be observed to learn more about the potential nature of dark matter.
In addition, they mention the new method of observing gravitational waves, successfully carried out for the first time in 2016, as a very useful tool to study black holes – either as dark matter candidates themselves, or as objects with a distribution of other dark matter candidates around them.
Combining these modern methods with traditional searches in particle accelerators should give the search for dark matter a major boost in the near future.
A map of different explanations for the dark matter phenomenon that are currently being investigated. (Image: G. Bertone and T. Tait)
Source: Universiteit van Amsterdam
CMS as well as astrophysicists and cosmologists are looking into these.
The LHC and the birth of the Universe
Alice event display
One of the first heavy-ion collisions with stable beams recorded by ALICE on 25 November 2015. Photographs: Ronchetti, Federico
This is just one collision. There were many more.
For one month the LHC just collided lead ions. The collisions created a large volume of energy – 100,000 times the temperature of the core of the Sun. At these temperatures the protons and neutrons separate into their constituents to give free quarks and gluons (a quark-gluon plasma). This is what the Universe was like 10 microseconds after the Big Bang.
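A rough check of that temperature claim, taking ~1.5 × 10⁷ K as a standard solar-model figure for the Sun’s core and expressing the result as a thermal energy k_B·T (which lands near the ~150–160 MeV deconfinement scale quoted in the literature):

```python
# Scale check: 100,000 times the Sun's core temperature (~1.5e7 K, a
# standard solar-model figure) expressed as a thermal energy k_B * T.
K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV/K

t_qgp_k = 1.5e7 * 1e5                      # ~1.5e12 K
kt_mev = K_B_EV_PER_K * t_qgp_k / 1e6      # convert eV -> MeV
print(f"T ~ {t_qgp_k:.1e} K, k_B*T ~ {kt_mev:.0f} MeV")  # ~129 MeV
```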
LHC collisions are like mini-bangs. They only exist for a fraction of a second. During this time there are lots of collisions between quarks and gluons. Eventually particles are formed from the quark gluon plasma.
The lead collisions produce 10,000 particles. Each line in the above pictures is a trajectory of a particle that has been produced. This allows an image of the processes involved in the early Universe using quantum chromodynamics.
In theoretical physics, quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons, the fundamental particles that make up composite hadrons such as the proton, neutron and pion.
The QCD analogue of electric charge is a property called colour. Gluons are the force carrier of the theory, like photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics.
Jet quenching – So dense a very energetic object disappears
In high-energy physics, jet quenching is a phenomenon that can occur in the collision of ultra-high-energy particles. In general, the collision of high-energy particles can produce jets of elementary particles that emerge from these collisions. Collisions of ultra-relativistic heavy-ion particle beams create a hot and dense medium comparable to the conditions in the early universe, and then these jets interact strongly with the medium, leading to a marked reduction of their energy. This energy reduction is called “jet quenching”.
(Above left) Jet collimation: soft gluons emitted at small angles are transported outside the jet cone by their multiple random scatterings with medium components. (Above right) This event in the first lead-ion run of the LHC has two nearly back-to-back jets of particles from a single event. Their momenta should have about the same magnitude (conservation laws!) but the jet at the top right falls well short of the jet at the bottom left. The jet on the right seems to have interacted with the quark-gluon plasma and transferred some of its initial momentum to the particles which make up the plasma, resulting in a lower momentum measured in the calorimeter. This is “jet-quenching.” The plasma is like that which existed in the very beginning of our universe: could there have been similar effects as particles emerged from that hot primordial soup? Credit: Copyright CERN on behalf of the CMS Collaboration.
The left side of the above image shows two energetic objects, always flying in opposite directions. Depending on where they originate in the quark-gluon plasma, only one may be absorbed. The right side of the image is what you actually see in the experiment.
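The imbalance is often quantified by the dijet asymmetry A_J = (pT1 − pT2)/(pT1 + pT2); a sketch with invented jet momenta:

```python
# Dijet momentum asymmetry A_J = (pT1 - pT2) / (pT1 + pT2), a standard
# quantity used to characterise jet quenching. The momenta are invented.

def dijet_asymmetry(pt_leading: float, pt_subleading: float) -> float:
    """0 for perfectly balanced jets; larger values mean more imbalance."""
    return (pt_leading - pt_subleading) / (pt_leading + pt_subleading)

print(dijet_asymmetry(100.0, 100.0))           # 0.0: balanced, no quenching
print(round(dijet_asymmetry(100.0, 60.0), 2))  # 0.25: the second jet lost energy
```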
LHC discovery: J/Psi Regeneration
The J/psi meson is a subatomic particle, a flavor-neutral meson consisting of a charm quark and a charm antiquark. Mesons formed by a bound state of a charm quark and a charm anti-quark are generally known as “charmonium”. It has a mean lifetime of 7.2 × 10⁻²¹ s.
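That lifetime implies a natural decay width through the uncertainty relation Γ = ħ/τ; a quick check:

```python
# Natural decay width from the mean lifetime via Gamma = hbar / tau.
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

tau = 7.2e-21                          # s, J/psi mean lifetime from the text
gamma_kev = HBAR_EV_S / tau / 1e3      # convert eV -> keV
print(f"Gamma ~ {gamma_kev:.0f} keV")  # ~91 keV, close to the PDG value of ~93 keV
```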
Its discovery was made independently by two research groups, one at the Stanford Linear Accelerator Center and one at the Brookhaven National Laboratory. They discovered they had actually found the same particle, and both announced their discoveries on 11 November 1974. The importance of this discovery is highlighted by the fact that the subsequent, rapid changes in high-energy physics at the time have become collectively known as the “November Revolution”.
The Relativistic Heavy Ion Collider (RHIC) is the first and one of only two operating heavy-ion colliders, and the only spin-polarized proton collider ever built. Located at Brookhaven National Laboratory (BNL) in Upton, New York, and used by an international team of researchers, it is the only operating particle collider in the US. By using RHIC to collide ions traveling at relativistic speeds, physicists study the primordial form of matter that existed in the universe shortly after the Big Bang. By colliding spin-polarized protons, the spin structure of the proton is explored.
The J/psi meson doesn’t survive passing through a quark-gluon plasma because the plasma is too hot. Yet in lead-lead ion collisions a few of them are still found.
When the quark-gluon plasma cools down the charm and anti-charm quarks find each other to form new J/psi mesons.
In the above diagrams, the left side is a cartoon in which the blue circle represents the charm quark. The right side is a histogram showing that more J/psi mesons were produced at the LHC.
Initial findings of the regeneration of J/psi mesons were predicted by theory but had not been seen before.
Basic physics knows nothing about left or right: they should behave the same, as should positive and negative charges. The same applies for the strong interaction, but not for the weak interaction (the process that keeps the Sun shining).
A Parity (P) operation on a system of interacting particles means to replace that system with its mirror image. It is a spatial inversion operation that has the effect of changing left-handed particles to right-handed ones and vice versa. P violation occurs when the rate for a particle interaction is different for the mirror image of that interaction. Electromagnetic and strong nuclear forces have the same strength for left-handed and right-handed particles. So, parity is a good symmetry for these interactions and is said to be conserved by them. But the weak nuclear force is asymmetric for right-handed and left-handed particles and thus violates parity.
A Parity-Violation experiment with Clocks
Imagine building a Real Clock and placing it next to a mirror, so that you can view its mirror image. Then imagine building the real version of the mirror image, including left-handed screw and gear threads instead of right-handed ones. If there were a small asymmetry in the mechanical workings of the “Real” and “Real Mirror” Clocks, then one would observe them to keep different time. The size of the parity-violating mirror asymmetry in E-158 is 130 parts-per-billion (ppb). If the “Real” and “Real Mirror” Clocks had this size asymmetry in their mechanical workings, they would exhibit a time difference of 1 hour after about 1000 years!
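The quoted figure can be checked with a little arithmetic. A minimal sketch in Python, using only the 130 ppb asymmetry value from the text above:

```python
# Rough check of the parity-violating clock analogy: at a fractional rate
# asymmetry of 130 parts-per-billion, how long until two clocks differ by
# one hour?
asymmetry = 130e-9          # fractional rate difference between the clocks
target_difference_hours = 1.0

hours_needed = target_difference_hours / asymmetry   # ~7.7 million hours
years_needed = hours_needed / (24 * 365.25)

print(f"{years_needed:.0f} years")  # roughly 900 years, i.e. of order 1000
```

So the clocks would drift apart by an hour after roughly nine centuries, consistent with the "about 1000 years" quoted.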
In particle physics, a kaon, also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark). They are seen in down and charm decays.
Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery in cosmic rays in 1947. They were essential in establishing the foundations of the Standard Model of particle physics, such as the quark model of hadrons and the theory of quark mixing (the latter was acknowledged by a Nobel Prize in Physics in 2008). Kaons have played a distinguished role in our understanding of fundamental conservation laws: CP violation, a phenomenon generating the observed matter–antimatter asymmetry of the universe, was discovered in the kaon system in 1964 (which was acknowledged by a Nobel Prize in 1980). Moreover, direct CP violation was discovered in the kaon decays in the early 2000s by the NA48 experiment at CERN and the KTeV experiment at Fermilab.
Electrons were found to have an antiparticle. Initially the positron was only a theoretical particle, proposed to solve a problem.
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. The positron has an electric charge of +1 e, a spin of 1/2 (the same as the electron), and has the same mass as an electron. When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more photons.
Positrons can be created by positron emission radioactive decay (through weak interactions), or by pair production from a sufficiently energetic photon which is interacting with an atom in a material.
In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions.
Carl David Anderson discovered the positron on 2 August 1932, for which he won the Nobel Prize for Physics in 1936. This was the first evidence of anti-matter.
Cloud chamber photograph by C. D. Anderson of the first positron ever identified. A 6 mm lead plate separates the chamber. The deflection and direction of the particle’s ion trail indicate that the particle is a positron.
Why study the difference between matter and anti-matter?
Mainly to explain why we have more matter than anti-matter in the Universe.
In the 1960’s positrons started to be used in the treatment of certain medical conditions.
Positron emission tomography (PET) is a medical technique that can be used to detect cancer, measure blood flow and detect coronary artery disease. In PET scans, “a small amount of radioactive substance is injected into a person, which produces positrons upon decaying within the body. By detecting the high-energy photons (gamma rays) produced in the annihilation of positrons with electrons in the body, a map can be made of where the substance has spread within the body.” While antimatter may never be used as a bomb, it certainly has a positive future in life-saving medical diagnostic tools, the anti-weapon.
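The gamma rays a PET scanner detects each carry the electron's rest energy. A quick check of that number from E = mc², using standard constants (nothing specific to the lecture):

```python
# Each photon from low-energy e+ e- annihilation carries the electron's
# rest energy, E = m_e * c^2 -- the ~511 keV gamma rays PET scanners detect.
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
e = 1.602176634e-19      # joules per electronvolt

energy_kev = m_e * c**2 / e / 1e3
print(f"{energy_kev:.1f} keV")  # ~511.0 keV
```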
CERN is a unique international research infrastructure whose societal impacts go well beyond advancing knowledge in high-energy physics. These do not just include technological spillovers and benefits to industry, or unique inventions such as the World Wide Web, but also the training of skilled individuals and wider cultural effects. The scale of modern particle-physics research is such that single projects, such as the Large Hadron Collider (LHC) at CERN, offer an opportunity to weigh up the returns on public investment in fundamental science.
Upgrade of the LHC
The LHC runs in cycles that last about three years, followed by a shutdown. The shutdown is necessary for two reasons.
The first reason is for repair. The equipment has to work under severe radiation and strong magnetic fields. It is not easy to get in and replace things, especially whilst experiments are being carried out.
The second reason is to upgrade the equipment.
The High Luminosity Large Hadron Collider (HL-LHC; formerly SLHC, Super Large Hadron Collider) is an upgrade to the Large Hadron Collider started in June 2018 that will boost the accelerator’s potential for new discoveries in physics, starting in 2027. The upgrade aims at increasing the luminosity of the machine by a factor of 10, up to 10³⁵ cm⁻²s⁻¹, providing a better chance to see rare processes and improving statistically marginal measurements.
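To get a feel for what that luminosity means, the event rate for a process is its cross-section times the instantaneous luminosity. A rough sketch, where the ~50 pb Higgs production cross-section is an illustrative round number of my own, not a figure from the talk:

```python
# Event rate = cross-section * instantaneous luminosity.
# Illustrative numbers: the HL-LHC design luminosity quoted above, and an
# assumed ~50 pb total Higgs production cross-section.
luminosity = 1e35          # cm^-2 s^-1 (HL-LHC target)
picobarn = 1e-36           # cm^2
cross_section = 50 * picobarn

rate_hz = cross_section * luminosity
print(f"about {rate_hz:.0f} Higgs bosons per second")  # ~5 per second
```

A factor-of-10 jump in luminosity multiplies the rate of every such rare process by the same factor, which is why it shrinks the statistical error bars.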
The current shutdown will last until 2022.
Between 2022 and 2024 it is expected that the LHC will collect more data and discover interesting stuff.
In the 2025 shutdown a lot of new stuff will be done. Things will be changed including replacing and upgrading magnets.
ALICE and CMS will have their detectors rebuilt.
Above right: A typical candidate event for the Higgs including two high-energy photons whose energy (depicted by dashed yellow lines and red towers) is measured in the CMS electromagnetic calorimeter. The yellow lines are the measured tracks of other particles produced in the collision. Overlapping collisions makes it hard to find what you want.
A proton-proton collision event at centre-of-mass energy 13 TeV, collected by the CMS experiment in 2018, in which a candidate Higgs boson produced via the gluon fusion mode decays into a pair of muons, indicated by the red lines.
Around 100 simultaneous proton–proton collisions in an event recorded by the CMS experiment (Image: Thomas McCauley/CMS/CERN)
This will double after the next great upgrade.
The detectors need to be upgraded to cope with the environment and the upcoming increase in the LHC’s particle collision rate.
Since the LHC started running only 5% of the data expected in its lifetime has been collected.
Improvements are also needed in how data is analysed. CERN is deciding whether to use CPUs or GPUs.
A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry within a computer that executes instructions that make up a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program. The computer industry used the term “central processing unit” as early as 1955. Traditionally, the term “CPU” refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.
A graphics processing unit (GPU) is a specialized, electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. In certain CPUs, they are embedded on the CPU die.
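As a rough illustration of why that highly parallel structure suits physics data, a typical analysis step applies the same operation independently to every event. A sketch with NumPy (running on the CPU here, but written in the data-parallel style that a GPU accelerates; the "energy cut" is a made-up example):

```python
import numpy as np

# One million simulated "energy" values, deterministic for reproducibility:
# the values 0..99 repeated 10,000 times.
energies = np.arange(1_000_000, dtype=np.float64) % 100.0

# Element-wise selection applied to the whole array at once, instead of
# looping over events one at a time -- the pattern GPUs execute efficiently.
selected = energies[energies > 50.0]
print(f"kept {selected.size} of {energies.size} values")
```

Because every element is processed independently, the work can be spread across thousands of GPU cores with no coordination between them.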
It is likely that GPUs will be chosen.
To test the GPUs they got a lot of teenagers to play games on computers that used them 24/7.
High energy physics is driving the need for better computers.
Hardware upgrades are required. ALICE is expected to produce 100 times more data when it resumes in 2022.
CMS hardware will be upgraded in five years.
Research and development takes place in the calibration institute.
Leading-edge pixel sensors
Monolithic pixel sensors used to track particles are also used in smartphones. They are very advanced.
At the moment ALICE uses a 4000 Mpixel sensor array which acts like a huge digital camera. This is being upgraded to 12000 Mpixels in the next few months and can be seen in the above picture, the golden fan.
Often what is needed pushes the technology.
Single chips are now used. They are lighter and only 50 μm thick, thinner than a human hair.
Reducing the thickness to 20 μm allows them to be bent around the collision region, giving better coverage and detection.
The three little images at the bottom of the above picture show the design process for the bendy pixel sensor. It is 20 cm longer than the current sensor.
Even better detectors are expected in five years. This technology could be used in mobile phones.
The Large Hadron Collider took about a decade to construct, for a total cost of about $4.75 billion and 1000s of physicists, engineers etc. were and are involved.
ALICE is hoping to find out more about the quark-gluon plasma, and to produce lots of data to improve the precision and find new physics. There should be evidence of di-Higgs production.
Increasing the energy to 8 TeV should produce lots of new particles and allow the exploration of new models of dark matter.
All the different types of leptons should behave in the same way but it has been found that the electron behaves a little differently from the muon and the muon behaves a little differently from the tau.
New data can cause the difference between the expected and actual values to disappear.
Display of a simulated HL-LHC collision event in an upgraded ATLAS detector: a top–antitop event with 200 pile-up collisions per bunch crossing in the ATLAS Phase-II tracker. The zoom at bottom left shows the collision region, stretched to be 9 cm long and 11 mm tall. The tracks from the top quarks are highlighted. They all come from the same interaction, and the displaced vertices from b-hadron decays have been reconstructed. (Image: ATLAS Collaboration/CERN)
The display of an event with a Higgs boson produced in the VBF process on top of 200 pile-up collisions. The efficient identification of the forward jets accompanying this process requires association of the calorimeter energy deposits with charged tracks. Image credit: CMS.
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, to the point where one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and black hole event horizons).
Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the “best step” towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.
ATLAS and CMS have performed numerous new physics searches, probing a large region of parameter space for a variety of BSM models:
– In many cases, mass limits are now approaching or, in some cases, exceeding 10 TeV
Every search comes with assumptions on the nature of new particles:
– One is that new particles will be short-lived
Challenging these assumptions is more important than ever as there is no significant evidence of BSM physics at the LHC
● Long-lived particles (LLP) in particular are predicted by a wide range of theoretical models:
– Small coupling constants — e.g., SUSY with R-parity violating (RPV) couplings
– Very off-shell intermediate decay products — e.g., split SUSY where heavy intermediate squarks enhance the gluino lifetime
– Limited decay phase space — e.g., AMSB SUSY where the lightest neutralino and chargino are nearly degenerate
Experimentally, these models result in a rich mixture of possible signatures in the detectors.
● Both ATLAS and CMS have extensive programs of searches for these signatures and have set limits on them across many orders of magnitude in lifetime.
Further research might lead to evidence of weakly interacting particles which might behave in an unconventional way.
Weakly interacting massive particles (WIMPs) are hypothetical particles that are one of the proposed candidates for dark matter. There exists no clear definition of a WIMP, but broadly, a WIMP is a new elementary particle which interacts via gravity and any other force (or forces), potentially not part of the Standard Model itself, which is as weak as or weaker than the weak nuclear force, but also non-vanishing in its strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter.
Question and answers
1) Does triggering bias findings?
The LHC experiments take care not to be biased. They collect samples of unbiased events and control samples.
Do things get ignored if triggers are based on the standard model?
They use general criteria and capture as much as possible, so the focus isn’t only on what is already known.
There are certain particles that might escape the detector. This might be recognised because a momentum imbalance occurs.
They try to trigger on a momentum imbalance, as it could be a signature of lots of new physics, giving a lot of information we don’t yet understand.
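A momentum-imbalance trigger rests on a simple computation: sum the transverse momentum vectors of all visible particles and look at the magnitude of what is left over. A toy sketch with hypothetical numbers, not a real trigger implementation:

```python
# Minimal sketch of a missing-transverse-momentum computation: sum the
# transverse momentum vectors of all visible particles; a large imbalance
# hints that something invisible carried momentum away.
import math

def missing_pt(particles):
    """particles: list of (px, py) in GeV for every visible particle."""
    sum_px = sum(px for px, _ in particles)
    sum_py = sum(py for _, py in particles)
    # The missing pT points opposite to the visible sum; its magnitude is
    # what a trigger would cut on.
    return math.hypot(sum_px, sum_py)

# Toy event: two visible particles that do not balance each other.
event = [(30.0, 0.0), (-10.0, 5.0)]
print(f"missing pT = {missing_pt(event):.1f} GeV")  # ~20.6 GeV
```

In a perfectly measured event with only visible particles this sum would be zero, so a large value is a model-independent hint of something escaping the detector.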
2) Is the graviton the gravitational equivalent of the photon?
The graviton is not part of the standard model and gravity is very different from the electromagnetic force.
The graviton could be an important part of any extra dimensions beyond the three plus time that we already know about. The graviton “sees” these extra dimensions and could disappear into them, which is why it would escape detection.
3) Did Covid-19 affect the LHC?
It has probably caused a three-month delay. The computer centre was able to keep working.
4) Can the public look at data?
Open data will be operational in a few years. Some data is already used for school activities.
5) Desired outcomes from future work
a) Find out what dark energy and dark matter are
b) Prove the existence of SUSY particles or stop looking
c) More information about heavy ions
d) Observe di-Higgs production.