Introduction to Accelerator Physics and Particle Colliders
6th July 2019: Organized by CERN, JAI, STFC & University of Oxford
Professor Emmanuel Tsesmelis
CERN & John Adams Institute for Accelerator Science
My notes from the lecture (if they don’t make sense then it is entirely my fault)
Colliders investigate the fundamental building blocks of nature
Accelerators and beams are the tools of discovery
Beam dynamics is used to understand the motion of particles in linear and circular accelerators, to understand the fundamentals of existing machines, to optimise and commission accelerators, and to design new and novel machines.
To do this you need to know how to calculate the motion of a charged particle in a real magnetic field. This involves magneto-static configurations, time-dependent fields, optics, deciding on the approximations used, identifying how the particles interact with their surroundings, and how particles in a bunch interact with one another.
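As a concrete instance of such a calculation, the dipole field needed to keep a beam on its circular orbit follows from the magnetic rigidity, Bρ = p/q. A minimal Python sketch; the 6.5 TeV momentum and ~2804 m bending radius are assumed LHC-like values for illustration, not figures from these notes:

```python
# Magnetic rigidity: a particle of momentum p (GeV/c) and charge q (units of e)
# bends with radius rho in a field B, where B*rho [T.m] ~= 3.3356 * p / q.

def dipole_field(p_gev, bending_radius_m, charge=1):
    """Dipole field (tesla) needed to bend a particle of momentum p_gev
    [GeV/c] on a circle of radius bending_radius_m [m]."""
    rigidity = 3.3356 * p_gev / charge   # T.m
    return rigidity / bending_radius_m

# Assumed LHC-like numbers: 6.5 TeV protons, ~2804 m dipole bending radius.
b = dipole_field(6500.0, 2804.0)
print(f"Required dipole field: {b:.1f} T")
```

With these inputs the field comes out at roughly 7.7 T, which is why the LHC needs superconducting magnets.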
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to very high speeds and energies and to contain them in well-defined beams.
Large accelerators are used for basic research in particle physics. The most powerful accelerator currently is the Large Hadron Collider (LHC) near Geneva, Switzerland, built by the European collaboration CERN. It is a collider accelerator, which can accelerate two beams of protons to an energy of 6.5 TeV and cause them to collide head-on, creating centre-of-mass energies of 13 TeV. Other powerful accelerators are KEKB at KEK in Japan, RHIC at Brookhaven National Laboratory, and, until its shutdown in 2011, the Tevatron at Fermilab in Batavia, Illinois. Accelerators are also used as synchrotron light sources for the study of condensed matter physics. Smaller particle accelerators are used in a wide variety of applications, including particle therapy for oncological purposes, radioisotope production for medical diagnostics, ion implanters for the manufacture of semiconductors, and accelerator mass spectrometers for measurements of rare isotopes such as radiocarbon. There are currently more than 30,000 accelerators in operation around the world.
An accelerator propels charged particles, such as protons or electrons, at high speeds, close to the speed of light. They are then smashed either onto a target or against other particles circulating in the opposite direction. By studying these collisions, physicists are able to probe the world of the infinitely small.
When the particles are sufficiently energetic the energy of the collision is transformed into matter in the form of new particles, the most massive of which existed in the early Universe. This phenomenon is described by Einstein’s famous equation E=mc2, according to which matter is a concentrated form of energy, and the two are interchangeable.
In the first part of the accelerator, an electric field strips hydrogen atoms (consisting of one proton and one electron) of their electrons. Electric fields along the accelerator switch from positive to negative at a given frequency, pulling charged particles forwards along the accelerator. CERN engineers control the frequency of the change to ensure the particles accelerate not in a continuous stream, but in closely spaced “bunches”.
To make the protons, physicists inject hydrogen gas into a metal cylinder, the duoplasmatron, and apply an electric field to break the gas down into its constituent protons and electrons. This process yields about 70 per cent protons.
The process is simplified as follows:
For the LHC beam,
2808 bunches × 1.15 × 10^11 protons per bunch ≈ 3 × 10^14 protons per beam or, 6 × 10^14 protons for the two beams (1)
A single cubic centimetre of hydrogen gas at room temperature contains
with P = 10^5 Pa, V = 10^-6 m^3, T = 293 K
using PV = nRT:
n = 4 × 10^-5 moles, N = 4 × 10^-5 × 6 × 10^23 = 2.4 × 10^19 molecules
So, about 5 × 10^19 atoms of hydrogen (2)
Taking into account (1) and (2), the LHC can be refilled about 100000 times with just one cubic centimetre of gas – and it only needs refilling twice a day!
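The arithmetic in (1) and (2) is easy to reproduce; a quick Python check (with these exact inputs the refill count comes out nearer 8 × 10^4 than 10^5, consistent with "about 100,000" after rounding):

```python
# Back-of-envelope check: hydrogen atoms in 1 cm^3 of gas at room conditions
# (ideal-gas law, PV = nRT) versus protons needed per LHC fill.
R = 8.314          # gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number, 1/mol

P, V, T = 1e5, 1e-6, 293.0          # Pa, m^3, K
n = P * V / (R * T)                 # moles of H2
atoms = 2 * n * N_A                 # two H atoms per molecule

protons_per_fill = 2 * 2808 * 1.15e11   # both beams, from (1)
refills = atoms / protons_per_fill
print(f"{atoms:.1e} H atoms, enough for ~{refills:.0f} fills")
```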
The particles are accelerated by a 90 kV supply and leave the duoplasmatron at 1.4% of the speed of light, i.e. ~4,000 km/s.
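The quoted exit speed follows from relativistic kinematics: the 90 kV supply gives each proton 90 keV of kinetic energy, so γ = 1 + KE/(mc²). A short check in Python:

```python
import math

# Speed of a proton after acceleration through 90 kV (kinetic energy 90 keV),
# using the relativistic relation gamma = 1 + KE / (m c^2).
M_P = 938.272e6      # proton rest energy, eV
C = 2.998e8          # speed of light, m/s

ke = 90e3            # eV, from the 90 kV supply
gamma = 1 + ke / M_P
beta = math.sqrt(1 - 1 / gamma**2)
print(f"beta = {beta:.4f}  ->  v ~ {beta * C / 1e3:.0f} km/s")
```

This gives β ≈ 0.014, i.e. roughly 4,100 km/s, in line with the figure above.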
Then they are sent to a radio-frequency quadrupole (RFQ), an accelerating component that both speeds up and focuses the particle beam. From the RFQ, the particles are sent to the linear accelerator (LINAC2).
Electrons are produced by thermionic emission and are then accelerated along an electron linear accelerator, e.g. the LEP Injector Linac (LIL). Some of the electrons are collided with a tungsten target to produce positrons.
The classical example of thermionic emission is that of electrons from a hot cathode into a vacuum (also known as thermal electron emission or the Edison effect) in a vacuum tube. The hot cathode can be a metal filament, a coated metal filament, or a separate structure of metal or carbides or borides of transition metals. Vacuum emission from metals tends to become significant only for temperatures over 1,000 K (730 °C).
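The steep temperature dependence behind that ~1,000 K threshold is captured by the Richardson–Dushman law, J = A T² exp(−W/kT). A hedged sketch; the tungsten work function of ~4.5 eV is an assumed textbook value, not a number from the lecture:

```python
import math

# Richardson-Dushman law for thermionic emission: J = A * T^2 * exp(-W / kT),
# with A ~ 1.2e6 A m^-2 K^-2 and work function W in eV.
A = 1.2e6            # Richardson constant, A / (m^2 K^2)
K_B = 8.617e-5       # Boltzmann constant, eV / K

def emission_current_density(temp_k, work_function_ev=4.5):
    """Emitted current density (A/m^2) for an assumed tungsten-like cathode."""
    return A * temp_k**2 * math.exp(-work_function_ev / (K_B * temp_k))

for t in (1000, 2000, 2500):
    print(f"T = {t} K: J = {emission_current_density(t):.2e} A/m^2")
```

The exponential means emission is utterly negligible at room temperature and grows by many orders of magnitude per thousand kelvin, which is why hot cathodes run incandescent.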
Gamma-gamma colliders are all about making intense beams of gamma rays and having them collide so as to make elementary particles such as electrons and positrons. Constructing a gamma-gamma collider as an add-on to an electron-positron linear collider is possible with present technology and it does not require much additional cost.
Conventional positron sources smash a high-energy electron beam into a metallic target, creating pairs of electrons and positrons. Those positrons are then collected and formed into bunches in another accelerator structure. However, the collision process damages the target, and it couldn’t withstand the much higher intensity electron beams that would be required for future machines.
An alternative approach is to use the electron beam to generate a high-intensity beam of photons, which would then hit a target and release positrons. To generate the photons, the electron beam would travel through a device called an undulator, which uses magnets to force the beam along a path that causes the electrons to emit photons with the right properties. Conventional undulators wiggle the electrons back and forth, usually causing them to emit linearly polarized photons in the forward direction. But the post-LHC experiments will require polarized positrons—particles with their spins aligned—which can be created by circularly polarized photons. To make these, the undulator must force electrons to follow a helical path.
The LHC typically starts with neutral lead atoms, which travel through a chain of smaller particle accelerators. Electric fields strip many of the electrons from the lead atoms. But the atoms lose their remaining electrons when they pass through a metal foil before entering the 17-mile-round ring. Adjusting the foil's thickness can leave the lead atoms with just one electron attached.
Electron linacs for conventional radiation therapy, including advanced modalities:
•IntraOperative RT (IORT)
•Intensity Modulated RT
Low-energy cyclotrons for production of radionuclides for medical diagnostics
Medium-energy cyclotrons and synchrotrons for hadron therapy with protons (250 MeV) or light ion beams (400 MeV/u carbon-12 ions)
Neutron generators are neutron source devices which contain compact linear particle accelerators and that produce neutrons by fusing isotopes of hydrogen together. The fusion reactions take place in these devices by accelerating either deuterium, tritium, or a mixture of these two isotopes into a metal hydride target which also contains deuterium, tritium or a mixture of these isotopes. Fusion of deuterium atoms (D + D) results in the formation of a He-3 ion and a neutron with a kinetic energy of approximately 2.5 MeV. Fusion of a deuterium and a tritium atom (D + T) results in the formation of a He-4 ion and a neutron with a kinetic energy of approximately 14.1 MeV. Neutron generators have applications in medicine, security, and materials analysis.
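The 2.5 MeV and 14.1 MeV figures follow from two-body kinematics: with the reactants nearly at rest, momentum conservation gives the neutron a share m_partner/(m_n + m_partner) of the reaction Q-value. A quick check, using mass numbers as approximate mass ratios:

```python
# Two-body kinematics for fusion neutrons. Q-values (D-D: ~3.27 MeV,
# D-T: ~17.59 MeV) are standard reference figures assumed here.

def neutron_energy(q_mev, a_neutron, a_partner):
    """Neutron kinetic energy (MeV) when a reaction of Q-value q_mev
    splits into a neutron (mass number a_neutron) and a partner nucleus."""
    return q_mev * a_partner / (a_neutron + a_partner)

# D + D -> He-3 + n
print(f"D-D neutron: {neutron_energy(3.27, 1, 3):.2f} MeV")
# D + T -> He-4 + n
print(f"D-T neutron: {neutron_energy(17.59, 1, 4):.2f} MeV")
```

This reproduces ~2.45 MeV for D-D and ~14.07 MeV for D-T, matching the rounded values quoted above.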
Most of the new neutron sources (SNS, ISIS TS2, JPARC, ESS, CSNS) which have been or are being built in the world are based on spallation reactions with GeV- proton beams impinging on heavy metal targets (Hg, W, Pb).
A synchrotron light source is a source of electromagnetic radiation (EM) usually produced by a storage ring, for scientific and technical purposes. First observed in synchrotrons, synchrotron light is now produced by storage rings and other specialized particle accelerators, typically accelerating electrons. Once the high-energy electron beam has been generated, it is directed into auxiliary components such as bending magnets and insertion devices (undulators or wigglers) in storage rings and free electron lasers. These supply the strong magnetic fields perpendicular to the beam which are needed to convert high energy electrons into photons.
The major applications of synchrotron light are in condensed matter physics, materials science, biology and medicine. A large fraction of experiments using synchrotron light involve probing the structure of matter from the sub-nanometre level of electronic structure to the micrometre and millimetre level important in medical imaging. An example of a practical industrial application is the manufacturing of microstructures by the LIGA process.
A collider is a type of particle accelerator involving directed beams of particles. Colliders may either be ring accelerators or linear accelerators, and may collide a single beam of particles against a stationary target or two beams head-on.
Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the by-products of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways.
• Processing of Polymers (cross-linking, curing, gemstone colouring) – To make them stronger, heat-resistant or heat-shrinkable, to make them dry faster, or to change their colour.
• Sterilisation – Use of electrons or X-rays to kill pathogens or prevent undesirable changes.
• Non-destructive testing (NDT) – Inspection of components for flaws or hidden features. High-energy X-rays are needed for thick components.
• Ion implanting in chip fabrication – To dope semiconductors to alter near surface properties by placing ions at specific locations and depths.
• Wastewater and flue-gas treatment – To remove contaminants or by-products of other processes.
Ion implantation is a low-temperature process by which ions of one element are accelerated into a solid target, thereby changing the physical, chemical, or electrical properties of the target. Ion implantation is used in semiconductor device fabrication and in metal finishing, as well as in materials science research. The ions can alter the elemental composition of the target (if the ions differ in composition from the target) if they stop and remain in the target. Ion implantation also causes chemical and physical changes when the ions impinge on the target at high energy. The crystal structure of the target can be damaged or even destroyed by the energetic collision cascades, and ions of sufficiently high energy (tens of MeV) can cause nuclear transmutation.
Particle Accelerators Could Be Used to Produce Energy (and Plutonium) https://www.popsci.com/science/article/2010-08/forgotten-physics-paper-suggests-using-particle-accelerators-produce-energy/
What does particle physics have to do with solar energy? http://www.kopernik.org.pl/en/orientuj-sie/sloneczny-cern/
Overview of accelerator applications for security and defence https://www.osti.gov/servlets/purl/1259508
Industrial impact of particle physics https://inis.iaea.org/collection/NCLCollectionStore/_Public/46/058/46058406.pdf
An Industrial Physics Toolkit https://www.aps.org/units/fgsa/activities/upload/cumming.pdf
The fundamental building blocks of matter https://courses.physics.illinois.edu/phys150/fa2003/slides/lect24.pdf
The Basic Building Blocks of Matter https://www.learner.org/courses/physics/unit/pdfs/unit1.pdf
New Element 115 Takes a Seat at the Periodic Table http://science.time.com/2013/08/28/new-element-115-takes-a-seat-at-the-periodic-table/
The interdisciplinary field of materials science, also commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest.
Medical Applications of Particle Physics https://indico.cern.ch/event/505656/contributions/2178989/attachments/1286957/1914803/Medical_applications_-_Sparsh.pdf
The Three Frontiers
At the Energy Frontier, for instance, we use high-energy colliders, such as the Tevatron and the Large Hadron Collider, to search for new particles and forces that provide information on the makeup of matter and space.
The Intensity Frontier is one of three broad approaches to particle physics research, each characterized by the tools it employs.
The strategy of research at the Intensity Frontier is to generate the huge numbers of particles needed to study rare subatomic processes, such as the transformation of one type of neutrino into another or the not-yet-observed conversion of a muon into an electron. This requires extreme machines, multi-megawatt proton accelerators that produce high-intensity beams of particles.
At the Cosmic Frontier, we scan the heavens with particle detectors and telescopes to learn more about cosmic rays, dark matter, and dark energy, and to understand the role they have played in the evolution of the universe.
Historically, investigating cosmic rays was the starting point of this research.
The origin of mass
Ordinary matter is made up of protons, neutrons and electrons. The electrons are extremely light: one nucleon weighs as much as about 2,000 electrons. For all practical purposes, the mass of atoms is located in the nucleons (protons and neutrons).
It is commonly said that nucleons are made of three quarks, which is true to a point. It is logical to think that each quark has one third the mass of the nucleon, but that’s not actually true. The mass of the three quarks in the nucleons make up only about one to two percent of the mass of the nucleons. What makes up the other 98 percent?
A nucleon is not a static object with three ingredients. A nucleon consists of three very light quarks held together by the strong nuclear force. Those three quarks are moving at high velocities inside the nucleon. To picture this, imagine three ping pong balls in a lottery machine. Those ping pong balls aren’t the most important thing; rather, you should focus on what’s forcing them into motion. Think of nucleons as three quark flecks, tossed furiously inside a little subatomic tornado. The tornado is far more important than the tiny flecks.
This is related to mass through Einstein’s familiar equation, E = mc2. This equation says that mass and energy are one and the same. From what we know about the mass of nucleons, we see that approximately 98 percent of the mass of the universe isn’t mass in the usual way we think about it. Rather, the mass is stored in the energy of tiny subatomic energy dust devils.
How does the Higgs boson fit into all this? While the mass of the nucleons (and, by extension, most of the visible universe) is caused by the energy stored up in the force field of the strong nuclear force, the mass of the quarks themselves comes from a different source. The mass of the quarks and the leptons is thought to be caused by the Higgs boson. It’s important to remember that “is thought to be caused” merely means that this is the most popular theoretical proposal. In fact, we don’t really know why the quarks and leptons have the masses that they do. That’s why the search for the Higgs boson was so interesting. Trying to solve a mystery is always great fun.
However, no matter how interesting the question of the Higgs boson, it’s not the dominant source of mass in the universe. Well-understood physics, governed by the strong nuclear force, is why you have the mass you do.
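The "one to two percent" figure quoted above is easy to verify with rough numbers. A sketch using approximate PDG-style current-quark masses (assumed here, not given in the lecture):

```python
# Approximate current-quark masses: up ~ 2.2 MeV, down ~ 4.7 MeV.
# A proton is uud; compare the summed quark masses with the proton mass.
M_U, M_D = 2.2, 4.7          # MeV
M_PROTON = 938.3             # MeV

quark_sum = 2 * M_U + M_D    # proton quark content: u + u + d
fraction = quark_sum / M_PROTON
print(f"Quark masses supply only {fraction:.1%} of the proton mass")
```

The sum comes to about 9 MeV out of 938 MeV, i.e. roughly 1%; the remaining ~99% is the QCD binding energy described above.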
In theoretical physics, a mass generation mechanism is a theory that describes the origin of mass from the most fundamental laws of physics. Physicists have proposed a number of models that advocate different views of the origin of mass. The problem is complicated because the primary role of mass is to mediate gravitational interaction between bodies, and no theory of gravitational interaction reconciles with the currently popular Standard Model of particle physics.
There are two types of mass generation models: gravity-free models and models that involve gravity.
The origin of mass is one of the central unresolved questions in modern physics. For quite a long time physicists have been looking for a reason why matter should display inertial behaviour. The search for the Higgs boson is one example of this. However, the Higgs model is not the answer to this question.
Newtonian mechanics suggested mass as a primary quality of matter without the need of further explanation; however, Newtonian mass is just the start. The mass concept is tremendously useful in the approximate description of baryon-dominated matter at low energy – that is, the standard “matter” of everyday life, and of most of science and engineering – but it emerges from more basic concepts. Most of the mass of standard matter, by far, arises dynamically, from the back-reaction of the colour gluon fields of quantum chromodynamics (QCD). Additional quantitatively small, though physically crucial, contributions come from the intrinsic masses of elementary quanta (electrons and quarks).
In theoretical physics, quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons, the fundamental particles that make up composite hadrons such as the proton, neutron and pion.
The Higgs field gives mass to fundamental particles—the electrons, quarks and other building blocks that cannot be broken into smaller parts. But these still only account for a tiny proportion of the universe’s mass.
The rest comes from protons and neutrons, which get almost all their mass from the strong nuclear force. These particles are each made up of three quarks moving at breakneck speeds that are bound together by gluons, the particles that carry the strong force. The energy of this interaction between quarks and gluons is what gives protons and neutrons their mass. Einstein’s famous E=mc2 equates energy and mass. That makes mass a secret storage facility for energy.
A neutrino (ν) is a fermion (an elementary particle with half-integer spin) that interacts only via the weak subatomic force and gravity. The neutrino is so named because it is electrically neutral and because its rest mass is so small (-ino) that it was long thought to be zero. The mass of the neutrino is much smaller than that of the other known elementary particles. The weak force has a very short range, the gravitational interaction is extremely weak, and neutrinos, as leptons, do not participate in the strong interaction. Thus, neutrinos typically pass through normal matter unimpeded and undetected.
The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin). In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the observed continuous energy spectra in beta decay, Pauli hypothesized an undetected particle that he called a “neutron”, using the same -on ending employed for naming both the proton and the electron. He considered that the new particle was emitted from the nucleus together with the electron or beta particle in the process of beta decay.
James Chadwick discovered a much more massive neutral nuclear particle in 1932 and named it a neutron also, leaving two kinds of particles with the same name. Earlier (in 1930) Pauli had used the term “neutron” for both the neutral particle that conserved energy in beta decay, and a presumed neutral particle in the nucleus; initially he did not consider these two neutral particles as distinct from each other. The word “neutrino” entered the scientific vocabulary through Enrico Fermi, who used it during a conference in Paris in July 1932 and at the Solvay Conference in October 1933, where Pauli also employed it.
In 1942, Wang Ganchang first proposed the use of beta capture to experimentally detect neutrinos. In the 20 July 1956 issue of Science, Clyde Cowan, Frederick Reines, F. B. Harrison, H. W. Kruse, and A. D. McGuire published confirmation that they had detected the neutrino, a result that was rewarded almost forty years later with the 1995 Nobel Prize.
In February 1965, the first neutrino found in nature was identified in one of South Africa’s gold mines by a group which included Friedel Sellschop.
Some neutrino projects
In particle physics, proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite a significant experimental effort, proton decay has never been observed. If it decays via a positron, the proton’s half-life is constrained to be at least 1.67 × 10^34 years.
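A half-life of ~10^34 years sounds untestable, but a large detector contains comparably many protons, so decades of running can probe it. A rough estimate; the 50 kilotonne water mass below is an illustrative assumption (roughly Super-Kamiokande scale), not a figure from the notes:

```python
import math

# Expected proton decays per year in a large water detector, given the
# quoted half-life lower bound of 1.67e34 years.
N_A = 6.022e23                       # Avogadro's number
HALF_LIFE = 1.67e34                  # years

water_kg = 50e6                      # assumed: 50 kilotonnes of water
moles = water_kg * 1e3 / 18.0        # grams / molar mass of H2O
protons = moles * N_A * 10           # 10 protons per H2O molecule (2 H + 8 in O)

decays_per_year = protons * math.log(2) / HALF_LIFE
print(f"~{protons:.1e} protons -> {decays_per_year:.2f} expected decays/yr")
```

At the current bound, such a detector would see at most of order one candidate decay per year, which is why these searches take decades and require excellent background rejection.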
Cosmic rays are a form of high-energy radiation, mainly originating outside the Solar System and even from distant galaxies. Upon impact with the Earth’s atmosphere, cosmic rays can produce showers of secondary particles that sometimes reach the surface. Composed primarily of high-energy protons and atomic nuclei, they originate either from the Sun or from outside our Solar System. Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Active galactic nuclei also appear to produce cosmic rays, based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018.
In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate.
The Big Bang should have created equal amounts of matter and antimatter in the early universe. But today, everything we see from the smallest life forms on Earth to the largest stellar objects is made almost entirely of matter. Comparatively, there is not much antimatter to be found. Something must have happened to tip the balance. One of the greatest challenges in physics is to figure out what happened to the antimatter, or why we see an asymmetry between matter and antimatter.
Antimatter particles share the same mass as their matter counterparts, but qualities such as electric charge are opposite. The positively charged positron, for example, is the antiparticle to the negatively charged electron. Matter and antimatter particles are always produced as a pair and, if they come in contact, annihilate one another, leaving behind pure energy. During the first fractions of a second of the Big Bang, the hot and dense universe was buzzing with particle-antiparticle pairs popping in and out of existence. If matter and antimatter are created and destroyed together, it seems the universe should contain nothing but leftover energy.
Nevertheless, a tiny portion of matter – about one particle per billion – managed to survive. This is what we see today. In the past few decades, particle-physics experiments have shown that the laws of nature do not apply equally to matter and antimatter. Physicists are keen to discover the reasons why. Researchers have observed spontaneous transformations between particles and their antiparticles, occurring millions of times per second before they decay. Some unknown entity intervening in this process in the early universe could have caused these “oscillating” particles to decay as matter more often than they decayed as antimatter.
Neither the standard model of particle physics, nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe is neutral with all conserved charges.
The Universe was thought to be born matter-antimatter symmetric, as the laws of physics dictate. But something must have happened during that first fraction of a second to preferentially create matter and/or destroy antimatter, leaving an overall imbalance. By the time we get to today, only the matter survives.
If our Universe somehow created a matter/antimatter asymmetry during these early stages, high energy physics should help us to figure out how this happened. Highly energetic interactions correspond to the high-temperature conditions present in the early Universe. Since the laws of physics remain unchanged over time, all we need to do is recreate those conditions and look for a possible cause of today’s asymmetry.
Andrei Sakharov identified the three conditions necessary for baryogenesis. They are as follows:
•The Universe must be an out-of-equilibrium system.
•It must exhibit C- and CP-violation.
•There must be baryon-number-violating interactions.
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted (“mirror” or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers, James Cronin and Val Fitch.
Charge conjugation, or C-symmetry, which is what you get if you swap out particles for their antiparticles
Parity, or P-symmetry, which is what you’ll see if you reflect particles in a mirror
Time reversal, or T-symmetry, which is what you’d obtain if you ran the clock backwards instead of forwards
You are allowed to violate any one or two of these in the Standard Model (e.g., C, P, or CP), although all three combined (CPT) must be conserved. In practice, only the weak interactions violate any of them; they violate C and P in very large amounts but violate CP together (and also T, separately) by only a little bit. In every interaction we’ve ever observed, CPT is always conserved.
Dark matter is a form of matter thought to account for approximately 85% of the matter in the universe and about a quarter of its total energy density. The majority of dark matter is thought to be non-baryonic in nature, possibly being composed of some as-yet undiscovered subatomic particles. Its presence is implied in a variety of astrophysical observations, including gravitational effects which cannot be explained by accepted theories of gravity unless more matter is present than can be seen. For this reason, most experts think dark matter to be abundant in the universe and to have had a strong influence on its structure and evolution. Dark matter is called dark because it does not appear to interact with observable electromagnetic radiation, such as light, and is thus invisible to the entire electromagnetic spectrum, making it undetectable using existing astronomical instruments.
Primary evidence for dark matter comes from calculations showing many galaxies would fly apart instead of rotating, or would not have formed or move as they do, if they did not contain a large amount of unseen matter. Other lines of evidence include observations in gravitational lensing, from the cosmic microwave background, also astronomical observations of the observable universe’s current structure, the formation and evolution of galaxies, mass location during galactic collisions, and the motion of galaxies within galaxy clusters. In the standard model of cosmology, the total mass–energy of the universe contains 5% ordinary matter and energy, 27% dark matter and 68% of an unknown form of energy known as dark energy. Thus, dark matter constitutes 85% of total mass, while dark energy plus dark matter constitute 95% of total mass–energy content.
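The percentages quoted above are internally consistent, as a couple of lines of arithmetic show:

```python
# Consistency check on the quoted cosmological budget: 5% ordinary matter,
# 27% dark matter, 68% dark energy (fractions of total mass-energy).
ordinary, dark_matter, dark_energy = 0.05, 0.27, 0.68

dm_share_of_matter = dark_matter / (dark_matter + ordinary)
dark_total = dark_matter + dark_energy
print(f"Dark matter is {dm_share_of_matter:.0%} of all matter; "
      f"dark components are {dark_total:.0%} of the total")
```

The matter share comes out at ~84%, i.e. the "85%" quoted once the input percentages are rounded, and the dark components together make up 95% of the total mass-energy.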
Because dark matter has not yet been observed directly, if it exists, it must barely interact with ordinary baryonic matter and radiation, except through gravity. The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly-interacting massive particles (WIMPs). Many experiments to directly detect and study dark matter particles are being actively undertaken, but none have yet succeeded. Dark matter is classified as “cold”, “warm”, or “hot” according to its velocity (more precisely, its free streaming length). Current models favour a cold dark matter scenario, in which structures emerge by gradual accumulation of particles.
Although the existence of dark matter is generally accepted by the scientific community, some astrophysicists, intrigued by certain observations which do not fit the dark matter theory, argue for various modifications of the standard laws of general relativity, such as modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.
Origin of the Universe
Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology.
Collider experiments can tell us about the first moments because, even though conditions were different just after the big bang, the laws of physics were the same. The laws we see at work today just have different effects at different energy scales.
The Relativistic Heavy Ion Collider at Brookhaven National Laboratory in New York and the Large Hadron Collider at CERN in Europe both collide particles at spectacular energies. Although these energies are not—and, by a wide margin, never will be—high enough to recreate the big bang itself, they do mimic some aspects of the early universe, which can tell us something about what it was like.
Unification of forces
A Grand Unified Theory (GUT) is a model in particle physics in which, at high energy, the three gauge interactions of the Standard Model that define the electromagnetic, weak, and strong interactions, or forces, are merged into a single force. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single electroweak interaction. GUT models predict that at even higher energy, the strong interaction and the electroweak interaction will unify into a single electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a theory of everything (TOE) rather than a GUT. GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses of around the GUT scale of 10^16 GeV, just a few orders of magnitude below the Planck scale of 10^19 GeV, and so are well beyond the reach of any foreseen particle collider experiments. Therefore, the particles predicted by GUT models will be unable to be observed directly and instead the effects of grand unification might be detected through indirect observations such as proton decay, electric dipole moments of elementary particles, or the properties of neutrinos. Some GUTs, such as the Pati-Salam model, predict the existence of magnetic monopoles.
GUT models which aim to be completely realistic are quite complicated, even compared to the Standard Model, because they need to introduce additional fields and interactions, or even additional dimensions of space. The main reason for this complexity lies in the difficulty of reproducing the observed fermion masses and mixing angles which may be related to the existence of some additional family symmetries beyond the conventional GUT models. Due to this difficulty, and due to the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Analyses of accumulated data are still under way and will probably continue well into the start of Run 3 in 2021. Many of the analyses already published have strong consequences for the predictions of unified theories, as do direct searches for supersymmetry, leptoquarks or other exotics at ATLAS and CMS. Upcoming results from ongoing and future analyses of the results from the LHC experiments may strengthen the bounds on light states as predicted by SUSY GUTs and other models, or they might show hints of the existence of new particles, whose relevance for GUTs would need to be determined. Upgraded versions of the LHC (HL-LHC, VLHC or FCC) or other future colliders (ILC, CLIC) will certainly boost this programme with increased accuracy and higher energies, which will further probe the low-hanging states predicted by GUTs.
New physics beyond the standard model
The scientists found that subatomic particles called B0 mesons don’t decay in the manner predicted by our current understanding of physics, which assumes a property referred to as lepton universality. “Deviations such as what we see now are very exciting in the sense that if there are new particles, it means we can eventually use those new building blocks,” said CERN physicist Freya Blekman in an interview with Wired. “Either lepton universality is not true, or there is something extra happening, for example, a new extra intermediate particle.”
There is strong experimental evidence that there is indeed some new physics lurking in the lepton sector.
What is the nature of dark matter and dark energy, which we now know form, respectively, 26.8% and 68.3% of the mass-energy of the universe? Is there a supersymmetry between the particles of matter and the particles that mediate their interactions – and can new particles required by this symmetry help to explain dark matter? What is the origin of the matter–antimatter imbalance that allows the existence of the matter universe? Is string theory, a leading contender for a quantum theory of gravity, verifiable?
Characterised by rapid progress for over a century
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to very high speeds and energies and to contain them in well-defined beams.
From cathode-ray tubes to the LHC
When J. J. Thomson discovered the electron, he did not call the instrument he was using an accelerator, but an accelerator it certainly was. He accelerated particles between two electrodes to which he had applied a difference in electric potential. He manipulated the resulting beam with electric and magnetic fields to determine the charge-to-mass ratio of cathode rays. Thomson achieved his discovery by studying the properties of the beam itself—not its impact on a target or another beam, as we do today.
Sir Joseph John Thomson OM PRS (18 December 1856 – 30 August 1940) was an English physicist and Nobel Laureate in Physics, credited with the discovery and identification of the electron, the first subatomic particle to be discovered.
Cathode rays (electron beam or e-beam) are streams of electrons observed in vacuum tubes.
Thomson’s illustration of the Crookes tube by which he observed the deflection of cathode rays by an electric field (and later measured their mass-to-charge ratio). Cathode rays were emitted from the cathode C, passed through slits A (the anode) and B (grounded), then through the electric field generated between plates D and E, finally impacting the surface at the far end.
The Cockcroft–Walton (CW) generator, or multiplier, is an electric circuit that generates a high DC voltage from a low-voltage AC or pulsing DC input. It was named after the British and Irish physicists John Douglas Cockcroft and Ernest Thomas Sinton Walton, who in 1932 used this circuit design to power their particle accelerator, performing the first artificial nuclear disintegration in history.
This Cockcroft–Walton voltage multiplier was part of one of the early particle accelerators
Cockcroft and Walton’s apparatus for splitting the lithium nucleus
John Cockcroft’s idea of health and safety
A Van de Graaff generator is an electrostatic generator which uses a moving belt to accumulate electric charge on a hollow metal globe on the top of an insulated column, creating very high electric potentials. It was developed as a particle accelerator for physics research; its high potential is used to accelerate subatomic particles to great speeds in an evacuated tube. It was the most powerful type of accelerator of the 1930s until the cyclotron was developed.
An electrostatic nuclear accelerator is one of the two main types of particle accelerators, where charged particles can be accelerated by subjection to a static high voltage potential.
The Westinghouse Atom Smasher, an early Van de Graaff accelerator, was built in 1937 at the Westinghouse Research Center in Forest Hills, Pennsylvania.
A cyclotron is a type of particle accelerator invented by Ernest O. Lawrence in 1929–1930 at the University of California, Berkeley, and patented in 1932. A cyclotron accelerates charged particles outwards from the centre along a spiral path. The particles are held to a spiral trajectory by a static magnetic field and accelerated by a rapidly varying (radio frequency) electric field. Lawrence was awarded the 1939 Nobel prize in physics for this invention.
Diagram showing how a cyclotron works. The magnet’s pole pieces are shown smaller than in reality; they must actually be as wide as the dees to create a uniform field.
Cyclotrons were the most powerful particle accelerator technology until the 1950s when they were superseded by the synchrotron, and are still used to produce particle beams in physics and nuclear medicine.
A synchrotron is a particular type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed closed-loop path. The magnetic field which bends the particle beam into its closed path increases with time during the accelerating process, being synchronized to the increasing kinetic energy of the particles. The synchrotron is one of the first accelerator concepts to enable the construction of large-scale facilities, since bending, beam focusing and acceleration can be separated into different components. The most powerful modern particle accelerators use versions of the synchrotron design. The largest synchrotron-type accelerator, also the largest particle accelerator in the world, is the 27-kilometre-circumference (17 mi) Large Hadron Collider (LHC) near Geneva, Switzerland, built in 2008 by the European Organization for Nuclear Research (CERN).
The synchrotron principle was invented by Vladimir Veksler in 1944. Edwin McMillan constructed the first electron synchrotron in 1945, arriving at the idea independently, having missed Veksler’s publication (which was only available in a Soviet journal, although in English). The first proton synchrotron was designed by Sir Marcus Oliphant and built in 1952.
A linear particle accelerator (often shortened to linac) is a type of particle accelerator that accelerates charged subatomic particles or ions to a high speed by subjecting them to a series of oscillating electric potentials along a linear beamline. The principles for such machines were proposed by Gustav Ising in 1924, while the first machine that worked was constructed by Rolf Widerøe in 1928 at the RWTH Aachen University. Linacs have many applications: they generate X-rays and high energy electrons for medicinal purposes in radiation therapy, serve as particle injectors for higher-energy accelerators, and are used directly to achieve the highest kinetic energy for light particles (electrons and positrons) for particle physics.
The design of a linac depends on the type of particle that is being accelerated: electrons, protons or ions. Linacs range in size from a cathode ray tube (which is a type of linac) to the 3.2-kilometre-long (2.0 mi) linac at the SLAC National Accelerator Laboratory in Menlo Park, California.
Animation showing how a linear accelerator works. In this example, the particles accelerated (red dots) are assumed to have a positive charge. The graph V(x) shows the electrical potential along the axis of the accelerator at each point in time. The polarity of the RF voltage reverses as the particle passes through each electrode, so when the particle crosses each gap the electric field (E, arrows) has the correct direction to accelerate it. The animation shows a single particle being accelerated each cycle; in actual linacs, a large number of particles are injected and accelerated each cycle. The action is shown slowed down enormously.
The linear accelerator could produce higher particle energies than the previous electrostatic particle accelerators (the Cockcroft-Walton accelerator and Van de Graaff generator) that were in use when it was invented.
The Large Hadron Collider (LHC) is the world’s largest and most powerful particle accelerator. It first started up on 10 September 2008 and remains the latest addition to CERN’s accelerator complex. The LHC consists of a 27-kilometre ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way.
From the discovery of the electron to the discovery of the Higgs boson
Near the end of the 19th century, physicists discovered that atoms are not the fundamental particles of nature they were first thought to be, but conglomerates of even smaller particles. The electron was discovered between 1879 and 1897 in works of William Crookes, Arthur Schuster, J. J. Thomson, and other physicists.
1910’s Nuclear hypothesis
The Geiger–Marsden experiments (also called the Rutherford gold foil experiment) were a landmark series of experiments by which scientists discovered that every atom contains a nucleus where all of its positive charge and most of its mass are concentrated. They deduced this by measuring how an alpha particle beam is scattered when it strikes a thin metal foil. The experiments were performed between 1908 and 1913 by Hans Geiger and Ernest Marsden under the direction of Ernest Rutherford at the Physical Laboratories of the University of Manchester.
1930’s Discovery of the neutron, meson and neutrino
The essential nature of the atomic nucleus was established with the discovery of the neutron by James Chadwick in 1932 and the determination that it was a new elementary particle, distinct from the proton.
In particle physics, mesons are hadronic subatomic particles composed of one quark and one antiquark, bound together by strong interactions.
From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the “meson” as the carrier of the nuclear force that holds atomic nuclei together.
The first true meson to be discovered was what would later be called the “pi meson” (or pion). This discovery was made in 1947, by Cecil Powell, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains.
A neutrino is a fermion (an elementary particle with half-integer spin) that interacts only via the weak subatomic force and gravity.
The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin).
In 1942, Wang Ganchang first proposed the use of beta capture to experimentally detect neutrinos. In the 20 July 1956 issue of Science, Clyde Cowan, Frederick Reines, F. B. Harrison, H. W. Kruse, and A. D. McGuire published confirmation that they had detected the neutrino, a result that was rewarded almost forty years later with the 1995 Nobel Prize.
1950’s An incredible number of particles are discovered
1960’s quarks are proposed
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. Due to a phenomenon known as colour confinement, quarks are never directly observed or found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons. For this reason, much of what is known about quarks has been drawn from observations of hadrons.
The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964.
At the time of the quark theory’s inception, the “particle zoo” included, among other particles, a multitude of hadrons. Gell-Mann and Zweig posited that they were not elementary particles, but were instead composed of combinations of quarks and antiquarks.
In 1968, deep inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) showed that the proton contained much smaller, point-like objects and was therefore not an elementary particle. The objects that were observed at SLAC would later be identified as up and down quarks as the other flavours were discovered.
Over time the charm, strange, top and bottom quarks were discovered.
1970’s W and Z bosons and the charm quark are discovered
The W and Z bosons are together known as the weak or more generally as the intermediate vector bosons. These elementary particles mediate the weak interaction; the respective symbols are W+, W−, and Z. The W bosons have either a positive or negative electric charge of 1 elementary charge and are each other’s antiparticles. The Z boson is electrically neutral and is its own antiparticle.
The discovery of the W and Z bosons was considered a major success for CERN. First, in 1973, came the observation of neutral current interactions as predicted by electroweak theory. The huge Gargamelle bubble chamber photographed the tracks of a few electrons suddenly starting to move, seemingly of their own accord. This is interpreted as a neutrino interacting with the electron by the exchange of an unseen Z boson. The neutrino is otherwise undetectable, so the only observable effect is the momentum imparted to the electron by the interaction.
The discovery of the W and Z bosons themselves had to wait for the construction of a particle accelerator powerful enough to produce them. The first such machine that became available was the Super Proton Synchrotron, where unambiguous signals of W bosons were seen in January 1983 during a series of experiments made possible by Carlo Rubbia and Simon van der Meer.
1990’s The quarks are complete
The top quark, first observed at Fermilab in 1995, was the last to be discovered.
2012 Higgs Boson
The Higgs boson is an elementary particle in the Standard Model of particle physics, produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. It is named after physicist Peter Higgs, who in 1964, along with five other scientists, proposed the Higgs mechanism to explain why particles have mass. This mechanism implies the existence of the Higgs boson. The boson’s existence was confirmed in 2012 by the ATLAS and CMS collaborations based on collisions in the LHC at CERN.
Advances in accelerators require corresponding advances in accelerator technologies:
Magnets, vacuum systems, RF systems, diagnostics,…
But timelines becoming long, requiring:
Long-term planning; Long-term resources; Global collaboration
Accelerators have their own version of Moore’s law, known as the Livingston plot: the energy of accelerators has increased by a factor of 10 roughly every 10 years.
Milton Stanley Livingston (May 25, 1905 – August 25, 1986) was an American accelerator physicist, co-inventor of the cyclotron with Ernest Lawrence, and co-discoverer with Ernest Courant and Hartland Snyder of the strong focusing principle, which allowed the development of modern large-scale particle accelerators. He built cyclotrons at the University of California, Cornell University and the Massachusetts Institute of Technology. During World War II, he served in the operations research group at the Office of Naval Research.
Livingston was the chairman of the Accelerator Project at Brookhaven National Laboratory, director of the Cambridge Electron Accelerator, a member of the National Academy of Sciences, a professor of physics at MIT, and a recipient of the Enrico Fermi Award from the United States Department of Energy. He was Associate Director of the National Accelerator Laboratory from 1967 to 1970.
Above right: Livingston (left) and Lawrence (right) with a magnet of their 37-inch cyclotron.
Around 1950, Livingston made the following observation: plotting the energy of accelerators as a function of year of commissioning, on a semi-log scale, the energy gain shows a linear dependence, i.e. exponential growth.
Observations today: the plot exhibits a saturation effect, and new technologies are needed; at the moment technology is not keeping up with the physics. Overall project cost has increased by a factor of 200 over the last 40 years, while the cost per unit of proton-proton ECM energy has decreased by a factor of 10 over the same period.
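The Livingston-plot trend described above can be sketched numerically: energy growing by a factor of 10 every decade appears as a straight line on semi-log axes, because log10(E) is linear in the year. The starting year and energy below are hypothetical round numbers for illustration, not historical data.

```python
import math

def livingston_energy(year, e0_mev=1.0, year0=1930, decade_factor=10.0):
    """Beam energy (MeV) under exponential growth of x10 per decade."""
    return e0_mev * decade_factor ** ((year - year0) / 10.0)

for year in (1930, 1950, 1970, 1990):
    e = livingston_energy(year)
    # log10(E) grows by exactly 1 per decade -> linear on a semi-log plot
    print(year, f"{e:.3g} MeV", f"log10(E) = {math.log10(e):.1f}")
```

Sixty years of this growth takes a 1 MeV machine to the TeV scale, which is why the saturation visible in today's plot matters so much for planning.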
Scientific Challenge: to understand the very first moments of our Universe after the Big Bang
The chronology of the universe describes the history and future of the universe according to Big Bang cosmology. The earliest stages of the universe’s existence are estimated as taking place 13.8 billion years ago, with an uncertainty of around 21 million years at the 68% confidence level.
There is a gap in our knowledge
It has been speculated that, in the hot early universe, the vacuum (i.e. the various quantum fields that fill space) possessed a large number of symmetries. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2) x U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to understanding the asymmetry between the amount of matter and antimatter in the present-day universe
In physical cosmology, baryogenesis is the hypothetical physical process that took place during the early universe that produced baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe.
Quark matter or QCD matter (quantum chromodynamics matter) refers to any of a number of phases of matter whose degrees of freedom include quarks and gluons. These phases occur at extremely high temperatures and/or densities, and some of them are still only theoretical as they require conditions so extreme that they cannot be produced in any laboratory, especially not as equilibrium conditions. Under these extreme conditions, the familiar structure of matter, where the basic constituents are nuclei (consisting of nucleons which are bound states of quarks) and electrons, is disrupted. In quark matter, it is more appropriate to treat the quarks themselves as the basic degrees of freedom.
According to the Big Bang theory, in the early universe at high temperatures when the universe was only a few tens of microseconds old, the phase of matter took the form of a hot phase of quark matter called the quark–gluon plasma (QGP). Then, in a phase transition, they combined and formed hadrons, among them the building blocks of atomic nuclei, protons and neutrons.
What are the nature and properties of quark–gluon plasma, thought to have existed in the early universe and in certain compact and strange astronomical objects today? This will be investigated by heavy ion collisions, mainly in ALICE, but also in CMS, ATLAS and LHCb. First observed in 2010, findings published in 2012 confirmed the phenomenon of jet quenching in heavy-ion collisions.
Telescopes look back
The Hubble Space Telescope (often referred to as HST or Hubble) is a space telescope that was launched into low Earth orbit in 1990 and remains in operation.
The estimated age of the Universe is about 13.7 billion years according to data gleaned from Hubble.
While Hubble helped to refine estimates of the age of the universe, it also cast doubt on theories about its future. Astronomers from the High-z Supernova Search Team and the Supernova Cosmology Project used ground-based telescopes and HST to observe distant supernovae and uncovered evidence that, far from decelerating under the influence of gravity, the expansion of the universe may, in fact, be accelerating. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. The cause of this acceleration remains poorly understood; the most commonly attributed cause is dark energy.
By providing scientists with detailed images of stars and planets being born in gas clouds near our Solar System, and detecting distant galaxies forming at the edge of the observable Universe, which we see as they were roughly ten billion years ago, it lets astronomers address some of the deepest questions of our cosmic origins.
The Alpha Magnetic Spectrometer, also designated AMS-02, is a particle physics experiment module that is mounted on the International Space Station (ISS). The module is a detector that measures antimatter in cosmic rays, this information is needed to understand the formation of the Universe and search for evidence of dark matter.
The Very Large Telescope (VLT) is a telescope facility operated by the European Southern Observatory on Cerro Paranal in the Atacama Desert of northern Chile.
Using the VLT, astronomers have also estimated the age of extremely old stars in the NGC 6397 cluster. Based on stellar evolution models, two stars were found to be 13.4 ± 0.8 billion years old, that is, they are from the earliest era of star formation in the Universe.
For the first 380,000 years or so, the universe was essentially too hot and dense for light to travel freely.
Roughly 380,000 years after the Big Bang, matter cooled enough for atoms to form during the era of recombination, resulting in a transparent, electrically neutral gas, according to NASA. This set loose the initial flash of light created during the Big Bang, which is detectable today as cosmic microwave background radiation. After this point, however, the universe was plunged into darkness, since no stars or any other bright objects had yet formed, a period sometimes referred to as “The Wall”.
The Study of Elementary Particles & Fields & Their Interactions
We want to be able to include gravity
Nobel Prize in Physics 2013
The Nobel Prize in Physics 2013 was awarded jointly to Peter W. Higgs and François Englert “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”.
Open Issues in Particle Physics
Complete understanding of Higgs boson properties.
Why are there so many types of matter particles?
What is the cause of matter-antimatter asymmetry?
What are the properties of the primordial plasma?
What is the nature of the invisible dark matter?
Can all fundamental particles be unified?
Is there a quantum theory of gravity?
The present and future accelerator-based experimental programmes at colliders will address these questions and may well provide definitive answers.
Accelerator Parameters (I)
Particle colliders are designed to deliver two basic parameters to the HEP user.
Particle physics (also known as high energy physics) is a branch of physics that studies the nature of the particles that constitute matter and radiation.
I. Centre-of-Mass Energy ECM
Higher energy produces more massive particles.
When particles approach the speed of light, they become more massive but not faster.
In a fixed-target collision, only a tiny fraction of the beam energy can be converted into the mass of new particles, because energy and momentum conservation force the collision products to carry away large momentum.
In a head-on collider, by contrast, essentially the entire energy can be converted into the mass of new particles.
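The fixed-target versus collider contrast above follows from standard relativistic kinematics: for equal beams colliding head-on, ECM = 2E, while for a beam hitting a stationary target, ECM = sqrt(2Em + 2m^2), which grows only with the square root of the beam energy. The 6.5 TeV proton beams below are LHC-like numbers chosen for illustration.

```python
import math

M_P = 0.938  # proton rest mass in GeV/c^2

def ecm_collider(e_beam_gev):
    """Head-on collider of two equal beams: essentially all energy is usable."""
    return 2.0 * e_beam_gev

def ecm_fixed_target(e_beam_gev, m_target=M_P):
    """Beam on a stationary target: ECM = sqrt(2*E*m + 2*m^2)."""
    return math.sqrt(2.0 * e_beam_gev * m_target + 2.0 * m_target**2)

e = 6500.0  # GeV per beam
print(f"collider ECM     = {ecm_collider(e) / 1000:.1f} TeV")   # 13 TeV
print(f"fixed-target ECM = {ecm_fixed_target(e):.0f} GeV")      # ~110 GeV
```

A 6.5 TeV beam on a fixed target yields only about 110 GeV in the centre of mass, roughly a factor of 100 less than colliding two such beams, which is why modern energy-frontier machines are colliders.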
De Broglie Wavelength
To probe smaller distances inside matter you need greater energies
Matter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. All matter can exhibit wave-like behaviour. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. The concept that matter behaves like a wave was proposed by Louis de Broglie in 1924. It is also referred to as the de Broglie hypothesis. Matter waves are referred to as de Broglie waves.
The de Broglie wavelength is the wavelength λ associated with a massive particle, related to its momentum p through the Planck constant h:
λ = h/p ≈ 1.2 fm / p [GeV/c]
(for a non-relativistic particle of mass m and velocity v, p = mv).
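The numerical form of the relation above comes from hc ≈ 1.24 GeV·fm (the slide rounds this to 1.2), so λ[fm] ≈ 1.24 / p[GeV/c]. A quick check shows why probing structure far below the femtometre scale demands TeV-class momenta:

```python
HC_GEV_FM = 1.2398  # h*c in GeV*fm (CODATA value, rounded)

def de_broglie_fm(p_gev):
    """de Broglie wavelength in femtometres for momentum p in GeV/c."""
    return HC_GEV_FM / p_gev

for p in (1.0, 100.0, 7000.0):
    print(f"p = {p:7.0f} GeV/c  ->  lambda = {de_broglie_fm(p):.2e} fm")
```

At 7 TeV/c the wavelength is below 2 x 10^-4 fm, some four orders of magnitude smaller than a proton.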
Wave-like behaviour of matter was first experimentally demonstrated by George Paget Thomson’s thin metal diffraction experiment, and independently in the Davisson–Germer experiment both using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules.
Accelerator Parameters (II)
II. Luminosity L
A measure of the collision rate: the event rate R for a process of given event probability (“cross-section” σ) is R = L·σ.
For a collider with head-on Gaussian beams, the instantaneous luminosity L is given by
L = f · nb · N1 · N2 / (4π σx σy)
where f is the revolution frequency, nb the number of bunches, N1 and N2 the bunch populations, and σx, σy the transverse beam sizes at the interaction point.
→ Require intense beams, high bunch frequency and small beam sizes at the interaction point (IP).
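The standard luminosity formula for head-on Gaussian beams, L = f·nb·N1·N2 / (4π·σx·σy), can be evaluated with nominal LHC design numbers (1.15e11 protons per bunch, 2808 bunches, 11245 Hz revolution frequency, 16.7 μm beam size at the IP) to recover the design value of about 1e34 cm^-2 s^-1. The cross-section used for the rate estimate is an illustrative order of magnitude, not an official figure.

```python
import math

def luminosity(f_rev, n_bunches, n1, n2, sigma_x_cm, sigma_y_cm):
    """Instantaneous luminosity in cm^-2 s^-1 for round, head-on Gaussian beams."""
    return f_rev * n_bunches * n1 * n2 / (4.0 * math.pi * sigma_x_cm * sigma_y_cm)

sigma = 16.7e-4  # 16.7 micrometres expressed in cm
L = luminosity(11245.0, 2808, 1.15e11, 1.15e11, sigma, sigma)
print(f"L = {L:.2e} cm^-2 s^-1")  # ~1.2e34, the LHC design value

# Event rate R = L * sigma_ev for a process of cross-section sigma_ev
sigma_higgs = 50e-36  # ~50 pb in cm^2, an illustrative Higgs-scale cross-section
print(f"Higgs-like rate ~ {L * sigma_higgs:.2f} events/s")
```

The formula makes the slide's conclusion concrete: rate scales linearly with bunch intensity squared and bunch frequency, and inversely with the beam spot area at the IP.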
Cross-sections at the LHC
High-field Accelerator Magnets
The magnetic rigidity Bρ is used to describe the motion of a relativistic particle of charge e and momentum p in a magnetic field of strength B with bending radius ρ:
B𝛒 = p / e (in SI units)
B𝛒 [T.m] ∼ 3.3356 p [GeV/c]
Two approaches for raising collision energy:
Increase magnetic field of bending magnets.
Increase ring circumference and hence radius 𝛒
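The rigidity relation above can be checked numerically. Using Bρ[T·m] ≈ 3.3356·p[GeV/c] for a singly charged particle, and LHC-like numbers (7 TeV/c momentum, effective bending radius ~2804 m, both assumptions for illustration), one recovers why ~8 T superconducting dipoles are needed:

```python
def rigidity_tm(p_gev, charge=1):
    """Magnetic rigidity B*rho in T.m for momentum p in GeV/c and charge in units of e."""
    return 3.3356 * p_gev / charge

p = 7000.0    # GeV/c, LHC-like beam momentum
rho = 2804.0  # m, assumed effective bending radius in the LHC tunnel
b_rho = rigidity_tm(p)
print(f"B*rho = {b_rho:.0f} T.m")
print(f"required dipole field B = {b_rho / rho:.2f} T")  # ~8.3 T
```

The two routes to higher energy listed above fall straight out of B = (Bρ)/ρ: raise the field B, or enlarge the ring so ρ grows.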
Final focus Quadrupoles
B·Lq ∝ 1 / 𝛔*
Quadrupoles are designed for the largest integrated field B·Lq to obtain the smallest beam size 𝛔* at the IP.
Varying the SCRF Frequency
Superconducting radio frequency (SRF) science and technology involves the application of electrical superconductors to radio frequency devices.
The most common application of superconducting RF is in particle accelerators. Accelerators typically use resonant RF cavities formed from or coated with superconducting materials. Electromagnetic fields are excited in the cavity by coupling in an RF source with an antenna. When the RF fed by the antenna is the same as that of a cavity mode, the resonant fields build to high amplitudes. Charged particles passing through apertures in the cavity are then accelerated by the electric fields and deflected by the magnetic fields. The resonant frequency driven in SRF cavities typically ranges from 200 MHz to 3 GHz, depending on the particle species to be accelerated.
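The resonant-cavity behaviour described above can be sketched with the simplest cavity model, an ideal cylindrical "pillbox", whose fundamental accelerating (TM010) mode resonates at f = 2.405·c/(2πR), fixed entirely by the cavity radius R. The radii below are assumptions chosen to land in the 200 MHz to 3 GHz range quoted for SRF cavities; real elliptical cavities differ in detail.

```python
import math

C = 299_792_458.0  # speed of light, m/s
J01 = 2.404826     # first zero of the Bessel function J0

def pillbox_tm010_hz(radius_m):
    """TM010 resonant frequency of an ideal pillbox cavity of radius R."""
    return J01 * C / (2.0 * math.pi * radius_m)

def radius_for_freq(f_hz):
    """Cavity radius needed to put the TM010 mode at a given frequency."""
    return J01 * C / (2.0 * math.pi * f_hz)

print(f"R = 11.5 cm -> f = {pillbox_tm010_hz(0.115) / 1e9:.2f} GHz")  # ~1 GHz
print(f"f = 1.3 GHz -> R = {radius_for_freq(1.3e9) * 100:.1f} cm")
```

The inverse scaling explains why low-frequency cavities for heavy, slow particles are physically large, while GHz-range electron cavities are around 10 cm in radius.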
Desire high energy;
Only ~10% of beam energy available for hard collisions producing new particles;
Need O(10 TeV) Collider to probe 1 TeV mass scale.
High-energy beam requires strong magnets to store and focus beam in reasonable-sized ring
Desire high luminosity;
Use proton-proton collisions;
High bunch population and high bunch frequency;
Anti-protons are difficult to produce and replace if the beam is lost;
c.f. SPS Collider and Tevatron
Lepton Colliders (e+e–):
Synchrotron radiation – most serious challenge for circular colliders;
Energy loss of a particle per turn scales as ΔE ∝ E^4 / (m^4 𝛒); for electrons, ΔE [GeV] ≈ 8.85×10⁻⁵ E^4 [GeV] / 𝛒 [m];
Emitted power in a circular machine is P = ΔE · I / e for a stored beam current I;
For collider with ECM (centre of mass energy) = 1 TeV in the LHC tunnel with a 1 mA beam, radiated power would be 2 GW;
Would need to replenish radiated power with RF (radio frequency power);
Remove it from vacuum chamber;
Approach for high energies is Linear Collider.
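The ~2 GW figure quoted above can be checked on the back of an envelope. For electrons, the synchrotron energy loss per turn is U0[GeV] ≈ 8.85e-5 · E[GeV]^4 / ρ[m], and sustaining a beam current I requires replacing radiated power P = U0·I/e, i.e. P[W] = U0[eV]·I[A]. The LHC-tunnel bending radius used below (~2804 m) is an assumption.

```python
def u0_electron_gev(e_gev, rho_m):
    """Synchrotron energy loss per turn for an electron, in GeV."""
    return 8.85e-5 * e_gev**4 / rho_m

e_beam = 500.0  # GeV per beam, giving ECM = 1 TeV
rho = 2804.0    # m, assumed LHC-like bending radius
i_beam = 1e-3   # A, the 1 mA beam current from the slide

u0 = u0_electron_gev(e_beam, rho)
p_rad = u0 * 1e9 * i_beam  # (eV lost per turn) * (turns*charges per second) = W
print(f"U0 = {u0:.0f} GeV per turn")
print(f"radiated power = {p_rad / 1e9:.1f} GW")  # ~2 GW, matching the slide
```

Each electron would lose nearly 2 TeV per turn, four times its own energy, so the energy would have to be pumped back in continuously by RF; the E^4/m^4 scaling is exactly why the high-energy lepton approach switches to a linear collider.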
Hadron collider at the frontier of physics
Huge QCD background
Not all nucleon energy available in collision
Lepton collider for precision physics
Well defined initial energy for the reaction
Colliding point like particles
A problem is synchrotron radiation
Synchrotron radiation is the electromagnetic radiation emitted when charged particles are accelerated radially, e.g., when they are subject to an acceleration perpendicular to their velocity.
Synchrotron radiation occurs in accelerators as a nuisance, causing undesired energy loss in particle physics contexts.
A linear collider is the approach for high energies
ATLAS Z → μμ: interesting events
For precision physics: a well-defined initial energy for the reaction, and colliding point-like particles gives “cleaner” results, with all events usable.
Circular versus Linear Collider
Circular: many magnets, few cavities, stored beam
higher energy → stronger magnetic field
→ higher synchrotron radiation losses (∝ E^4 / (m^4 R))
Linear: few magnets, many cavities, single-pass beam
higher energy → higher accelerating gradient
higher luminosity → higher beam power (high bunch repetition)
A Global Strategy
Encourage strategic studies and planning of international facilities for particle physics in different regions of the world:
CLIC/FCC in Europe https://en.wikipedia.org/wiki/Future_Circular_Collider
LBNF in US (neutrinos & the intensity frontier) https://lbnf.fnal.gov/
Encourage global coordination in planning future energy frontier colliders:
ILC and CLIC groups working together via the Linear Collider Board (and Linear Collider Collaboration) under ICFA
FCC and CEPC/SPPC