50 years of the GIM mechanism

Hong-Jian He, John Ellis, John Iliopoulos, Sheldon Lee Glashow, Verónica Riquer and Luciano Maiani at a celebration of 50 years of the GIM mechanism in Shanghai. Credit: J Liu

In 1969, many weak amplitudes could be calculated accurately with a model of just three quarks, together with Fermi’s constant and the Cabibbo angle to couple them. One exception was the remarkable suppression of strangeness-changing neutral currents. John Iliopoulos, Sheldon Lee Glashow and Luciano Maiani boldly solved the mystery using loop diagrams featuring the recently hypothesised charm quark, making its existence a solid prediction in the process. To celebrate the fiftieth anniversary of their insight, the trio were guests of honour at an international symposium at the T. D. Lee Institute at Shanghai Jiao Tong University on 29 October 2019.

The UV cutoff needed in the three-quark theory became an estimate of the mass of the fourth quark

The Glashow-Iliopoulos-Maiani (GIM) mechanism was conceived in 1969, submitted to Physical Review D on 5 March 1970, and published on 1 October of that year, after several developments had defined a conceptual framework for electroweak unification. These included Yang-Mills theory, the universal V−A weak interaction, Schwinger’s suggestion of electroweak unification, Glashow’s definition of the electroweak group SU(2)L×U(1)Y, Cabibbo’s theory of semileptonic hadron decays and the formulation of the leptonic electroweak gauge theory by Weinberg and Salam, with spontaneous symmetry breaking induced by the vacuum expectation value of new scalar fields. The GIM mechanism then called for a fourth quark, charm, in addition to the three introduced by Gell-Mann, such that the first two blocks of the electroweak theory each consist of one lepton doublet and one quark doublet: [(νe, e), (u, d)] and [(νµ, µ), (c, s)]. The quarks u and c are coupled by the weak interaction to two superpositions of the quarks d and s: u ↔ dC, with dC the Cabibbo combination dC = cos θC d + sin θC s, and c ↔ sC, with sC the orthogonal combination. In subsequent years a third generation, [(ντ, τ), (t, b)], was predicted to describe CP violation. No further generations have been observed yet.
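In matrix form, the Cabibbo rotation just described reads (a standard rewriting of the two combinations given above):

\[
\begin{pmatrix} d_C \\ s_C \end{pmatrix}
=
\begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\]

so the charged weak current couples the doublets (u, dC) and (c, sC), and the orthogonality of dC and sC is what drives the cancellations discussed below.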

Problem solved

The GIM mechanism was the solution to a problem arising in the simplest weak-interaction theory with one charged vector boson coupled to the Cabibbo currents. As pointed out in 1968, strangeness-changing neutral-current processes, such as KL → µ⁺µ⁻ and K0–K̄0 mixing, are generated at one loop with amplitudes of order G sinθC cosθC (GΛ²), where G is the Fermi constant, Λ is an ultraviolet cutoff, and GΛ² (dimensionless) is the first term in a perturbative expansion which could be continued to take higher-order diagrams into account. To comply with the strict limits existing at the time, one had to require a surprisingly small value of the cutoff, Λ ≈ 2−3 GeV, to be compared with the naturally expected value Λ = G^(-1/2) ~ 300 GeV. This problem was taken seriously by the GIM authors, who wrote that “it appears necessary to depart from the original phenomenological model of weak interactions”.
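For a quick check of these numbers (using the standard value of the Fermi constant, which is our input rather than a figure quoted in the GIM paper):

\[
\Lambda = G^{-1/2} \approx \bigl(1.17 \times 10^{-5}\ \mathrm{GeV}^{-2}\bigr)^{-1/2} \approx 290\ \mathrm{GeV},
\]

roughly two orders of magnitude above the 2−3 GeV cutoff demanded by the kaon data, which is what made the suppression so puzzling.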

One-loop quark diagrams for K0–K̄0 mixing in the light of the GIM mechanism. The charm-quark amplitudes have the same magnitude as those with up-quark lines but opposite sign, leading to a perfect cancellation, cos θC sin θC + (−sin θC) cos θC = 0, in the case mc = mu and suggesting an explanation for the suppression of processes with strangeness-changing neutral currents.

To sidestep this problem, Glashow, Iliopoulos and Maiani brought in the fourth, “charm” quark, already introduced by Bjorken, Glashow and others, with its typical coupling to the quark combination left alone in the Cabibbo theory: c ↔ sC = −sinθC d + cosθC s. Amplitudes for s → d with u or c on the same fermion line would cancel exactly for mc = mu, suggesting a more natural means to suppress strangeness-changing neutral-current processes to measured levels. For mc >> mu, a residual neutral-current effect remains which, by inspection and for dimensional reasons, is of order G sinθC cosθC (G mc²). This was a real surprise: the “small” UV cutoff needed in the simple three-quark theory became an estimate of the mass of the fourth quark, which was indeed sufficiently large to have escaped detection in the unsuccessful searches for charmed mesons conducted in the 1960s. With the two quark doublets included, a detailed study of strangeness-changing neutral-current processes gave mc ∼ 1.5 GeV, a value consistent with more recent data on the masses of charmed mesons and baryons. Another aspect of the GIM cancellation is that the weak charged currents form an SU(2) algebra together with a neutral component that has no strangeness-changing terms. Thus there is no difficulty in including the two quark doublets in the unified electroweak group SU(2)L×U(1)Y of Glashow, Weinberg and Salam. The 1970 GIM paper noted that “in contradistinction to the conventional (three-quark) model, the couplings of the neutral intermediary – now hypercharge conserving – cause no embarrassment.”
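Schematically, the logic of the estimate runs as follows (a dimensional sketch of the argument, not the detailed calculation of the paper): the cutoff of the three-quark theory is replaced by the charm mass in the ΔS = 2 amplitude,

\[
A_{\Delta S = 2} \sim G \sin\theta_C \cos\theta_C \,\bigl(G\Lambda^2\bigr)
\ \longrightarrow\
A_{\Delta S = 2} \sim G \sin\theta_C \cos\theta_C \,\bigl(G m_c^2\bigr),
\]

so the same kaon data that previously forced Λ ≈ 2−3 GeV now pin mc to the few-GeV range, consistent with the detailed result mc ∼ 1.5 GeV quoted above.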

The GIM mechanism has become a cornerstone of the Standard Model, giving a precise description of the observed flavour-changing neutral-current processes for s and b quarks. For this reason, flavour-changing neutral currents remain an important benchmark, placing strong constraints on theories that go beyond the Standard Model in the TeV region.


Profiles of James Peebles, Michel Mayor, and Didier Queloz: 2019 Nobel Laureates in Physics

Neta Bahcall, Adam Burrows

Published in PNAS 117 (2), 799–801 (January 2020)

Mankind has long been fascinated by the mysteries of our Universe: How old and how big is the Universe? How did the Universe begin and how is it evolving? What is the composition of the Universe and the nature of its dark matter and dark energy? What is our Earth’s place in the cosmos and are there other planets (and life) around other stars?

The 2019 Nobel Prize in Physics honors three pioneering scientists for their fundamental contributions to basic cosmic questions – Professor James Peebles (Princeton University), Michel Mayor (University of Geneva), and Didier Queloz (University of Geneva and the University of Cambridge) – “for contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos,” with one half to James Peebles “for theoretical discoveries in physical cosmology,” and the other half jointly to Michel Mayor and Didier Queloz “for the discovery of an exoplanet orbiting a solar-type star.” We summarize the historical and scientific backdrop to this year’s Physics Nobel.

Read more at https://arxiv.org/ftp/arxiv/papers/2001/2001.08511.pdf

How sensitive can your quantum detector be?

A new device measures the tiniest energies in superconducting circuits, an essential step for quantum technology

Illustration by Safa Hovinen, Merkitys

Quantum physics is moving out of the laboratory and into our everyday lives. Yet despite the big headline results about quantum computers solving problems impossible for classical computers, technical challenges still stand in the way of getting quantum physics into the real world. New research published in Nature Communications by teams at Aalto University and Lund University aims to provide an important tool in this quest.

One of the open questions in quantum research is how heat and thermodynamics coexist with quantum physics. This research field, “quantum thermodynamics”, is one of the areas Professor Jukka Pekola, leader of the QTF Centre of Excellence of the Academy of Finland, has worked on throughout his career. ‘This field has up to now been dominated by theory, and only now are important experiments starting to emerge,’ says Professor Pekola. His research group has set about creating quantum thermodynamic nano-devices that can settle open questions experimentally.

Quantum states – like the qubits that power quantum computers – interact with their surrounding world, and these interactions are what quantum thermodynamics deals with. Measuring such systems requires detecting energy changes so exceptionally small that they are hard to pick out from background fluctuations, like trying to work out, using only a thermometer, whether someone has blown out a candle in the room you’re in. Another problem is that quantum states can change when you measure them, simply because you’ve measured them; it would be as if putting a thermometer into a cup of cold water made the water start to boil. The team therefore had to make a thermometer able to measure very small changes without interfering with any of the quantum states it is meant to measure.

Doctoral student Bayan Karimi works in QTF and in the Marie Curie training network QuESTech. Her device is a calorimeter, which measures the heat in a system. It uses a strip of copper about one thousand times thinner than a human hair. ‘Our detector absorbs radiation from the quantum states. It is expected to determine how much energy they have and how they interact with their surroundings. There is a theoretical limit to how accurate a calorimeter can be, and our device is now reaching that limit,’ says Karimi.
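The theoretical limit Karimi mentions can be made concrete with a textbook result from equilibrium statistical mechanics (our gloss on the press release, not a formula quoted from the paper): an absorber of heat capacity C at temperature T undergoes unavoidable thermal fluctuations

\[
\langle \delta T^2 \rangle = \frac{k_B T^2}{C},
\qquad
\delta E_{\mathrm{rms}} \sim C \sqrt{\langle \delta T^2 \rangle} = \sqrt{k_B C}\, T,
\]

so a small, cold absorber, such as a sub-hair-width copper strip at cryogenic temperatures, minimizes the energy-noise floor a calorimeter has to beat.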

Read more at https://www.aalto.fi/en/news/how-sensitive-can-your-quantum-detector-be

New evidence shows that the key assumption made in the discovery of dark energy is in error

High-precision age dating of supernova host galaxies reveals that the luminosity evolution of supernovae is significant enough to question the very existence of dark energy

Figure 1. Luminosity evolution mimicking dark energy in supernova (SN) cosmology. The Hubble residual is the difference in SN luminosity with respect to the cosmological model without dark energy (the black dotted line). The cyan circles are the binned SN data from Betoule et al. (2014). The red line is the evolution curve based on our age dating of early-type host galaxies. The comparison of our evolution curve with the SN data shows that luminosity evolution can mimic the Hubble residuals used in the discovery and inference of dark energy (the black solid line).

The most direct and strongest evidence for an accelerating universe with dark energy is provided by distance measurements using type Ia supernovae (SN Ia) in galaxies at high redshift. This result rests on the assumption that the luminosity of SN Ia, corrected through the empirical standardization, does not evolve with redshift.
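For orientation, the Hubble residual plotted in Figure 1 is built from the standard distance modulus (textbook definitions, not notation specific to this study):

\[
\mu = m - M = 5 \log_{10}\!\left(\frac{d_L(z)}{10\ \mathrm{pc}}\right),
\qquad
\Delta\mu = \mu_{\mathrm{SN}} - \mu_{\mathrm{model}}(z),
\]

where the reference model here is a cosmology without dark energy; any unrecognized drift of the standardized SN luminosity with progenitor age enters Δμ(z) in the same way an apparent acceleration would.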

New observations and analysis by a team of astronomers at Yonsei University (Seoul, South Korea), together with their collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations covering most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of population ages for these host galaxies. They find a significant correlation between SN luminosity and stellar population age at the 99.5% confidence level. As such, this is the most direct and stringent test ever made of the luminosity evolution of SN Ia. Since SN progenitors in host galaxies become younger with redshift (look-back time), this result inevitably indicates a serious redshift-dependent systematic bias in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When the luminosity evolution of SN is properly taken into account, the team finds that the evidence for the existence of dark energy simply goes away (see Figure 1).

Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project, said: “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”

Other cosmological probes, such as the CMB (cosmic microwave background) and BAO (baryonic acoustic oscillations), are also known to provide indirect, “circumstantial” evidence for dark energy, but it was recently suggested that the CMB data from the Planck mission no longer support the concordance cosmological model and may require new physics (Di Valentino, Melchiorri, & Silk 2019). Other investigators have shown that BAO and further low-redshift cosmological probes can be consistent with a non-accelerating universe without dark energy (see, for example, Tutusaus et al. 2017). In this respect, the present result, showing luminosity evolution mimicking dark energy in SN cosmology, is crucial and very timely.

This result is reminiscent of the famous Tinsley-Sandage debate in the 1970s on luminosity evolution in observational cosmology, which led to the termination of the Sandage project originally designed to determine the fate of the universe.

Read more at https://astro.yonsei.ac.kr/galaxy/galaxy01/research.do?mode=view&articleNo=78249 and https://arxiv.org/abs/1912.04903

Spooky Action at a Global Distance

Resource-Rate Analysis of a Space-Based Entanglement-Distribution Network for the Quantum Internet

A hybrid global quantum-communications network, in which a satellite constellation distributes entangled photon pairs (red wave packets; entanglement depicted by wavy lines) to distant ground stations (observatories) that host multimode quantum memories for storage. These stations act as hubs that connect to local nodes (black dots) via fiber-optic or atmospheric links. Using these nearest-neighbor entangled links, via entanglement swapping, two distant nodes can share entanglement. Note that this architecture can also support inter-satellite entanglement links, which are useful for exploring fundamental physics and for forming an international time standard.

Sumeet Khatri, Anthony J. Brady, Renée A. Desporte, Manon P. Bart, Jonathan P. Dowling
Recent experimental breakthroughs in satellite quantum communications have opened up the possibility of creating a global quantum internet using satellite links. This approach appears to be particularly viable in the near term, due to the lower attenuation of optical signals from satellite to ground, and due to the currently short coherence times of quantum memories. The latter, together with fiber losses, prevents ground-based entanglement distribution using atmospheric or optical-fiber links at high rates over long distances. In this work, we propose a global-scale quantum internet consisting of a constellation of orbiting satellites that provides a continuous on-demand entanglement distribution service to ground stations. The satellites can also function as untrusted nodes for the purpose of long-distance quantum-key distribution. We determine the optimal resource cost of such a network for obtaining continuous global coverage. We also analyze the performance of the network in terms of achievable entanglement-distribution rates and compare these rates to those that can be obtained using ground-based quantum-repeater networks.
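The primitive behind the entanglement swapping mentioned in the caption above is the textbook protocol (a generic sketch, not a design detail of this proposal): a Bell-state measurement (BSM) at an intermediate node turns two short entangled links into one long one,

\[
|\Phi^+\rangle_{A B_1} \otimes |\Phi^+\rangle_{B_2 C}
\ \xrightarrow{\ \text{BSM on } B_1 B_2\ }\
|\Phi^+\rangle_{A C}
\quad (\text{up to known local Pauli corrections}),
\]

which is how two distant nodes come to share entanglement without ever exchanging a photon directly.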

Read more at https://arxiv.org/abs/1912.06678

Read also “Why the quantum internet should be built in space”

Top 10 quantum computing experiments of 2019

The last decade has seen quantum computing grow from a niche research endeavour to a large-scale business operation. While it’s exciting that the field is experiencing a surge of private funding and media publicity, it’s worth remembering that nobody yet knows how to build a useful fault-tolerant quantum computer. The path ahead is not “just engineering”, and in the coming decade we have to pay attention to all the “alternative approaches”, “crazy ideas” and “new ways of doing things”.

With this in mind, I created this subjective list of quantum computing research highlights of 2019. It focuses on experimental achievements that demonstrate exciting new ways of controlling qubits. In such a vast space of literature, I have no doubt I missed some essential works, so I encourage you to get in touch and add your favourites to the list ….

Read more at https://medium.com/@msmalina/top-quantum-computing-experiments-of-2019-1157db177611

A Forbidden Transition Allowed for Stars

The discovery of an exceptionally strong “forbidden” beta decay involving fluorine and neon could change our understanding of the fate of intermediate-mass stars.

Researchers have measured the forbidden nuclear transition between ²⁰F and ²⁰Ne. This measurement allowed them to make a new calculation of the electron-capture rate of ²⁰Ne, a rate that is important for predicting the evolution of intermediate-mass stars.

Every year, roughly 100 billion stars are born and just as many die. To understand the life cycle of a star, nuclear physicists and astrophysicists collaborate to unravel the physical processes that take place in the star’s interior. Their aim is to determine how the star responds to these processes and, from that response, predict the star’s final fate. Intermediate-mass stars, whose masses lie somewhere between 7 and 11 times that of our Sun, are thought to die via one of two very different routes: thermonuclear explosion or gravitational collapse. Which one happens depends on the conditions within the star when oxygen nuclei begin to fuse, triggering the star’s demise. Researchers have now, for the first time, measured a rare nuclear decay of fluorine to neon that is key to understanding the fate of these “in between” stars. Their calculations indicate that thermonuclear explosion, and not gravitational collapse, is the more likely expiration route.

The evolution and fate of a star strongly depend on its mass at birth. Low-mass stars—such as the Sun—transition first into red giants and then into white dwarfs made of carbon and oxygen as they shed their outer layers. Massive stars—those whose mass is at least 11 times greater than the Sun’s—also transition to red giants, but in the cores of these giants, nuclear fusion continues until the core has turned completely to iron. Once that happens, the star stops generating energy and starts collapsing under the force of gravity. The star’s core then compresses into a neutron star, while its outer layers are ejected in a supernova explosion. The evolution of intermediate-mass stars is less clear. Predictions indicate that they can explode both via the gravitational-collapse mechanism of massive stars and via a thermonuclear process. The key to finding out which happens lies in the properties of an isotope of neon and its ability to capture electrons.

The story of fluorine and neon is tied to what is known as a forbidden nuclear transition. Nuclei, like atoms, have distinct energy levels and thus can exist in different energy states. For a given radioactive nucleus, the conditions within a star, such as the temperature and density of its plasma, dictate its likely energy state. The quantum-mechanical properties of each energy state then determine the nucleus’ likely decay path. The decay is called allowed if, on Earth, the decay path has a high likelihood of occurring. If, instead, the likelihood is low, the transition is termed forbidden. But in the extreme conditions of a star’s interior, these forbidden transitions can occur much more frequently. Thus, when researchers measure a nuclear reaction in the laboratory, the very small contribution from a forbidden transition is often the most critical one to measure for the astrophysics applications….
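For the decay at hand, the selection-rule bookkeeping works out as follows (standard beta-decay classification, on the assumption that the ground-state-to-ground-state transition is the one in question):

\[
{}^{20}\mathrm{F}\,(J^\pi = 2^+) \ \to\ {}^{20}\mathrm{Ne}\,(J^\pi = 0^+):
\qquad \Delta J = 2,\ \ \text{no parity change}
\ \Rightarrow\ \text{second-forbidden, non-unique},
\]

which is why the branch is vanishingly weak in the laboratory even though its strength controls the astrophysically crucial electron-capture rate on ²⁰Ne.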

Read more at https://physics.aps.org/articles/v12/151