Roland Eötvös: scientist, statesman, educator

András PATKÓS, Institute of Physics, Eötvös University
This lecture recalls the memory of Baron Roland Eötvös, an outstanding figure in the experimental exploration of the gravitational interaction and a founding father of applied geophysics. Beyond his scientific achievements, his contributions to the development of the modern Hungarian schooling and higher-education system, most importantly the foundation of an innovative institution for teacher training, have not lost their contemporary significance. This lecture was invited by the organizers of this Conference in response to the decision of UNESCO to commemorate worldwide the centenary of the death of the most outstanding Hungarian experimental physicist of modern times.

Read more at https://arxiv.org/ftp/arxiv/papers/2002/2002.05743.pdf

Joseph Polchinski: A Biographical Memoir

Raphael Bousso, Fernando Quevedo, Steven Weinberg
Joseph Polchinski (1954–2018), one of the leading theoretical physicists of the past 50 years, was an exceptionally broad and deep thinker. He made fundamental contributions to quantum field theory, advancing the role of the renormalization group, and to cosmology, addressing the cosmological constant problem. Polchinski’s work on D-branes revolutionized string theory and led to the discovery of a nonperturbative quantum theory of gravity. His recent, incisive reformulation of the black hole information paradox presents us with a profound challenge. Joe was deeply devoted to his family, a beloved colleague and advisor, an excellent writer, and an accomplished athlete.
Read more at https://arxiv.org/pdf/2002.02371.pdf

Natural Philosophy versus Philosophy of Naturalness

Goran Senjanovic
I reflect on some of the basic aspects of present-day Beyond the Standard Model particle physics, focusing mostly on the issues of naturalness, in particular on the so-called hierarchy problem. For all of us, physics as natural science emerged with Galileo and Newton and led to centuries of unparalleled success in explaining, and often predicting, new phenomena of nature. I argue here that the long-standing obsession with the hierarchy problem as a guiding principle for the future of our field has had the tragic consequence of deviating high-energy physics from its origins as natural philosophy and turning it into a philosophy of naturalness.
Read more at https://arxiv.org/pdf/2001.10988.pdf

State of the art in magnetic resonance imaging

As a clinical technology, MRI offers unsurpassed flexibility to look inside the human body.
In 1977, inspired by the observation that cancerous and healthy tissues produce different nuclear magnetic resonance signals, Raymond Damadian, Michael Goldsmith, and Lawrence Minkoff performed the first MRI scan of a live human body. In the early days of clinical MRI, scans took hours and provided low spatial resolution, but the technique soon became essential for distinguishing between healthy and diseased tissues. By its 40th anniversary, MRI was a must-have tool in hospitals and clinics of all sizes. And it has found applications in image-guided interventions and surgeries, radiation therapy, and focused ultrasound. Advances in technology, meanwhile, have pushed the envelope of scanner performance with improvements to speed and spatial resolution.
At the frontiers of MRI development, work is focused on fast, quantitative imaging. Clinical needs increasingly demand functional information—on heart-muscle contractions, brain activity, chemical concentrations in tumors, and blood flow in and out of tissue—in addition to anatomical structures. New approaches must also maintain a patient’s comfort and safety; MRI is well known for sparing patients any exposure to ionizing radiation, yet it is not without hazards.

When biological tissue is placed in a magnetic field, nuclei with magnetic moments become magnetized. RF pulses are then applied that match the resonance, or Larmor, frequency of the nuclei, causing them to tip out of alignment with the external magnetic field and precess about it. The precessing nuclei, in turn, induce oscillating magnetic fields at the Larmor frequency; those oscillations are detected via Faraday induction of an electromotive force in a nearby coil of wire.
In practice, many nuclei must precess in phase with each other to produce a detectable signal. The loss of phase coherence among precessing nuclei over time is called T2 relaxation. And the nuclei eventually return to their equilibrium orientation in the external magnetic field—a process called T1 relaxation. Early NMR experiments revealed that various tissues have distinct T1 and T2 relaxation times. For certain diseases, including cancer, changes in either time can distinguish diseased from healthy tissue. That feature is useful for lesions whose absorption of x rays is similar to that of surrounding healthy tissue, which makes them difficult to detect using radiography or x-ray computed tomography.
In NMR measurements, the timing of the applied RF pulse and of the RF readout signal from the tissue can be chosen so that the strongest signal is produced by tissue with the shortest T1 relaxation time. A measurement whose timing is chosen that way is called a T1-weighted measurement. Alternatively, the sequence timing can be chosen so that the strongest signal comes from tissue with the longest T2 relaxation time, a T2-weighted result.
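The trade-off can be made concrete with the standard spin-echo signal model, S ∝ ρ(1 − e^(−TR/T1))·e^(−TE/T2), where TR and TE are the repetition and echo times. A minimal sketch follows; the tissue parameters and sequence timings are illustrative textbook-scale values, not figures from the article:

```python
import math

def spin_echo_signal(rho, T1, T2, TR, TE):
    """Simplified spin-echo signal: proton density rho, relaxation
    times T1/T2, repetition time TR, echo time TE (times in ms)."""
    return rho * (1 - math.exp(-TR / T1)) * math.exp(-TE / T2)

# Illustrative relaxation times (ms) at 1.5 T: (rho, T1, T2)
tissues = {"white matter": (0.7, 560, 80), "CSF": (1.0, 3600, 1800)}

# Short TR and TE emphasize T1 differences (T1-weighted);
# long TR and TE emphasize T2 differences (T2-weighted).
for name, (rho, T1, T2) in tissues.items():
    s_t1w = spin_echo_signal(rho, T1, T2, TR=500, TE=15)
    s_t2w = spin_echo_signal(rho, T1, T2, TR=4000, TE=100)
    print(f"{name:12s}  T1-weighted: {s_t1w:.2f}   T2-weighted: {s_t2w:.2f}")
```

With these numbers, white matter outshines cerebrospinal fluid on the T1-weighted settings and the contrast reverses on the T2-weighted ones, which is the behavior the paragraph above describes.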
In tissue, the hydrogen nucleus is the most abundant magnetizable nucleus. Its gyromagnetic ratio is 42.56 MHz/T, which gives Larmor frequencies of roughly 64 MHz and 128 MHz for 1.5 T and 3 T MRI scanners, respectively. Magnets with field strengths between 0.2 T and 7 T are used for clinical scanning, and human scanning at up to 10.5 T is currently available in research settings.
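The quoted operating frequencies follow directly from the Larmor relation f = γB, using the gyromagnetic ratio given in the text:

```python
GAMMA_H = 42.56  # proton gyromagnetic ratio in MHz/T, as quoted above

for B in (1.5, 3.0, 7.0, 10.5):  # field strengths mentioned in the text, in tesla
    print(f"B = {B:5.1f} T  ->  Larmor frequency = {GAMMA_H * B:.1f} MHz")
```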
Using the NMR signals from tissue for clinical diagnosis requires that they be localized in three dimensions to form images. Three sets of electromagnetic gradient coils in the MRI scanner accomplish that task. Each produces a linearly varying magnetic field along one of three orthogonal axes; each can be switched on and off, and its strength depends on the applied current. The gradient fields are superimposed on the main magnetic field—usually 1.5 T or 3 T—to create a spatially dependent variation in Larmor frequency. If a gradient is applied for some time and then turned off, all signals return to the same frequency, but their relative phase shifts, accumulated while the gradient was on, vary according to position along the gradient axis…
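The position-dependent phase described here can be sketched in a few lines: while a gradient of strength G is on for a time τ, spins at position x precess at frequency γ(B0 + Gx), so after it is switched off they carry a phase offset Δφ = 2π·γ·G·x·τ. The gradient strength and duration below are made-up but typical-scale values, used only to illustrate the relation:

```python
import math

GAMMA_H = 42.56e6  # proton gyromagnetic ratio in Hz/T

def phase_offset(x, G, tau):
    """Phase (radians) accumulated at position x (m) relative to x = 0,
    for a gradient of strength G (T/m) applied for duration tau (s)."""
    return 2 * math.pi * GAMMA_H * G * x * tau

G, tau = 10e-3, 1e-3  # 10 mT/m gradient on for 1 ms (illustrative values)
for x_mm in (-10, 0, 10):
    phi = phase_offset(x_mm * 1e-3, G, tau)
    print(f"x = {x_mm:+3d} mm  ->  phase offset = {phi:+.2f} rad")
```

Reading out that position-to-phase mapping is what lets the scanner reconstruct where along the gradient axis each part of the signal originated.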

Read more at https://physicstoday.scitation.org/doi/10.1063/PT.3.4408

Maxwell’s Demon and Its Fallacies Demystified

Milivoje M. Kostic
A demonic being, introduced by Maxwell, that could miraculously create thermal non-equilibrium and violate the Second Law of thermodynamics has been among the most intriguing and elusive wishful concepts for over 150 years. Maxwell and his followers focused on ‘effortlessly gating’ one molecule at a time but overlooked the simultaneous interference of the other chaotic molecules, while the demon’s exorcists tried to justify impossible processes with misplaced ‘compensations’ involving the work of measurement and gate operation, and information storage and memory erasure with entropy generation. The elusive and persistent fallacies surrounding Maxwell’s demon, perpetuated by its advocates as well as its exorcists, are scrutinized and resolved here. Based on holistic, phenomenological reasoning, it is deduced that a Maxwell’s demon operating against natural forces, without due work to suppress the interference of competing thermal particles while one is selectively gated, is not possible at any scale, since it would contradict the physics of chaotic thermal motion, which offers no consistent molecular directional preference that would make selective timing possible. A working Maxwell’s demon would have miraculous useful effects, but also some catastrophic consequences.

Read more at https://arxiv.org/ftp/arxiv/papers/2001/2001.10083.pdf

50 years of the GIM mechanism

Hong-Jian He, John Ellis, John Iliopoulos, Sheldon Lee Glashow, Verónica Riquer and Luciano Maiani at a celebration of 50 years of the GIM mechanism in Shanghai. Credit: J Liu

In 1969, many weak amplitudes could be calculated accurately with a model of just three quarks, coupled by Fermi’s constant and the Cabibbo angle. One exception was the remarkable suppression of strangeness-changing neutral currents. John Iliopoulos, Sheldon Lee Glashow and Luciano Maiani boldly solved the mystery using loop diagrams featuring the recently hypothesised charm quark, making its existence a solid prediction in the process. To celebrate the fiftieth anniversary of their insight, the trio were guests of honour at an international symposium at the T. D. Lee Institute at Shanghai Jiao Tong University on 29 October 2019.

The UV cutoff needed in the three-quark theory became an estimate of the mass of the fourth quark

The Glashow-Iliopoulos-Maiani (GIM) mechanism was conceived in 1969, submitted to Physical Review D on 5 March 1970, and published on 1 October of that year, after several developments had defined a conceptual framework for electroweak unification. These included Yang-Mills theory, the universal V−A weak interaction, Schwinger’s suggestion of electroweak unification, Glashow’s definition of the electroweak group SU(2)L×U(1)Y, Cabibbo’s theory of semileptonic hadron decays and the formulation of the leptonic electroweak gauge theory by Weinberg and Salam, with spontaneous symmetry breaking induced by the vacuum expectation value of new scalar fields. The GIM mechanism then called for a fourth quark, charm, in addition to the three introduced by Gell-Mann, such that the first two blocks of the electroweak theory each comprise one lepton doublet and one quark doublet, [(νe, e), (u, d)] and [(νµ, µ), (c, s)]. The quarks u and c are coupled by the weak interaction to two superpositions of the quarks d and s: u ↔ dC, with dC the Cabibbo combination dC = cos θC d + sin θC s, and c ↔ sC, with sC the orthogonal combination. In subsequent years, a third generation, [(ντ, τ), (t, b)], was predicted to describe CP violation. No further generations have been observed yet.
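In matrix notation, the two Cabibbo combinations just defined amount to an orthogonal rotation of the down-type quarks, restated here in LaTeX for clarity:

```latex
\begin{pmatrix} d_C \\ s_C \end{pmatrix} =
\begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\qquad u \leftrightarrow d_C, \quad c \leftrightarrow s_C .
```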

Problem solved

The GIM mechanism was the solution to a problem arising in the simplest weak interaction theory with one charged vector boson coupled to the Cabibbo currents. As pointed out in 1968, strangeness-changing neutral-current processes, such as KL → µ+µ− and K0–K̄0 mixing, are generated at one loop with amplitudes of order G sinθC cosθC (GΛ²), where G is the Fermi constant, Λ is an ultraviolet cutoff, and GΛ² (dimensionless) is the first term in a perturbative expansion that could be continued to take higher-order diagrams into account. To comply with the strict limits existing at the time, one had to require a surprisingly small value of the cutoff Λ, of 2–3 GeV, to be compared with the naturally expected value Λ = 1/√G ≈ 300 GeV. This problem was taken seriously by the GIM authors, who wrote that “it appears necessary to depart from the original phenomenological model of weak interactions”.
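A quick numerical check of the two scales quoted above, using the standard value of the Fermi constant (the cutoff values are those cited in the text; the snippet is purely illustrative):

```python
import math

G_F = 1.166e-5  # Fermi constant in GeV^-2 (standard value)

# "Natural" cutoff scale set by the weak interaction itself:
print(f"1/sqrt(G_F) = {1.0 / math.sqrt(G_F):.0f} GeV")  # ~ 293 GeV

# Dimensionless loop factor G*Lambda^2 for the surprisingly small
# cutoff (2-3 GeV) actually required by the kaon data:
for Lam in (2.0, 3.0):
    print(f"Lambda = {Lam} GeV  ->  G*Lambda^2 = {G_F * Lam**2:.1e}")
```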

One-loop quark diagrams for K0–K̄0 mixing in the light of the GIM mechanism. The charm-quark amplitudes have the same magnitude but opposite sign to those with up-quark lines, leading to a perfect cancellation, cos θ sin θ + (−sin θ) cos θ = 0, in the case where mc = mu, and suggesting an explanation for the suppression of processes with strangeness-changing neutral currents.

To sidestep this problem, Glashow, Iliopoulos and Maiani brought in the fourth, “charm” quark, already introduced by Bjorken, Glashow and others, with its typical coupling to the quark combination left alone in the Cabibbo theory: c ↔ sC = −sinθC d + cosθC s. Amplitudes for s → d with u or c on the same fermion line would cancel exactly for mc = mu, suggesting a more natural means to suppress strangeness-changing neutral-current processes to measured levels. For mc >> mu, a residual neutral-current effect would remain, which, by inspection and for dimensional reasons, is of order G sinθC cosθC (Gmc²). This was a real surprise: the “small” UV cutoff needed in the simple three-quark theory became an estimate of the mass of the fourth quark, which was indeed sufficiently large to have escaped detection in the unsuccessful searches for charmed mesons conducted in the 1960s. With the two quark doublets included, a detailed study of strangeness-changing neutral-current processes gave mc ≈ 1.5 GeV, a value consistent with more recent data on the masses of charmed mesons and baryons. Another aspect of the GIM cancellation is that the weak charged currents form an SU(2) algebra together with a neutral component that has no strangeness-changing terms. Thus there is no difficulty in including the two quark doublets in the unified electroweak group SU(2)L×U(1)Y of Glashow, Weinberg and Salam. The 1970 GIM paper noted that “in contradistinction to the conventional (three-quark) model, the couplings of the neutral intermediary – now hypercharge conserving – cause no embarrassment.”
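The structure of the cancellation can be illustrated with a toy model of the loop amplitude: each internal quark contributes its Cabibbo factor times a loop function, taken here simply as f(m) = m² (an assumed stand-in for dimensional illustration, not the actual loop integral):

```python
import math

theta_C = 0.227  # Cabibbo angle in radians (sin θC ≈ 0.225)

def gim_amplitude(m_u, m_c):
    """Toy s -> d amplitude: sum over internal u and c quarks, each
    weighted by its Cabibbo factor and a loop function f(m) = m**2."""
    couplings = {
        "u": math.cos(theta_C) * math.sin(theta_C),   # + cosθ sinθ
        "c": -math.sin(theta_C) * math.cos(theta_C),  # − sinθ cosθ
    }
    masses = {"u": m_u, "c": m_c}
    return sum(couplings[q] * masses[q] ** 2 for q in ("u", "c"))

# Equal internal masses: exact cancellation, as in the caption above.
print(gim_amplitude(1.5, 1.5))      # 0.0

# m_c >> m_u: residual amplitude proportional to sinθC cosθC · m_c²,
# the dimensional estimate that pointed to m_c of a few GeV.
print(gim_amplitude(0.002, 1.5))
```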

The GIM mechanism has become a cornerstone of the Standard Model, and it gives a precise description of the observed flavour-changing neutral-current processes for s and b quarks. For this reason, flavour-changing neutral currents remain an important benchmark and give strong constraints on theories that go beyond the Standard Model in the TeV region.


Profiles of James Peebles, Michel Mayor, and Didier Queloz: 2019 Nobel Laureates in Physics

Neta Bahcall, Adam Burrows

Published in PNAS, 117, 2, 799 – 801 (January 2020)

Mankind has long been fascinated by the mysteries of our Universe: How old and how big is the Universe? How did the Universe begin and how is it evolving? What is the composition of the Universe and the nature of its dark matter and dark energy? What is our Earth’s place in the cosmos, and are there other planets (and life) around other stars?

The 2019 Nobel Prize in Physics honors three pioneering scientists for their fundamental contributions to basic cosmic questions – Professor James Peebles (Princeton University), Michel Mayor (University of Geneva), and Didier Queloz (University of Geneva and the University of Cambridge) – “for contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos,” with one half to James Peebles “for theoretical discoveries in physical cosmology,” and the other half jointly to Michel Mayor and Didier Queloz “for the discovery of an exoplanet orbiting a solar-type star.” We summarize the historical and scientific backdrop to this year’s Physics Nobel.

Read more at https://arxiv.org/ftp/arxiv/papers/2001/2001.08511.pdf

How sensitive can your quantum detector be?

A new device measures the tiniest energies in superconducting circuits, an essential step for quantum technology


Quantum physics is moving out of the laboratory and into our everyday lives. Yet despite the big headline results about quantum computers solving problems impossible for classical computers, technical challenges stand in the way of getting quantum physics into the real world. New research published in Nature Communications by teams at Aalto University and Lund University provides an important tool in this quest.

One of the open questions in quantum research is how heat and thermodynamics coexist with quantum physics. This research field, “quantum thermodynamics”, is one of the areas Professor Jukka Pekola, leader of the QTF Centre of Excellence of the Academy of Finland, has worked on during his career. ‘This field has up to now been dominated by theory, and only now important experiments are starting to emerge,’ says Professor Pekola. His research group has set about creating quantum thermodynamic nanodevices that can answer such open questions experimentally.

Quantum states – like the qubits that power quantum computers – interact with their surrounding world, and these interactions are what quantum thermodynamics deals with. Measuring such systems requires detecting energy changes so exceptionally small that they are hard to pick out from background fluctuations, like trying to work out, using only a thermometer, whether someone has blown out a candle in the room you’re in. Another problem is that quantum states can change simply because you have measured them; it would be as if putting a thermometer into a cup of cold water made the water start to boil. The team therefore had to build a thermometer able to measure very small changes without interfering with the quantum states it is meant to measure.

Doctoral student Bayan Karimi works in QTF and in the Marie Curie training network QuESTech. Her device is a calorimeter, which measures the heat in a system. It uses a strip of copper about a thousand times thinner than a human hair. ‘Our detector absorbs radiation from the quantum states. It is expected to determine how much energy they have and how they interact with their surroundings. There is a theoretical limit to how accurate a calorimeter can be, and our device is now reaching that limit,’ says Karimi.
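The fundamental limit mentioned here is usually traced to equilibrium thermal fluctuations: an absorber with heat capacity C at temperature T has mean-square temperature fluctuations ⟨δT²⟩ = kB·T²/C, a textbook result. A minimal sketch follows; the temperature and heat-capacity values are made-up order-of-magnitude stand-ins, not the actual parameters of the Aalto device:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rms_temperature_noise(T, C):
    """RMS equilibrium temperature fluctuation of an absorber with
    heat capacity C (J/K) at temperature T (K): sqrt(k_B * T**2 / C)."""
    return math.sqrt(K_B * T**2 / C)

# Hypothetical numbers for a nanoscale metallic absorber:
T = 0.1    # 100 mK operating temperature
C = 1e-18  # J/K, assumed heat capacity of a tiny copper strip
print(f"RMS temperature noise ≈ {rms_temperature_noise(T, C):.2e} K")
```

The smaller the absorber's heat capacity, the larger these intrinsic fluctuations, which is why reaching the fundamental accuracy limit with a nanoscale copper strip is a meaningful benchmark.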

Read more at https://www.aalto.fi/en/news/how-sensitive-can-your-quantum-detector-be