New evidence shows that the key assumption made in the discovery of dark energy is in error

High-precision age dating of supernova host galaxies reveals that the luminosity evolution of supernovae is significant enough to question the very existence of dark energy

Figure 1. Luminosity evolution mimicking dark energy in supernova (SN) cosmology. The Hubble residual is the difference in SN luminosity with respect to the cosmological model without dark energy (the black dotted line). The cyan circles are the binned SN data from Betoule et al. (2014). The red line is the evolution curve based on our age dating of early-type host galaxies. The comparison of our evolution curve with the SN data shows that luminosity evolution can mimic the Hubble residuals used in the discovery and inference of dark energy (the black solid line).

The most direct and strongest evidence for an accelerating universe with dark energy comes from distance measurements using type Ia supernovae (SN Ia) in galaxies at high redshift. This result rests on the assumption that the luminosity of SN Ia, once corrected through empirical standardization, does not evolve with redshift.

New observations and analysis by a team of astronomers at Yonsei University (Seoul, South Korea), together with collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations covering most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of stellar population ages for these host galaxies. They find a significant correlation between SN luminosity and stellar population age at a 99.5% confidence level. As such, this is the most direct and stringent test yet of the luminosity evolution of SN Ia. Since SN progenitors in host galaxies become younger with redshift (look-back time), this result inevitably indicates a serious redshift-dependent systematic bias in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When this luminosity evolution is properly taken into account, the team finds that the evidence for the existence of dark energy simply goes away (see Figure 1).
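For readers who want to see how the Hubble residuals in Figure 1 are constructed, the sketch below computes the distance-modulus difference between an accelerating model and a no-dark-energy reference. It is a minimal illustration, assuming a flat ΛCDM model with Ωm = 0.3, an open matter-only reference model, and H0 = 70 km/s/Mpc; these values, and the choice of reference model, are placeholders rather than the team's actual analysis choices.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458   # speed of light [km/s]
H0 = 70.0        # illustrative Hubble constant [km/s/Mpc]

def lum_dist(z, om, ol):
    """Luminosity distance [Mpc] in an FLRW model with matter density om,
    dark-energy density ol, and curvature ok = 1 - om - ol."""
    ok = 1.0 - om - ol
    E = lambda zp: np.sqrt(om * (1 + zp)**3 + ok * (1 + zp)**2 + ol)
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)   # comoving distance in units of c/H0
    if ok > 1e-8:
        dm = np.sinh(np.sqrt(ok) * dc) / np.sqrt(ok)
    elif ok < -1e-8:
        dm = np.sin(np.sqrt(-ok) * dc) / np.sqrt(-ok)
    else:
        dm = dc
    return (C / H0) * (1 + z) * dm

def distance_modulus(z, om, ol):
    """mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * np.log10(lum_dist(z, om, ol)) + 25.0

# Hubble residual: accelerating model minus a no-dark-energy reference
# (positive residual = SNe appear fainter than in the reference model)
zs = np.linspace(0.01, 1.0, 50)
mu_lcdm = np.array([distance_modulus(z, 0.3, 0.7) for z in zs])   # with dark energy
mu_ref  = np.array([distance_modulus(z, 0.3, 0.0) for z in zs])   # open, no dark energy
residual = mu_lcdm - mu_ref
print(f"Hubble residual at z = 0.5: {residual[np.argmin(abs(zs - 0.5))]:.3f} mag")
```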

Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who was leading the project, said: “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption”.

Other cosmological probes, such as the CMB (Cosmic Microwave Background) and BAO (Baryon Acoustic Oscillations), are also known to provide some indirect and “circumstantial” evidence for dark energy, but it was recently suggested that CMB data from the Planck mission no longer support the concordance cosmological model and may require new physics (Di Valentino, Melchiorri, & Silk 2019). Some investigators have also shown that BAO and other low-redshift cosmological probes can be consistent with a non-accelerating universe without dark energy (see, for example, Tutusaus et al. 2017). In this respect, the present result, showing that luminosity evolution can mimic dark energy in SN cosmology, is crucial and very timely.

This result is reminiscent of the famous Tinsley-Sandage debate in the 1970s on luminosity evolution in observational cosmology, which led to the termination of the Sandage project originally designed to determine the fate of the universe.

Read more at https://astro.yonsei.ac.kr/galaxy/galaxy01/research.do?mode=view&articleNo=78249 and https://arxiv.org/abs/1912.04903

Spooky Action at a Global Distance

Resource-Rate Analysis of a Space-Based Entanglement-Distribution Network for the Quantum Internet

A hybrid global quantum-communications network, in which a satellite constellation distributes entangled photon pairs (red wave packets; entanglement depicted by wavy lines) to distant ground stations (observatories) that host multimode quantum memories for storage. These stations act as hubs that connect to local nodes (black dots) via fiber-optic or atmospheric links. Using these nearest-neighbor entangled links, via entanglement swapping, two distant nodes can share entanglement. Note that this architecture can also support inter-satellite entanglement links, which is useful for exploring fundamental physics and for forming an international time standard.

Sumeet Khatri, Anthony J. Brady, Renée A. Desporte, Manon P. Bart, Jonathan P. Dowling
Recent experimental breakthroughs in satellite quantum communications have opened up the possibility of creating a global quantum internet using satellite links. This approach appears to be particularly viable in the near term, due to the lower attenuation of optical signals from satellite to ground, and due to the currently short coherence times of quantum memories. The latter, in particular, prevents ground-based entanglement distribution using atmospheric or optical-fiber links at high rates over long distances. In this work, we propose a global-scale quantum internet consisting of a constellation of orbiting satellites that provides a continuous, on-demand entanglement-distribution service to ground stations. The satellites can also function as untrusted nodes for the purpose of long-distance quantum-key distribution. We determine the optimal resource cost of such a network for obtaining continuous global coverage. We also analyze the performance of the network in terms of achievable entanglement-distribution rates and compare these rates to those that can be obtained using ground-based quantum-repeater networks.
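To make the abstract's attenuation argument concrete: photons in optical fiber are lost exponentially with distance (roughly 0.2 dB/km at telecom wavelengths), whereas a satellite downlink suffers mainly diffraction loss, which grows only quadratically with distance, plus a roughly constant atmospheric penalty. The sketch below is an order-of-magnitude illustration with assumed link parameters (wavelength, aperture sizes, a 3 dB atmospheric loss); it is not the network model analyzed in the paper.

```python
import numpy as np

def fiber_transmittance(length_km, alpha_db_per_km=0.2):
    """Photon survival probability in optical fiber: exponential in distance."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

def downlink_transmittance(distance_km, wavelength_nm=810,
                           tx_aperture_m=0.3, rx_aperture_m=1.0,
                           atmosphere_db=3.0):
    """Rough satellite-to-ground link transmittance: far-field (Friis-type)
    diffraction loss between two apertures, capped at 1, times a fixed
    atmospheric absorption/turbulence penalty."""
    L = distance_km * 1e3
    lam = wavelength_nm * 1e-9
    geometric = min(1.0, (np.pi * tx_aperture_m * rx_aperture_m / (4 * lam * L)) ** 2)
    return geometric * 10 ** (-atmosphere_db / 10)

for d in (500, 1000, 2000):
    print(f"{d:>5} km   fiber: {fiber_transmittance(d):.2e}   "
          f"satellite downlink: {downlink_transmittance(d):.2e}")
```

Even with these crude assumptions, the fiber channel loses tens of orders of magnitude more signal than the free-space downlink once the distance reaches a few hundred kilometres, which is the core reason the satellite architecture looks attractive in the near term.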

Read more at https://arxiv.org/abs/1912.06678

Read also “Why the quantum internet should be built in space”

Top 10 quantum computing experiments of 2019

The last decade has seen quantum computing grow from a niche research endeavour to a large-scale business operation. While it’s exciting that the field is experiencing a surge of private funding and media publicity, it’s worth remembering that nobody yet knows how to build a useful fault-tolerant quantum computer. The path ahead is not “just engineering”, and in the coming decade we have to pay attention to all the “alternative approaches”, “crazy ideas” and “new ways of doing things”.

With this in mind, I created this subjective list of quantum computing research highlights of 2019. It features experimental achievements that demonstrate exciting new ways of controlling qubits. In such a vast body of literature, I have no doubt I missed some essential works, so I encourage you to get in touch and add your favourites to the list…

Read more at https://medium.com/@msmalina/top-quantum-computing-experiments-of-2019-1157db177611

A Forbidden Transition Allowed for Stars

The discovery of an exceptionally strong “forbidden” beta decay involving fluorine and neon could change our understanding of the fate of intermediate-mass stars.

Researchers have measured the forbidden nuclear transition between 20F and 20Ne. This measurement allowed them to make a new calculation of the electron-capture rate of 20Ne, a rate that is important for predicting the evolution of intermediate-mass stars.

Every year roughly 100 billion stars are born and just as many die. To understand the life cycle of a star, nuclear physicists and astrophysicists collaborate to unravel the physical processes that take place in the star’s interior. Their aim is to determine how the star responds to these processes and from that response predict the star’s final fate. Intermediate-mass stars, whose masses lie somewhere between 7 and 11 times that of our Sun, are thought to die via one of two very different routes: thermonuclear explosion or gravitational collapse. Which one happens depends on the conditions within the star when oxygen nuclei begin to fuse, triggering the star’s demise. Researchers have now, for the first time, measured a rare nuclear decay of fluorine to neon that is key to understanding the fate of these “in-between” stars. Their calculations indicate that thermonuclear explosion, and not gravitational collapse, is the more likely expiration route.

The evolution and fate of a star strongly depend on its mass at birth. Low-mass stars—such as the Sun—transition first into red giants and then into white dwarfs made of carbon and oxygen as they shed their outer layers. Massive stars—those whose mass is at least 11 times greater than the Sun’s—also transition to red giants, but in the cores of these giants, nuclear fusion continues until the core has turned completely to iron. Once that happens, the star stops generating energy and starts collapsing under the force of gravity. The star’s core then compresses into a neutron star, while its outer layers are ejected in a supernova explosion. The evolution of intermediate-mass stars is less clear. Predictions indicate that they can explode either via the gravitational-collapse mechanism of massive stars or via a thermonuclear process. The key to finding out which happens lies in the properties of an isotope of neon and its ability to capture electrons.

The story of fluorine and neon is tied to what is known as a forbidden nuclear transition. Nuclei, like atoms, have distinct energy levels and thus can exist in different energy states. For a given radioactive nucleus, the conditions within a star, such as the temperature and density of its plasma, dictate its likely energy state. The quantum-mechanical properties of each energy state then determine the nucleus’ likely decay path. The decay is called allowed if, on Earth, the decay path has a high likelihood of occurring. If, instead, the likelihood is low, the transition is termed forbidden. But in the extreme conditions of a star’s interior, these forbidden transitions can occur much more frequently. Thus, when researchers measure a nuclear reaction in the laboratory, the very small contribution from a forbidden transition is often the most critical one to measure for the astrophysics applications….

Read more at https://physics.aps.org/articles/v12/151


Aside

Is the expansion of the universe accelerating? All signs still point to yes

David Rubin, Jessica Heitlauf
Type Ia supernovae (SNe Ia) provided the first strong evidence that the expansion of the universe is accelerating. With SN samples now more than ten times larger than those used for the original discovery, and joined by other cosmological probes, this discovery is on even firmer ground. Two recent, related studies (Nielsen et al. 2016 and Colin et al. 2019, hereafter N16 and C19, respectively) have claimed to undermine the statistical significance of the SN Ia constraints. Rubin & Hayden (2016) (hereafter RH16) showed that N16 made an incorrect assumption about the distributions of SN Ia light-curve parameters; C19 additionally fails to remove the impact of the motion of the solar system from the SN redshifts, interpreting the resulting errors as evidence of a dipole in the deceleration parameter. Building on RH16, we outline the errors C19 makes in their treatment of the data and inference on cosmological parameters. Reproducing the C19 analysis with our proposed fixes, we find that the dipole parameters have little effect on the inferred cosmological parameters. We thus affirm the conclusion of RH16: the evidence for acceleration is secure.
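The solar-motion correction at issue is, to first order, a redshift dipole: the solar system moves at about 370 km/s with respect to the CMB, toward Galactic coordinates (l, b) ≈ (264°, 48°), so each heliocentric SN redshift shifts by (v/c)·cos θ, where θ is the angle between the SN direction and that apex. The sketch below shows only this first-order form, with commonly quoted apex values; actual analyses use the exact multiplicative relation between the two frames.

```python
import numpy as np

V_SUN = 369.8          # solar-system speed w.r.t. the CMB [km/s] (commonly quoted value)
C = 299792.458         # speed of light [km/s]
L_APEX, B_APEX = np.radians(264.0), np.radians(48.25)   # CMB dipole apex (Galactic coords)

def heliocentric_to_cmb(z_hel, l_deg, b_deg):
    """First-order conversion of a heliocentric redshift to the CMB frame
    for a supernova at Galactic longitude/latitude (l_deg, b_deg)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    # cosine of the angle between the SN direction and the dipole apex
    cos_theta = (np.sin(b) * np.sin(B_APEX)
                 + np.cos(b) * np.cos(B_APEX) * np.cos(l - L_APEX))
    return z_hel + (V_SUN / C) * cos_theta

# Example: the same heliocentric redshift seen toward vs. away from the apex
print(heliocentric_to_cmb(0.05, 264.0, 48.25))    # toward the apex    -> z_cmb ~ 0.0512
print(heliocentric_to_cmb(0.05, 84.0, -48.25))    # opposite direction -> z_cmb ~ 0.0488
```

The shift is only about 0.001 in redshift, but it is coherent across the sky, which is why leaving it uncorrected can masquerade as a dipole in the inferred deceleration parameter.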

Read more at https://arxiv.org/abs/1912.02191

Read also: “No Dark Energy? No Chance, Cosmologists Contend”

FASER’s new detector expected to catch first collider neutrino

The first-of-its-kind detector could initiate a new era in neutrino physics at particle colliders

Illustration of the FASER experiment. The new FASERν detector, which is just 25 cm wide, 25 cm tall and 1.35 m long, will be located at the front of FASER’s main detector in a narrow trench (yellow block in the bottom right of the image). (Image: FASER/CERN)

No neutrino produced at a particle collider has ever been detected, even though colliders create them in huge numbers. This could now change with the approval of a new detector for the FASER experiment at CERN. The small and inexpensive detector, called FASERν, will be placed at the front of the FASER experiment’s main detector, and could launch a new era in neutrino physics at particle colliders.

Ever since they were first observed at a nuclear reactor in 1956, neutrinos have been detected from many sources, such as the Sun, cosmic-ray interactions in the atmosphere, and the Earth, yet never at a particle collider. That’s unfortunate, because most collider neutrinos are produced at very high energies, at which neutrino interactions have not been well studied. Neutrinos produced at colliders could therefore shed new light on neutrinos, which remain the most enigmatic of the fundamental particles that make up matter.

The main reasons why collider neutrinos haven’t been detected are that, firstly, neutrinos interact very weakly with other matter and, secondly, collider detectors miss them. The highest-energy collider neutrinos, which are more likely to interact with the detector material, are mostly produced along the beamline – the line travelled by particle beams in a collider. However, typical collider detectors have holes along the beamline to let the beams through, so they can’t detect these neutrinos.

Enter FASER, which was approved earlier this year to search for light and weakly interacting particles such as dark photons – hypothetical particles that could mediate an unknown force that would link visible matter with dark matter. FASER, supported by the Heising-Simons and Simons Foundations, will be located along the beamline of the Large Hadron Collider (LHC), about 480 metres downstream of the ATLAS experiment, so it will be ideally positioned to detect neutrinos. However, the detection can’t be done with the experiment’s main detector.

“Since neutrinos interact very weakly with matter, you need a target with a lot of material in it to successfully detect them. The main FASER detector doesn’t have such a target, and is therefore unable to detect neutrinos, despite the huge number that will traverse the detector from the LHC collisions,” explains Jamie Boyd, co-spokesperson for the FASER experiment. “This is where FASERν comes in. It is made up of emulsion films and tungsten plates, and acts both as the target and the detector to see the neutrino interactions.”

FASERν is only 25 cm wide, 25 cm tall and 1.35 m long, but weighs 1.2 tonnes. Current neutrino detectors are generally much bigger: Super-Kamiokande, an underground neutrino detector in Japan, weighs 50 000 tonnes, and the IceCube detector at the South Pole has a volume of a cubic kilometre.
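A back-of-the-envelope estimate shows why such a dense target, and the LHC’s enormous neutrino flux, are needed: the charged-current neutrino–nucleon cross section grows roughly linearly with energy, at about 0.7 × 10⁻³⁸ cm² per GeV per nucleon, so even a TeV neutrino has only a chance of order 10⁻⁸ of interacting in a metre of tungsten. The numbers below are textbook-level approximations, not FASER’s own simulation.

```python
# Back-of-the-envelope interaction probability for a high-energy neutrino
# crossing a tungsten target of roughly FASERnu's depth (illustrative numbers).
NUCLEONS_PER_GRAM = 6.022e23   # ~N_A, since one nucleon weighs ~1/N_A gram
RHO_W = 19.3                   # density of tungsten [g/cm^3]
LENGTH = 100.0                 # target depth [cm], roughly 1 m of tungsten
SIGMA_PER_GEV = 0.7e-38        # approx. CC cross section per nucleon [cm^2/GeV]

def interaction_probability(energy_gev):
    """P(interact) ~ sigma(E) * nucleon column density (valid while P << 1)."""
    sigma = SIGMA_PER_GEV * energy_gev                    # cm^2 per nucleon
    nucleons_per_cm2 = RHO_W * LENGTH * NUCLEONS_PER_GRAM  # nucleons per cm^2
    return sigma * nucleons_per_cm2

for e in (100, 600, 1000):
    print(f"E = {e:>4} GeV  ->  P(interact) ~ {interaction_probability(e):.1e}")
# ~8e-9 at 1 TeV: collecting thousands of events therefore requires an enormous
# number of neutrinos crossing the detector over the run.
```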

After studying FASER’s ability to detect neutrinos and doing preliminary studies using pilot detectors in 2018, the FASER collaboration estimated that FASERν could detect more than 20 000 neutrinos. These neutrinos would have a mean energy of between 600 GeV and 1 TeV, depending on the type of neutrino produced. Indeed there are three types of neutrinos – electron neutrino, muon neutrino and tau neutrino – and the collaboration expects to detect 1300 electron neutrinos, 20 000 muon neutrinos and 20 tau neutrinos.

“These neutrinos will have the highest energies yet of man-made neutrinos, and their detection and study at the LHC will be a milestone in particle physics, allowing researchers to make highly complementary measurements in neutrino physics,” says Boyd. “What’s more, FASERν may also pave the way for neutrino programmes at future colliders, and the results of these programmes could feed into discussions of proposals for much larger neutrino detectors.”

The FASERν detector will be installed before the next LHC run, which will start in 2021, and it will collect data throughout this run.

Read more at https://home.cern/news/news/physics/fasers-new-detector-expected-catch-first-collider-neutrino

Study of the nonlinear effects observed in the oscillations of a simple pendulum (Étude des effets non linéaires observés sur les oscillations d’un pendule simple)

Thomas Gibaud, Alain Gibaud
In this paper we present a study of the nonlinear effects of the anharmonicity of the simple-pendulum potential. In a theoretical overview we recall that the anharmonicity of the potential generates additional harmonics and makes the oscillations non-isochronous. These phenomena become more pronounced as the oscillations depart from the small-angle regime, which is the domain of validity of the harmonic approximation. The measurements are performed with the SYSAM-SP5 acquisition box coupled with the Latis-Pro software and the Eurosmart pendulum. We show that only a detailed analysis, fitting the recorded curve, provides sufficient accuracy to describe the quadratic dependence of the period on the oscillation amplitude. We can also detect the additional harmonics in the oscillations when the amplitude becomes very large.
Read more at https://arxiv.org/ftp/arxiv/papers/1911/1911.11594.pdf
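To illustrate the quadratic dependence of the period on amplitude discussed in the abstract: the exact period of a simple pendulum is T = (4/ω₀)·K(m) with m = sin²(θ₀/2), where K is the complete elliptic integral of the first kind, and expanding it gives T ≈ T₀(1 + θ₀²/16 + …). The sketch below compares the two; the pendulum length is an arbitrary example value, not that of the Eurosmart pendulum.

```python
import numpy as np
from scipy.special import ellipk

G = 9.81      # gravitational acceleration [m/s^2]
LENGTH = 1.0  # pendulum length [m]; arbitrary example value

def exact_period(theta0):
    """Exact period for amplitude theta0 [rad]: T = (4/omega0) * K(m), m = sin^2(theta0/2)."""
    omega0 = np.sqrt(G / LENGTH)
    return 4.0 / omega0 * ellipk(np.sin(theta0 / 2.0) ** 2)

def quadratic_period(theta0):
    """Small-angle period with the first anharmonic correction: T ~ T0 (1 + theta0^2 / 16)."""
    T0 = 2.0 * np.pi * np.sqrt(LENGTH / G)
    return T0 * (1.0 + theta0 ** 2 / 16.0)

for deg in (5, 30, 60, 90):
    th = np.radians(deg)
    print(f"{deg:>3} deg:  exact {exact_period(th):.4f} s   "
          f"quadratic {quadratic_period(th):.4f} s")
```

At small amplitudes the two agree to within a millisecond, while by 90° the quadratic formula already underestimates the exact period by a few percent, which is where the higher harmonics discussed in the paper become detectable.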