Earth & Space Science News

Podcast: Plate Tectonics, the Theory That Changed Earth Science

Tue, 10/22/2019 - 16:51

Xavier Le Pichon was just starting his career when he stumbled upon a revolution in science.

Le Pichon came to Lamont Geological Observatory (now Columbia University’s Lamont-Doherty Earth Observatory) in 1959 to study theoretical geophysics.

Lamont’s director, Maurice Ewing, instead convinced Le Pichon to travel for 4 months aboard the R/V Vema as a physical oceanography technician.

The research cruise set out to test the existence of the mid-ocean ridge system: a long chain of seismically active mountains running along the ocean floor. During his months at sea, Le Pichon helped map ridges along the South Atlantic and southwest Indian Oceans that corresponded to a belt of earthquakes.

After the cruise, Le Pichon was hooked. He gave up the idea of studying theoretical geophysics and instead decided to pursue a career in marine geophysics studying mid-ocean ridges.


Le Pichon would go on to be one of the scientists on the forefront of the plate tectonics revolution—a theory that changed the way we understand our planet. Scientists discovered that instead of being a solid mass, Earth is made up of a series of plates that move and slide past each other, causing volcanic eruptions, earthquakes, and geological formations. The discovery of the mid-ocean ridge system and magnetic anomalies on the ocean floor led scientists to develop the theory of seafloor spreading, the process by which oceanic crust is renewed and a key piece in the development of plate tectonic theory.

Le Pichon recounts the changes that took place in Earth science during the 1950s and 1960s in a new Centennial episode of Third Pod from the Sun and a new paper in the AGU journal Tectonics. He details the discoveries that led to solidification of the theory, his own work developing the first global plate kinematic model and reconstructing how the planet looked billions of years ago, and what it was like to be a young scientist challenging deeply held scientific theories.

Le Pichon is also featured in this month’s Eos, where editor in chief Heather Goss tells the story of how, in 1990, Le Pichon unearthed an outline of a talk given by Jason Morgan at AGU’s Spring Meeting in Washington, D.C., in April 1967. Morgan had proved the theory of plate tectonics through seafloor spreading measurements, but he was largely ignored and was not fully credited with the discovery until Le Pichon published the outline more than 20 years later.

According to Le Pichon, the period was a “state of confusion and contradiction but also of extraordinary excitement in which we, Earth scientists, lived at this time.”

—Nanci Bompey (@nbompey), Public Information Officer, AGU

Forum Focuses on Climate and the 2020 U.S. Election

Tue, 10/22/2019 - 16:50

Mandy Gunasekara tried to defend the Trump administration’s record on climate change at a recent forum about the 2020 U.S. presidential election. That record includes pledging to withdraw from the Paris climate accord, bolstering the fossil fuel industry, questioning climate science, and President Donald Trump’s calling climate change a hoax.

“I wouldn’t say the administration is of the denialist ilk. There is a clear recognition that there are some consensus elements around science,” claimed Gunasekara, former principal deputy administrator for the Environmental Protection Agency’s (EPA) Office of Air and Radiation under Trump.

“What this administration has done from the start [is] basically establish the premise that you do not have to choose between a growing energy industry, a growing economy, and environmental protection,” said Gunasekara, who spoke at the Society of Environmental Journalists’ conference in Fort Collins, Colo., on 11 October. She was part of a panel about environment and climate on the 2020 campaign trail.

“The results bear out in terms of clean air, clean water, [and] reducing greenhouse gas emissions, and that’s ultimately where it matters. We have leadership in this space, and it’s been borne out by the results,” said Gunasekara, founder and president of the Energy 45 Fund, which supports the White House energy policy.

Others, however, decried those results as well as the administration’s record on climate change and energy policy.

Heather McTeer Toney, former EPA regional administrator for the southeast region under former president Barack Obama, said that the Trump administration has rolled back dozens of environmental regulations and has spent the past several years “dividing us on a number of issues, including climate.”

Although Gunasekara said that climate change likely won’t be “the defining element of what swings the election one way or the other,” Toney said she thinks that climate change will become “the defining issue” of the presidential election.

Climate change is connected to many other issues, said Toney, national field director for Moms Clean Air Force, an environmental organization. “You cannot separate climate from homeland security. You cannot separate climate from health care” or from many other issues, she said.

Toney cautioned, though, that the issue of climate change has to “break through the noise” when it competes for news coverage with other big issues such as the impeachment inquiry.

The Year That Climate Change Matters

Guido Girgenti, a founding board member and communications adviser for the Sunrise Movement, agreed that climate change needs to remain a top tier issue among the public and in the presidential election.

“If this is not the year that climate change matters, we’re really in for a breakdown of the stable climate that human civilization has depended on. So there’s a lot riding on it mattering,” said Girgenti, whose group advocates for the Green New Deal, a proposal to achieve net-zero greenhouse gas emissions by 2050, promote climate justice, and create jobs, among other goals.

Girgenti said that he is encouraged by the Democratic presidential candidates’ climate plans and many candidates’ support of the Green New Deal.

Joseph Pinion, founder and chair of the Conservative Color Coalition, spoke on behalf of Republicans who want to do something about climate change. He acknowledged that “there are individuals in my party who will have to be dragged kicking and screaming” to take action on climate change, but he also said that most young conservatives “believe that climate change is the real thing, believe that we need to take that issue seriously, and need to prioritize it.”

However, Pinion said that “the Green New Deal is literally the worst thing that’s ever happened to me as it pertains to getting people on the right to take climate change seriously.” He said that “tethering” the deal to a federal jobs program and other items doesn’t keep the issue focused on climate change, substantially increases the price tag, and disregards conservatives’ concerns.

All options for dealing with climate change need to be on the table, Pinion said, including nuclear power.

“If we’re saying this is the issue, the issue,” he emphasized, “then why would we choose not to focus on it with a laser-like precision and also do it in a manner that brings the people along with you who are on the other side of the aisle? And we know that we need to have people on the other side of the aisle to get the job done.”

—Randy Showstack (@RandyShowstack), Staff Writer

The Infrastructure Impacts of Solar Storms

Tue, 10/22/2019 - 16:48

Geomagnetic storms are a type of space weather event that can drive geomagnetically induced currents (GICs) at Earth’s surface, where they can interfere with power transmission transformers and other infrastructure. A new book, Geomagnetically Induced Currents from the Sun to the Power Grid, recently published by AGU, presents current knowledge about GIC prediction and impact. Here, one of the editors gives an overview of GIC hazards and the benefits of an interdisciplinary approach to addressing hazard and risk.

What are ‘geomagnetically induced currents’ and what impacts do they have on Earth’s surface?

Map of magnetic field perturbations over North America at 21:56 UT on 3 August 2017. Credit: Weimer [2019], Figure 4.3

Geomagnetic storms have a range of effects on the Earth’s magnetosphere, the ionosphere, and the thermosphere (part of the upper atmosphere).

They can also have effects on the ground when the rapidly changing geomagnetic field interacts with the solid Earth.

This includes creating GICs, which are potentially damaging quasi-DC currents that can arise in long conductors, such as power transmission lines or oil pipelines.
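The induction step can be sketched with the standard plane-wave approximation from magnetotellurics (a textbook simplification, not taken from the book itself): over ground of uniform conductivity σ, a magnetic field variation of amplitude |B| at angular frequency ω induces a horizontal geoelectric field of roughly |E| = |B|·√(ω/(μ₀σ)), and it is this field that drives quasi-DC currents through grounded conductors. A minimal sketch, with assumed storm-time numbers:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def plane_wave_e_field(b_amplitude_t, period_s, conductivity):
    """Surface geoelectric field amplitude (V/m) over a uniform
    half-space in the plane-wave approximation:
    |E| = |B| * sqrt(omega / (mu0 * sigma))."""
    omega = 2 * math.pi / period_s
    return b_amplitude_t * math.sqrt(omega / (MU0 * conductivity))

# Illustrative storm-time numbers (assumed, not from the book):
# a 500 nT magnetic oscillation with a 300 s period over fairly
# resistive ground (0.001 S/m).
e = plane_wave_e_field(500e-9, 300.0, 1e-3)
print(f"{e * 1e3:.1f} mV/m")  # about 2 mV/m, i.e. roughly 2 V/km
```

Note that the field scales as 1/√σ: more resistive ground produces larger geoelectric fields for the same storm, which is one reason crustal resistivity models matter for GIC hazard assessment.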

Is it possible to forecast GICs and predict their impacts?

GICs caused by geomagnetic storms can happen at any time but vary in frequency, strength, and impact. The most extreme geomagnetic storms are very rare but moderate-intensity storms are more common.

Electrical resistivity model for the United States at upper‐crustal depth (3 km). Credit: Kelbert et al. [2019], Figure 8.5

We can’t yet forecast the precise local impacts of GICs, but significant progress is being made in the space weather, geophysics, and power engineering communities to predict hazard conditions.

Scientists and power engineers are learning from these more common, moderate-intensity storms how to better understand, plan for, and mitigate disruptions due to GICs.

As the science advances and infrastructure risk is better understood, local forecasts will become more important in mitigation of potential GIC effects.

Why is an interdisciplinary approach to understanding GICs necessary but also a challenge?

GICs are a naturally interdisciplinary problem, involving processes from the Sun to the power grid, yet their impacts are highly localized. It is an ongoing challenge to bring together the broad areas of expertise in science and engineering that are needed to understand both the broad geophysical hazard and the specific infrastructure risk.

What does your new book offer to people interested in this topic?

This book covers a range of topics relevant to GICs, from geoscience to power engineering, and from the space weather driver to power grid impacts.

The introductory and deeper-dive chapters can be useful to those interested in learning about the full GIC problem, whether they are new to the topic or a domain expert looking to better understand an adjacent discipline.

Geomagnetically Induced Currents from the Sun to the Power Grid, 2019, ISBN: 978-1-119-43438-2, list price, $199.95 (hardcover), $159.99 (e-book)

—Jennifer L. Gannon (gannon@cpi.com; ORCID: 0000-0001-5524-6452), Computational Physics, Inc., USA

Editor’s Note: It is the policy of AGU Publications to invite the authors or editors of newly published books to write a summary for Eos Editors’ Vox.

Permafrost Thaws Rapidly as Arctic River Flooding Increases

Mon, 10/21/2019 - 11:18

Arctic regions are responding rapidly to modern climate change, as high latitudes have warmed more than twice as fast as the global average. Among the changes in recent decades are thawing and degradation of permafrost, and hydrologic shifts that include earlier snowmelt and higher river discharge.

Zheng et al. [2019] developed a heat-exchange model to investigate how changes in river flow affect permafrost within floodplains, and applied their model to the Kuparuk River, Alaska, where mean annual flow has increased by 35% since the 1970s and snowmelt floods now arrive earlier. Their results indicate that the changes to inundation extent and timing of river discharge cause floodplain permafrost to thaw more rapidly, as heat is transferred from the warmer floodwater down into the cooler subsurface. The model shows that the earlier arrival of spring flooding impacts permafrost warming more than a prolonged warm season would.
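The mechanism at work, heat conducted from warm floodwater down into cooler frozen ground, can be caricatured with a one-dimensional conduction sketch. This is a toy model only, not the authors’ heat-exchange model: latent heat of thawing is ignored and every number is illustrative.

```python
def flood_warming_sketch(days, t_water=4.0, t_ground=-2.0,
                         alpha=5e-7, depth=2.0, nz=40):
    """Explicit 1-D heat conduction into flooded ground.

    A toy model only: latent heat of thawing is ignored, and
    alpha (thermal diffusivity, m^2/s) is an illustrative value.
    The flooded surface is held at t_water (deg C)."""
    dz = depth / nz
    dt = 0.4 * dz * dz / alpha        # explicit stability: dt < dz^2/(2*alpha)
    steps = int(days * 86400 / dt)
    temp = [t_ground] * (nz + 1)      # initial profile: uniformly frozen
    for _ in range(steps):
        temp[0] = t_water             # floodwater boundary condition
        new = temp[:]
        for i in range(1, nz):
            new[i] = temp[i] + alpha * dt / dz**2 * (
                temp[i - 1] - 2 * temp[i] + temp[i + 1])
        temp = new
    return temp

profile = flood_warming_sketch(days=30)
depth_index = round(0.5 / (2.0 / 40))     # grid node at 0.5 m depth
print(f"0.5 m down after 30 days: {profile[depth_index]:.1f} C")
```

With these illustrative numbers, a month of inundation at 4°C warms the upper half meter of initially frozen ground above 0°C, in line with the paper’s point that earlier and longer flooding matters; the real model, of course, treats the heat exchange far more carefully.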

Accelerated degradation of permafrost due to more sustained floodwater inundation could enhance the release of old carbon, large quantities of which are currently stored in Arctic floodplains.

Citation: Zheng, L., Overeem, I., Wang, K., & Clow, G. D. [2019]. Changing Arctic river dynamics cause localized permafrost thaw. Journal of Geophysical Research: Earth Surface, 124. https://doi.org/10.1029/2019JF005060

—Amy East, Editor in Chief, JGR: Earth Surface

Europe’s Mightiest Glaciers Are Melting

Mon, 10/21/2019 - 11:17

When the photographer Walter Mittelholzer snapped pictures of Mont Blanc from his plane in 1919, he pointed his lens at the landscape’s rugged beauty. One century later, his images reveal the rapid loss of ice on the Alps’ highest peak.

This summer, researchers re-created Mittelholzer’s images of three Mont Blanc glaciers by photographing the glaciers 100 years later. The scientists triangulated Mittelholzer’s original location on the basis of nearby peaks and flew a helicopter to an elevation of 4,700 meters at the same spot near the Mont Blanc summit, which straddles the border of Italy and France. Viewed side by side, the images show the drastic effect of climate change on the region.

The scientists chose three of the mountain’s largest glaciers: Argentière, Bossons, and Mer de Glace. In the photographs taken at Mer de Glace, the black-and-white image from 1919 shows a channel of ice, nearly 2 kilometers wide in places, flowing down a deep valley. In 2019, the glacier is sunken, covered in brown sediment, and peters out into a melt pond at what used to be its far end.

Aerial images of Mer de Glace glacier taken in 1919 (left) and 2019 (right). Mer de Glace means “sea of ice” in French and is the largest glacier on Mont Blanc. Credit: Walter Mittelholzer, ETH-Bibliothek Zürich; Kieran Baxter, University of Dundee

University of Dundee scientist Kieran Baxter, who took the new images, said in a press release that “it was both a breathtaking and heartbreaking experience, particularly knowing that the melt has accelerated massively in the last few decades.”

The ice loss on Mont Blanc is hardly unique, said Baxter. Glaciers in the European Alps lost half their volume between 1850 and 1975, according to a study published in the Annals of Glaciology. Over the next 30 years, 40% of their remaining volume melted away.
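Because the second figure applies to the volume remaining after 1975, the two losses compound; a one-line check of the arithmetic:

```python
v_1850 = 1.0                  # normalize the 1850 volume to 1
v_1975 = v_1850 * (1 - 0.5)   # half the volume lost by 1975
v_2005 = v_1975 * (1 - 0.4)   # 40% of the remainder lost over the next 30 years
print(f"{v_2005:.0%} of the 1850 volume remained")  # 30%
```

In other words, roughly 70% of the mid-19th-century ice volume was already gone by around 2005.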

“The ice loss visible in these pictures is representative of the type of melt that is happening to the vast majority of glaciers across the Alps and in other glaciated regions around the world,” Baxter told Eos.

Disappearing Landscape

Glaciers used to be viewed as permanent features of the landscape, said Baxter, and even Mittelholzer was more interested in mountain summits than glaciers. Now “we recognize that our actions have made [glaciers] something much more ephemeral,” Baxter said, pointing to the example of mourners who recently gathered for a funeral for Switzerland’s Pizol glacier, which shrank so much that it was stripped of its status as a glacier.

Rapidly shrinking glaciers could be hazardous as well. Mont Blanc’s Planpincieux glacier grew so unstable in September that an Italian mayor called for evacuations and road closures. A recent report from the United Nations warns that climate change will bring disasters to high mountain regions.

The three photographs are just a “tiny fraction” of Mittelholzer’s collection, said Baxter, and the researchers hope to further explore his archives. They also plan to search for photographs in personal collections that may have been overlooked.

Future projections of glaciers in the Alps are grim: Two thirds of the ice will vanish by 2100 under the best-case emissions scenario, according to a study published in April 2019. Yet if emissions continue at their current rate, more than 90% could be gone by the end of the century. No matter what humans emit, the study found that half the Alpine ice will be gone by 2050.

Side-by-side images of Mont Blanc’s Bossons glacier in 1919 (left) and 2019 (right). Credit: Walter Mittelholzer, ETH-Bibliothek Zürich; Kieran Baxter, University of Dundee

When it comes to photography, Baxter said it’s a “race against time” to capture images before it’s too late.

“Unless we drastically reduce our dependence on fossil fuels, there will be little ice left to photograph in another hundred years,” Baxter said in a press release.

—Jenessa Duncombe (@jrdscience), News Writing and Production Fellow

21 October 2019: This article has been updated to accurately state the elevation of the photographs. 

Earthquake Statistics Vary with Fault Size

Mon, 10/21/2019 - 11:16

Many natural and human-made phenomena obey power law distributions. In one of the best-known examples, a power law distribution describes how small earthquakes occur much more frequently than large, potentially destructive ones.
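This frequency-size power law is usually written, in terms of magnitude, as the Gutenberg-Richter relation, log₁₀ N(≥M) = a − b·M: with the typical b ≈ 1, each unit step up in magnitude makes earthquakes about ten times rarer. A quick sketch (the a and b values here are illustrative, not taken from the study):

```python
def expected_count(mag, a=6.0, b=1.0):
    """Gutenberg-Richter relation: log10 N(>= M) = a - b*M, so each
    unit of magnitude makes events ten times rarer when b = 1.
    The a and b values are illustrative, typical choices."""
    return 10 ** (a - b * mag)

for m in (3, 4, 5, 6):
    print(f"M >= {m}: ~{expected_count(m):g} events")
```

Under these assumed values, a region that produces a thousand magnitude 3 events would be expected to produce only one magnitude 6 event over the same period.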

Generally, the power law distribution in earthquake moment holds when considering seismic events over time on multiple faults. However, scientists still puzzle over the distribution of rupture sizes along individual faults. Exceptions to the power law statistics have been observed in rare sequences known as repeating earthquakes. Instead of many small events and a few large ones, these sequences are characterized by periodic earthquakes of fixed size, raising various questions for researchers. Why do these sequences depart from the otherwise ubiquitous power law statistics of earthquake sizes? And what distributions can we expect to occur on faults large enough to produce destructive earthquakes?

In a new theoretical study, Cattania explored the factors controlling earthquake statistics on a single isolated fault. The author used a two-dimensional earthquake cycle model of a simple fault experiencing both earthquakes and slow aseismic slip, or creep, and compared the results to records of earthquakes observed in nature.

The research revealed that although small seismic sources can produce identical and periodic earthquakes, tremors on large faults exhibit different traits, including the power law distributions observed in nature. For bigger faults, the rupture lengths of earthquakes may span several orders of magnitude and cluster in time—instead of spacing out more evenly as they do on small seismic sources. On the basis of straightforward physical concepts related to fault strain and the energy released during fracture formation, the study showed that the transition between these types of behavior is controlled by the ratio of a fault’s size to a length related to the earthquake nucleation dimension.

In essence, the study demonstrated that simple, isolated faults do not necessarily produce regular and periodic earthquakes, especially when the faults are relatively large. The conclusions offer insights into seismic hazard analysis. Although the simplified model used in the study may not adequately represent individual faults found in nature, the theory can be extended to more realistic cases. (Geophysical Research Letters, https://doi.org/10.1029/2019GL083628, 2019)

—Aaron Sidder, Freelance Writer

Scientific Integrity Act Passes House Committee

Fri, 10/18/2019 - 19:15

Legislation to protect scientific integrity in U.S. federal agencies was approved by the House of Representatives’ Committee on Science, Space, and Technology on 17 October in a 25–6 vote that included bipartisan support.

The bill, which now goes to the full House for approval, would require federal science agencies to adopt and enforce a scientific integrity policy, and it also formalizes these policies in law.

The policy requirements for agencies covered by the bill would prohibit any individual from “engaging in dishonesty, fraud, deceit, misrepresentation, coercive manipulation, or other scientific or research misconduct.” Among other measures, the bill also would prohibit “suppressing, altering, interfering with, delaying without scientific merit, or otherwise impeding the release and communication of, scientific or technical findings.”

“There are many specific principles [in the bill] addressing openness, transparency, and due process. At their essence, they are about protecting federal science and scientists from undue political influence and ensuring that the public can trust the science and scientific process informing public policy decisions,” committee chair Rep. Eddie Bernice Johnson (D-Texas) said at a markup of the legislation, which currently has 218 cosponsors. “This is important legislation, regardless of which party is in the White House.”

Committee member Rep. Paul Tonko (D-N.Y.), who introduced the legislation, said at the markup, “The fact remains [that] whether a Democrat or a Republican sits in the speaker’s chair or the Oval Office, we need strong scientific integrity policies. This bill, H.R. 1709, would do just that, insulating public scientific research and reports from the distorting influence of political special interests by ensuring strong scientific integrity standards at America’s science agencies.”

Tonko said that although more than 20 federal agencies already have some form of a scientific integrity policy, “the policies are uneven in their enforcement and in their scope.”

The legislation “is very timely,” said Rep. Suzanne Bonamici (D-Ore.), a member of the committee. “Last month, weather forecasting and science became contentious during Hurricane Dorian, and it jeopardized the safety of our communities,” she said, referring to an incident during which President Donald Trump publicly presented outdated hurricane forecasting information. “I want to acknowledge the public servants in the National Weather Service Birmingham [Alabama] office who helped defend scientific integrity in what unfortunately became a very political moment.”

Rep. Frank Lucas (R-Okla.), the ranking Republican on the committee, also supported the legislation. “We all agree [that] government scientists should be able to conduct their research free from suppression, intimidation, coercion, or manipulation,” he said. “Federal scientists, like all other federal employees, enjoy many protections in the workplace. In addition to these protections, research agencies already have specific scientific integrity policies in place through a standing executive order. Still, there’s room to improve our federal research enterprise.”

“A Critical Issue”

Lauren Kurtz, executive director of the Climate Science Legal Defense Fund, told Eos in a statement that the group supports the bill “because scientific integrity at federal agencies is a critical issue and presently, federal scientific integrity policies are piecemeal at best. Current agency scientific integrity policies were largely implemented under the Obama administration, and are designed to preserve scientific objectivity and protect science from being misrepresented. Unfortunately, agency policies have been inconsistently written and unevenly applied, agency scientific integrity officers (when they even exist) do not always operate with transparency, and scientists filing integrity complaints often do not have any clear right of appeal.”

She added that the bill “would help ensure that agency scientific integrity policies meet necessary minimum criteria regardless of changes in administration, and would help ensure that scientists have clear recourse if an agency fails to enforce its own policy.”

In a statement, Michael Halpern, deputy director of the Center for Science and Democracy at the Union of Concerned Scientists, praised passage of the legislation but said it was disappointing that some language about scientists communicating with the media was removed from the bill’s original version. “The legislation is unfortunately silent on the right of experts to respond to interview requests from reporters. This is a mistake,” Halpern noted.

Lucas explained during the markup that a section in the initial legislation that allowed covered individuals to respond to media interview requests “got into the weeds on how scientists manage their media requests.” Lucas said that an amendment he presented, and Tonko agreed to, “strikes those provisions and simply leaves it up to the agencies and administrations to manage their own media policies. Many agencies already have media procedures in place as a part of their scientific integrity policies, and those would be able to continue under this bill. Every administration deserves the opportunity to shape policy and message.”

Overall, however, Halpern welcomed the committee’s approval of the bill. “Today, the remarkable happened: The Scientific Integrity Act passed the House Science Committee with support from both Republicans and Democrats,” he stated. “This is the first time this kind of legislation has passed out of a House committee. This is also the first time this kind of legislation has received public support from Republicans still in office.”

—Randy Showstack (@RandyShowstack), Staff Writer

Does Io Have a Magma Ocean?

Fri, 10/18/2019 - 11:18

The evolution and habitability of Earth and other worlds are largely products of how much these worlds are warmed by their parent stars, by the decay of radioactive elements in their interiors, and by other external and internal processes.

Of these processes, tidal heating caused by gravitational interactions among nearby stars, planets, and moons is key to the way that many worlds across our solar system and beyond have developed. Jupiter’s intensely heated moon Io, for example, experiences voluminous lava eruptions like those associated with mass extinctions on ancient Earth, courtesy of tidal heating. Meanwhile, less intense tidal heating of icy worlds sometimes maintains subsurface oceans—thought to be the case on Saturn’s moon Enceladus and elsewhere—greatly expanding the habitable zones around stars.

Tidal heating results from the changing gravitational attraction between a parent planet and a close-in moon that revolves around that planet in a noncircular orbit. (The same goes for planets in close noncircular orbits around parent stars.) Because its orbit is not circular, the distance between such a moon and its parent planet varies depending on where it is in its orbit, which means it experiences stronger or weaker gravitational attraction to its parent body at different times. These tightening and relaxing responses of the gravitational attraction change the orbiting moon’s shape over the course of each orbit and generate friction and heat internally as rock, ice, and viscous magma are pushed and pulled. (The same process causes Earth’s ocean tides, although the reshaping of the ocean generates relatively little heat because of water’s low viscosity.)

The magnitude and phase of a moon’s tidally induced deformation depend on its interior structure. Bodies with continuous liquid regions below the surface, such as a subsurface water or magma ocean, are expected to show larger tidal responses and perhaps distinctive rotational parameters compared with bodies without these large fluid regions. Tidal deformation is thus central to understanding a moon’s energy budget and probing its internal structure.

The dissipation of tidal energy (or the conversion of orbital energy into heat) within a parent planet causes the planet’s moons to migrate outward. This process frequently drives the satellites into what are called mean-motion resonances with each other, in which their orbital periods—the time it takes a satellite to complete a revolution around its parent—are related by integer ratios. The multiple satellites within such orbital resonances exert periodic gravitational influences on each other that serve to maintain noncircular orbits (orbits with nonzero eccentricities), which drive tidal heating. Simultaneously, tidal dissipation and heating within orbiting satellites damp the orbital eccentricity excited by mean-motion resonances, move orbits inward, and power tectonism and potential volcanic activity. Without resonances, continued tidal energy dissipation would eventually lead to circular orbits that would minimize tidal heating.

For all that we have learned, fundamental gaps remain in our understanding of tidal heating. At a Keck Institute for Space Studies workshop [de Kleer et al., 2019] in October 2018, participants discussed the current state of knowledge about tidal heating as well as how future spacecraft missions to select solar system targets could help address these gaps.

Jupiter and the Galilean Satellites

Each time Ganymede orbits Jupiter once, Europa completes two orbits, and Io completes four orbits (Figure 1). This 1:2:4 resonance was discovered by Pierre-Simon Laplace in 1771, but its significance was realized only 200 years later when Peale et al. [1979] published their prediction that the resonance would lead to tidal heating and melting of Io, just before the Voyager 1 mission discovered Io’s active volcanism. The periodic alignment of these three large moons results in forced eccentric orbits, so the shapes of these moons periodically change as they orbit massive Jupiter, with the most intense deformation and heating occurring at innermost Io. Meanwhile, tidal heating of Europa (and of Saturn’s moon Enceladus) maintains a subsurface ocean that’s below a relatively thin ice shell and in contact with the moon’s silicate core, providing key ingredients for habitability.

Fig. 1. The Jovian and Saturnian systems: The top of each diagram shows the orbital architecture of the system, with the host planet and orbits to scale. Relevant mean-motion resonances are identified in red. The bottom of each diagram shows the moons to scale with one another. Physical parameters listed for each planet and moon include the diameter d, bulk density ρ, and rotational period P, which for all the moons is equal to the orbital period, as they are tidally locked with their host planet. Credit: James Tuttle Keane/Keck Institute for Space Studies
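The resonance can be checked directly from the moons’ well-known orbital periods. The period ratios come out close to, but not exactly, 1:2:4; what holds almost exactly is the Laplace relation among the mean motions, n_Io − 3·n_Europa + 2·n_Ganymede = 0 (period values below are standard figures, rounded to four significant digits):

```python
# Sidereal orbital periods of the three resonant Galilean moons,
# in days (standard values, rounded to four significant figures).
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

# Period ratios: close to, but not exactly, 1:2:4.
for moon, p in periods.items():
    print(f"{moon}: {p / periods['Io']:.3f} x Io's period")

# The Laplace relation n_Io - 3*n_Europa + 2*n_Ganymede = 0
# holds almost exactly for the mean motions (degrees per day).
n = {moon: 360.0 / p for moon, p in periods.items()}
combo = n["Io"] - 3 * n["Europa"] + 2 * n["Ganymede"]
print(f"Laplace combination: {combo:+.3f} deg/day")
```

It is this near-perfect three-body relation, rather than the approximate period ratios, that keeps the moons’ orbital eccentricities forced and the tidal heating going.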

Although Peale et al. [1979] predicted the presence of a thin lithosphere over a magma ocean on Io, Voyager 1 revealed mountains more than 10 kilometers high (Figure 2). This suggests that Io has a thick, cold lithosphere formed by rapid volcanic resurfacing and subsidence of crustal layers. The idea of a magma ocean inside Io generally lost favor in subsequent studies, until Khurana et al. [2011] presented evidence from Galileo mission data of an induced magnetic signature from Io. Induced signatures from Europa, Ganymede, and Callisto (the moon that, together with Io, Europa, and Ganymede, makes up Jupiter’s Galilean satellites) had previously been interpreted as being caused by salty oceans, which are electrically conducting; molten silicates are also electrically conducting. Considerable debate persists about whether Io has a magma ocean.

Fig. 2. Haemus Mons, seen here in an image taken by Voyager 1, is a mountain near the south pole of Io. The mountain is about 100 kilometers wide × 200 kilometers long and rises 10 kilometers above the surrounding plains, comparable to Mount Everest, which rises roughly 9 kilometers above sea level. The bright material is sulfur dioxide frost. Credit: NASA/JPL/U.S. Geological Survey

The Jovian system provides the greatest potential for advances in our understanding of tidal heating in the next few decades. This is because NASA’s Europa Clipper and the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) will provide in-depth studies of Europa and Ganymede in the 2030s, and the Juno mission orbiting Jupiter may have close encounters with the Galilean satellites in an extended mission. However, our understanding of this system will continue to be limited unless there is also a dedicated mission with close encounters with Io. The easily observed heat flow on Io (at least 20 times greater than that on Earth) from hundreds of continually erupting volcanoes makes it the ideal target for further investigation and key to understanding the Laplace resonance and tidal heating.

Advances from the Saturnian System

As discovered by Hermann Struve in 1890, the Saturnian system contains two pairs of satellites that each display a 1:2 orbital resonance (Figure 1): Tethys-Mimas and Dione-Enceladus. More recently, the Cassini mission discovered that Enceladus and Titan are ocean worlds, hosting large bodies of liquid water beneath icy crusts.

Precise measurements of the Saturnian moon orbits, largely based on Cassini radio tracking during close encounters, have revealed outward migration rates much faster than expected. But extrapolating the Cassini migration measurements backward in time under the conventional assumption of a constant tidal dissipation parameter Q, which measures a body’s response to tidal distortion, implies that the Saturnian moons would have been inside Saturn well within the lifetime of the solar system, which is impossible. To resolve this contradiction, Fuller et al. [2016] proposed a new theory for tidally excited systems that describes how orbital migrations could accelerate over time.
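The contradiction can be made concrete with a back-of-the-envelope integration. For constant Q, standard tidal theory gives da/dt proportional to a^(-11/2), so a^(13/2) grows linearly in time and the look-back time for the orbit to shrink to zero is (2/13)(a0/ȧ0). The sketch below uses Enceladus’s semimajor axis with an illustrative, not measured, migration rate:

```python
# Constant-Q tidal migration: da/dt = C * a**(-11/2), so a**(13/2) is linear in
# time and the look-back time to a = 0 is t_back = (2/13) * a0 / adot0.
a0 = 2.38e8     # Enceladus's current semimajor axis, m
adot0 = 0.10    # current outward migration rate, m/yr (illustrative value)

t_back = (2 / 13) * a0 / adot0   # years
age_solar_system = 4.5e9         # years

print(f"look-back time: {t_back:.2e} yr")  # ~3.7e8 yr, far less than 4.5e9 yr,
                                           # i.e., the moon would have started
                                           # inside Saturn
```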

The theory is based on the idea that the internal structures of gas giant planets can evolve on timescales comparable to their ages, causing the frequencies of a planetary oscillation mode (i.e., the planet’s vibrations) to gradually change. This evolution enables “resonance locking” in which a planetary oscillation mode stays nearly resonant with the forcing created by a moon’s orbital period, producing outward migration of the moon that occurs over a timescale comparable to the age of the solar system. This model predicts similar migration timescales but different Q values for each moon. Among other results, this hypothesis explains the present-day heat flux of Enceladus without requiring it to have formed recently, a point relevant to its current habitability and a source of debate among researchers.

Observing Tidally Heated Exoplanets

Beyond our solar system, tidal heating of exoplanets and their satellites significantly enlarges the total habitable volume in the galaxy. And as exoplanets continue to be confirmed, researchers are increasingly studying the process in distant star systems. For example, seven roughly Earth-sized planets orbit close to TRAPPIST-1 (Figure 3), a low-mass star about 40 light-years from us, with periods of a few Earth days and with nonzero eccentricities. Barr et al. [2018] concluded that two of these planets undergo sufficient tidal heating to support magma oceans and the other five could maintain water oceans.

Fig. 3. The TRAPPIST-1 system includes seven known Earth-sized planets. Intense tidal heating of the innermost planets is likely. The projected habitable zone is shaded in green for the TRAPPIST-1 system, and the solar system is shown for comparison. Credit: NASA/JPL-Caltech

Highly volcanic exoplanets are considered high-priority targets for future investigations because they likely exhibit diverse compositions and volcanic eruption styles. They are also relatively easy to characterize because of how readily volcanic gases can be studied with spectroscopy, their bright flux in the infrared spectrum, and their preferential occurrence in short orbital periods. The last point means that they can be observed relatively often as they frequently transit their parent stars, resulting in a periodic slight dimming of the starlight.

Directions in Tidal Heating Research

The Keck Institute of Space Studies workshop identified five key questions to drive future research and exploration:

1. What do volcanic eruptions tell us about planetary interiors? Active eruptions in the outer solar system are found on Io and Enceladus, and there are suggestions of such activity on Europa and on Neptune’s large moon Triton. Volcanism is especially important for the study of planetary interiors, as it provides samples from depth and shows that there is sufficient internal energy to melt the interior. Eruption styles place important constraints on the density and stress distribution in the subsurface. And for tidally heated bodies, the properties of the erupted material can place strong constraints on the temperature and viscosity structure of a planet with depth, which is critical information for modeling the distribution and extent of tidal dissipation.

2. How is tidal dissipation partitioned between solid and liquid materials? Tidal energy can be dissipated as heat in both the solid and liquid regions of a body. The dissipation response of planetary materials depends on their microstructural characteristics, such as grain size and melt distribution, as well as on the timescales of forcing. If forcing occurs at high frequency, planetary materials respond via instantaneous elastic deformation. If forcing occurs at very low frequency, in a quasi steady state manner, materials respond with permanent viscous deformation. Between these ends of the spectrum, on timescales most relevant to tidal flexing of planetary materials, the response is anelastic, with a time lag between an applied stress and the resulting deformation.

Decades of experimental studies have focused on studying seismic wave attenuation here on Earth. However, seismic waves have much smaller stress amplitudes and much higher frequencies than tidal forcing, so the type of forcing relevant to tidally heated worlds remains poorly explored experimentally. For instance, it is not clear under what conditions tidal stress could alter existing grain sizes and/or melt distributions within the material being stressed.

3. Does Io have a magma ocean? To understand Io’s dynamics, such as where tidal heating occurs in the interior, we need to better understand its interior structure. Observations collected during close spacecraft flybys can determine whether Io has a magma ocean or another melt distribution (Table 1 and Figure 4). One means to study this is from magnetic measurements. Such measurements would be similar to the magnetic field measurements made by the Galileo spacecraft near Io but with better data on Io’s plasma environment (which is a major source of noise), flybys optimized to the best times and places for measuring variations in the magnetic field, and new laboratory measurements of electrical conductivities of relevant planetary materials.

A second method to investigate Io’s interior is with gravity science, in which the variables k2 and h2 (Table 1), called Love numbers, express how a body’s gravitational potential responds on a tidal timescale and its radial surface deformation, respectively. Each of these variables alone can confirm or reject the hypothesis of a liquid layer decoupled from the lithosphere because their values are roughly 5 times larger for a liquid than for a solid body. Although k2 can be measured through radio science (every spacecraft carries a radio telecommunication system capable of this), the measurement of h2 requires an altimeter or high-resolution camera as well as good knowledge of the spacecraft’s position in orbit and orientation.
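To illustrate how a single Love number measurement could discriminate between interior models, the sketch below compares a measurement against two hypothetical predictions; only the factor-of-5 contrast comes from the text, and the absolute k2 values are invented for the example:

```python
# Hypothetical model predictions for Io's tidal Love number k2 (illustrative only).
K2_SOLID = 0.1               # fully solid interior (made-up value)
K2_LIQUID = 5 * K2_SOLID     # liquid layer decoupled from the lithosphere,
                             # roughly 5x larger per the rule of thumb above

def classify_interior(k2_measured: float) -> str:
    """Return the model whose predicted k2 is closer to the measurement."""
    if abs(k2_measured - K2_LIQUID) < abs(k2_measured - K2_SOLID):
        return "liquid layer decoupled from lithosphere"
    return "solid interior"

print(classify_interior(0.45))  # liquid layer decoupled from lithosphere
```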

Libration amplitude provides an independent test for a detached lithosphere. The orbit of Io is eccentric, which causes its orbital speed to vary as it goes around Jupiter. Its rotational speed, on the other hand, is nearly uniform. Therefore, as seen from Jupiter, Io appears to wobble backward and forward, as the Moon does from the vantage of Earth. Longitudinal libration arises in Io’s orbit because of the torque applied by Jupiter on Io’s static tidal and rotational bulge while it is misaligned with the direction toward Jupiter. If there is a continuous liquid layer within Io and the overlying lithosphere is rigid (as is thought to be needed to support tall mountains), libration amplitudes greater than 500 meters are expected—a scale easily measurable with repeat images taken by a spacecraft.

Table 1. Testing Models for Tidal Heating and Melt Distribution in Io

Model | Tidal k2 or h2 | Libration Amplitude | Magnetic Induction | Major Lava | Heat Flow
I | low | small | weak | high-temperature basaltic | more polar
II | low | small | weak | basaltic | more equatorial
III | high | large | strong | very high temperature ultramafic | equatorial or uniform
IV | low | small | strong | very high temperature ultramafic | equatorial or uniform


Fig. 4. Four scenarios for the distribution of heating and melt in Io. Credit: Chuck Carter and James Tuttle Keane/Keck Institute for Space Studies

4. Is Jupiter’s Laplace system in equilibrium? The Io-Europa-Ganymede system is a complex tidal engine that powers Io’s extreme volcanism and warms Europa’s water ocean. Ultimately, Jupiter’s rotational energy is converted into a combination of gravitational potential energy (in the orbits of the satellites) and heat via dissipation in both the planet and its satellites. However, we do not know whether this system is currently in equilibrium or whether tidal migration and heating rates and volcanic activity vary over time.

The orbital evolution of the system can be determined from observing the positions of the Galilean satellites over time. A way of verifying that the system is in equilibrium is to measure the rate of change of the semimajor axis for the three moons in the Laplace resonance. If the system is in equilibrium, the tidal migration timescale must be identical for all three moons. Stability of the Laplace resonance implies a specific equilibrium between energy exchanges in the whole Jovian system and has implications for its past and future evolution.

5. Can stable isotopes inform our understanding of the long-term evolution of tidal heating? We lack knowledge about the long-term evolution of tidally heated systems, in part because their geologic activity destroys the older geologic record. Isotopic ratios, which preserve long-term records of processes, provide a potential window into these histories. If processes like volcanic eruptions and volatile loss lead to the preferential loss of certain isotopes from a moon or planet, significant fractionation of a species may occur over the age of the solar system. However, to draw robust conclusions, we must understand the current and past processes that affect the fractionation of these species (Figure 5), as well as the primordial isotopic ratios from the body of interest. Measurements of isotopic mass ratios—in, for example, the atmospheres and volcanic plumes of moons or planets of interest—in combination with a better understanding of these fractionation processes can inform long-term evolution.

Fig. 5. There are many potential sources, sinks, and transport processes affecting chemical and isotopic species at Io. Credit: Keck Institute for Space Studies

Missions to the Moons

Both the Europa Clipper and JUICE are currently in development and are expected to arrive at Jupiter in the late 2020s or early 2030s. One of the most important measurements made during these missions could be precision ranging during close flybys to detect changes in the orbits of Europa, Ganymede, and Callisto, which would provide a key constraint on equilibrium of the Jovian system if we can acquire comparable measurements of Io. JUICE will be the first spacecraft to orbit a satellite (Ganymede), providing excellent gravity, topography, magnetic induction, and mass spectrometry measurements.

The Dragonfly mission to Titan includes a seismometer and electrodes on the landing skids to sense electric fields that may probe the depth to Titan’s interior water ocean. Potential Europa and Enceladus landers could also host seismometers. The ice giants Uranus and Neptune may also finally get a dedicated mission in the next decade. The Uranian system contains five medium-sized moons and may provide another test of the resonance locking hypothesis suggested by Fuller et al. [2016], and Neptune’s active moon Triton is another strong candidate to host an ocean.

The most promising avenue to address the five key questions noted at the Keck Institute of Space Studies workshop is a new spacecraft mission that would make multiple close flybys of Io [McEwen et al., 2019], combined with laboratory experiments and Earth-based telescopic observations. An Io mission could characterize volcanic processes to address question 1, test interior models via geophysical measurements coupled with laboratory experiments and theory to address questions 2 and 3, measure the rate of Io’s orbital migration to determine whether the Laplace resonance is in equilibrium to address question 4, and determine neutral compositions and measure stable isotopes in Io’s atmosphere and plumes to address question 5.


We thank Michelle Judd and others at the Keck Institute for Space Studies and all participants in the tidal heating study. Part of the research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

The Bigger They Are, the Harder They Fall

Fri, 10/18/2019 - 11:12

Every year, millions of people flock to see the giant trees in California’s Sequoia and Kings Canyon National Parks and Redwood National and State Parks.

“It’s just emblematic of what people value. People value big, old trees,” said Nate McDowell, an Earth scientist at the Pacific Northwest National Laboratory in Richland, Wash.

Unfortunately, climate change puts these large trees at increased risk. “In the future, we might see droughts become more severe and more frequent,” said Xi Yang, an environmental scientist at the University of Virginia in Charlottesville.

Yang was the senior researcher on a study recently published in Nature Communications that reported that taller trees died at more than twice the rate of smaller trees at the end of extreme drought. “Of the trees above 30 meters tall, nearly half of them died during this study. Which is a staggering number,” said lead researcher Atticus Stovall.

Tracking a Tree’s Death from the Air

The researchers were able to track the mortality of 1.8 million trees in California from 2009 to 2016 in an area spanning over 40,000 hectares—roughly the size of Denver—using lidar measurements taken from an aircraft. Lidar is akin to radar but uses light pulses instead of radio waves to determine the distance to a target.
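The ranging principle behind lidar is simple: time a light pulse’s round trip and convert it to distance. A minimal sketch (the echo time is an illustrative number chosen to match the 1,000-meter cruising altitude):

```python
# Lidar ranging: distance = (speed of light) * (round-trip time) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance to a target from the round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A pulse returning after ~6.67 microseconds corresponds to roughly 1,000 m.
print(f"{range_from_echo(6.67e-6):.1f} m")
```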

From an airplane cruising at 1,000 meters, lidar has a vertical resolution on the order of centimeters, Stovall said. “You can see birds in lidar data sets, which is really crazy.”

With this level of granularity, researchers could identify individual trees and assess mortality across the landscape. “There’s no two same trees in the world,” Yang said.

Being able to track individual trees is important for understanding what biological or environmental factors drive their functioning. “This, to my knowledge, is the first study that has taken individual tree mapping and linked it with trends in mortality like this,” Stovall said.

Using lidar allowed researchers to survey a much larger landscape than previous studies, said McDowell, who was not involved with the study. Traditionally, scientists on the ground would survey plots of forest to monitor tree health. Although these plot measurements are robust, they miss the entirety of the forest for all the trees in it.

Case in point: There were only about 17 trees that were taller than 70 meters in the entire data set. “Statistically speaking, you would almost never find those trees in that 40,000 hectares,” Stovall said. “If you found one of those trees, it would be like you hit the jackpot.”

The Bigger They Are, the Harder They Fall

Although tree height was the strongest predictor of tree mortality, environmental factors like water availability and competition also played a role. Another primary risk factor was the vapor pressure deficit (VPD). Higher VPD, which is associated with higher temperatures and lower humidity, leaves trees more prone to drying out and dying.

Large trees died at even higher rates when the VPD was high. “In a way, a lot of it is physics,” Stovall said. The tree’s xylem is like a straw, he said, and large trees require larger straws to move more water from roots to the leaves. The stress of a drought likely makes these trees more susceptible to cavitation, “which is literally a column of water within that straw that I was describing being ripped apart,” said Stovall. The air bubble that forms in the water column “renders the entire water transport system useless—it’s pretty dramatic.”

For McDowell, the data showing the role VPD played in tree mortality were particularly striking. “Something I feel is really important for the public to understand is that things aren’t going to get better,” he said. “In their lifetimes, things won’t get better. Things will only get worse, especially for big trees.”

And what is bad for large trees is bad for the environment as a whole.

“They hold and filter tons of water,” Stovall said. “They’re literally sequestering carbon, so they’re helping us out there, too.”

McDowell agreed: “It’s the big old trees that store the most carbon. And we need that to mitigate climate warming.” Loss of these large trees would likely exacerbate global climate change.

There are short-term management strategies for preserving large trees, like clearing smaller trees to reduce competition for water, but “they’re a Band-Aid, they’re not a cure,” McDowell said.

In the end, saving the large trees “comes down to the hard things that everyone is well aware of,” Stovall said.

“You know, I don’t want to tell people how to live their lives,” Stovall continued. “But the science here is really pointing towards this reality that the decisions that everyday people make in terms of emissions and carbon footprint and everything really do have large-scale effects on the environment.”

—Richard J. Sima (@richardsima), Freelance Science Writer

Yet Again, Warmer Winter Looms for U.S.

Thu, 10/17/2019 - 21:33

The entire United States has a 50% or higher chance of a warmer than average winter this year. This is according to 3-month forecasts released today by the Climate Prediction Center (CPC) at the National Oceanic and Atmospheric Administration (NOAA).

“The greatest likelihoods for warmer than normal conditions are at Alaska and Hawaii,” Mike Halpert, CPC deputy director, said at a 17 October press conference. The winter outlook shows “more modest probabilities for above-average temperatures spanning large parts of the remaining lower 48 [states], from the West, across the South, and up the Eastern Seaboard.”

The center’s models for November, December, and January also weakly favor wetter than normal weather for Alaska, Hawaii, and states between the Rocky Mountains and the Mississippi Valley and drier than normal conditions on the central West Coast and central Gulf Coast.

“Like last year, no part of the U.S. is favored to have below average temperatures this winter,” he said.

NOAA’s climate forecasts for November 2019, December 2019, and January 2020 predict warmer than normal conditions for the continental United States. The probability that a region will have below-average temperature is in blue, about average temperature is in gray scale, and hotter than average temperature is in orange (left). The probability that a region will have less precipitation than normal is in brown, as much precipitation as normal is in gray scale, and more precipitation than normal is in green (right). EC marks locations that have an equal chance of higher or lower values than normal. Credit: NOAA Climate Prediction Center

Shifting Drought

The CPC’s seasonal outlooks also forecast a shift in which regions of the country may see drought this winter.

“After a very wet spring resulted in widespread flooding in some areas, drought has reemerged in the Southeast, Southwest, and Texas thanks to a very dry and warm late summer and early fall” in a rapid-onset flash drought, Halpert said.

“Drought is expected to improve in portions of the Southeast, Mid-Atlantic, Alaska, and Hawaii, while persisting in central Texas and the Southwest [and in Puerto Rico],” he said. “Drought development is likely to occur in parts of central California.”

This map of the 50 states and Puerto Rico shows the probability that drought will persist (brown), improve (tan), disappear (green), or develop (yellow) by January 2020. Credit: Brad Pugh, NOAA/NWS/NCEP/Climate Prediction Center

Neutral El Niño Means More Variability

“This is not one of our most confident forecasts,” Halpert said. “Only over Alaska and in Hawaii does the probability reach 50% for any point.”

Part of the reason for the lower probabilities is that the El Niño–Southern Oscillation (ENSO) “is most likely to be in its neutral state in the winter, meaning that neither El Niño nor La Niña is expected to develop,” Halpert said.

Strong ENSO conditions, whether trending toward the warm or cold phase in the cycle, typically overwhelm long-term trends, which “favor above-average temperatures across most of the South, along the East Coast, and in Alaska and Hawaii,” he said.

Shorter-term patterns like the Arctic Oscillation and the Madden-Julian Oscillation may also play a larger role. In this winter’s ENSO-neutral state, “any forcing this year from the tropics is likely to provide impacts on subseasonal timescales, potentially resulting in more variable conditions during the upcoming winter,” he said. These 2- to 4-week patterns are difficult to forecast far in advance and were not included in the winter outlook.

The higher confidence in the Alaska and Hawaii outlooks is likely because of strongly warmer than normal (by 1°C–3.5°C) sea surface temperatures in those two regions, the team said. In Alaska, this warming has resulted in strong long-term trends in declining sea ice coverage and ice thickness and progressively later freeze dates, the forecasters said.

“The winter outlook is probabilistic in nature,” Halpert cautioned. “Less likely outcomes will and must occur from time to time. That was certainly the case last winter, when a flip in the pattern during late January brought a very cold February to parts of the North and West,” as well as other regions that were also forecasted to be warmer than normal.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

Award-Winning Photojournalism and Other News of the Week

Thu, 10/17/2019 - 13:12


The Decision-Makers’ Guide to Clean Energy.

The main lobby of the newly renovated AGU headquarters building. Credit: Beth Bagley, AGU

Two geosciences-related topics here: the ramifications of research into solar energy storage methods that involve injecting large amounts of heated fluids into underground rock formations (see fracking earthquakes), and a sidebar on the renovation of the AGU headquarters building. —Nancy McGuire, Contract Editor


Winners: SEJ 18th Annual Awards for Reporting on the Environment. The Society of Environmental Journalists (SEJ) has announced the winners of its annual awards for reporting on the environment. There’s lots of excellent reporting recognized here, including Reuters’ “Ocean Shock” series that looks at risks to the oceans from climate change, overfishing, and other threats. Other reporting focused, for instance, on the Flint, Mich., water crisis; activists risking their lives to stand up against mining interests in the Philippines; and climate politics.

—Randy Showstack, Staff Writer


2019 Earth Science Week Photo Competition Winners.

Check out some of this year’s winners of the @geolsoc 2019 Earth Science Week photo competition:

“The photographs showcase the rich diversity of environments in which scientists, live, work and travel.” https://t.co/AbIN6nTkfL pic.twitter.com/gMW3v8K29I

— EGU (@EuroGeosciences) October 16, 2019

Looking for a quick pick-me-up? Check out this collection of stunning Earth science–themed photographs. I wish competitions like this were held every day! —Timothy Oleson, Science Editor


Artemis Spacesuits Before Artemis Spacecraft.

NASA announced two prototypes of the updated spacesuit going to the Moon. The New York Times makes a good point: “In addition to updated spacesuits, the agency does not currently have a spacecraft capable of landing on the Moon.” As Artemis moves forward, it’s important to remember that NASA already had a plan to land on the Moon in 2028, but it was forced to halve its timeline.

—Kimberly Cartier, Staff Writer


Deforestation Could Exacerbate Drought in the Amazon.

The fish bone pattern of small clearings along new roads is the beginning of one of the common deforestation trajectories in the Amazon. Credit: NASA

In addition to decreasing the Amazon’s capacity as a carbon sink, loss of rain forest also decreases the amount of moisture it contributes to the atmosphere, because the pastures and fields that replace the forest have higher temperatures and lower evapotranspiration rates. “Ultimately, the effects of water flux in the Amazon will ripple beyond the rain forest and around the world,” the author writes.

—Faith Ishii, Production Manager


Plant “Takes” Botanical World’s First Selfie in London Zoo Experiment and Using Old Cellphones to Listen for Illegal Loggers.

Two stories in innovative instrument building to guard the planet’s forests: A camera powered by a plant’s waste energy could give scientists a new way to monitor remote rain forests. And 200 old cell phones, hoisted into treetops, powered by small solar panels, and harnessing artificial intelligence software, are now pinpointing the sound of chainsaws and other signs of deforestation.

—Heather Goss, Editor in Chief


Watch the #PNW Coastline Rebuild over a 6-Year Period After the World’s Largest Dam Removal.

Watch the #pnw coastline rebuild over a 6-year period after the world’s largest dam removal. Still curious –> https://t.co/wgx4TzoqGm pic.twitter.com/StX3kpyJLx

— USGS (@USGS) October 10, 2019

This is mesmerizing! It’s hard not to give a dam about sedimentation loss after watching this…. —Jenessa Duncombe, Staff Writer


Climate Visualizations Are Fun.

My personal fabourite so far: A Pop-Art collage with the colors representing the range of temperatures the celebrities did or will witness in their lifetime. pic.twitter.com/N4s89P3BmF

— Alexander Radtke (@alxrdk) October 13, 2019

There are so many great ways to communicate the climate crisis, from spirals to stripes to hockey sticks. These are a handful of clever ways to incorporate pop culture, too. —Caryl-Sue, Managing Editor


What Inflates the Solar Bubble? Voyagers Count What’s Missing

Thu, 10/17/2019 - 13:11

We’re all living in a bubble.

In fact, the Sun and the entire solar system exist in a bubble that separates us from interstellar space. But what keeps that bubble inflated? A recent paper found that scientists can account for only 82% of the pressure that steadies the solar bubble, or heliosphere, against pressure from galactic headwinds. The source of 18% of the pressure is still unknown.

“It’s been a question for a long time how the pressure balance occurs. It determines the size of the entire heliosphere,” said Jamie Rankin, lead researcher on the project and an astrophysicist at Princeton University in Princeton, N.J.

Rankin and her team combined data from the Voyager 1 and Voyager 2 spacecraft taken at the edge of the solar system, called the heliosheath, with remote measurements of this region from other telescopes. They then calculated the region’s total pressure and figured out how much of that pressure comes from different types of particles.

“This is the first observationally inferred evidence of the total pressure out there in the heliosheath during this time period,” Rankin said.

Blowing Bubbles

Imagine, if you will, a large bubble submerged in a swimming pool. Water presses inward from all sides, and air pressure keeps the bubble inflated. The inside and outside spaces can interact only by passing things—particles, energy, pressure—through the air-water boundary. In doing so, the two sides alter the size, shape, and permeability of the bubble.

In this analogy, the heliosphere is the bubble, and the interstellar medium—the stuff between stars—is the water. In reality, the boundary between our solar system and the rest of the galaxy, the heliosheath, has a finite thickness. The heliosheath and its outward-facing edge, the heliopause, deflect most interstellar particles from entering the solar system.

NASA’s Voyager 1 and Voyager 2 spacecraft, which were launched 42 years ago, have now traveled completely through the heliosheath and into interstellar space 20 billion kilometers away. In doing so, they have radically changed the picture of what inflates the solar bubble.

“The heliosphere is not just a closed system that’s blown out by the solar wind. It’s not the simplified version,” Rankin explained. The Voyagers found that outflowing pressure from the solar wind accounted for less than a quarter of the pressure needed to keep the bubble inflated. Some of the energy from solar wind particles was transferred to other outflowing particles, which made up some of the deficit.

Later studies of the heliosphere found that much of the remaining pressure comes from “neutral interstellar atoms that come in through the boundary,” she said. These so-called pickup ions “come in and eventually get energized by the solar wind, and then they’re carried back outwards. And it’s these particles that contribute a lot to the pressure. But they don’t explain the whole story, either.”

At the Edge

In 2012, an opportunity came to find out just how much of the story we were missing. Occasionally, a handful of individual solar ejection events join together as they travel to the edge of the solar system, like smaller weather fronts merging into one superstorm. One such solar superstorm, called a global merged interaction region, reached the heliosheath in 2012. A pressure wave rippled through the boundary, passing first Voyager 2 and then Voyager 1 four months later.

The 2012 global merged interaction region event (yellow) traveled outward from the center of the heliosphere, passed Voyager 2, and then passed Voyager 1. Credit: NASA/Goddard Space Flight Center/Mary Pat Hrybyk-Keith

Voyager 2 was traveling through the inner reaches of the heliosheath at the time, but “we were very fortunate because Voyager 1 had just crossed the heliopause,” Rankin said. Measuring how the heliosheath responds to a disturbance like this one can reveal its fundamental properties.

As the wave passed by, each craft measured a decrease in the flux of incoming galactic cosmic rays. The researchers combined those cosmic ray measurements with past observations of the heliosheath from the Voyagers and other telescopes to find the speed of sound and total pressure at the boundary to interstellar space.

Using the Voyager data, the team found that the speed of sound in the heliosheath is roughly 310 kilometers per second, about 1,000 times faster than the speed of sound through air on Earth. The total pressure in the heliosheath is about 270 femtopascals, roughly 400 quadrillion times lower than air pressure at sea level. The team published these results in the Astrophysical Journal on 25 September.
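As a quick sanity check on the rounded comparison (the sea-level sound speed is an assumed reference value, not from the article):

```python
# Compare the heliosheath sound speed with the speed of sound in air.
# The air value (~343 m/s at about 20 degrees C) is an assumed reference.
v_heliosheath = 310_000.0  # m/s, from the Voyager-based estimate
v_air = 343.0              # m/s, assumed sea-level value

ratio = v_heliosheath / v_air
print(f"{ratio:.0f}x")  # ~900x, i.e. on the order of 1,000 times faster
```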

With a measurement of the total pressure, the team was able to parse out the contributions from different types of particles. Solar wind contributes 15% of the total pressure that keeps the solar bubble inflated. High-energy particles like cosmic rays provide 22%, and a whopping 45% comes from pickup ions, Rankin said.

That’s only 82% of the total pressure. “And then there’s 18% that we can’t account for yet,” she said.
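Tallying the reported contributions makes the budget explicit:

```python
# Pressure budget of the heliosheath, as percentages of the total,
# using the figures reported by the team.
contributions = {
    "solar wind": 15,
    "cosmic rays and other high-energy particles": 22,
    "pickup ions": 45,
}

accounted = sum(contributions.values())
unaccounted = 100 - accounted
print(accounted, unaccounted)  # 82 18
```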

Our Bubble of Space

The total pressure was higher than the team expected on the basis of models of the heliosheath. It’s possible that the heliosheath is thinner, its temperature is hotter, or its energy dissipation is weaker than expected. Any of those factors could alter the pressure calculation, the team explained.

Voyager 1, unfortunately, doesn’t have a working instrument to study the plasma just outside the solar system. “But Voyager 2 just crossed into the very local interstellar medium, so those data and what we can find out about the plasma environment out there can constrain some of these questions,” Rankin said.

Just like the Sun, other stars create their own bubbles, called astrospheres, that stave off galactic headwinds. Here are astrospheres around three different stars, imaged in visible or ultraviolet light by three different telescopes. Credit: NASA/Goddard Space Flight Center

Studying more events like the one from 2012, some of which were detected by the Voyagers, will help refine these calculations, according to the team. The Interstellar Boundary Explorer is also helping to create a holistic view of the heliosphere by putting the Voyagers’ local measurements into a global context.

“There’s the question of, if we were out in space looking back at our own solar system, what would that look like?” Rankin said. “Combining these two types of data sets gives us a really good picture of that.”

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

Set to Music, Exoplanets Reveal Insights on Their Formation

Wed, 10/16/2019 - 12:50

In 1619, German astronomer Johannes Kepler published his Harmonices Mundi (Harmony of the World), a text that explored how mathematics might reveal celestial music in the orbital motions of the solar system’s planets. Four hundred years later, an astronomer using discoveries from NASA’s Kepler mission has arranged thousands of exoplanets in their own grand sonata.

As telescopes have become more automated, astronomical data have evolved from a trickle to a roaring river. During its primary and extended missions, Kepler identified nearly 5,000 confirmed and candidate exoplanets. The first candidates came in small doses, allowing astronomers to get to know them well. But later observations came in giant batches that were more challenging to parse.

“There’s too many of them to look at individually,” said Jason Steffen, an astronomer at the University of Nevada, Las Vegas. “Now that there are so many, they are falling through the cracks.”

Steffen, who studied some music theory and said he knows “just enough to be dangerous,” realized that the Kepler worlds could produce scientific insights when set to music. At the same time, he came across a YouTube video claiming that sonification—setting data to sound—doesn’t produce much useful science.

“I took that as a challenge,” Steffen said. “Planetary systems have good insight that can be gained when you sonify the data.”

Steffen produced a YouTube video of the thousands of exoplanets that combines information about their orbits with sound.

The volume of the system is set by its largest planet, with louder chords corresponding to larger worlds. The lowest note is set by the orbital period of the largest world, with a lower note lining up with a longer orbital period. Finally, the music steps through different combinations of adjacent sets.
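The two mapping rules described above can be sketched in code. The function names, scaling constants, and the octave-per-period-doubling rule are illustrative assumptions, not Steffen’s actual mapping:

```python
def chord_volume(largest_radius_earths, max_radius=20.0):
    """Louder chords for larger worlds: map the radius of a system's
    largest planet to a 0-1 volume (linear scaling is an assumption)."""
    return min(largest_radius_earths / max_radius, 1.0)

def lowest_note_hz(period_days, ref_period_days=365.25, ref_hz=220.0):
    """Lower notes for longer orbital periods: here, halving the
    frequency for each doubling of the period (an assumed rule)."""
    return ref_hz * ref_period_days / period_days

# Hypothetical system: largest planet 11 Earth radii on a 730-day orbit.
print(round(chord_volume(11), 2), round(lowest_note_hz(730), 1))  # 0.55 110.1
```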

“Music is really multidimensional,” said Matt Russo, an astromusician who set the data for the seven-planet TRAPPIST-1 system to music. Music allows researchers to present different pitches and notes to create multiple layers, he said.

“There are some cool analogies one can draw, and maybe some real physical insights that can be had from this kind of sonification,” said Daniel Fabrycky, an exoplanet researcher at the University of Chicago.

Fabrycky, who studies the orbits of planets, was one of the colleagues Steffen sent his final composition to. “Systems that sound better—have more pleasing chords to the ear—might have formed in a more gentle way than ones that sound dissonant,” Fabrycky said.

“Sound Is in a Better Position Than Sight”

After a star is born, the leftover disk of gas and dust gives rise to planets. Some planets travel in resonance: their orbital periods form simple whole-number ratios. Neptune and Pluto, for instance, are in a 3:2 resonance; Neptune travels around the Sun three times for every two orbits made by Pluto.
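The 3:2 ratio can be recovered directly from the two bodies’ orbital periods (the period values below are standard approximations, assumed here rather than taken from the article):

```python
from fractions import Fraction

neptune_period_years = 164.8  # assumed approximate value
pluto_period_years = 247.9    # assumed approximate value

# Neptune orbits faster, so in the time Pluto completes 2 orbits,
# Neptune completes about 3: the period ratio is close to 3/2.
ratio = Fraction(pluto_period_years / neptune_period_years).limit_denominator(10)
print(ratio)  # 3/2
```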

According to Fabrycky, planets don’t start off in resonance. Instead, gravity and interactions with the disk can cause the planets to move. “After both planets form, they can scoot towards each other and capture into that resonance,” Fabrycky said.

Orbital resonances can reveal insights about the later stages of planet formation, when the worlds are still embedded in the system’s protoplanetary disk, Fabrycky said. Exoplanets were first identified in the early 1990s, but astronomers didn’t spot the first resonant exoplanets until the early 2000s, and multiple exoplanets in resonance weren’t seen for another decade. Today, we know of fewer than 10 exoplanet systems where three or more worlds are locked into resonances.

An artist’s animation shows the four planets of Kepler-223, which move in resonance with one another. Each time the innermost planet orbits the star three times, the second planet orbits exactly four times. Credit: W. Rebel

Steffen’s sonification project may help to reveal more.

“For resonance structure, it seems sound is in a better position than sight to do the job of picking out frequencies in particular,” Fabrycky said.

In fact, Steffen’s sonification project has already piqued Fabrycky’s interest in one planetary system. KOI-4032 has four confirmed worlds and a fifth candidate planet. To date, nothing has been published about the system, but the pleasing notes in Steffen’s simulation (at 1:52 in the video above) caught Fabrycky’s attention.

That’s just the sort of thing Steffen hoped would happen. “There are certain systems that pop out that just haven’t gotten any attention that might be worth a deeper investigation,” he said.

With NASA’s Transiting Exoplanet Survey Satellite already hunting for new worlds and missions like the European Space Agency’s PLAnetary Transits and Oscillations of stars (PLATO) preparing for a 2026 launch, the treasure trove of exoplanets will only continue to expand. Sonification can help to sort through the data in a meaningful way, making sure the Kepler worlds don’t get lost in the shuffle.

“It would be a great present if we could give this back to Kepler,” Russo said. “He was looking for musical patterns in our solar system. To show him that we’ve found musical patterns in the form of orbital resonances in other systems—I would like to see his face if he could hear that.”

—Nola Taylor Redd (@NolaTRedd), Freelance Science Journalist

Correction, 16 October 2019: This article has been updated to correct the orbital resonance of Neptune and Pluto.

Einstein Says: It’s 309.7-Meter O’Clock

Wed, 10/16/2019 - 12:47

It’s about an hour’s walk from where Jakob Flury lives—through his village of Völksen in Germany, up a hill called the Kalenberg—to see a monument to his profession: “A geodesy marker from Gauss, where he did his observations, his triangulations.”

That would be Carl Friedrich Gauss, the famous German mathematician.

“It’s just a stone, a triangulation stone as we say,” explained Flury, a professor at Leibniz University in Hanover, Germany. “This was the benchmark. When [Gauss] did his measurements, they built a small tower so they could look over the trees, and sometimes also cleared the forest, so they could look for 100 kilometers or maybe even more. They did the angular measurements, and brought them down to the benchmark. And then the center of this stone had these very good coordinates.”

It was good enough for an early-19th-century scientist, at least. Gauss was assigned to survey the Kingdom of Hanover by covering it with imaginary triangles—their vertices anchored by hilltops and church towers, their sides accurately calculated by trigonometry.

Two hundred years later, geodesy, the science of measuring the Earth, demands more.

And it has more. From orbit, taking the measurement of the Earth was among the first tasks entrusted to satellites. Fleets of geolocation satellites, such as GPS, now allow people with a receiver to determine where they are within a few meters or, with advanced equipment, millimeters. Radio telescopes track the movement of the continental plates on which they rest, millimeter by millimeter, by staring in unison at quasars—active galactic nuclei billions of light-years away—in a process called very long baseline interferometry (VLBI).

But it is not enough. That’s why Flury, with colleagues across the world, is looking to incorporate into geodesy the most advanced theory of space—and time—available: general relativity. He recently gave a talk on the future of the new approach, relativistic geodesy, at the International Union of Geodesy and Geophysics (IUGG) General Assembly in Montreal, Que., Canada.

“We Are in This Four-Dimensional Reality”

Even though he’s been working at relativistic geodesy for years, it never ceases to fascinate Flury: “It’s really a new world, a new awareness. Here we are, in this four-dimensional reality, the curved space-time. It’s not just Euclidean space that we’re living in. Time is the fourth coordinate, and it’s where the irregularities due to gravity come in. We are now at the level at which this starts to be not purely theoretical anymore.”

Suppose Gauss in 1819 had installed a clock next to his triangulation stone on the Kalenberg, a clock that kept perfect time. Suppose he sent an identical clock 161 kilometers north and 309.7 meters down to the port city of Bremerhaven, at sea level. By now, after 200 years, these clocks would disagree. The clock on the water’s edge would be ever so slightly behind, about 2 ten-thousandths of a second. But it would not be wrong. It would just be keeping time in a different kind of space: closer to the center of the Earth, which is to say, deeper in its gravity well. In physical terms, the clock would exist at a lower gravitational potential.
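The “about 2 ten-thousandths of a second” figure follows from the weak-field time-dilation relation Δt/t ≈ gΔh/c². A minimal sketch, with g taken as an assumed round value:

```python
# Clock lag between sea level (Bremerhaven) and the Kalenberg benchmark,
# using the weak-field approximation delta_t / t = g * delta_h / c**2.
g = 9.81           # m/s^2, assumed mean surface gravity
delta_h = 309.7    # m, the height difference from the article
c = 299_792_458.0  # m/s, speed of light

fractional_shift = g * delta_h / c**2      # ~3.4e-14
two_hundred_years = 200 * 365.25 * 86_400  # seconds

lag = fractional_shift * two_hundred_years
print(f"{lag:.1e} s")  # ~2.1e-04 s: about 2 ten-thousandths of a second
```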

This used to be a Gedankenexperiment, or “thought experiment,” as Albert Einstein called the rigorous but imaginary experiments that led him to groundbreaking discoveries about space and time. His special theory of relativity predicted that twin siblings would no longer be the same age if one of them made a very fast trip into space and back. And his general theory of relativity predicted that twin clocks would not keep time at the same clip if one of them were nearer to an attracting—or rather, a space-time-bending—mass.

Innovation has caught up to imagination. Atomic clocks have gotten so good, measuring time in such small increments and with such stability, that the gravitational slowing of time can actually be observed at familiar terrestrial height differences. The best clock so far, constructed at the National Institute of Standards and Technology (NIST) in the United States, uses light emitted by ytterbium atoms stimulated by laser light. The light’s wavelength, or frequency, is so stable that this clock loses or gains only 1.4 × 10⁻¹⁸ of a second per second, which would add up to an error of less than 1 second over the age of the universe.

The National Institute of Standards and Technology’s Yb lattice clock, above, uses light emitted by ytterbium atoms stimulated by laser light. This clock loses or gains on the order of 10⁻¹⁸ of a second per second, or not quite 1 second, over the age of the universe. Credit: NIST
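That “less than 1 second over the age of the universe” claim can be checked with one line of arithmetic (the age of the universe, roughly 13.8 billion years, is an assumed figure):

```python
# Accumulated error of a clock with fractional instability 1.4e-18
# over the age of the universe.
instability = 1.4e-18                      # seconds lost or gained per second
universe_age_s = 13.8e9 * 365.25 * 86_400  # ~13.8 billion years, assumed

error = instability * universe_age_s
print(f"{error:.2f} s")  # ~0.61 s, indeed less than 1 second
```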

Equally important, methods have been devised to transport time signals produced by multiple atomic clocks across long distances over glass fiber links to compare them in one place.

These developments will soon put general relativity into the geodetic tool kit. If two identical clocks are out of sync, you have in fact a direct measurement of the difference in gravitational potential between their locations. And this difference is essential for any correct description of their difference in height.

“We say that our satellites are now our church towers,” Flury told Eos. “And in the future, the church towers could be the atoms in these devices.”

Struggle of Geometry and Gravity

Even in the days when actual church towers and surveying equipment were the only tools of the trade for geodesists, the geodesists were taking gravity into account whenever they wanted to determine the elevation of some place. If you wanted to know the height of a hill compared to where you were, the standard approach was (and often still is) to aim a leveling instrument horizontally at a measuring rod somewhat farther up the hill. You record the height and repeat the process from that location until you are at the top. Each time, you know the instrument is horizontal only because a spirit level says so. That’s where gravity comes in, and it is essential to the interpretation of the measurement.

For depending on where you are on Earth, and the distribution of mass within it, “straight down,” which is by definition in the direction of Earth’s attraction, may not be in the direction you’d expect.

This, among other factors, makes spirit leveling problematic when done over large distances, said Jürgen Müller, also a professor at Leibniz University, who gave a talk on relativistic geodesy at the IUGG General Assembly in Montreal. “You start at sea level, at the tide gauge, and you use the leveling approach to go where you want. Errors accumulate with distance. If you go through the U.S. this way from the East Coast to the West Coast, you have an error of 1 or 2 meters. And it takes a long time to resolve where those errors come from, what are the right values.”

The history of geodesy is the story of this struggle between geometry as the eye sees it and gravity as the body feels it.

The outcome has been that two surfaces are in use to represent the shape of Earth.

One is the reference ellipsoid, a flattened sphere that is essentially an improved version of the classical spherical globe found in libraries and classrooms. It serves the same function: to point out locations by latitude and longitude. It’s more accurate than a sphere because Earth happens to be slightly flattened, a result of the competing forces of gravity and rotation. Isaac Newton noted as early as his Philosophiae Naturalis Principia Mathematica that a rotating planet that was completely fluid would have an ellipsoid as its equilibrium surface.

Gravity is determined by mass. Earth’s mass is not distributed equally, and it also changes over time. This visualization of a gravity model (the geoid) was created with data from NASA’s Gravity Recovery and Climate Experiment (GRACE) and shows variations in Earth’s gravity field. Red shows areas where gravity is relatively strong, and blue reveals areas where gravity is weaker. Credit: NASA/JPL/Center for Space Research, University of Texas at Austin

In addition to latitude and longitude, GPS measurements provide height in relation to this ellipsoid. However, to make sense, these heights have to be recalculated to refer to a more physically meaningful shape of Earth: the geoid. The geoid is a surface of constant gravitational potential—the energy that would be required to lift 1 kilogram from the center of Earth to that level. The geoid hugs the ellipsoid but has hills and valleys because mass is not equally distributed in Earth’s core, mantle, ocean, crust, and atmosphere.

The geoid ideally would correspond to sea level. Imagine the ocean without tides, currents, or winds and somehow extending under the continents. Measuring heights with respect to the geoid guarantees you won’t calculate water flowing spontaneously between places at equal elevations, or even uphill, which is possible when heights are calculated using the ellipsoid. Ideally, geoid heights are expressed not in meters but in joules, a unit of energy, to account for the varying strength of Earth’s gravity at different heights.
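The idea of quoting “heights” in energy units can be made concrete. Near the surface, the potential difference over a small height change is approximately W = m·g·Δh; treating g as constant is this sketch’s simplifying assumption, and the fact that g actually varies with location is precisely why geodesists prefer potential to meters:

```python
# Work needed to lift 1 kilogram through 100 meters near Earth's surface,
# approximating gravity as constant (in reality g varies with location).
m = 1.0          # kg
g = 9.80665      # m/s^2, standard gravity (assumed constant here)
delta_h = 100.0  # m

work_joules = m * g * delta_h
print(f"{work_joules:.0f} J")  # ~981 J per kilogram per 100 m of height
```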

From Decimeters to Millimeters and Beyond

For mapmaking, the ellipsoid is the surface to use, but there is a constant need to keep track of where everything is. “Nothing on Earth is fixed, everything is moving, [and] the Earth itself is wobbling and deforming in many different ways,” Flury explained.

To deal with that, geodesy uses a combination of techniques to construct an International Terrestrial Reference Frame that works well enough for practical applications. “It has a couple of hundred very accurate benchmarks, in a nice global distribution,” Flury said. “So in the coordinate frame, every point has some movement, but as a set, geodesists can very well define the frame.”

To add the height of any location to such maps, it is necessary to recalculate the height that GPS provides as the distance to wherever the geoid is in that place, above or below the ellipsoid.

That’s where gravity measurements come in. Up until now, such measurements have been performed for the large-scale undulations of the geoid by satellites, such as those of the Gravity Recovery and Climate Experiment (GRACE) and Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) missions and the current GRACE Follow-On mission.

Satellites, such as those of NASA’s GRACE-FO mission, are essential to geodesists measuring Earth’s gravity. (a) When both spacecraft are over the ocean, the distance between them is relatively constant. (b) When the leading spacecraft encounters land, the land’s higher gravity pulls it away from the trailing spacecraft, which is still over water. (c) Once the second satellite also encounters the land, it too is pulled toward the higher mass and consequently toward the leading spacecraft. (d) When both spacecraft are over water again, the trailing spacecraft is slowed by land before returning to its original distance behind the leading spacecraft. Credit: NASA

Local measurements are performed from airplanes and on the ground with gravimeters, instruments that measure the gravitational acceleration of objects that are falling or bobbing up and down on springs. But to get at the gravitational potential, these measurements of gravity’s strength have to be combined with much less precise assumptions about the complete mass distribution underfoot.

Clocks are a promising addition to this arsenal, because they allow a direct measurement of the gravitational potential itself.

For now, Flury and Müller are validating the approach with strontium clocks, whose accuracy of 2–3 × 10⁻¹⁷ corresponds to decimeter-level accuracy in height. Recently, transportable strontium clocks have become available, enabling measurements of the gravity potential anywhere a small trailer can be hauled to. Transporting clock signals over glass fiber connections and via satellites, researchers envisage creating networks of clocks, measuring gravity in real time for geodetic and geophysical purposes.
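The quoted link between clock accuracy and height comes from inverting the same relation, Δf/f = gΔh/c², for Δh. A rough sketch with an assumed round value of g:

```python
g = 9.81           # m/s^2, assumed mean surface gravity
c = 299_792_458.0  # m/s, speed of light

def height_resolution_m(fractional_accuracy):
    """Smallest resolvable height difference for a clock of the given
    fractional frequency accuracy, from delta_f/f = g * delta_h / c**2."""
    return fractional_accuracy * c**2 / g

# A 2-3 x 10^-17 strontium clock resolves roughly two decimeters.
print(f"{height_resolution_m(2.5e-17):.2f} m")  # ~0.23 m
```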

For many of these applications, technology still has to advance quite a bit.

“When you talk to the clock people, at this point they can establish heights at the decimeter level,” Flury said. This includes the uncertainty introduced by the communication link between the clocks that are being compared. “There are some who can show very solid error budgets that make clear they are at the centimeter level. There are those at NIST who can show a path forward towards millimeters. There is even theoretical work on a thorium clock that could be orders of magnitude better. But this is fiction at this point.”

Moving “Sea Level” to the Moon

Centimeter accuracy would put relativistic geodesy on par with GPS measurements, and with carefully corrected spirit leveling over distances of tens of kilometers. With millimeter accuracies, clock-based height measurements could be used for much more than maps and civil engineering projects.

“One of the most important things will be time variations of the gravity,” Flury said. “Take a volcano: All those processes going on inside lead to tiny variations of the gravity. You could actually observe tectonics. Even now with GPS we can see the uplift of some mountain chains, but this would be a new way to observe that. Coastal subsidence or uplift processes can be pretty complex and not so easy to monitor—take the Gulf Coast, for example, New Orleans. Millimeter precision would be a wonderful tool to monitor coasts.”

According to Müller, one consequence of the use of frequencies as stand-ins for height could be that the official reference height goes from sea level to a place completely off the planet. “You need a reference point. But we could have that by putting a clock on the surface of the Moon, as a well-controlled outside reference frequency. This would change the whole concept of our height reference frames. We would say, ‘This frequency, of this clock on the Moon, is our new height reference.’”

Even then, geodesy wouldn’t have quite a steady foothold. The Moon isn’t completely rigid either; it deforms regularly under Earth’s tidal pull, and even the influence of the Sun and the other planets in our solar system will need to be taken into account.

“But you have good models of the Moon,” Müller said optimistically. “It’s just an idea; we have to see what we can gain.”

—Bas den Hond (bas@stellarstories.com), Freelance Journalist

First Class of Austin Student Travel Endowment Grantees Awarded

Tue, 10/15/2019 - 18:18

We are pleased to announce the inaugural Austin Endowment grantees, 15 student recipients who represent the diversity, depth, and breadth of the Earth and space science community. Last October, AGU kicked off the Austin Student Travel Grant Challenge, a historic campaign intended to grow AGU’s capacity to support student travel to meetings. Scientist and AGU Development Board member Jamie Austin pledged to match all donations made by AGU membership and the Earth and space science community to the Austin Endowment for Student Travel up to the amount of $1 million. AGU members, Board of Directors, Council, Development Board, Centennial Steering Committee, and staff took the challenge to heart, donating over 2,800 gifts totaling more than $410,000.

Each year, nearly 7,000 students attend AGU’s Fall Meeting. Many more would like to attend, but the expense of traveling to and registering for the meeting can be prohibitive. Student applications for travel grant support far exceed the number of available grants: applications have grown to approximately 1,500 annually, yet AGU is currently able to support only 220 student travel grants each year. When complete, the Austin Student Travel Grant Challenge will permanently increase the number of grants awarded to students wishing to travel to Fall Meeting by 40%.

“I have been a member of AGU since the mid-1970s. I joined the Union as a graduate student. Nothing has been more important to me professionally through the decades than regular attendance at the Fall Meeting,” said Jamie Austin. “In these challenging times for scientific research, an investment in the next generation is paramount. I urge all of you to join me in getting a larger cohort of young scientists to the Fall Meeting. They are our future.”

As AGU Development Board chair Carlos A. Dengo noted, “Fall Meeting is much more than an annual science meeting. It is the largest annual gathering of international Earth and space scientists in the world where researchers who span generations and scientific disciplines join together to advance the scientific enterprise. It is a place where a young scientist can hear about the latest work of leading researchers in their field and also present their own work—some for the first time—to their peers. It is a place where lifelong professional connections and relationships, some of which influence their professional careers, are built and strengthened. It is a place to discover career opportunities and find support. Virtually nowhere else can a young scientist—or for that matter a scientist of any age or career stage—experience all of these opportunities in one place.”

This year’s grantees hail from seven countries and represent the wide spectrum of the Earth and space sciences, including atmospheric sciences, biogeosciences, hydrology, paleoceanography and paleoclimatology, space physics, and aeronomy. Grantees are in all stages of their education, with undergraduate, graduate, and Ph.D. students represented among the cohort.

“I am delighted to receive the Austin Student Travel Grant. It has actually made my decision of attending the Fall Meeting 2019 more firm. The grant I have received will help me cover my expenses and aid me financially,” said Nirashan Pradhan, a student at St. Xavier’s College, Maitighar, Kathmandu, Nepal. “I hope to learn a lot more and be exposed to other areas of science. It is a huge opportunity for me.”

“Becoming one of the fortunate recipients for an Austin Student Travel grant is very exciting! My decision to attend AGU this year was already determined, but I am grateful to now have financial help to assist me with getting to San Francisco,” said Jonese Pipkin, a student at the University of North Carolina at Charlotte.

Antonia Fritz is among the first class to attend AGU’s Fall Meeting with assistance from an Austin Student Travel Grant. Credit: Antonia Fritz

For Antonia Fritz, a student at the University of Bayreuth, Bayreuth, Germany, the Austin Student Travel Grant “means that I can actually attend the world’s largest conference for environmental scientists this year! The AGU Fall Meeting also opens up many opportunities for me including getting to know new fields of research in the environmental sciences, especially atmospheric sciences, which I may not yet have properly heard of and to find out which research questions fascinate me the most. I will also use AGU Fall Meeting to get to know scientists from all over the world, to learn from their experiences, and to benefit from their knowledge.”

As climate change and other challenges facing humanity become ever more pressing and the scientific enterprise continues to be under attack, the fact-based solutions provided by Earth and space scientists are more important than ever. I can think of no better way to help turn this tide than by supporting AGU’s Austin Endowment for Student Travel, which helps ensure that young Earth and space scientists entering the field become connected to our members and our community.

As we enter the final stretch of Jamie Austin’s challenge, I urge you to remember your first Fall Meeting and remember the excitement you felt, the opportunities you learned about, and the relationships that you built. Act now and double your impact by making a donation to the Austin Endowment for Student Travel of any amount that will support the next generation of Earth and space scientists. Thank you for your support thus far, and know that your efforts are making a difference to the future of Earth and space science.

—Chris McEntee (agu_execdirector@agu.org), CEO/Executive Director, AGU

How Forest Structure Influences the Water Cycle

Tue, 10/15/2019 - 11:20

Forests are a critical cog in the global water cycle: Trees pull water from the ground and release it into the atmosphere as vapor through pores in their leaves in a process called transpiration, which can drive temperatures and rainfall across the globe. Forests are also dynamic ecosystems, with both natural events, such as pest infestations and droughts, and anthropogenic activities like logging potentially causing dramatic changes in forest structure. Despite the important roles forests play, the relationship between forest structure and the global water cycle is not well understood.

To help fill gaps in our understanding of this relationship, Aron et al. compared two forest sites in Michigan to find out how disturbances in forest structure can influence water transport from the land surface to the atmosphere.

The team selected two adjacent field sites at the University of Michigan Biological Station in Northern Lower Michigan: an undisturbed control site dominated by bigtooth aspen and paper birch and a site where researchers in 2008 purposefully killed aspens and birches, giving this disturbed site a much more open canopy than the control. The arrangement of trees within a forest influences the amount of light and heat that reaches the ground, affecting not just transpiration but also other processes, like evaporation and entrainment (the mixing of air from above the canopy down into the canopy), that also contribute to the amount of water vapor reaching the atmosphere.

Taking advantage of the fact that each of these processes results in distinct isotopic signals in water vapor, the researchers measured stable water isotopes at six heights in the two forest sites during the spring, summer, and fall of 2016.

The results revealed that the disturbed canopy was both drier and warmer than the undisturbed control site. The control site also exhibited a more stratified isotopic profile, suggesting less vertical mixing of the air in the forests, whereas the more open canopy appeared to encourage more mixing. The differences between the two sites were most prominent in the summer and spring.

The study demonstrates that forest canopy can regulate the rate at which moisture and energy are returned to the atmosphere at a local scale, which can in turn influence water retention and the makeup of forest ecosystems. The results provide important context for researchers interested in modeling how both forest ecology and water cycles will evolve as climate change progresses. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2019JG005118, 2019)

—Kate Wheeling, Freelance Writer

Million-Degree Experiment Complicates Solar Science

Tue, 10/15/2019 - 11:19

A group of researchers at Sandia National Laboratories has found that the astronomical model that predicts the Sun’s behavior (the standard solar model, or SSM) underestimates the amount of energy blocked by iron atoms, adding a layer of complexity to a long-standing mystery in solar science.

Stars like our Sun produce energy in their cores from nuclear fusion of hydrogen. The energy produced in the core must travel through several layers of ionized materials (plasma) to reach the surface, where it’s radiated to space.

Certain elements, such as iron, can resist the transmission of energy by absorbing and then reemitting light. This blockage effect is known as opacity and plays an important role in how astronomers model the interior of the Sun.
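The attenuation the article describes can be sketched with the standard slab relation, in which the transmitted fraction of radiation falls off as exp(−τ) with optical depth τ = κρL. The numbers below are purely illustrative, not actual solar values.

```python
import numpy as np

# Illustrative sketch: opacity kappa (cm^2/g) sets how strongly a uniform
# layer of plasma of density rho (g/cm^3) and thickness L (cm) attenuates
# radiation. All values here are made up for demonstration.

def transmitted_fraction(kappa, rho, L):
    """Fraction of radiation passing through a uniform slab."""
    tau = kappa * rho * L  # optical depth of the slab
    return np.exp(-tau)

f1 = transmitted_fraction(kappa=1.0, rho=0.2, L=5.0)  # tau = 1
f2 = transmitted_fraction(kappa=2.0, rho=0.2, L=5.0)  # tau = 2
print(f1, f2)  # doubling the opacity squares the transmitted fraction
```

This exponential dependence is why even a modest underestimate of an element's opacity compounds across the many layers between the core and the surface.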

Although each element contributes differently to the total opacity within the Sun, the SSM relies on complex calculations to gauge their particular contributions since it’s extremely difficult to directly test their opacities in the lab.

Thanks to a facility called the Z machine that uses powerful X-ray pulses to produce very high temperatures, the Sandia researchers were able to heat samples of iron and other metals to 2.1 million degrees for a fraction of a second, briefly reproducing the environment inside a star. When the samples, each of them the size of a grain of sand, are heated, they turn into plasma, allowing researchers to measure their opacities.

The team found large discrepancies between their measurements and the modeled opacities, meaning that our understanding of the Sun and other stars isn’t as good as astronomers assumed. The results were published online in the 14 June issue of Physical Review Letters.

The new experiments tried to replicate the conditions inside the Sun at 0.7 solar radius, a boundary zone where convection becomes the main energy transport mechanism, replacing radiation.

“If calculated iron opacity is wrong only at 0.7 solar radii, the problem is not too worrisome,” said Taisuke Nagayama, a physicist at Sandia National Laboratories and first author of the new study. “However, if there are similar or bigger discrepancies at different conditions, concerns will be spread out to almost all applications that rely on solar or stellar models.”

Reimagining the Sun’s Interior

In the early 2000s, new observations led some astronomers to think that the amount of elements heavier than hydrogen and helium in the Sun was lower than previously thought. However, the new observations didn’t agree with existing numerical models of the Sun. Simply put, when astronomers put the revised values into their models, they didn’t work anymore.

The red graph shows greater opacity of iron as determined in the new experiments. The blue graph shows the earlier theoretical calculation. Credit: Taisuke Nagayama/Sandia National Laboratories

Astronomers considered several possible explanations for this problem: The new heavy-element abundances could be inaccurate, the standard solar model could be wrong, or the opacity values used in the SSM could be wrong.

The last explanation was probably the easiest to assess from an astronomer’s viewpoint because it wouldn’t require substantive changes to the SSM, just finding out the right values for the opacities. But this path is turning out to be very challenging as well.

“Just changing the opacities won’t solve all the problems,” said Sarbani Basu, an astronomer at Yale University who authored a commentary about the new research in Physics magazine. “But the fact is we know opacities are uncertain, so the question is, how uncertain?”

To gauge this uncertainty, Nagayama and his colleagues tested not only iron but also chromium and nickel. These elements bracket iron in the periodic table, so researchers expected they would behave similarly. But they didn’t. Instead, the experimental opacity values for nickel and chromium largely agree with the model predictions.

“This experiment has created more problems, because we expected nickel to have the least disagreement, iron to have a bit more, and chromium to have the most disagreement,” Basu said. “Clearly, we are missing things, and we don’t know what.”

Implications for Astronomy

Opacities are key to determining if the new estimates of heavy-element abundances in the Sun are correct. This isn’t trivial because abundances of every other object in the universe are usually measured in relation to solar abundances. These values are then used to create models of other stars and even galaxies.

“You can’t find the age of a star without making a model, so if you want to find the age of an exoplanetary system, you need to make a model of the host star to figure out how old it is,” Basu said.

The lack of reliable metallicity values for the Sun introduces a lot of uncertainty when estimating the ages of other stars, an error that could amount to billions of years. “Anything that deals with stars would be affected,” Basu said.

Looking for an Explanation

To explain these intriguing experimental results, physicists are exploring new possibilities. One of them is considering that certain elements could absorb two photons at a time instead of one. This phenomenon is called two-photon opacity.

If this mechanism is able to solve the discrepancy for iron, researchers would be closer to a solution since nickel and chromium already agree with the models. Other elements should then be tested to see whether or not they agree with the models.

In the near future, the team plans to continue testing other elements, including oxygen, which is a key contributor to overall solar opacity.

—Javier Barbuzano (@javibarbuzano), Freelance Science Journalist

Space Weather Aviation Forecasting on a Global Scale

Mon, 10/14/2019 - 12:39

On November 7, 2019, in response to an International Civil Aviation Organization (ICAO) mandate, the world’s major space weather centers will start issuing global advisories covering disruptions to high-frequency radio communications, communications via satellite, and Global Navigation Satellite System (GNSS)-based navigation and precision location, as well as enhanced radiation risk to aircraft occupants.

As indicated by the “I” in ICAO, these new forecasting efforts transcend national boundaries. Primary responsibility for the global 24/7 watch and advisory duties will rotate biweekly among the space weather centers [National Oceanic and Atmospheric Administration, 2019].

Initially, this will involve a trio of global service providers: NOAA’s Space Weather Prediction Center (SWPC) in the United States, the Pan-European Consortium for Aviation Space Weather User Services (PECASUS), and the consortium of Australia, Canada, France, and Japan (ACFJ). Additional consortia may be added to the roster of global services providers in the future.

In a real sense this represents a ‘change-of-state’ for the space weather discipline, aligning it with expectations from the meteorology community. While space weather has been part of military aviation mission planning for some time, civil aviators have not had consistent, worldwide access to space weather information.

ICAO recognizes that better preparing flight crews, operators, air navigation service providers, and civil aviation authorities for potential impacts of space weather will improve the safety and efficiency of aviation operations. Thus, ICAO is establishing unified advisory thresholds and dissemination procedures [ICAO, 2018].

These unified-threshold products are approximately 100 years in the making. World War I saw the first use of aviation wireless transmissions, leading to a strong postwar interest in radio communications. In 1919 the Union Radio Scientifique Internationale (URSI) was created to study radio science and radio-telegraphy. Radio communication ‘effects’ from sunspots and from electric and magnetic disturbances in Earth’s upper atmosphere were already apparent. URSI inaugurated a daily service of radio (dot-dash) bulletins broadcast from France’s Eiffel Tower in 1928. One year later the United States replicated the service for land, marine, and aerial needs [Kennelly, 1937]. In 1939 the U.S. Department of Commerce, National Bureau of Standards–Radio Section initiated a formal service for forecasting radio transmission information and maximum usable frequencies [Gilliland et al., 1939; Caldwell et al., 2017]. In 1949, Eos, Transactions of the American Geophysical Union (AGU) first published the principal method for forecasting the monthly mean sunspot number, then a key parameter for high-frequency radio propagation [McNish and Lincoln, 1949].

In the decades since, through wartime and peacetime, aviation has flourished. Flight safety is now tightly coupled to aviation communication, positioning, tracking, and avionics integrity, all of which can suffer from solar-terrestrial disturbances. In this century the World Meteorological Organization recognized that aircraft operating on newly opened polar routes could be subject to solar radiation storms that could affect ‘health, communication and the global positioning system’.

Since 2002 stakeholders have convened dozens of international meetings to assist ICAO in enhancing global civil aviation safety and efficiency.

As a result, ICAO has structured a space weather advisory system that will be used by the national forecasting agencies, federal and international civil aviation authorities, domestic and international commercial airlines, and private companies.

AGU is a staunch supporter and publisher of research that has elevated space weather as a discipline, especially in Space Weather journal. As the space weather advisories are integrated into the global aviation system, there will likely be new opportunities to research unanticipated geophysical conditions reported by aviators, radiation effects on humans and avionics, and the scope of radio disruptions to aviation radar, communication, and navigation.

No doubt, space weather benchmarking efforts will benefit from such information (e.g., US National Science and Technology Council, 2015). New research and benchmarking will support a realistic, integrated assessment of impacts on the portfolio of technologies used in civil aviation. We anticipate fruitful research partnerships as the new space weather advisory system takes form and space weather forecasting advances to a new state.

—Delores J. Knipp (delores.knipp@colorado.edu;  0000-0002-2047-5754) and Michael A. Hapgood ( 0000-0002-0211-0241), Editors, Space Weather

Dusting Off the Arid Antiquity of the Sahara

Mon, 10/14/2019 - 12:38

The Sahara desert looms large in Africa, stretching across the northern third of the continent. The intensely arid nature of the Sahara means that wind is able to loft sediment—especially dust—into the air and carry it to all corners of the globe.

The largest warm desert in the world, the Sahara is currently hyperarid, although it has experienced periods of wetter weather in the past. The age of the desert is unknown. Researchers have tried to pin it down using various geologic records but have come up with vastly varying ages and a healthy amount of controversy.

But new research published in Palaeogeography, Palaeoclimatology, Palaeoecology has narrowed in on the age of the Sahara. Using a combination of paleosols, well-dated basalts, and geochemistry, researchers concluded that the Sahara has been an arid, dust-producing region for millions of years.

The Saharan Dust Machine

The Sahara is the largest source of dust in the world, said Daniel Muhs, a Quaternary researcher at the U.S. Geological Survey in Denver, Colo., and lead author of the new study. This dust “is a major source of soil parent material for every place that’s downwind” and has “significance in at least three major ways,” he noted.

First, soils on the nearby Canary Islands, and soils across the ocean in the Amazon, get an influx of dust from the Sahara. Second, Muhs said, Saharan dust fertilizes the oceans, providing bioavailable iron to phytoplankton. Last, Saharan dust in the atmosphere can affect albedo, creating a warming or cooling effect on the planet.

While Saharan dust has played (and continues to play) a big role around the world, research has placed “quite variable” ages on the desert itself, Muhs said. Prior estimates “range all the way from [the] Holocene, sometime in the last 10,000 years, back to the Miocene, around 7 million years.”

Muhs and his colleagues wanted to narrow in on the age of the Sahara, so they turned to a stack of paleosols and basalts on the nearby Canary Islands. Muhs said that a half dozen buried soils were stacked between basalt flows previously dated to about 4.8 million and 0.4 million years old. This meant that the paleosols were deposited during that roughly 4.4-million-year stretch.

Reddish paleosols studied by researchers in the Canary Islands were buried beneath dark, 3-million-year-old lava flows (basalts). Credit: Daniel R. Muhs

The Canarian paleosols contained a lot of dust, but where the sediment originated was unknown. The team collected samples for mineralogy and geochemistry testing to see whether the dust reflected a Saharan source.

Dust Signatures

The researchers looked at both the bulk mineralogy and the clay mineralogy of the soil samples and looked for two specific minerals: mica and quartz. “If you find mica or quartz on a basaltic island, it didn’t come from there,” said Muhs. “It came from somewhere else.”

The presence of quartz and mica meant the dust came from a continental source, but that didn’t necessarily mean Africa. However, Muhs said, “the other alternatives are North America, where the wind’s going the wrong way; Europe, where the wind’s going the wrong way; Asia, where the wind would have to go all the way around the world to get there…. So the logical player here is Africa.”

To back up their theory of an African source for the dust, the researchers completed a geochemical survey of the sediment. They focused on robust, resistant trace elements like scandium, thorium, and lanthanum in the paleosol samples to infer the source of the material.

Previous work on African dusts in the Caribbean gave the team a great picture of what chemical signatures to look for. “We know what African dust looks like,” Muhs said.

Age of Aridity

Muhs likened the dust contributions on the Canary Islands to ingredients in a tossed salad. “We analyzed the tossed salad and figured out the flavors,” he said. “You have both red peppers and green peppers—some of them are from Africa, some of them are from local volcanics.”

This meant that dust from an arid Sahara had been accumulating on the Canary Islands for much of the Pliocene (since ~4.8 million years ago) and into the Pleistocene (until at least ~0.4 million years ago).

“The great novelty of Muhs and his collaborators’ study is that they have analyzed a new type of archives—aeolian dusts preserved in paleosols,” said Mathieu Schuster, a researcher at the University of Strasbourg in France who was not involved in the study.

Schuster added that the researchers used the well-constrained ages of the basalts to help refine the geochronology, leading to “an elegant and robust demonstration of the antiquity of the Sahara desert.”

“If you want to tell and understand the story of the Earth, then it is critical to know the age of any rock, fossil, and geodynamical event,” said Schuster. “Here, the question is not simply to know how old…the Sahara desert [is] but, more precisely, to know since when this large part of Africa experience[d] hyperarid conditions.”

Schuster said that understanding when the Sahara first became arid allows researchers to investigate other important questions. “For example, we do not know if the onset of arid conditions was abrupt or progressive, how ecosystems have been impacted by the aridification, and how prehuman and human societies have adapted.”

Understanding past climates is crucial for understanding the future climate, said Muhs, and that’s why every Intergovernmental Panel on Climate Change report has a chapter on paleoclimate.

“Climate is on all of our minds these days,” he stated. “A lot of what we don’t understand about what we might be headed for as a future climate, we can sometimes try to learn from past climates.”

—Sarah Derouin (@Sarah_Derouin), Freelance Journalist

Thoughtfully Using Artificial Intelligence in Earth Science

Fri, 10/11/2019 - 11:55

Artificial intelligence (AI) methods have emerged as useful tools in many Earth science domains (e.g., climate models, weather prediction, hydrology, space weather, and solid Earth). AI methods are being used for tasks of prediction, anomaly detection, event classification, and onboard decision-making on satellites, and they could potentially provide high-speed alternatives for representing subgrid processes in climate models [Rasp et al., 2018; Brenowitz and Bretherton, 2019].

Although the use of AI methods has spiked dramatically in recent years, we caution that their use in Earth science should be approached with vigilance and accompanied by the development of best practices for their use. Without best practices, inappropriate use of these methods might lead to “bad science,” which could create a general backlash in the Earth science community against the use of AI methods. Such a backlash would be unfortunate because AI has much to offer Earth scientists, helping them sift through and gain new knowledge from ever-increasing amounts of data. Thus, it is time for the Earth science community to develop thoughtful approaches for the use of AI.

Easy Access to Powerful New Methods

Setting up and running experiments with AI methods used to require sophisticated computer science knowledge. This is no longer the case. The recent success of AI in other domains has spurred the development of free and highly efficient software packages that are extremely easy to learn and use. Even complex artificial neural networks can be set up in a few lines of code, and countless tutorials and examples are available to guide the novice user. Furthermore, as algorithms become more efficient and computational power gets cheaper and more available on the cloud, access to high-performance computing is no longer a limiting factor. All of these developments bring powerful AI methods to Earth scientists’ fingertips.

Earth scientists have a long tradition of using methods based on physics (e.g., dynamical models) and sophisticated statistics (e.g., empirical orthogonal function analysis and spectral analysis). They have thus accepted statistical methods, which are a type of data-driven method, as useful tools. However, the sudden rise of AI methods—another type of data-driven method—in Earth science, coupled with a terminology and culture unfamiliar to Earth scientists, may make AI methods seem more foreign than they actually are. AI simply provides an extended set of new data-driven methods, many of which are derived from statistical principles. For example, one basic type of artificial neural network (deep learning) is essentially a linked series of linear regression models interspersed with scalar nonlinear transformations.
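The characterization above can be made concrete: a minimal sketch (with illustrative layer sizes and random weights, none of which come from the article) showing a feed-forward network built as a chain of linear-regression-like layers with a scalar nonlinearity applied between them.

```python
import numpy as np

def linear(x, W, b):
    """One linear-regression-like layer: y = Wx + b."""
    return W @ x + b

def mlp(x, params):
    """Alternate linear layers with a scalar nonlinear transformation (tanh)."""
    *hidden, last = params
    for W, b in hidden:
        x = np.tanh(linear(x, W, b))  # elementwise nonlinearity between layers
    W, b = last
    return linear(x, W, b)            # final layer left linear

# Illustrative network: 3 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
params = [
    (rng.standard_normal((8, 3)), rng.standard_normal(8)),
    (rng.standard_normal((1, 8)), rng.standard_normal(1)),
]
y = mlp(np.array([0.1, -0.2, 0.3]), params)
print(y.shape)  # (1,)
```

Without the tanh calls, the whole chain would collapse into a single linear regression; the interspersed nonlinearities are what give the network its extra expressive power.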

We address here the question of how best to leverage both physics-based and data-driven methods simultaneously by outlining several proposed steps for researchers. For brevity we use only the term “AI methods” below, although most of our discussion applies equally to all data-driven methods.

Step 1: Ask Guiding Questions

We suggest that Earth scientists ask themselves the following questions before choosing a specific AI approach:

1. Why exactly do I want to use AI for my application? Is this application for prediction, understanding, or both? The answer is important for choosing an AI method that satisfies the desired trade-off between transparency and performance.
2. How can scientific knowledge be integrated into the AI method? There are many ways to combine expert scientific knowledge of underlying physical processes (e.g., physics and chemistry) and AI methods; every effort should be made to merge these approaches, as discussed more below in step 2.
3. Which tools from explainable AI are available? The emerging field of explainable AI (XAI) provides many new tools for the visualization and interpretation of AI methods [Samek et al., 2019]. McGovern et al. [2019], for example, show the enormous potential of these tools for weather-related applications. These tools have the potential to transform the use of AI methods in Earth science by increasing transparency and thus building trust in their reasoning.
4. Does my approach generalize to address all conditions in which it will be used? AI methods rely on “training data” to learn the characteristics of a system. Special attention must be paid to testing and ensuring that the resulting AI model works under changing conditions, including regime shifts. Generalization can be greatly enhanced by fusing scientific knowledge and can be tested by methods ranging from cross validation to the AI technique of generating adversarial examples.
5. Is my approach reproducible? Am I following the findable, accessible, interoperable, and reusable (FAIR) data principles? Is my method easily accessible to the community?
6. What am I hoping to learn in terms of science? Because of the complexity and abstract nature of many AI techniques, the answer to this question is crucial to set up the problem and method so that there is something to learn from the AI approach and to be able to interpret the results.
The availability of explainable AI tools discussed above creates many novel opportunities to gain new scientific insights into Earth science processes (E. A. Barnes et al., Viewing forced climate patterns through an AI lens, submitted to Geophysical Research Letters, 2019).

We encourage researchers to thoroughly reflect on these questions to select the best AI method for their application. Furthermore, to promote substantial advances in scientific research, editors of Earth science journals may need to create guidelines for the review of AI-focused manuscripts to ensure that findings are explained clearly and placed in the context of existing Earth science. Likewise, editors might encourage comparison to standard or simpler approaches to discern the scientific advances that AI offers.

Step 2: Explore Fusing Scientific Knowledge into AI

The field of theory-guided data science investigates ways in which AI and scientific knowledge can be combined into hybrid algorithms that incorporate the best of both worlds [Karpatne et al., 2017]. Earth science applications have several properties that make it imperative to integrate scientific knowledge as much as possible, such as

- the desire of Earth scientists to gain scientific insights rather than just “get numbers” from an algorithm
- the availability of extensive existing knowledge
- the high complexity of the Earth system
- the limited sample size and lack of reliable labels in many Earth science applications

Integrating scientific knowledge into AI approaches greatly improves transparency because the more scientific knowledge is used, the easier it is to follow the reasoning of the algorithm. Generalization, robustness, and performance are also improved because the scientific knowledge can fill many gaps left by small sample sizes, as explained below.

How can we integrate scientific knowledge? One excellent practice uses a two-step approach. For a given task, first identify all subtasks that can easily and efficiently be addressed by physics-driven methods, and apply those. Although this guideline may appear obvious, it is often overlooked, likely because applying an off-the-shelf AI algorithm to the entire task appears to require less work than carefully analyzing and addressing several subtasks.

For remaining subtasks, consider using AI algorithms while still leveraging scientific knowledge. Most AI methods have some kind of optimization procedure at their core; that is, they search for an optimal model over a large parameter space. Without using scientific knowledge, the search space is often large, and many solutions may exist, only some of which might be physically meaningful. By leveraging known physical relationships (e.g., by including them as constraints in the optimization problem), the optimization is guided toward only physically meaningful solutions (T. Beucler et al., Enforcing analytic constraints in neural-networks emulating physical systems, arXiv:1909.00912). This approach offers additional benefits: Convergence tends to be faster because of the smaller search space, and the resulting method tends to generalize better because of the use of established physical relationships. Thus, although this approach requires more work at the onset, it tends to result in much better overall solutions.
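One common way to include a physical relationship as a constraint is to add a penalty term to the loss function. The sketch below is entirely illustrative: it fits two outputs to synthetic data while penalizing violations of a hypothetical conservation law q1 + q2 = total (the data, model form, and penalty weight are all invented for demonstration).

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data obeying a hypothetical conservation law q1 + q2 = total.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
total = 1.0
q1_obs = 0.7 * x + 0.02 * rng.standard_normal(100)
q2_obs = (total - 0.7 * x) + 0.02 * rng.standard_normal(100)

def predict(p, x):
    """Two simple linear models for the two outputs."""
    a, b, c, d = p
    return a * x + b, c * x + d

def loss(p, w_phys=10.0):
    q1, q2 = predict(p, x)
    data = np.mean((q1 - q1_obs) ** 2 + (q2 - q2_obs) ** 2)
    # Physics penalty: guide the optimizer toward solutions respecting q1+q2=total.
    physics = np.mean((q1 + q2 - total) ** 2)
    return data + w_phys * physics

fit = minimize(loss, x0=np.zeros(4))
q1, q2 = predict(fit.x, x)
print(np.max(np.abs(q1 + q2 - total)))  # small: constraint approximately respected
```

Raising the penalty weight shrinks the search space toward physically meaningful solutions, which is the mechanism behind the faster convergence and better generalization described above.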

Step 3: Foster Interdisciplinary Collaboration and Education

Innovative approaches, such as fusing scientific knowledge and AI methods, require deep knowledge integration across disciplinary boundaries [Pennington, 2015]. This integration is best achieved by close collaboration between Earth scientists and AI researchers (see guidelines for such collaborations). The Association for Interdisciplinary Studies organizes conferences and provides general strategies for effective interdisciplinary collaboration.

It might not always be possible to find suitable collaborators, so one option is to join learning communities, such as the National Science Foundation–sponsored EarthCube Research Coordination Network IS-GEO: Intelligent Systems Research to Support Geosciences. The increasing number of sessions (e.g., coordinated by AGU’s Earth and Space Science Informatics section), workshops (e.g., Climate Informatics), and conferences (e.g., the American Meteorological Society’s Conference on Artificial Intelligence for Environmental Science) dedicated to AI research in Earth science is encouraging, yet there is still a large need for additional events that engage Earth science and AI researchers simultaneously and build bridges between these communities.

Furthermore, there is a tremendous need to develop guidelines and best practices to educate the future Earth science workforce to be well prepared for innovative, interdisciplinary research bridging Earth science and AI. Numerous institutions are starting to incorporate data science and AI courses into their curricula (e.g., Cornell’s Institute for Computational Sustainability and National Science Foundation Research Traineeship programs at the University of Chicago, University of California, Berkeley, and Northwestern University). The community can support these efforts by collecting and disseminating educational resources and by developing guidelines on which topics are most beneficial to integrate into Earth science education [Pennington et al., 2019].

The availability of AI methods provides many new and exciting research avenues for Earth scientists, but it also requires the community to reflect on when and how these methods should be used because using such methods without careful consideration can lead to bad science. The two most promising safeguards against such bad science are the integration of scientific knowledge into AI methods and the use of visualization tools to maximize their transparency.


Support was provided by National Science Foundation grants AGS-1749261 (E.A.B.) and AGS-1445978 (I.E.).
