Eos

Science News by AGU

Catching Elves in Argentina

Fri, 06/05/2020 - 13:26


The Pierre Auger Observatory in Argentina was not designed to catch elves, but in recent years it has been doing exactly that. Through a bit of serendipity, we discovered that the world’s largest cosmic ray detector provides new capabilities to observe rare, ring-shaped emissions of ultraviolet (UV) and visible light high above thunderstorms [Aab et al., 2020]. Studying these elves, short for emissions of light and very low frequency perturbations due to electromagnetic pulse sources [Fukunishi et al., 1996], could reveal new insights into the physics and effects of strong lightning, including lightning of the highest energy.

Where Elves Come From

Lightning produces familiar large bolts and flashes, but strong lightning—lightning carrying more than about 100 kiloamperes of current—can also generate expanding rings of light overhead, at the base of the ionosphere, a plasma layer roughly 85 kilometers above Earth’s surface [Inan et al., 1991]. These “airglow enhancements,” first recorded by a camera aboard the space shuttle Discovery in 1989 [Boeck et al., 1992], appear when a fast change in the electrical current generated by lightning produces an electromagnetic pulse (EMP). When the pulse intersects the base of the ionosphere, it transfers energy to free electrons in this plasma. The energized free electrons can then excite electronic transitions when they collide with atmospheric molecules. As these excited molecules relax again to lower-energy states, they emit a wide-frequency spectrum of light in a process known as fluorescence; in particular, some nitrogen molecules will emit UV light.

The expanding ring of light emissions arises as the roughly spherical EMP crosses the much flatter base of the ionosphere. By looking at the patterns in the light emissions, we can reconstruct the geodetic location, altitude, and time of the lightning stroke, as well as more fundamental physics about the stroke itself and about the ionosphere. This fundamental information has a variety of practical applications. For example, engineers test ways of protecting aircraft from severe weather using measurements of the amount of current in a lightning stroke and how long the current took to ramp up during the stroke. In addition, GPS signals are corrected for attenuation caused by the boundary between the mesosphere and the ionosphere. Because the brightness of elves depends on the concentration of free electrons and molecules in the ionospheric plasma, their observation may potentially be used to validate previous electron density measurements at the base of the ionosphere.
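The geometry behind that reconstruction is compact enough to sketch in a few lines. The snippet below is a minimal illustration, assuming a spherical EMP launched at the speed of light from a ground-level stroke and a perfectly flat ionospheric base at 85 kilometers; the constants and the function are ours, not the collaboration's reconstruction code.

```python
import math

C = 299_792_458.0  # speed of light (m/s)
H = 85_000.0       # approximate altitude of the ionosphere's base (m)

def ring_radius(t):
    """Radius (m) of the elve's luminous ring at time t (s) after the stroke,
    assuming a spherical EMP from a ground-level stroke and a flat layer at H.
    The first light appears directly overhead at t0 = H / C."""
    if t * C <= H:
        return 0.0
    # The sphere of radius C*t intersects the plane z = H in a circle:
    # (C*t)^2 = H^2 + r^2  =>  r = sqrt((C*t)^2 - H^2)
    return math.sqrt((C * t) ** 2 - H ** 2)

# The ring sweeps out hundreds of kilometers within a millisecond:
for t_us in (300, 500, 1000):  # microseconds after the stroke
    print(f"t = {t_us:4d} us -> ring radius ~ {ring_radius(t_us * 1e-6) / 1000:6.1f} km")
```

Inverting this relation is what lets the timing of the light pattern encode the stroke's position and the layer's altitude.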

The World’s Largest Cosmic Ray Observatory

The accidental elves catcher, the Pierre Auger Observatory, was built by a collaboration of almost 500 physicists from 19 countries; construction was completed in 2008. Occupying 3,000 square kilometers, it is the largest cosmic ray observatory on Earth and was designed to open a new window on the cosmos by measuring the highest-energy subatomic particles known to exist [Aab et al., 2015]. The energy in these tantalizing cosmic rays, which originate from faraway sources outside our solar system, can exceed 10²⁰ electron volts. This is roughly the kinetic energy of a tennis ball in play at the U.S. Open, but this energy is carried by a single proton or nucleus.
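That comparison checks out with a line or two of arithmetic; the tennis ball mass and speed below are rough assumed figures.

```python
EV_TO_J = 1.602176634e-19  # joules per electron volt

cosmic_ray_energy_j = 1e20 * EV_TO_J  # a 10^20 eV particle, in joules
print(f"Cosmic ray:  {cosmic_ray_energy_j:.1f} J")  # ~16 J

# Kinetic energy of a tennis ball: KE = (1/2) * m * v^2
mass_kg = 0.057   # regulation tennis ball (assumed)
speed_ms = 24.0   # a moderate groundstroke, ~86 km/h (assumed)
print(f"Tennis ball: {0.5 * mass_kg * speed_ms**2:.1f} J")  # ~16 J
```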

A thunderstorm looms on the horizon near the hillside site of one of the Pierre Auger Observatory’s fluorescence detectors (the same site depicted in the figure at the beginning of this article). Credit: Pierre Auger Observatory, CC BY-SA 2.0

The efforts of the Pierre Auger Collaboration have led to discoveries about the origins and nature of these elusive particles [Aab et al., 2017a, 2017b] and to contributions to the emerging field of multimessenger astrophysics [Abbott et al., 2017]. At the time of its construction, we did not foresee that the Auger Observatory could also catch elves. Furthermore, we discovered that the Auger Observatory can catch them across an area that’s 1,000 times larger than the 3,000-square-kilometer area it uses to catch cosmic rays. In the case of the Auger Observatory, this larger area overlaps with a region known for its high rates of strong lightning and large convective storms.

A Large Net to Catch Rare Events

The highest-energy cosmic rays are rare—on average, only one 10²⁰-electron-volt ray lands in a 10-square-kilometer area (about 3 times as large as New York City’s Central Park) per century. The Auger Observatory catches these rays indirectly by measuring the UV light and the particles they give off, essentially using the atmosphere as a large particle physics calorimeter. High-energy cosmic rays deposit their energy in the troposphere through the creation of cascades of secondary lower-energy particles, including many electrons.
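A quick estimate from those figures shows why such a large array is needed; this is a rough sketch of the arithmetic, not an official rate calculation.

```python
# One 10^20 eV cosmic ray per 10 km^2 per century, per the estimate above
rate_per_km2_per_yr = 1 / (10 * 100)  # events per km^2 per year
auger_area_km2 = 3_000

print(f"Expected at Auger: ~{rate_per_km2_per_yr * auger_area_km2:.0f} per year")  # ~3
```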

One of the 24 fluorescence detector (FD) telescopes, showing (left to right) the edge of the mirror, the custom camera, and the light aperture. Credit: Pierre Auger Observatory, CC BY-SA 2.0

The surface detector (SD) of the Auger Observatory, comprising a gridded array of more than 1,600 instrumented water tanks separated from each other by 1.5 kilometers, samples the footprint of these cascades as they hit the ground. The fluorescence detector (FD), consisting of 24 telescopes arranged at four sites around the perimeter of the SD array, views the atmosphere above the SD [Abraham et al., 2010]. Unlike astronomical telescopes, the FD telescopes have much wider fields of view, about 30° × 30°, and they point in fixed directions just above the horizon.

Electrons in cosmic ray cascades absorb energy from the cosmic ray, and these energetic electrons can excite electrons in atmospheric nitrogen molecules into higher-energy states. Much like what happens in elves, when these excited molecules return to their ground state, they fluoresce, giving off some of their extra energy as UV light. The FD telescopes at the Auger Observatory operate at night to record this UV fluorescence, which is obscured by sunlight during daylight hours. The amount of UV light emitted from the cascades is nearly proportional to the energy of the incoming cosmic ray. The FD records the evolution of cosmic ray cascades in the troposphere and provides the energy calibration reference (the mathematical relation between the energy of the cosmic ray and the number of photons that the camera records) for the Auger Observatory.

The light aperture of each FD telescope is covered by an optical filter (the honeycomb structure at left) that transmits light only in the 300- to 420-nanometer region of the ultraviolet spectrum. Credit: Pierre Auger Observatory, CC BY-SA 2.0

To capture the faint UV light from the cosmic rays, the light aperture of each FD telescope is relatively large (2.2 meters in diameter) and is covered by a UV-transmitting optical filter that screens out visible light in the atmosphere. A custom camera consisting of 440 photomultiplier tubes at the mirror focus generates images of 20 × 22 pixels at the rate of 10 million frames per second. This impressive acquisition rate enables us to observe cosmic ray showers, the cascades of relativistic particles crossing the sky at the speed of light, in detailed “slow motion.”
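To see what that acquisition rate means in practice, consider how far a light front moving at c advances between frames; the calculation below is rough and illustrative.

```python
C = 299_792_458.0        # speed of light (m/s)
frame_rate = 10_000_000  # frames per second

frame_period_s = 1 / frame_rate
print(f"Frame period: {frame_period_s * 1e9:.0f} ns")        # 100 ns
print(f"Light moves ~{C * frame_period_s:.0f} m per frame")  # ~30 m
# A cascade crossing ~10 km of atmosphere therefore spans ~330 frames,
# which is what makes the detailed "slow motion" replay possible.
```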

If the Auger Observatory is focused on seeing cosmic rays, how do elves appear in our cameras? A cosmic ray shower looks something like a meteor moving at the speed of light, but elves appear as expanding wavefronts propagating down across the focal plane of the camera (Figure 1). The brightest edge of the front appears to travel through the atmosphere faster than the speed of light! Is this a violation of relativity, or of causality? Not at all: Such an artifact is known in physics as the “shadow effect,” and it is clearly visible thanks to the high time resolution and large light-gathering power of this instrument.
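The arithmetic behind the artifact is short. In the same simplified flat-layer geometry sketched earlier, the ring radius is r(t) = sqrt((ct)² − h²), so its apparent expansion speed is dr/dt = c²t/r, which always exceeds c and diverges at the moment the ring first appears. A minimal check under those assumptions:

```python
import math

C = 299_792_458.0  # speed of light (m/s)
H = 85_000.0       # ionosphere base altitude (m), as above

def apparent_speed(t):
    """Apparent horizontal expansion speed of the ring at time t (s).
    From r(t) = sqrt((C t)^2 - H^2):  dr/dt = C^2 * t / r, always > C."""
    r = math.sqrt((C * t) ** 2 - H ** 2)
    return C ** 2 * t / r

t0 = H / C  # moment the ring first appears (~283 us after the stroke)
for factor in (1.001, 1.1, 2.0, 5.0):
    v = apparent_speed(factor * t0)
    print(f"t = {factor:5.3f} * t0 -> apparent speed = {v / C:5.2f} c")
```

Nothing physical moves faster than light here; the ring is simply the locus where the EMP currently intersects the layer.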

Fig. 1. A diagram of one telescope of the Auger FD and the setup for the observation of elves from storms that occur below the horizon (top). A plot showing the propagation time of the light pattern for a cosmic ray air shower (bottom left). A plot displaying the propagation time of the light pattern for an elves event in one telescope of the Auger FD (bottom right).

Observations of Elves: From Accidental to Intentional

The first serendipitous observations of three elves candidates occurred between 2005 and 2007 during the construction of the detector [Mussa et al., 2012; A. Aab et al., The Pierre Auger Observatory: Contributions to the 33rd International Cosmic Ray Conference (ICRC 2013), arXiv, http://arxiv.org/abs/1307.5059]. A “lightning veto” algorithm had been built into the data collection software to filter out flashes of light coming from lightning strokes close to the detector. This algorithm prevented all but a few elves from being recorded, keeping just the signals from light arriving from cosmic rays.

After we realized that the Auger FD was able to detect elves, we had to develop a simple selection algorithm to recognize the propagation pattern of light arriving from elves, as well as a data collection format dedicated to their intentional observation. The Auger Observatory started taking data with these new selection criteria in January 2014. Using data from 2014 to 2016, we and our colleagues recently published a map showing that the Auger Observatory’s footprint for catching elves is about 3 million square kilometers, the largest ground-based area ever used for detecting elves [Aab et al., 2020].

By studying data from the Tropical Rainfall Measuring Mission, scientists identified northern and central Argentina as having the highest rate of lightning flashes in the tallest thunderstorms on Earth [Zipser et al., 2006]. Studying these tall, intense storms over Argentina is expected to provide the scientific community with insights into extreme weather patterns in the rest of the world, such as severe convective cells that form over the Colorado Rockies in summer.

The location of the Auger Observatory, on a dry highland with relatively low cloud coverage, makes it an ideal spot to study the transient luminous events produced by these strong thunderstorms at far distances. Earth’s curvature prevents arrival of the direct light from the lightning bolts and allows for clean observations of light emissions from the base of the ionosphere. In addition, FD sites facing west exploit the Andes mountain range, which screens direct light from rare storms above the Pacific Ocean that produce elves visible from as far away as 1,000 kilometers.

By combining the detailed measurements of elves from the Auger Observatory with data from other lightning experiments across Argentina, we hope to contribute to current research in atmospheric electricity physics. In one of the first analyses, we used a time correlation to match lightning strokes recorded by the World Wide Lightning Location Network with elves detected at the Auger Observatory, and we demonstrated that these elves are created by high-energy lightning strokes. Thus, the Auger Observatory is naturally selecting intense electrical events in the severe Argentinian thunderstorms that occur during the austral summer.
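In essence, such a time correlation pairs each elve with the nearest lightning stroke recorded within a short coincidence window. The sketch below illustrates only the idea; the function, the timestamps, and the 1-millisecond window are invented for illustration and are not the collaboration's actual pipeline.

```python
from bisect import bisect_left

def match_events(elve_times, stroke_times, window=0.001):
    """Pair each elve with the nearest stroke within +/- window seconds.
    Both lists are in seconds and must be sorted and nonempty."""
    matches = []
    for t in elve_times:
        i = bisect_left(stroke_times, t)
        # The nearest stroke is either just before or just after index i.
        candidates = stroke_times[max(i - 1, 0):i + 1]
        dt, s = min((abs(s - t), s) for s in candidates)
        if dt <= window:
            matches.append((t, s))
    return matches

# Hypothetical timestamps (seconds since midnight UTC):
elves = [3601.2504, 3720.9001, 4001.0000]
strokes = [3601.2501, 3650.0000, 3720.9004]
print(match_events(elves, strokes))  # the first two elves match a stroke
```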

Elves Reveal Unexplained Features

Within our 3-year data set, 18% of the approximately 1,600 elves have more than one peak in the signal recorded by the cameras at Auger. Elves with one peak in their emissions pattern are created by a cloud-to-ground lightning stroke, whereas, in accordance with classical electromagnetism theory, elves with two peaks in their emissions pattern are expected to be created by a lightning stroke that is not touching the ground—an intracloud lightning stroke. The first reported observation of elves with two peaks in their photo traces was in 1999, in New Mexico [Barrington-Leigh and Inan, 1999].

The very low frequency of the EMP emitted by a lightning stroke allows it to reflect many times between the ground and the ionosphere, and to propagate over thousands of kilometers. The ultrafast (10-megahertz) data acquisition rate and the light-collecting power of the Auger FD enables us to see very fine structure in the light emissions of elves. As a result, some of our events have more than two peaks in their photo traces.

We believe that elves with more than two peaks in their photo traces are also created by intracloud activity, but because of the timing between peaks, we are not convinced that they are created solely by multiple reflections of the EMP in the waveguide created between the ground and the ionosphere. Using standard physical models, we can estimate that any two peaks in one elves event cannot be separated by more than 40–50 microseconds. However, for an event on 4 March 2016 (Figure 2), the Auger Observatory detected three distinct peaks recorded by more than 40 photomultiplier tubes across two FD sites. The third peak in the photo trace occurred 100 microseconds after the second peak. Consequently, other processes in the physics of atmospheric electricity must be responsible for creating elves with complex light emission curves.

Fig. 2. On 4 March 2016 at about 2:00 a.m. local time, FD telescopes recorded an elves event with three distinct peaks in the numbers of photons detected (left). One frame of the acquired movie for this event represents all pixels in one FD telescope, showing the third peak propagating across the camera after the first two peaks (right).

Among such processes, we cannot exclude multiple initial breakdown pulses, energetic intracloud pulses, compact intracloud discharges, or even gigantic jets as sources for such elves. For example, an energetic intracloud pulse is a type of high-energy electrical discharge associated with the creation of terrestrial gamma ray flashes. The ramp-down of the electrical current in one of these pulses could be sufficiently rapid to create an additional EMP.

Recent research provides clues as to the most likely source of complex elves. This year, researchers reported the first coincident observation of a terrestrial gamma ray flash and an elves event by the Atmosphere–Space Interactions Monitor (ASIM) aboard the International Space Station [Neubert et al., 2020]. These observations, coupled with the large number of superbolts recorded over northern Argentina [Holzworth et al., 2019], suggest that energetic intracloud pulses could be a reasonable candidate mechanism for creating the elves with complex emissions patterns detected by the Auger Observatory.

A New Look at Earth’s High-Energy Physics

The Auger Observatory was designed to investigate and discover the most powerful particle accelerators in the universe, which can boost cosmic rays above 10²⁰ electron volts. But it has also unexpectedly shed light on the dynamics of plasma accelerators on our planet, such as those hiding behind the flash of a lightning bolt. We are continuing to analyze data collected by the observatory to pursue more detailed reconstructions of elves created by intracloud discharges and to study the brightness and sizes of elves.

We welcome all collaboration in interpreting elves with multiple peaks. We are also interested in coincident observations with satellite instruments, such as ASIM. The Pierre Auger Observatory will continue year-round operations with full horizon coverage until at least 2030.

Acknowledgments

This article was written by the authors on behalf of the entire Pierre Auger Collaboration. The full list of members taking part in the collaboration is available on the Pierre Auger Observatory’s website.

Sunburned Surface Reveals Asteroid Formation and Orbital Secrets

Fri, 06/05/2020 - 13:01

Apparently, even asteroids can get sunburns.

Newly analyzed high-resolution images from the Hayabusa2 landing on the near-Earth asteroid Ryugu revealed a reddish hue to surface materials. Scientists interpret that coloration to be a result of a brief orbital excursion close to the Sun. When combining this information with previously collected data from Ryugu, scientists can now paint a clearer picture of how and when the asteroid formed, how its orbit has changed over time, and what its surface looks like.

Learning About an Asteroid

Finding a soft landing spot on rocky Ryugu was very difficult for the engineers at JAXA’s Hayabusa2 mission. Credit: JAXA/U. Tokyo/Rikkyo U./Nagoya U./Chiba Inst. Tech/Meiji U./U. Aizu/AIST

The Japan Aerospace Exploration Agency’s (JAXA) spacecraft Hayabusa2 took off for Ryugu in 2014. It arrived in orbit in 2018 and touched down on the asteroid in February 2019 to collect samples for a return to Earth, which will occur later this year. Data from Hayabusa2 have been steadily painting a clearer picture of the asteroid while also revealing surprises at every turn.

Ryugu is 900 meters in diameter and shaped like a spinning top. It is a carbonaceous asteroid, the most common type of asteroid known. Although scientists won’t know for certain until samples return to Earth, they think Ryugu is likely to contain hydrous forms of minerals. The asteroid is covered in highly mobile pebbles and multimeter boulders but no soft dust.

In the latest study on Ryugu, published in May in Science, Tomokatsu Morota of the University of Tokyo and colleagues reported that the asteroid’s surface is very young, is “black like coal with a slightly reddish tint,” and “has lower surface gravity than many other planets and small bodies.” These and other characteristics had previously been reported by scientists studying Ryugu: its rubble pile nature, the presence of some hydrous minerals, and similarities to some carbonaceous chondrite meteorites found on Earth. Together, these features “suggest that Ryugu was formed by accumulation of fragments ejected from a primitive or dehydrated carbonaceous parent asteroid by an impact,” Morota said. In fact, the team reported, it may have formed from several cataclysmic events.

But perhaps the most interesting part of the recent findings is the asteroid’s color. “We were surprised to see that the subsurface material was darker than the surface material,” said Makoto Yoshikawa, Hayabusa2 mission manager at JAXA.

Ryugu’s subsurface was revealed as Hayabusa2 touched down and its thrusters lifted particles in the subsurface layer. The particles were dark black with a reddish tint, whereas the surface was bluer. Ryugu’s boulders also tended to be bright and blueish with sporadic reddish singeing on top. Surface coloration varied by latitude, with bright rocks at the equator and poles and reddish materials at midlatitudes.

Coloration was also revealed as Hayabusa2 shot a tantalum bullet into the surface to make an artificial crater and collect the ejecta for the sample return mission. A boulder that broke in the procedure was found to be 1.5 times brighter inside than out.

The strange coloration hints at two important findings: the asteroid’s young age and its changing orbital history.

Thermal Metamorphism

Potential causes for the coloration include pummeling by small impacts, exposure to space weather, and heating by the Sun. Scientists concluded that solar heating is the most likely reason for the red coloring: The reddish materials were several meters deep, but space weathering usually affects only the top 100 nanometers of surface materials, and there are not enough craters or other evidence to indicate significant pulverization by small impacts.

The team used size frequency distributions of blueish to reddish craters to estimate that Ryugu’s surface reddening occurred between 0.3 million and 8 million years ago, Morota said, whereas the asteroid itself is estimated to have formed between 8.8 million and 16.6 million years ago. The dating suggests that Ryugu spent 8.5 million years in the main asteroid belt and between 300,000 and 8.1 million years in near-Earth orbit. As it shifted orbits, it came near enough to the Sun to be physically affected by solar heat. This proximity probably lasted less than 1 million years, Morota said.
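The quoted figures are internally consistent: subtracting the time spent near Earth from the asteroid's age leaves the reported main belt residence. A quick check follows; pairing the range endpoints this way is our illustrative reading of the numbers, not a calculation from the paper.

```python
# Figures quoted above, in millions of years before present
formation_age = (8.8, 16.6)    # when Ryugu is thought to have formed
near_earth_time = (0.3, 8.1)   # time spent in near-Earth orbit

# Time in the main belt = total age - time near Earth; both ends of the
# quoted ranges give the ~8.5 Myr the study reports:
for age, ne in zip(formation_age, near_earth_time):
    print(f"{age} Myr old, {ne} Myr near Earth -> {age - ne:.1f} Myr in the belt")
```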

Basically, Ryugu got a sunburn, Yoshikawa said.

“The conclusion—that Ryugu must have approached the Sun close to the orbit of Mercury—is very interesting,” Yoshikawa added, “because while we know that theoretically such an event is possible, this paper shows that such a thing really happened in the past.” But, he noted, the team will learn much more once they get the samples back later this year.

Scientists interpret the cause of Ryugu’s surface reddening as solar heating that occurred while Ryugu was temporarily closer to the Sun than its present orbit. Credit: Morota et al., 2020, https://doi.org/10.1126/science.aaz6306

Comparative Analysis

Learning about asteroids like Ryugu and Bennu, another carbonaceous asteroid from which the NASA Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) mission will collect samples later this year, is important for understanding how our solar system evolved.

Moreover, such asteroids “contain natural resources such as water, organics, and precious metals,” which might, “in the future, fuel the exploration of the solar system by robotic and crewed spacecraft,” said Grey Hautaluoma, a NASA spokesperson.

It will be interesting to compare the results from these two missions, Yoshikawa added. “Then I think we will get better understanding of the evolution of small near-Earth asteroids.”

—Megan Sever (@MeganSever4), Science Writer

This Week: The Best of Eos

Fri, 06/05/2020 - 12:58

Visualizing Science: How Color Determines What We See. Obviously, this feature has startlingly beautiful illustrations. But that’s not the only reason I like it. It’s a timely reminder of how we interpret the world around us with our eyes and how much we understand or miss through the embedded coding and expectations we carry with us. And it’s a reminder of the art of science, which is a truly human endeavor. Plus it’s a good read, with a breadth of information that any scientist should digest before tackling their next AGU poster. —Naomi Lubick, International Editor

 

Power Outages, PG&E, and Science’s Flickering Future. One of the most important issues we cover here at Eos is how scientists continually adapt to our changing world. In this two-part series, Jenessa Duncombe reported on how the scheduled blackouts in California due to wildfires affect science experiments that can’t just be paused and restarted. This isn’t a one-time issue: The area’s aging power grid will likely have to be shut down periodically over the next decade. Scientists will need to adapt—as they also adapt to new circumstances like the pandemic—and I look forward to more Eos coverage that reports on solutions to these challenges. —Heather Goss, Editor in Chief

 

Basalts Turn Carbon into Stone for Permanent Storage.

Iceland’s Hellisheiði Geothermal Power Station, above, is the third largest geothermal power station in the world and the site of ongoing mineral carbonation experiments. Credit: Árni Sæberg

This was by far my favorite article to write, from start to finish. It began with a trip to Iceland for a totally separate reason, but while I was there I had the opportunity to visit the Hellisheiði Geothermal Power Station and learn about its operation. The seed for the article was a throwaway line during a tour and one sentence on a display: “We’re researching new ways to lock away the carbon waste in the ground.” And I went, “Huh, that sounds neat. What’s that?” What followed you can read for yourself in the article, but the experience drove home two important lessons for me: One, writers should keep their minds open to new ideas everywhere they go because you never know where you’ll find inspiration. And two, to overcome the climate crisis, society needs to not only drastically cut our greenhouse gas emissions but also invest in new, innovative ways to reverse the damage we’ve already done. —Kimberly Cartier, Staff Writer

 

Basalts Turn Carbon into Stone for Permanent Storage. I have to agree with Kim on this one—I loved this story about a new carbon-storing technology. Kim’s whip-smart reporting and keen ear found this story on an international trip, and I’m so glad she did. It’s rare that we get to talk about uplifting progress when writing about climate change, so Kim’s story stood out as both novel and fascinating. None of us knows what the next decade will bring (or which technologies will bring change), but I know one thing for sure: We need detailed reporting like this on climate change solutions. This is just one of the many reasons I feel lucky to have Kim on staff here at Eos. —Jenessa Duncombe, Staff Writer

 

Lessons from a Post-Eruption Landscape.

Young vegetation greens the landscape near Mount St. Helens in this view from June 2017, 37 years after the volcano’s last major eruption. Credit: Jon Major, U.S. Geological Survey

Retrospective articles about major disasters often amount to retellings of the same series of occurrences leading up to, during, and immediately after the main event. These can be interesting and informative, for sure, particularly when they offer new details or perspectives, but in the case of the May 1980 eruption of Mount St. Helens, so much has been written that it’s difficult to find fresh ground. This article takes a different tack from most, however, focusing on the eruption’s catastrophic effects on the surrounding landscape and the 4 decades of critical research that have gone into studying the recovery and evolution of the slopes, streams, and life around the volcano. As the authors note, “Long-term research on the biophysical responses at Mount St. Helens has provided important new insights, challenged long-standing ideas, and provided many societal benefits,” such as informing our understanding of hazards created by the massive quantities of ash and sediment washed down local rivers. —Timothy Oleson, Science Editor

 

During a Pandemic, Is Oceangoing Research Safe? This was by far my favorite story to report on. I wanted to capture what it felt like to be in the field when the coronavirus outbreak accelerated in March, so I looked around for scientific expeditions under way. I was fortunate enough to find Rainer Lohmann, an oceanographer in the middle of the Atlantic Ocean just off Cape Verde, to share his story with me. I will never forget talking with Lohmann over Skype while his ship idled off the island, hoisting its quarantine flag before heading to port. Lohmann—and so many other scientists and researchers—found themselves in unimaginable situations, and they had to improvise on the fly to stay safe. Even though many of us were sheltering at home during this time, we can all relate to navigating the uncharted waters of the pandemic. —Jenessa Duncombe, Staff Writer

 

Deepwater Horizon and the Rise of the Omics.

Photograph of oil beneath the surface of the Gulf of Mexico during the Deepwater Horizon spill (background). In the inset, microscopic specimens of Candidatus Macondimonas diazotrophica are visible both inside and around the edges of oil droplets (large round shapes) in this microscope image. Credits: Rich Matthews/AP images (photo); Shutterstock/CoreDESIGN (DNA illustration); and Shmruti Karthikeyan (inset)

One of the most interesting stories we’ve published this year, this feature delves into genomics research into microbial communities in the Gulf of Mexico following one of the world’s worst marine oil spills. Before reading the article, I’d thought of gene sequencing only in relation to biology and medicine, so I was fascinated to learn about its use in microbiology and about microbes, including a newly discovered “superbug,” that consume the components of oil, with implications for response to and mitigation of future oil spills. The article is well written and, though not a supereasy read, understandable to this nonscientist. —Faith Ishii, Production Manager

 

Don’t @ Me: What Happened When Climate Skeptics Misused My Work. This article is at the top of my “couldn’t stop scrolling” list for Eos this year, and I’ve read it multiple times. Many of us have heard tales of climate scientists targeted by deniers for just doing their regular job. Most of those stories are about research professors, but what about when this happens to early-career scientists and students? One excerpt that affects me every time is this: “I found myself scrolling through pages of posts jeering at climate scientists and dismissing science as politically motivated propaganda. I felt sick to my stomach that my work had become part of messaging targeting legitimate climate science.” Lucas Zeppetello’s story invokes emotion and offers practical advice for anyone found in the same situation. —Kimberly Cartier, Staff Writer

 

Human Brains Have Tiny Bits of Magnetic Material. Pre-COVID, this fascinated me. Geohealth: Science’s First Responders. During COVID, this made me grateful for geohealth researchers. —Melissa Tribur, Production Specialist

 

Are We Seeing a New Ocean Starting to Form in Africa?



I love this story about Ethiopia’s Erta Ale volcano from Erik Klemetti—it’s an intriguing and fascinating introduction for nonscientists and an endless conversation starter for geoscientists. Plus, you know, hot lava. —Caryl-Sue, Managing Editor

#GeoGRExit: Why Geosciences Programs Are Dropping the GRE

Thu, 06/04/2020 - 13:59

A lot is changing this year in higher education. Amid the ongoing pandemic caused by the infectious coronavirus disease 2019 (COVID-19), universities and graduate schools have had to adapt to entirely online instruction and have canceled fieldwork, closed labs, and faced declining revenues. These immediate changes have been forced upon programs by necessity, and they, along with negative impacts on many students from the current pandemic, will likely continue affecting higher education in the near future by, for example, decreasing application numbers. To bolster fall admissions, some graduate programs are temporarily dropping the Graduate Record Examinations (GRE) as an admissions requirement. However, dropping the GRE altogether, as a step toward equity and inclusivity in graduate admissions and education, has been a longer-term battle, with many terming it #GRExit on social media.

The GRE is a standardized test widely used as a requirement for U.S. and Canadian graduate admissions since the 1950s. The earliest versions of the GRE were first tested on students at Harvard, Yale, Princeton, and Columbia in 1936, 3 decades before those universities became fully coed, with the test standardized by 1949. The test was overhauled in 2011, but research continues to show that it is not an accurate predictor of graduate school success, that scores are commonly misused and misinterpreted by admissions committees, and that the test is biased against women compared to men and against people of color compared to white and Asian people [Miller and Stassun, 2014]. The burden of taking the test, and the impact of low scores, limits graduate school access for underrepresented groups [Miller et al., 2019].

The geosciences are some of the least diverse science, technology, engineering, and mathematics (STEM) fields, especially at higher levels. More than 90% of geoscience doctoral degrees in the United States are awarded to white people, and there has been no significant change in 40 years [Bernard and Cooperdock, 2018]. Structural and social barriers result in underrepresented minority students, both undergraduate and graduate students, leaving the field, which compounds the lack of diversity at the faculty level. The lack of diversity and inclusion hurts the geosciences, excluding voices that can help solve Earth’s most critical problems. Geoscience faculty must understand, acknowledge, and address individual and institutional biases to improve inclusion in our field. One simple way to improve diversity in geoscience graduate programs is to drop the GRE requirement for graduate admissions.

Why #GRExit?

First, “the GRE does not test the skill set and knowledge base to be a strong scientist,” Shirley Malcom, director of education and human resources programs at the American Association for the Advancement of Science, told us recently. “Nor does it test the ability to form strong research questions, conduct research, and synthesize results for consumption by other scientists and the public.” Like other standardized tests, the GRE mostly tests a person’s ability to take a standardized test.

Several studies have shown that performance on the GRE is a poor predictor of graduate degree success across fields. For example, researchers tracked more than 1,800 doctoral students in STEM fields and found little correlation between GRE scores and degree completion. In fact, men with the lowest GRE scores finished their doctoral programs more frequently than those with the highest scores [Petersen et al., 2018]. Moneta-Koehler et al. [2017] found that the GRE did not assess skills and fortitude for biomedical graduate programs: GRE scores had no predictive capabilities for who would graduate, pass qualifying exams, publish more papers, and obtain grants or for any other measure of success.

Second, the GRE poses a significant financial burden to economically disadvantaged students. As of 2020, the test costs $205 to take and $27 for each official score sent to an institution to which a student applies. GRE books are an additional cost, and preparation courses can cost thousands of dollars. On top of these costs, lost wages from taking time off to travel to a testing center or attend classes, plus paying for childcare during this time, put an overwhelming burden on economically disadvantaged students.
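Those line items add up quickly. Here is a rough tally for a hypothetical applicant; the test and score-report fees are the 2020 prices quoted above, while the prep and lost-wage figures are invented for illustration.

```python
def gre_cost(n_schools, prep=0, lost_wages_and_childcare=0):
    """Rough out-of-pocket GRE cost in U.S. dollars (2020 fees quoted above)."""
    TEST_FEE = 205     # taking the test once
    SCORE_REPORT = 27  # per institution receiving official scores
    return TEST_FEE + SCORE_REPORT * n_schools + prep + lost_wages_and_childcare

# Applying to 6 programs with one prep book and a day of lost wages:
print(gre_cost(6, prep=40, lost_wages_and_childcare=120))  # 527
```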

Third, the GRE has been shown to effectively predict sex and race. Petersen et al. [2018] showed that there was “a significant gender effect” in GRE quantitative (Q) scores: Men averaged far higher scores than women, but no significant gender differences were seen in any other measure of success, including degree completion percentage. Further, Miller and Stassun [2014] showed that minorities also scored far lower than white and Asian people—for example, 82% of white and Asian applicants scored above 700 on the GRE Q, but only 5.2% of minorities did—meaning that if GRE scores provided an arbitrary cutoff for admissions, many underrepresented minorities, Asian women, and white women would not even be considered for admissions.

The #GRExit Movement Grows

In response to the shortcomings listed above, the 2019–2020 academic year has seen a major increase in geosciences programs dropping the GRE from admission requirements: From May to December 2019, the number of geosciences programs that dropped the GRE or made it optional rose from 0 to 30. The movement to remove GRE requirements for graduate school admissions started in the life sciences. The geosciences movement built on the bioscience #GRExit movement and a crowdsourced database of programs that have abandoned the GRE. In September 2019, lead author Sarah Ledford created a similar #GeoGRExit database of programs no longer requiring the GRE, which students can reference when applying to graduate school.

Spring 2020 marked the first round of applications following when many geosciences programs dropped the GRE requirement. Long-term monitoring of applicants and acceptances will be necessary to determine whether removing the GRE changes the numbers of minorities and white women in geosciences graduate programs and whether removing the GRE affects student success rates.

Initial anecdotal evidence indicates that graduate programs that removed the GRE requirement had higher overall numbers of applicants, as well as higher percentages of underrepresented minority applicants and acceptances. In Boise State’s Department of Geosciences, the number of applications increased substantially in the first applicant pool after the department dropped the GRE requirement in 2019. Across the multiple doctoral programs administered by the department, the total number of applicants was more than double the previous maximum and more than 4 times the number from the previous year. After the GRE was dropped, initial offers for admission and funding offers were balanced across gender.

In Georgia Tech’s School of Earth and Atmospheric Sciences, the percentage of underrepresented minority graduate applicants increased from a low within the past 8 years of 6% to 13% in 2020, the first applicant pool after the program dropped its GRE requirement. Of the accepted applications this spring, 23% were from underrepresented minorities, compared with 5%–18% over the past 8 years.

Advice on How to #GeoGRExit

Here we present some tips on how to approach the #GeoGRExit process from faculty whose departments successfully dropped the GRE.

First, arm yourself with data. Knowing the ample, peer-reviewed literature about the inequalities inherent in the test, and sharing it with faculty, has been an important approach in convincing departments to drop the requirement. Prior to the successful faculty vote to drop the GRE by Georgia Tech’s School of Earth and Atmospheric Sciences, coauthor Kim Cobb gave a presentation to her colleagues about compiled research on established biases in the GRE and how it is not a successful indicator of student success in graduate school.

Second, prepare for pushback. Many faculty have been using the GRE as an admissions metric for years without considering how it is removing strong candidates from their pool. Strike up conversations with these faculty informally to get a sense of their position, so you know where you are starting. Encourage dialogue among faculty to provide opportunities to catalog concerns about changes in admissions processes and evaluate whether those concerns are borne out by data.

Third, do your homework with the university as a whole. Find out if other programs at your university have dropped the GRE; if so, they may already have built a framework that could save your department time and effort. You should be aware of your university’s broader requirements for graduate admissions as well: Some schools have dropped the GRE from consideration for department-level admissions while still requiring it for the university application and thus still imposing financial burdens on applicants. (Temporary changes in admissions processes made by schools during the current pandemic might spur effective pushes for permanent university-wide changes in GRE requirements, although that remains to be seen.) It is also important to check whether the GRE is required for other elements within the application process, such as fellowships.

A Better Measure of Applicants

The graduate admissions process should move away from numerical rankings of students to more holistic evaluations of entire applications. Graduate programs need to clearly articulate what skills are required of applicants and use those as criteria for admissions. It is essential to remember that graduate students are trainees and will gain most of their research and technical skills in graduate school and beyond.

The overarching concept of holistic review, which emphasizes assessment of noncognitive skills, is receiving increased attention from graduate administrators [Kent and McCarthy, 2016]. Graduate programs have the opportunity to base decisions on assessments of skills and character attributes “such as drive, diligence, and the willingness to take scientific risks,” as Miller and Stassun [2014, p. 303] put it, which research has shown are more predictive of future success in STEM workforces than GRE scores.

There are no guidelines yet for what exactly programs should include in holistic reviews, but interviews with applicants would be very telling, as noted in the 2016 “Holistic Review in Graduate Admissions” report. Other application criteria, like GPA and letters of reference, should also be considered, but they can be susceptible to pitfalls. GPAs and institutional prestige are often unconsciously weighted more than is warranted. Overreliance on reference letters is also problematic; many of the gatekeeping techniques that hinder equity and diversity are strongly reflected in reference letters [Faulkes, 2019]. We acknowledge that not every program has time to interview every graduate student candidate, but as with job interviews, time spent interviewing a short list of prospective students will result in selection of stronger candidates.

Implicit biases will continue to hamper the progress of minorities in STEM. As an outdated, expensive, and biased test, the GRE exacerbates such biases. Not only is it irrelevant for American higher education in the 21st century, it arguably threatens scientific progress. Given the interdisciplinary and synthetic nature of Earth science subdisciplines like climate and critical zone science, placing emphasis on noncognitive skills has the potential to enhance diversity, inclusion, and access in the field while accelerating scientific discovery and innovation.

Below the Great Pacific Garbage Patch: More Garbage

Thu, 06/04/2020 - 13:57

Scientists have found a new monster lurking in the deep: plastic. Discarded plastic floating in the ocean has been a recognized issue for decades, but the extent to which it might be polluting beneath the waves has largely been unknown. New research is finding there is more plastic than what appears on the surface.

For the first time, the researchers were able to qualitatively show the vertical distribution of plastic mass and concentrations below the surface. Credit: The Ocean Cleanup

“We have a very limited understanding of where all the plastic and stuff that’s being put into the ocean ends up,” said Matthias Egger, the lead scientist behind the new study and a researcher with The Ocean Cleanup. “We know roughly there’s tens of millions of tons of plastics going into the ocean. A large part of that should be afloat, but it’s not.”

This mystery is known as the missing plastic problem. Plastic found adrift in the ocean makes up only 1% of what should be out there, though a large portion is thought to circulate in coastal environments. The largest known reservoirs of plastic at sea are giant, swirling “garbage patches” that can stretch over areas twice the size of Texas. Could some of the missing plastic have sunk beneath these massive gyres?

“Almost everything we know about plastics in the ocean is really from the surface,” said Erik van Sebille, an oceanographer and climate scientist at Utrecht University who was not involved with the new research. “This really is the first time that a group has really systematically gone and done a transect and looked at the amount of plastic in the upper 2 kilometers.”

Into the Deep

On a calm day, the Great Pacific Garbage Patch looks like a reflection of the night sky, with shining pieces of plastic speckled across or just below the surface in every direction. Scientists estimate it is home to some 80,000 metric tons of plastic brought together in the North Pacific Gyre by ocean currents. It was there that Egger and his team dropped nets to sample as much as 2,000 meters below the surface.

The results, published in Scientific Reports, revealed microplastics at every depth sampled. Analyzing over 12,000 plastic fragments, the researchers found that the types of plastics in the depths matched what was seen on the surface, providing the first evidence of fallout from the garbage patches. Although plastic is usually buoyant, it can start to sink as it becomes mixed with heavier sediment or is colonized by algae and other marine life, a process known as biofouling. The scientists estimated that between 5 and 2,000 meters below the surface, the total mass of plastic pieces smaller than 5 centimeters is 56%–80% of what is seen at the surface. In total, these plastics could amount to 10% of the weight of plastics on the surface.

A Drop in the Plastic Ocean

The new research, despite its deep sampling, is still just dipping beneath the surface. There are several thousand meters more to reach the ocean floor, and there’s evidence that there could be a large amount of plastic just nanometers in size—10 to 100 times smaller than what is typically measured in the ocean. Additionally, although transects help showcase the variability in plastic distribution, scientists still don’t understand how currents circulate plastics, particularly subsurface.

However, the findings do reveal a new chapter in the ocean plastics story: what happens to plastic permanently adrift. The results support the theory that most of the missing plastic circulates in coastal regions but also show how plastics are already filtering into the deep ocean. Understanding the amount of subsurface plastic in garbage patches is key to understanding the longevity of these Anthropocene icons.

“It’s really important information for us to be able to say anything or make any predictions,” Egger said. “Let’s say we stopped all emissions into the ocean; how long will these garbage patches still persist?”

—Mara Johnson-Groh (marakjg@gmail.com), Science Writer

Everything’s Coming Up Roses for Pasadena Seismologists

Thu, 06/04/2020 - 13:54

The 2020 Tournament of Roses Parade was watched by 700,000 spectators as it wound its way through the streets of Pasadena, Calif., and was broadcast live to another 7 million people.

A few scientists were also eavesdropping on the pavement-pounding event using a new type of seismic array. The technique could inspire a few new parade prize categories—such as heaviest float, loudest band, and most coordinated marching—as well as revolutionize seismic monitoring in both urban and offshore settings.

The Pasadena distributed acoustic sensing (DAS) array was installed by researchers at the California Institute of Technology (Caltech) in November 2019 by converting two strands of unused telecommunications fiber-optic cable into a citywide seismic instrument. “The primary goal of the Pasadena Array is to detect small earthquakes, but we have not yet had any good-sized earthquakes in the city to test the system,” said Zhongwen Zhan, a seismologist at Caltech in Pasadena and an author of the new study, published in Seismological Research Letters.

“The Rose Parade is a well-controlled event,” Zhan said, with no other traffic except the parade, which travels in a uniform direction at almost constant speed. “This provided a rare opportunity to calibrate the new seismic network.”

Everyone Loves a Parade

The Rose Parade route followed the Pasadena distributed acoustic sensing (DAS) array for 2.5 kilometers, creating a detailed report of the ground vibrations generated by motorcycles, floats, and other vehicles. Credit: Wang et al., 2020, https://doi.org/10.1785/0220200091

Zhan and colleagues drew upon data collected by a stretch of fiber-optic cable that supports around 400 seismic sensors. The 2.5-kilometer cable runs directly beneath the parade route. “The data turned out to be even better than we thought,” Zhan said, capturing both short- and long-period vibrations with “remarkable” resolution down to a few meters.
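Those figures imply a sensing channel every few meters of cable, consistent with the meter-scale resolution the team reports. A quick check, assuming the roughly 400 channels are spread uniformly along the 2.5 kilometers:

```python
cable_length_m = 2_500
n_channels = 400  # sensing channels along the cable, per the figures above

print(f"~{cable_length_m / n_channels:.2f} m between channels")  # ~6.25 m
```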

The seismic record shows the zigzag patterns of police motorcycles used to clear the route; the long and steady vibrations as the road flexes under massively heavy parade floats; and the coordinated, harmonic steps of marching bands. A distinctive gap appears when the Mrs. Meyer’s Clean Day float got stuck at a tight turn and backed up parade traffic for 6 minutes.

The prize for the heaviest float went to Amazon Studios, which had a real bus and rocket mounted on a truck.


Armageddon at 10,000 B.C.

Thu, 06/04/2020 - 13:51

This article was published by Eos in an authorized Spanish translation; it is presented here in English.

Abu Hureyra is an important archaeological site in Syria, known for finds documenting the early adoption of agriculture in the region. It may also come to be recognized as the only human settlement known to have been struck by a fragment of a comet.

The site, now beneath the waters of Lake Assad, was excavated hurriedly in 1972 and 1973, before construction of the Tabqa Dam flooded the area. During the excavation, archaeologists realized there were really two sites, one on top of the other. The first was a Paleolithic hunter-gatherer settlement, and the second was a farming town, with new buildings of a different style.

A new analysis of soil samples and artifacts rescued from the original excavation has revealed a surprising finding: The Paleolithic village of Abu Hureyra was struck and indirectly destroyed by fragments of a comet that crashed into Earth about 12,800 years ago.

The researchers think that, upon entering Earth's atmosphere, the already fractured comet probably split into several more pieces, many of which never reached the ground. Instead, they produced a series of explosions in the atmosphere known as airbursts. Each airburst was as powerful as a nuclear explosion, instantly vaporizing the soil and vegetation below and producing powerful shock waves that destroyed everything for tens of kilometers around. Abu Hureyra was hit by one of these shock waves.

“When we excavated the site in 1973, I realized there was a heavily burned area, but of course, back then I wasn't thinking about comets or asteroids or anything like that,” said Andrew Moore, an archaeologist and professor at the Rochester Institute of Technology in New York, who led the excavation at Abu Hureyra. Moore is the first author of the new study, which appeared online on 6 March in Scientific Reports. “We now know that a great fire resulted from the entire village going up in smoke, thanks to this airburst that incinerated the whole place.”

A multidisciplinary group of scientists discovered that some soil samples from Abu Hureyra were full of small pieces of meltglass, tiny bits of vaporized soil that solidified quickly after the explosion. They found meltglass among seeds and cereal grains recovered from the site, as well as spattered on the roofing of buildings. Most of these pieces of meltglass are between 1 and 2 millimeters across. The team also found high concentrations of microscopic nanodiamonds, small carbon spherules, and burned charcoal, all probably formed during a cosmic impact.

“We found glass spattered on small pieces of bone that were close to the hearths, so we know the molten glass landed there while the village was inhabited,” said coauthor Allen West, a member of the Comet Research Group, a nonprofit organization dedicated to studying this particular cosmic impact and its consequences.

Cosmic Origin

The meltglass's cosmic origin is supported by the minerals it contains. The meltglass found at Abu Hureyra contains particles of minerals such as quartz, chromferide, and magnetite, which melt only at temperatures between 1,720°C and 2,200°C.

“You have to use very sophisticated scientific analytical techniques to see these things, but once you see them, there is absolutely no doubt about what you are dealing with, and there is only one explanation,” said Moore. “This melted glass required an enormous amount of heat, far more than a group of hunter-gatherers could generate on their own.”

Natural sources such as fire and volcanism have also been ruled out because they cannot reach the required temperatures. Lightning reaches temperatures that melt sediments and produce glass, but it also creates magnetic signatures that are not present in the Abu Hureyra meltglass.

“This cannot be the result of fires,” said Peter Schultz, a geologist and planetary scientist at Brown University in Rhode Island who was not involved in the new study. “Their results strongly support their conclusions that an impact or, more likely, an airburst occurred in the region.”

“Those temperatures would turn your car into a pool of molten metal in less than a minute,” said West.

Chasing Comets

Abu Hureyra lies at the easternmost edge of what is known as the Younger Dryas Boundary, a series of sites in the Americas, Europe, and the Middle East where there is evidence of a cosmic impact occurring toward the end of the Pleistocene. This evidence includes a carbon-rich layer known as the “black mat” that contains large amounts of impact-generated nanodiamonds, metallic spherules, and higher-than-usual concentrations of rare elements such as iridium, platinum, and nickel. It also contains burned charcoal, hinting at widespread wildfires that could have incinerated up to 10% of all the planet's forested areas.

The Younger Dryas impact hypothesis holds that the impact altered Earth's climate, causing a cold spell that lasted 1,300 years. Temperatures dropped by an average of 10°C, and the climate became drier, particularly in the Middle East.

Some researchers think the impact and the ensuing climate change could have hastened the extinction of most of the planet's large animals, including mammoths, saber-toothed cats, and American horses and camels. It could also have disrupted the Clovis culture in North America, which disappeared around that time.

Becoming Farmers

Archaeologists also link the Younger Dryas event to the beginning of systematic agriculture in the Middle East. “We already knew that the shift from hunting and gathering to agriculture coincided with the onset of the Younger Dryas, so we already knew it looked like climate change had played a role in persuading the village's people to take up farming,” Moore said. “Of course, we didn't know what had caused the Younger Dryas.”

“No creo que la gente de Abu Hureyra la haya inventado”, dijo Moore, “pero Abu Hureyra es el primer sitio donde podríamos decir que algo como la agricultura sistemática está definitivamente comenzando”.La datación por carbono en Abu Hureyra reveló que la aldea fue reconstruida poco después del impacto por personas que utilizaron el mismo tipo de herramientas de hueso y sílex que los primeros ocupantes del asentamiento. “No hubo absolutamente ningún cambio en el equipo cultural”, dijo Moore, lo que sugiere que fue el mismo grupo de personas que reconstruyó la aldea. Tal vez, piensa Moore, algunos miembros de la aldea estaban cazando o recolectando comida y pudieron regresar.

Solo que esta vez hicieron cambios sustanciales en su economía. “No creo que la gente de Abu Hureyra la haya inventado”, dijo Moore, “pero Abu Hureyra es el primer sitio donde podríamos decir que algo como la agricultura sistemática está definitivamente comenzando”.

“En las circunstancias climáticas completamente cambiadas, comenzaron a cultivar, comenzaron a cultivar campos de centeno y luego, con el tiempo, trigo y cebada, y finalmente, también comenzaron a criar ganado con ovejas y cabras”, dijo Moore. Con el tiempo, “la cosa se convirtió en un enorme asentamiento con varios miles de habitantes, y se convirtió en la aldea dominante en esa parte de Siria”.

—Javier Barbuzano (@javibarbuzano), Science Writer

Eruption and Emissions Take Credit for Ocean Carbon Sink Changes

Wed, 06/03/2020 - 13:00

The ocean’s capacity to take up carbon is a balance between the amount of carbon dioxide in the atmosphere and the state of the ocean. Reduced emissions in the 1990s resulted in lower atmospheric carbon dioxide, which slowed the ocean’s uptake of carbon. At the same time, the eruption of Mount Pinatubo in 1991 produced cooler sea surface temperatures followed by warming in the latter part of the decade that changed the state of the ocean, which, in turn, shifted the timing of the carbon sink slowing.

McKinley et al. [2020] now provide a model-based analysis that evaluates the ocean’s response when these two effects coincided, to explain the slowing of the ocean carbon sink during the 1990s. The confluence of these events yielded a net carbon uptake by the ocean that was less than expected. These results highlight the importance of external processes in controlling decadal variability in the ocean carbon sink and point to the uncertainty introduced by extreme events.

Citation: McKinley, G., Fay, A., Eddebbar, Y., Gloege, L., & Lovenduski, N. [2020]. External forcing explains recent decadal variability of the ocean carbon sink. AGU Advances, 2, e2019AV000149. https://doi.org/10.1029/2019AV000149

—Eileen Hofmann, Editor, AGU Advances

Read more highlights from AGU Advances: agu.org/advances-digest

JGR: Space Physics Seeks Submissions on Underrepresented Topics

Wed, 06/03/2020 - 12:37

In January I took over as Editor in Chief of JGR: Space Physics. It’s a great honor to serve in this position and I will continue the work of my predecessor, Mike Liemohn, in maintaining high scientific standards, upheld through a vigorous peer review process, and the commitment and expertise of the editorial board.

JGR: Space Physics is the leading scientific journal in the broad field of space physics. Key to its stature and success are the scientists who submit the exciting results of their novel and original research. To ensure that the journal represents and publishes the latest developments in space physics, I would like to see growth and evolution in three particular areas.

First, I would like to encourage papers relating to spacecraft scientific instrumentation and computer simulations. Many space missions focus on important and unsolved problems of space physics, and the lion’s share of the best scientific results are published in JGR: Space Physics. These results are based on data generated by the scientific instruments onboard these missions. Thus, the journal should also be a natural home for publications describing these instruments. Such publications would be useful for promoting and understanding the potential (and limitations) of past, current, and future instrument data sets.

Second, I welcome submissions of reports that describe novel developments of numerical models. At the very beginning of the era of in situ space measurements, most space physics results published in the journal were based on either observational or theoretical studies. Since that time, scientific results achieved by the exploitation of numerical models have grown immensely. When the results of a numerical model are published in JGR: Space Physics, the journal should also give the model’s developers an opportunity to familiarize the space physics community with the model itself.

Third, I am keen to increase the proportion of papers in the journal relating to solar physics and to theoretical results. Looking at the subject matter of papers published in the journal over the decades, the proportion devoted to various aspects of solar physics and to the development of theory has been decreasing. One of my objectives is to reverse these trends.

In addition to diversifying the breadth of topics in JGR: Space Physics, I would like the journal to publish more review articles. Reviews present a comprehensive summary of the current state of understanding in a particular field, and also provide extra exposure to the most recent advances and significant findings. The editorial board will select topics and identify scientists to deliver these reviews over the coming years.

Commissioning some new special collections is also planned. AGU statistics show that papers in special collections tend to have a higher number of citations and greater impact than papers on similar topics that are not part of a collection. For researchers with interests in a particular topic, it is easier to find and explore a thematic collection than to track down individually published papers.

Space physics is a very broad scientific field, and our community is large and diverse, so it is a challenge to be the publishing home for everyone.

To remain the flagship journal of the space physics community we must be flexible in response to evolving trends, and welcome papers in new and underrepresented fields.

Together with the editors I will also be supporting the community in understanding how the FAIR data standards apply to our data, models, and instruments.

I will only be able to advance towards these goals with the support of the scientists who submit their best work to our journal. I welcome feedback on these ideas as we take JGR: Space Physics forward over the next few years.

—Michael Balikhin (m.balikhin@sheffield.ac.uk), University of Sheffield, UK

Deep Biases Prevent Diverse Talent from Advancing

Wed, 06/03/2020 - 12:35

Does groundbreaking scientific work lead to a successful academic career? According to a recent study, it may depend on your race or gender.

If diversity in science leads to innovation and innovation leads to career success, then it should follow that students from diverse backgrounds will have successful careers. A new study, however, finds the opposite is true. In fact, it shows that although underrepresented scholars in science-related fields are more likely to innovate, they’re also less likely than their majority-group peers to earn influential academic positions—what the authors call a diversity-innovation paradox.

How to explain it? The study, published in the Proceedings of the National Academy of Sciences of the United States of America, posits that the work of students from traditionally underrepresented groups is discounted and devalued, preventing their contributions, however potentially impactful, from finding traction in the scientific community.

“What we find that partially explains the devaluation is that underrepresented groups introduce ideas that…perhaps bring concepts together that are more distal from one another,” said study colead Bas Hofstra, a postdoctoral research fellow at the Stanford University Graduate School of Education. “That’s somewhat suggestive that these ideas are difficult to parse and difficult to place, and maybe the majority has a disproportionate say in which ideas are useful.”

A Striking Result

To reach their conclusions, Hofstra and his coauthors looked at a near-complete record of Ph.D. theses published in the United States between 1977 and 2015. Analyzing data such as names, institutions, thesis titles, and abstracts, they determined whether students belonged to an underrepresented group and whether they introduced novel concepts in their fields. Researchers then looked at the theses authors’ career trajectories, searching specifically for continued careers in academic research.

What researchers found was that the less likely a student’s racial and gender groups were to be represented in their field—for instance, a woman in a predominately male field or an African American in a predominately white field—the more likely they were to introduce novel conceptual linkages, defined by the authors as having first linked meaningful concepts in a thesis. According to the study, this higher rate of innovation is a result of the unique perspectives and experiences brought by these individuals, who “often draw relations between ideas and concepts that have been traditionally missed or ignored.”
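As a toy illustration of how such “first linkages” might be counted, consider the sketch below; the data, concept names, and matching logic are all hypothetical stand-ins, and the study’s actual text-analysis pipeline is far more sophisticated.

```python
# Toy count of "novel conceptual linkages" (hypothetical data; the study's
# actual text-analysis pipeline is far more sophisticated). A thesis is
# credited with an innovation the first time any thesis links a given pair
# of concepts.
from itertools import combinations

theses = [  # (year, concepts extracted from a thesis abstract) -- made up
    (1995, {"plate tectonics", "neural networks"}),
    (1996, {"neural networks", "seismology"}),
    (1997, {"plate tectonics", "neural networks"}),  # repeated pair: not novel
]

seen = set()
for year, concepts in sorted(theses):
    pairs = {frozenset(p) for p in combinations(sorted(concepts), 2)}
    novel = pairs - seen
    print(year, "novel linkages:", [tuple(sorted(p)) for p in novel])
    seen |= pairs
```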

However, these students were also less likely to have their novel concepts adopted by their peers, with analysis suggesting that overall, nonwhite men and women and white women innovate at higher rates than white men, but the innovations of white men go on to have a higher impact.

Lisa White, director of education and outreach at the University of California Museum of Paleontology, chair of AGU’s Diversity and Inclusion Advisory Committee, and the Eos Science Adviser for Diversity and Inclusion, called the study “striking” and said the science community should continue to learn from work like this.

“What struck me the most was just how deep the biases continue to run in professional circles…preventing underrepresented students from advancing,” said White, who was not involved in the study. “There really has to be more attention paid to how we’re addressing biases in the way we evaluate research quality and potential for career success.”

Evaluating Careers in Science

Hofstra said many institutions are working to increase diversity and equality in science even while the study shows that a significant portion of scientific discovery is guided by biases that align with gender and racial signals. “Being aware and actually pinpointing when and where these biases creep into the evaluation of science is a first step, or at least an additional step, to try and correct [the paradox],” he said.

The study looks specifically at whether scholars have gone on to successful academic careers, for instance, whether they’ve become a research faculty member or continued to be a research-active scientist. White said that although she acknowledges that individuals in research-intensive positions at labs and universities are pushing the envelope in science, it’s worth noting that many Ph.D. students have successful careers outside of research and academia.

“There are plenty of underrepresented individuals who go on to great careers in science,” White said. “They may be at universities or in professional appointments that perhaps don’t garner as much high-profile attention.…And [the students] don’t see that at all as an alternative path or second choice.”

It’s Going to Take More

Although the loss of individual contributions to science and continued research by promising Ph.D. students is a clear outcome of the diversity-innovation paradox, the disparity also has broader implications for the science education community. Fewer underrepresented identities in positions of leadership and influence, for instance, mean fewer role models for underrepresented students, whose numbers in degree programs have been increasing. According to the American Council on Education (ACE), in fall 2018 women made up 51% of undergraduate science, technology, engineering, and mathematics (STEM) majors but less than a quarter of STEM faculty members.

For underrepresented students, seeing fewer role models in faculty and high-level administration may be among the barriers they face to success in degree programs. ACE cites research showing that women who have role models perform better in math and science, and women science majors who see female STEM professors as role models can better envision themselves in a similar career.

“If you don’t identify with scholars and if their intellectual pursuits aren’t related to yours, then that can be quite a barrier,” said study colead Daniel A. McFarland, a professor of education at Stanford’s Graduate School of Education.

“If [underrepresented students] are not able to find support,” Hofstra added, and “if they’re not able to find a mentorship, then that entry point from doctorate to faculty or research position becomes particularly hard.”

McFarland said that although the scientific enterprise is greatly strengthened by consensus and established standards, those same aspects can hide biases. “Societies and communities have biases, and certain groups are more represented in their opinions than others,” he said. “Science is no different, and we have to be vigilant there. I think science’s great advantage is that it continually questions and interrogates things, and this same interrogation can be applied to the scientific enterprise itself. By recognizing bias and constantly trying to rectify it, science will only improve. We just want to speed up and assist in that process.”

Although certain positive steps are being taken to diversify faculty—such as training hiring committees on implicit bias and requiring diversity and inclusion statements on applications—White said it’s not enough and that administrators at leading universities need to continue to put pressure on hiring committees.

“It’s going to take much more,” White said. “A university may make a great hire or couple of hires…and then they may pause because they think they’ve achieved some progress, [but] we can’t relax on this at all. When people in leadership positions continue to misjudge and undervalue how innovative people of color can be in science, there are consequential outcomes.”

—Korena Di Roma Howley (korenahowley@gmail.com), Science Writer

Fieldwork in the Experimental Lakes Area Adapts to COVID-19

Wed, 06/03/2020 - 12:34

The International Institute for Sustainable Development Experimental Lakes Area (IISD-ELA) sits in a remote wilderness area of northwestern Ontario, Canada. Of the thousands of glacially scoured water bodies that dot this region, its 58 lakes are part of a unique living laboratory where the hydrology, chemistry, and limnology—the biological study of lakes—have been studied since 1968.

Over the 52 years of data collection at IISD-ELA, there have been numerous bumps along the road. Funding crunches and shifting government priorities have more than once threatened the station’s existence. In 1973, a windstorm tore up trees like toothpicks. A 1980 forest fire nearly consumed the field station’s buildings. After enduring these onslaughts, as well as severe snowstorms and floods, the field station now faces a new challenge: the coronavirus disease of 2019, or COVID-19.

First opened as a temporary location for research on nutrient pollution, the internationally renowned site has become famous for its scientific impact on international policies around air pollution and phosphates, as well as decades of data sets on hydrology, water chemistry, biology, and climate.

At IISD-ELA, whole lakes are manipulated like giant, ecologically functioning test tubes, with untouched lakes acting as experimental controls. Whole-lake experimental manipulations have involved additions of acid, nutrients, mercury, radioactivity, iron, and synthetic estrogens, allowing researchers to see the environmental impacts of these stressors on freshwater systems. Another unplanned long-term experiment—climate change—has provided an additional limnological learning opportunity.

Defining a New Normal

IISD-ELA biologist Lee Hrenchuk measures a white sucker, one of the Experimental Lakes Area’s native fish, at an outdoor measuring station during summer fieldwork in 2019. Credit: Lesley Evans Ogden

In a normal year, summer is the research station’s busiest season. As the icy winter thaws, more than 50 researchers and staff typically converge to live and work in this land of lakes and curious minds. Field research usually involves close collaboration, shared eating spaces, and centralized sleeping quarters in dormitories. But group living presents a formidable challenge amid the novel coronavirus pandemic. To balance safety with continuity of the site’s 52-year unbroken data sets, IISD-ELA has adapted to a new normal—at least for the time being.

IISD-ELA staff biologist Lee Hrenchuk was already at IISD-ELA doing regularly scheduled winter sampling as the new reality began to dawn. As part of the station’s fish crew, she regularly catches sentinel species like lake trout in experimental and control lakes to study their growth rates, condition, and survival.

Hrenchuk and her colleagues quickly assessed how normal protocols might be adapted.

“We sat down one evening and spent 3 or 4 hours hashing out what we thought would be a doable sampling protocol,” said Hrenchuk, explaining that it needed to be one requiring a minimum of people and time.

Even before COVID-19, staff at IISD-ELA knew that because of the communal nature of life at a remote field camp, infections can spread easily. “Once a cold or a flu gets into camp, everybody gets it,” said the site’s head research scientist Vince Palace.

At the site, summer fieldwork has already begun, and instead of 50 or more residents in camp—a number that sometimes rises to 100 visitors including daytime tours—the small crew will include, at most, eight scientists, two facilities maintenance staff, and one cook.

Prior to heading to the site, every researcher will self-isolate at home for 14 days. Staff alternates will also self-isolate so they are available for work in case someone at camp gets sick.

Vehicles will be cleaned when individuals arrive at camp and again before they leave. Whereas accommodations are usually shared dormitories, this summer each researcher will have a separate sleeping cabin. Gloves have always been worn when handling fish, explained Hrenchuk, but other protocols like physical distancing will be carefully observed when getting in and out of boats, using walking trails, going into buildings and vehicles, and collecting data from flowmeters and meteorological stations. Any shared equipment will be frequently sanitized, and chemical analyses of water samples will be done by the chemistry staff in a laboratory building on site.

In Hungry Hall, where field crew eat, dining tables are now spread far apart. “Shouting across the dining room to have a conversation—stuff like that is different,” said Hrenchuk, emphasizing that new procedures are all about physical separation. As a reminder, IISD-ELA even created some infographics explaining social distancing in site-specific terms.

Credit: IISD Experimental Lakes Area

One of the unfortunate repercussions of COVID-19 will be pressing pause on new research. Continuity for the long-term data collection is now the priority, so most other research, like investigating the impacts of oil spills, microplastics, and cannabis and other drugs on freshwater ecosystems, has been delayed.

Hrenchuk acknowledged that it will be a very different summer at IISD-ELA. Nevertheless, despite the long days and hard work that field research will entail with just a few crew members doing the job of many, when it comes to getting out to the lakes this summer after sheltering in place in the city, she said, “we are all excited to leave our house.”

—Lesley Evans Ogden (@ljevanso), Science Writer

Extremely High Carbon Return in Certain Volcanic Arcs

Wed, 06/03/2020 - 11:30

The efficiency of carbon recycling at volcanic arcs describes the degree to which subducted carbon in sediment and altered oceanic crust escapes into the forearc region, is degassed in arc volcanism, or gets subducted deeper into the Earth’s mantle. By examining the carbon recycling efficiency of arcs at individual subduction zones, a more accurate model of global carbon recycling will become possible.

Previous estimates for the release of subducted carbon along various volcanic arc systems range considerably, from very little (under 25%) to amounts that are greater than the expected sedimentary carbon being subducted. A critical gap in our understanding of carbon recycling efficiency at arcs comes from poor constraints on the amount of carbon in altered oceanic crust below the sedimentary carbon.

Li et al. [2020] used two drill cores off the Lesser Antilles trench to constrain the average carbon isotope values and total carbon concentrations in the subducted slab, finding an input of roughly 1.28 × 10¹⁰ mol C/yr with an average δ¹³C of −2.7 per mil. Using instrument data to constrain a model of volcanic arc degassing, they found that the release of carbon along the central to northern Lesser Antilles arc is essentially the same as this input, implying 100% carbon recycling efficiency.
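In flux-bookkeeping terms, the comparison can be written compactly as follows (a sketch in our own notation, not necessarily the authors’):

```latex
% Recycling efficiency as the ratio of arc output to slab input (notation ours)
\[
  \eta_{\mathrm{arc}} = \frac{F_{\mathrm{arc}}}{F_{\mathrm{slab}}},
  \qquad
  F_{\mathrm{slab}} \approx 1.28 \times 10^{10}\ \mathrm{mol\ C\ yr^{-1}},
  \qquad
  F_{\mathrm{arc}} \approx F_{\mathrm{slab}}
  \;\Rightarrow\;
  \eta_{\mathrm{arc}} \approx 100\%.
\]
```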

The extremely high loss of carbon at sub-arc depths implies very little carbon is being lost to the forearc or being subducted deeper into the mantle. This study highlights the importance of evaluating individual subduction zones to improve global models of the deep carbon cycle.

Citation: Li, K., Li, L., Aubaud, C., & Muehlenbachs, K. [2020]. Efficient carbon recycling at the Central‐Northern Lesser Antilles arc: Implications to deep carbon recycling in global subduction zones. Geophysical Research Letters, 47, e2020GL086950. https://doi.org/10.1029/2020GL086950

—Steven D. Jacobsen, Editor, Geophysical Research Letters

Improving Atmospheric Forecasts with Machine Learning

Tue, 06/02/2020 - 13:49

Weather forecasting has improved significantly in recent decades. Thanks to advances in monitoring and computing technology, today’s 5-day forecasts are as accurate as 1-day forecasts were in 1980. Artificial intelligence could revolutionize weather forecasts again. In a new study, Arcomano et al. present a machine learning model that forecasts weather in the same format as classic numerical weather prediction models.

Previously, the team developed an efficient machine learning algorithm for the prediction of large, chaotic systems and demonstrated how to incorporate the algorithm into a hybrid model combining numerical and machine learning components for dynamical systems such as the atmosphere. In the new proof-of-concept study, the researchers build on their previous work by using a reservoir computing–based model, rather than a deep learning model, to reduce the training time requirements for their machine learning technique.
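For readers unfamiliar with the technique, the sketch below shows a minimal echo state network, the classic form of reservoir computing; it is a generic illustration with made-up data and illustrative sizes, not the authors’ model. Because the recurrent “reservoir” weights stay fixed and only a linear readout is fitted, training reduces to a single ridge regression, which is why reservoir computing trains so much faster than deep learning.

```python
# Minimal echo state network (reservoir computing) sketch -- generic
# illustration only, not the authors' model. Sizes and data are made up.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 8, 500                    # input channels, reservoir size (illustrative)

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed random input weights
W = rng.normal(size=(n_res, n_res))               # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence; collect its states."""
    r = np.zeros(n_res)
    states = []
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

# Training: fit only the linear readout (one-step-ahead forecast) by ridge
# regression -- the reservoir itself is never trained.
X = rng.normal(size=(1000, n_in))       # stand-in for a training time series
S = run_reservoir(X[:-1])               # reservoir states
Y = X[1:]                               # targets: the next input vector
ridge = 1e-4
W_out = Y.T @ S @ np.linalg.inv(S.T @ S + ridge * np.eye(n_res))

forecast = run_reservoir(X)[-1] @ W_out.T   # one-step-ahead prediction
```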

The researchers trained their model using data from the European Centre for Medium-Range Weather Forecasts and prepared 171 separate 20-day forecasts, each of which took just 1 minute to prepare. They compared the machine learning forecasts to three benchmark forecasts: daily climatology; a persistence model, which assumes that the atmospheric state will remain constant throughout the forecast; and the Simplified Parameterizations, Primitive-Equation Dynamics (SPEEDY) model, a low-resolution version of numerical weather prediction models.

They found that the machine learning model typically forecast the global atmospheric state with skill 3 days out. It outperformed both daily climatology and the persistence model in the extratropics, though not in the tropics, and bested the SPEEDY model in predicting temperature in the tropics and specific humidity at the surface in both the tropics and the extratropics. However, the SPEEDY model still outperformed the machine learning model for wind forecasts more than 24 hours out. The authors note that overall, the reservoir computing–based machine learning model is highly efficient and may be useful in rapid and short-term weather forecasts. (Geophysical Research Letters, https://doi.org/10.1029/2020GL087776, 2020)

—Kate Wheeling, Freelance Writer

How Machine Learning Redraws the Map of Ocean Ecosystems

Tue, 06/02/2020 - 13:47

On land, it’s easy for us to see divisions between ecosystems: A rain forest’s fan palms and vines stand in stark relief to the cacti of a high desert. Without detailed data or scientific measurements, we can tell a distinct difference in the ecosystem’s flora and fauna.

But how do scientists draw those divisions in the ocean? A new paper proposes a tool to redraw the lines that define an ocean’s ecosystems, lines originally penned by the seagoing oceanographer Alan Longhurst in the 1990s. The paper uses unsupervised learning, a machine learning method, to analyze the complex interplay between plankton species and nutrient fluxes. As a result, the tool could give researchers a more flexible definition of ecosystem regions.

Using the tool on global modeling output suggests that the ocean’s surface has more than 100 different regions or as few as 12 if aggregated, simplifying the 56 Longhurst regions. The research could complement ongoing efforts to improve fisheries management and satellite detection of shifting plankton under climate change. It could also direct researchers to more precise locations for field sampling.

Beyond the Human Eye

Coccolithophores, diatoms, zooplankton, and other planktonic life-forms float on much of the ocean’s sunlit surface. Scientists monitor plankton with long-term sampling stations and peer at their colors by satellite from above, but they don’t have detailed maps of where plankton lives worldwide.

Models help fill the gaps in scientists’ knowledge, and the latest research relies on an ocean model to simulate where 51 types of plankton amass in the surface ocean worldwide. The researchers then apply the new classification tool, called the systematic aggregated ecoprovince (SAGE) method, to discern where neighborhoods of like-minded plankton and nutrients appear.

SAGE relies, in part, on a type of machine learning algorithm called unsupervised learning. The algorithm’s strength is that it searches for patterns unprompted by researchers.

As a simple analogy, if scientists told an algorithm to identify shapes in photographs, such as circles and squares, the researchers could “supervise” the process by telling the computer what a square and a circle looked like before it began. But in unsupervised learning, the algorithm has no prior knowledge of shapes and will sift through many images to identify patterns of similar shapes itself.

Using an unsupervised approach gives SAGE the freedom to let patterns emerge that the scientists might not otherwise see.
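A generic sketch of this kind of unsupervised classification is shown below, using k-means clustering for simplicity; SAGE’s actual algorithm differs, and the grid-cell counts, nutrient fields, and data here are random stand-ins.

```python
# Generic unsupervised classification of ocean model output into "provinces"
# (k-means for simplicity; SAGE's actual algorithm differs; data are random).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_cells = 20_000                 # model grid cells (illustrative)
n_vars = 51 + 4                  # 51 plankton types + nutrient fields (nutrient count is a stand-in)
features = rng.lognormal(size=(n_cells, n_vars))

# Standardize so the most abundant plankton types don't dominate the distances.
X = (features - features.mean(axis=0)) / features.std(axis=0)

# Fine-grained provinces, found without any labeled examples.
fine = KMeans(n_clusters=115, n_init=10, random_state=0).fit_predict(X)

# Aggregate the fine provinces into a handful of overarching regions by
# clustering the province centroids themselves.
centroids = np.array([X[fine == k].mean(axis=0) for k in range(115)])
coarse_of_fine = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(centroids)
coarse = coarse_of_fine[fine]    # each grid cell mapped to one of 12 regions
```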

“While my human eyes can’t see these different regions that stand out, the machine can,” first author and physical oceanographer Maike Sonnewald at Princeton University said. “And that’s where the power of this method comes in.” This method could be used more broadly by geoscientists in other fields to make sense of nonlinear data, said Sonnewald.

Desert of the Ocean

When applied to the model data, SAGE identified 115 distinct ecological provinces, which can then be boiled down into 12 overarching regions.

One region appears in the center of nutrient-poor ocean gyres, whereas other regions show productive ecosystems along the coast and equator.

“You have regions that are kind of like the regions you’d see on land,” Sonnewald said. One area in “the heart of a desert-like region of the ocean” is characterized by very small cells. “There’s just not a lot of plankton biomass.” The region that includes Peru’s fertile coast, however, has “a huge amount of stuff.”

If scientists want more distinctions between communities, they can adjust the tool to see the full 115 regions. But having only 12 regions can be powerful too, said Sonnewald, because it “demonstrates the similarities between the different [ocean] basins.” The tool was published in a recent paper in the journal Science Advances.

Oceanographer Francois Ribalet at the University of Washington, who was not involved in the study, hopes to apply the tool to field data when he takes measurements on research cruises. He said identifying unique provinces gives scientists a hint of how ecosystems could react to changing ocean conditions.

“If we identify that an organism is very sensitive to temperature, so then we can start to actually make some predictions,” Ribalet said. Using the tool will help him tease out an ecosystem’s key drivers and how it may react to future ocean warming.

—Jenessa Duncombe (@jrdscience), Staff Writer

The Future of Big Data May Lie in Tiny Magnets

Tue, 06/02/2020 - 13:45

The use of artificial intelligence and machine learning is now widespread, with practical applications in many Earth science domains, including climate modeling, weather prediction, and volcanic eruption forecasting. This current revolution in computing has been driven largely by rapid improvements in computer software and algorithms.

But now, we’re approaching a second computing revolution of redesigning our hardware to meet new computing challenges, said Jean Anne Incorvia, a professor of electrical and computer engineering at the University of Texas at Austin.

Traditional silicon-based computer hardware (like the computer chips found in your laptop or cell phone) has bottlenecks in both speed and energy efficiency, which may impose limits on their use in increasingly intensive computational problems.

To break these limitations, some engineers are drawing inspiration from biological neural systems using an approach called neuromorphic computing. “We know that the brain is really energy efficient at doing things like recognizing images,” said Incorvia. This is because the brain, unlike traditional computers, processes information in parallel, with neurons (the brain’s computational units) interacting with one another.

A diagram of an artificial neural network implementing a “winner-take-all” (WTA) machine learning algorithm. Neurons (blue) are the computation units receiving data input (red) with synapses that interconnect them. The final computational output neurons (green) are inhibited by a lateral inhibition signal (purple). Credit: University of Texas at Austin

Incorvia and her research team are now studying how magnetic—not silicon—computer components can mimic certain useful aspects of biological neural systems. In a new study published in April in the journal Nanotechnology, her team reports that physically tuning the magnetic interactions between the magnetic nanowires could significantly cut the energy costs of training computer algorithms used in a variety of applications.

“The vision is [creating] computers that can react to their environment and process a lot of data at once in a smart and adaptive way,” said Incorvia, who was the senior author on the study. Reducing the costs of training these systems could help make that vision a reality.

Lateral Inhibition at a Lower Cost

Artificial neural networks are one of the main computing tools used in machine learning. As the name implies, these tools mimic biological neural networks. Lateral inhibition is an important feature of biological neural networks that helps improve signal contrast in human sensory systems, like vision and touch, by having more active neurons inhibiting the activity of the surrounding neurons.

Implementing lateral inhibition into artificial neural networks could improve certain computer algorithms. “It’s preventing errors in processing the data so you need to do less training to tell your computer when it did things wrong,” said Incorvia. “So that’s where a lot of the energy benefits come from.”
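In software, lateral inhibition is often realized as the winner-take-all dynamics shown in the diagram above. The toy sketch below illustrates the idea; the inhibition strength and iteration count are arbitrary choices of ours, and the study itself implements this behavior in magnetic hardware rather than code.

```python
# Toy winner-take-all (WTA) dynamics via lateral inhibition -- illustrative
# only; the study realizes this behavior in magnetic hardware, not software.
import numpy as np

def winner_take_all(x, eps=0.25, n_steps=30):
    """Repeatedly let every neuron inhibit the others in proportion to their
    summed activity; only the most active neuron keeps a nonzero output."""
    a = np.asarray(x, dtype=float).copy()
    for _ in range(n_steps):
        inhibition = eps * (a.sum() - a)   # inhibition from all other neurons
        a = np.maximum(0.0, a - inhibition)
    return a

raw = np.array([0.20, 0.55, 0.60, 0.15])   # pre-inhibition activations
print(winner_take_all(raw))                # only index 2 (the winner) stays above zero
```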

Lateral inhibition can be implemented with conventional silicon computer parts—but at a cost.

“There is peripheral circuitry that is needed to implement this functionality,” said Can Cui, an electrical and computer engineering graduate student at the University of Texas at Austin and lead author of the study. “The end result is that as the size of the network scales up, there’s much more difficult[y] in the design of the circuit and you will have extra power consumption.”

Researchers in the new study are using magnetic nanowires and their innate physical properties to more efficiently produce lateral inhibition. These devices produce small, stray magnetic fields, which are typically a nuisance in designing computer parts because they can disturb nearby circuits. But here, “this kind of interaction might be another degree of freedom that we can exploit,” Cui said.

By running computer simulations, the researchers found that when these nanowires are placed parallel and next to one another, the magnetic fields they produce can inhibit one another when activated. That is, these magnetic parts can produce lateral inhibition without additional circuitry.

The researchers ran further computer modeling experiments to understand the physics of their devices. They found that using smaller nanowires (30 nanometers wide) and tuning the spacing between pairs of nanowires were key to enhancing the strength of lateral inhibition. By optimizing these physical parameters, “we can achieve really large lateral inhibition,” said Cui. “The highest we got is around 90%.”

The Challenge of Real-World Applications

Training machine learning algorithms is a large time and energy sink, but building computer hardware with these magnetic parts could mitigate those costs. The researchers have already built prototypes of these devices.

However, there are still open questions about the practical applications of this technology to real-world computing challenges. “While the study is really good and combines the hardware-software functionality very well, it remains to be seen whether it is scalable to more large-scale problems,” said Priyadarshini Panda, an electrical engineering professor at Yale University who was not involved in the current study.

But the study is still a promising next step and shows “a very good way of matching what the algorithm requires from the physics of the device, which is something that we need to do as hardware engineers,” she added.

—Richard J. Sima (@richardsima), Science Writer

Removal of Ozone Air Pollution by Terrestrial Ecosystems

Mon, 06/01/2020 - 11:51

“Dry deposition” is the process by which ozone is removed from the atmosphere at Earth’s surface. It’s an important process to understand because it impacts air pollution, ecosystem health, and climate. A recent article in Reviews of Geophysics presented the current state of knowledge on ozone dry deposition. Here, one of the authors gives an overview of ozone dry deposition processes, and how they are measured and modeled.

How and where does dry deposition happen?

Dry deposition of ozone happens when turbulent air motions bring ozone close to a surface, and then ozone reacts with that surface.

Dry deposition is analogous to wet deposition, a removal pathway for many atmospheric compounds (though not ozone) in which rain washes the compounds out of the atmosphere.

Important surfaces to which ozone dry deposits include vegetation, soil, and snow. There are multiple pathways by which ozone deposits to vegetated surfaces. We classify these pathways as either “stomatal,” when ozone is taken up by plant stomata (the small pores on plant leaves used for gas exchange), or “nonstomatal,” when ozone deposits to other parts of the plant canopy.

Why is it important to understand and quantify these processes?

There are a few reasons. First, dry deposition removes ozone from the troposphere, the atmospheric layer where weather happens. In the troposphere, ozone is an air pollutant, a greenhouse gas, and central to the removal of many pollutants and reactive greenhouse gases through chemical oxidation. Knowing how much ozone is deposited helps us to quantify how much ozone is in the troposphere. There is also evidence that day-to-day changes in ozone dry deposition influence high versus low ozone pollution days.

Second, plants can be harmed when ozone deposits inside stomatal pores. When plants open their stomata to take up carbon dioxide for photosynthesis, ozone gets inside the leaf and reacts with internal fluids and tissues. These reactions cause plants to stress, impacting their ability to photosynthesize. We need to understand and quantify stomatal ozone dry deposition to estimate the damages incurred by vegetation, including crops, from ozone pollution.

Third, the stomatal depositional pathway for ozone is linked with regional-to-global water and carbon cycling as these gases are exchanged through the stomata alongside ozone. If plants are damaged by ozone, the ability of the plant to accumulate carbon and transpire can be compromised. Understanding linkages between ozone pollution and water and carbon cycling requires us to quantify the stomatal portion of ozone dry deposition.

Processes contributing to ozone (O3) dry deposition and its impacts on air pollution, ecosystems, and climate, both directly (red) and indirectly (purple). Credit: Clifton et al. [2020], Figure 1 / Illustration by Simmi Sinha

How is it measured and how accurately can it be quantified?

Ozone dry deposition is best measured through the eddy covariance technique. For this technique, you need fast measurements of ozone concentrations and vertical wind over the surface you wish to investigate. The covariance between the ozone concentration and vertical wind gives you a measure of how much ozone was lost to dry deposition to the surface underneath your ozone and wind instruments. The measurement is pretty accurate, although it has limitations during times of low wind and over non-flat terrain or vegetation. Another issue is that very fast ozone chemistry in the air beneath the instruments can look like ozone dry deposition, and so you must be able to separate dry deposition from ambient chemistry.
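As a back-of-the-envelope sketch of the calculation (synthetic numbers of our own; real eddy covariance processing also involves steps such as despiking, detrending, and coordinate rotation), the flux is simply the mean product of the fluctuations of vertical wind and ozone concentration:

```python
# Minimal eddy covariance flux estimate with synthetic data (illustrative).
# The deposition flux is the covariance <w'c'> of the fluctuations of
# vertical wind (w') and ozone concentration (c') about their means;
# a negative value means ozone is moving downward, i.e., being deposited.
import numpy as np

rng = np.random.default_rng(2)
n = 10 * 60 * 30                               # 30 minutes of 10-Hz samples
w = rng.normal(0.0, 0.3, n)                    # vertical wind, m/s
c = 40.0 - 5.0 * w + rng.normal(0.0, 1.0, n)   # ozone, ppb; depleted in updrafts

flux = np.mean((w - w.mean()) * (c - c.mean()))    # <w'c'>, ppb m/s
print(f"ozone flux: {flux:.2f} ppb m/s (negative = deposition)")
```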

How does ozone dry deposition vary over time and between places?

We need more sites with longer ozone dry deposition measurements to understand how ozone dry deposition varies over time and between places. There are only a couple of sites around the world with more than a few years of data. These datasets suggest ozone dry deposition changes quickly over the course of a day, as well as from day to day, one season to the next, and year to year.

There appear to be differences in ozone dry deposition between places, but it’s hard to understand how differences in meteorology or vegetation between places drive these differences without long-term ozone dry deposition measurements at a coordinated network of sites.

In general, the exact drivers of variability in ozone dry deposition, especially in nonstomatal deposition, are uncertain. This uncertainty prevents us from modeling variability in ozone dry deposition accurately, hindering a full understanding of ozone air pollution and the associated plant damage.

Ozone dry deposition differs across ecosystems and environmental conditions, but the exact mechanisms causing these differences are uncertain. Credit: Ricardo Gomez Angel on Unsplash (left); Jonathan Aman from Pexels (center); Flohrflohr on Pixabay (right) (CC0)

What aspects of ozone dry deposition are well understood, and which are less certain?

From our review of the current knowledge in this field, we think that nonstomatal deposition may be more important than previously recognized. In general, stomatal deposition is better understood than nonstomatal deposition. The topic of stomatal functioning is relevant for regional-to-global carbon and water cycling and forest and crop productivity, so we can build our understanding of stomatal ozone dry deposition from studies in other fields like plant physiology and ecology. While laboratory and field studies indicate that nonstomatal deposition can vary with environmental conditions and make up a sizeable fraction of the total ozone dry deposition, the magnitude and temporal variability of nonstomatal ozone dry deposition at any given place is uncertain.

The main need in terms of research going forward is for more laboratory research and long-term field data, as well as syntheses of old data, to learn how to translate our current understanding of contributing processes into an ozone dry deposition parameterization for regional-to-global models.

What are some of the benefits and challenges of synthesizing research from different fields such as atmospheric chemistry, ecology, and meteorology?

For an interdisciplinary topic like ozone dry deposition there are many benefits from synthesizing research from different fields. Something well understood in one field is not necessarily known in another, so it’s important to solidify a knowledge base across fields. Because ozone dry deposition is so dependent on the land surface, we wouldn’t get very far in advancing understanding without leveraging decades of knowledge from biometeorological and plant physiological research.

However, it can be challenging to get everyone on the same page because terminology and overarching research questions differ across fields. Generally, it’s also hard to meet and establish collaborations with folks in other fields. This review paper was an outcome of a workshop in 2017 at Lamont Doherty Earth Observatory. Funding for this workshop from the Lamont Climate Center, NOAA, and NASA allowed us to invite people from a variety of different fields and initiate collaborations for the review paper. It is through interdisciplinary initiatives like these that we can advance our understanding.

—Olivia Clifton (oclifton@ucar.edu;  0000-0002-1669-9878), National Center for Atmospheric Research, Colorado, USA

The First Global Geologic Map of the Moon

Mon, 06/01/2020 - 11:50

It’s a common problem in geologic mapping. Scientists from different disciplines, interested in different things, map only the areas they want to study. The result is a patchwork of maps that don’t quite align with each other and therefore provide an incomplete picture of the whole.

In the case of an ambitious mapping project recently completed at the U.S. Geological Survey’s (USGS) Astrogeology Science Center in Flagstaff, Ariz., that whole is the Moon.

Although Earth’s closest celestial neighbor has been mapped in some form since the early 17th century, the new “Unified Geologic Map of the Moon,” published in March, is the first standardized geologic version ever made. As such, it could be an essential tool for future science and exploration.

This new work represents a seamless, globally consistent, 1:5,000,000-scale geologic map derived from the six digitally renovated geologic maps. Credit: USGS

“We’re all talking and using the same language through this map,” said Jim Skinner, a USGS research geologist in Flagstaff. Skinner coordinates the production of all standardized geologic maps of nonterrestrial solid surface bodies in the solar system on behalf of NASA, which funded the 4-year project. “It kind of sets the baseline for everybody to be able to communicate effectively, which is what maps are anyway.”

Prior to the completion of the unified map, the Moon’s geology was represented on six separate maps, themselves agglomerations of Lunar Orbiter images gathered in preparation for the Apollo missions, as well as photographs taken by the Apollo astronauts themselves.

One drawback of the six previous maps was that they existed only on paper or in two-dimensional scans. In 2013, a team led by USGS geologist Corey Fortezzo completed a project to digitize them for use within GIS software—a step Skinner estimates quadrupled their usefulness. Joined by preeminent lunar geologist Paul Spudis, the team then set to work stitching the six maps together into one covering the entire Moon, creating consistent geological nomenclature and adding much more detailed topography data collected from lunar orbit.

Fitting the Pieces Together

The first task in creating the unified version was to conform the six old maps into a single spatial representation. Because the old maps were based on composites of hundreds of photographs, the scale of any given section varied depending on the orbiting spacecraft’s positions and camera angles. Fortezzo’s team had to essentially bend and warp the maps to fit them into a uniform framework.

Topography data had also been limited in the old maps—scientists had to “eyeball” the relative height of many features on the basis of the shadows they cast. To solve that problem, Fortezzo’s team flooded the new map with a digital elevation model data set that combines stereo images taken from Japan’s Selenological and Engineering Explorer (SELENE) spacecraft, which orbited the Moon in 2007, and laser altimeter data and some imaging from NASA’s Lunar Reconnaissance Orbiter (LRO), launched in 2009. The result was the addition of more than 7 billion points of altimetry data, providing an altitude for every 60 square meters of the Moon’s surface.

Prior to the unified map, maps of the Moon were not standardized and were produced by different organizations for different reasons. This map of lunar geography was produced by the U.S. Air Force Aeronautical Chart and Information Center in 1961 but was never formally published. Credit: USGS

The map comes as lunar science is on the verge of a heyday, with new probes returning vastly more detailed observations of the Moon than have been available in the past. “It’s hard to get your head around all these data,” said Kip Hodges, a geologist at Arizona State University whose wide-ranging research includes the ebb and flow of meteoroid impacts on the Moon over time. “What’s novel about this geologic map is putting all of these observations together at a time when the rate of observation is increasing incredibly fast.”

The new map identifies over 12,000 distinct outcrops. At a scale of 1:5,000,000, the smallest feature discernible on it is a crater rim roughly 3 kilometers across, about the width of Manhattan.

The USGS team chose to use what’s called a bridge scale between global, regional, and local units, maximizing the map’s usefulness in identifying areas for more detailed study. Once they’ve used the global map to narrow down a location, researchers can request very high resolution images—as sharp as 1 or 2 meters per pixel—from spacecraft like the LRO. Future lunar landing missions may use the global map to select a few areas with the right kinds of features before using smaller-scale maps to pick a landing site for a sample return mission.

“We know that the Apollo sample collection and the lunar meteorite collection [are] not representative of all the lithologies that are on the surface of the Moon,” said Clive Neal, a lunar geologist who leads a project to put a new geophysical observation network on the Moon.  “We can now target those [lithologies] much better with this map.”

By the same token, the global map will help planetary geologists use local observations, say, from a lunar lander, to better understand similar features elsewhere on the Moon. “[The map] allows you to extrapolate out those observations you’re making in one or two or three different locations on the surface of that body…so we can understand the broadest area of [the body] that we can,” Skinner said.

NASA’s Planetary Geologic Mapping Program, which the USGS astrogeology center coordinates, has produced 243 maps of nonterrestrial bodies in the solar system so far, most of which are physical maps that Fortezzo’s team is also digitizing. Although Mars and Jupiter’s moons Io and Ganymede have also been mapped globally using the rigorous cartography standards the USGS is known for (and its global maps of Mercury, Jupiter’s moon Europa, Saturn’s moon Enceladus, and the asteroids Ceres and Vesta are currently in progress), Skinner believes this most recent version of the Moon’s is possibly “the purest form of a planetary geologic map.”

That’s because the map preserves so much of the data from its previous versions at a similar scale, adding without subtracting. “It’s just a good, aesthetically good, technically good map.”

An Evolving Process

When geologists first began mapping the Moon in the 1960s, it was a much less precise process that involved cutting up physical photographs and pasting them together. Cartographers would then copy the photo collage onto a new base map by hand using an airbrush.

In the early days of lunar geologic mapping, photographs of the Moon’s surface were copied by hand onto a map using an airbrush. Credit: USGS

With that process, “you’re losing things in the translation into kind of an artistic device,” Skinner said. Fortezzo’s team found that some outcrops shown on the old maps don’t actually exist, having been misinterpreted by an airbrusher’s eye. The six separate map sections also suffered from poor translation across their boundaries, simply because they were not all created by the same scientists using the same methods.

For instance, there were about 200 different types of geological features on those maps—but some were duplicates, meaning the same feature may have been labeled differently depending on which map it was on. The new map uses the same data set for the entire lunar surface, eliminating the need to reconcile map sections across their boundaries, and boils the list down to 43 unique global stratigraphic units, each represented by a different color.

No two geologists will interpret every data point the same way, and Spudis and Fortezzo would often disagree about how a given outcrop should be classified. “They would sit there and argue about the different units,” said Skinner. “And it was a perfect way to do this….You need to argue and discuss it and talk about it. But they’d work together to try to dial everything in.” Spudis died in 2018 before the map was completed.

The consolidation and unification of so much of what is known about the Moon’s geology has far-reaching benefits. An express goal of the new map was to make lunar geology accessible to the general public, not just to planetary scientists, said Skinner. “Laymen can approach this map and look at it and start to have an understanding about what’s going on, even if they don’t know what the units mean. And they can start to see patterns.”

According to Skinner, the fractured nature of past maps made seeing those patterns difficult even for scientists. “There was not a whole lot of cross correlation between actual disciplines in planetary science,” he said. “This map enables people to actually talk across disciplines, and that enables discovery and exploration. Because now we have one device that will kind of rule them all, so to speak.”

Broadening the Lunar Science Community

Thanks to the consistency of USGS standards, the new map describes the Moon in the same basic visual language used in geologic maps of Earth. Hodges, whose work spans both planetary bodies, believes that simple fact can allow a wealth of knowledge to flow into lunar science from Earth science. “A geologic map actually puts the geology of the Moon in a context that is accessible to terrestrial geoscientists,” he said. “And that’s going to lead to new collaborative interactions with people who generally don’t study the Moon.”

Skinner added that making a unified, global map of any planetary body is mostly about making something that will be useful to the largest number of people doing the widest range of science. That’s done partly by presenting the objective observations made from orbit and astrogeologists’ best interpretations of what features are made of and how they were formed in two separate columns in the map’s key. Without being able to go to the Moon and directly analyze every outcrop, the interpretations will always be limited. Skinner and his colleagues are keenly aware that now that more data are available, many of the classifications made by early geologic maps of the Moon have turned out to be incorrect.

That’s why the new Moon map, although it’s the first standardized geologic representation of the entire lunar surface, won’t be the last. It will continue to be updated and improved as better data come in, a process that will be much easier now that the map is completely digital.

But the map’s usefulness lies in neither perfection nor permanence, according to Hodges. Its synthesis of new data and the way it elegantly presents that data in an accessible form are the map’s real contribution to future research. “These big, gigantic-scale geologic maps are principally useful to titillate your senses and get you working on really interesting problems,” Hodges said.

The “Unified Geologic Map of the Moon” is available for public use in GIS, PDF, and JPG formats.

—Mark Betancourt (@markbetancourt), Science Writer

A Longer-Lived Magnetic Field for Mars

Mon, 06/01/2020 - 11:48

Early Mars was much more similar to Earth than it is today. It had a thick atmosphere, liquid water at its surface, and a global magnetic field. But at some point, the magnetic field disappeared, and Mars turned into the barren desert we see today, although researchers haven’t entirely figured out the full sequence of how and why this happened.

Adding one more piece to the puzzle, a group of researchers has now found evidence showing that the Martian magnetic field was active for a much longer period than previously thought.

Mars’s Dying Dynamo

A planet’s magnetic field is generated by an internal dynamo in which liquid metal flowing within the planet’s outer core produces an electrical current. The magnetic field can be recorded on the planet’s surface by rocks containing magnetic minerals that align with the magnetic field when the rocks form. Volcanic rocks are particularly good recorders, and by dating them, scientists can determine whether the dynamo was active at the time the rocks were emplaced.

Researchers know that Mars had an active magnetic field between 4.3 and 4.2 billion years ago. But observations from NASA’s Mars Global Surveyor (which orbited the planet between 1999 and 2006) didn’t show any sign of magnetism over Hellas, Argyre, and Isidis, three large impact basins that formed about 3.9 billion years ago. This lack of evidence led most scientists to believe that the dynamo had already stopped by that time.

This map, created with data from NASA’s Mars Global Surveyor, uses colors to represent the strength and direction of the planet’s magnetic field. Credit: NASA

Now, looking at new observations from another NASA mission, the Mars Atmosphere and Volatile Evolution (MAVEN) satellite, scientists are reevaluating that idea. Anna Mittelholz, a geophysicist at the University of British Columbia in Vancouver, Canada, has found evidence that the Martian dynamo was active for at least 700 million years longer than previously thought.

Mittelholz and her colleagues detected low-intensity magnetic fields over one of the oldest features on Mars, the Borealis Basin. Located in the planet’s northern hemisphere, the basin is thought to have formed about 4.5 billion years ago. The team also found evidence of a magnetic field coming from a lava flow in a region called Lucus Planum, with an age of 3.7 billion years.

Mittelholz spotted a 35-kilometer-wide impact crater piercing the lava flow. The magnetic signal picked up by MAVEN drops at the crater, indicating that the lava flow itself is magnetized, not any other material lying below it.

“There is this lava flow that has a magnetic signal, and it drops as we see a hole in the surface,” Mittelholz said. “I think this is the first time we can make a dating argument that is that solid.”

“This is the first time that I have seen evidence that the dynamo could have been active past 4 billion years ago,” said David Brain, a planetary scientist at the University of Colorado Boulder who wasn’t involved in the new study. “The evidence for 3.7 [billion years] for a young dynamo is based on the demagnetization signature of a single impact crater, so of course, I would love to see more such signatures at other places on Mars.”

Mittelholz and her colleagues published their findings in Science Advances in May.

A Magnetic Patchwork

After the global magnetic field disappeared, Mars was left with a patchwork of magnetized rocks and minerals of varying intensities scattered around its surface.

Although some areas show stronger magnetism than others, the difference doesn’t necessarily mean that the global magnetic field was weaker at a certain time, Mittelholz said. It’s more likely a matter of the abundance of carrier minerals—those minerals in the rock that can be influenced by the planet’s magnetic field. Mittelholz attributes the low-intensity magnetic fields over the Borealis Basin to fewer carrier minerals in the rock formation. “In the northern hemisphere, we have generally weak fields, probably because the concentration of magnetic carriers is much lower,” she said.

A lower abundance of carrier minerals could also explain why the three large impact basins lack signs of magnetism. Maybe, research suggests, the impacts that created the basins also removed large portions of the Martian crust and, with it, the minerals needed to record strong magnetism in the rock. “There might still be signatures within the basins that are not visible from high altitude,” Mittelholz said. “If the wavelength of the features is too small in extent or if it’s weak, we won’t be able to see it from orbit.”

Magnetism at a Crucial Time for Martian History

Although extending the active period of the Martian dynamo by several hundred million years may not sound like much compared with the planet’s 4.6-billion-year life span, that extra time covers an especially turbulent stretch of Martian history.

“All the major impacts happened in that time: the Hellas basin, the Argyre basin, the Late Heavy Bombardment of the inner solar system occurred during that period, and a lot of the evidence for liquid water on the surface of Mars is right around this time,” Brain said. “Knowing that the interior was generating enough energy to support a dynamo I think is important.”

A longer-lived magnetic field could also have made Mars a friendlier place for life. The magnetic field could have slowed the process of atmospheric escape, although scientists are still debating whether that was the case. What’s clear is that much less radiation would have reached the surface of the planet. “A magnetic field can protect the surface of a planet from charged-particle radiation that would strike the surface and be bad for any life near the surface,” Brain said. “If you can extend that protected period for longer, then, maybe, that has implications [for life].”

—Javier Barbuzano (@javibarbuzano), Science Writer

Meteoric 10Be Reveals Lithological Control on Erosion Rates

Mon, 06/01/2020 - 11:30

Quantifying erosion rates across landscapes is a key challenge in the geosciences: erosion rate data are crucial for understanding how landscapes evolve through time, as well as for assessing numerous hazards (e.g., landslides, flooding, earthquakes). For example, sediment (gravel, sand) will be preferentially sourced from areas with the greatest erosion rates; such areas may also be the areas of greatest tectonic activity.

Over the last 20 years, techniques based on measuring cosmogenic radionuclide (CRN) concentrations in river sands have been the tool of choice for quantifying erosion rates. CRNs are rare isotopes produced as cosmic rays bombard the surface of the Earth and interact with the atoms that make up rocks. Rocks at depth contain no CRNs. CRNs start accumulating in rocks as the rocks approach the surface: the longer a rock stays near the surface, the greater its CRN concentration. If erosion rates are low, rocks spend a long time near the surface, so the rocks (and the sand produced from their erosion) will have high CRN concentrations. Conversely, rocks and sand will have low CRN concentrations in fast-eroding landscapes.
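To make that inverse relationship concrete, here is a minimal Python sketch of the standard steady-state calculation for in situ CRNs; all of the parameter values are illustrative assumptions, not numbers from the study discussed below.

```python
import math

# Illustrative parameters for in situ 10Be in quartz (assumed values,
# not taken from the study discussed in this article).
P = 4.0                          # surface production rate, atoms/g/yr
DECAY = math.log(2) / 1.387e6    # 10Be decay constant, 1/yr (half-life ~1.387 Myr)
ATTENUATION = 160.0              # cosmic ray attenuation length, g/cm^2
DENSITY = 2.7                    # rock density, g/cm^3

def erosion_rate_mm_per_yr(concentration):
    """Steady-state erosion rate implied by a measured CRN concentration
    N (atoms/g), inverting N = P / (lambda + rho * E / Lambda)."""
    e_cm_per_yr = (P / concentration - DECAY) * ATTENUATION / DENSITY
    return e_cm_per_yr * 10.0    # convert cm/yr to mm/yr

# High concentrations imply slow erosion; low concentrations, fast erosion.
for n_atoms in (1e6, 1e5, 1e4):
    print(f"N = {n_atoms:.0e} atoms/g -> E = {erosion_rate_mm_per_yr(n_atoms):.3f} mm/yr")
```

The meteoric 10Be method discussed below instead uses 10Be delivered by rain, normalized to stable 9Be, but the same inverse logic (more nuclides meaning slower erosion) applies.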

However, the best-understood CRN system is the one that produces beryllium-10 (10Be) in quartz crystals: most CRN-derived erosion rates are based on 10Be concentrations in quartz sand, which means we have very few erosion rate data sets from catchments without quartz-bearing rocks (e.g., those underlain by slate or mudstone).

Deng et al. [2020] use meteoric 10Be (delivered to rocks by rain) to derive erosion rates for a range of rock types, including rocks that do not contain quartz. They show that the slate headwaters of the Zhuoshui River catchment in Taiwan are eroding much faster than the rest of the catchment, at around 4 to 8 millimeters per year.

The authors use their new data to test the commonly assumed relationship between landscape steepness and erosion rates. They show that the highest erosion rates are associated with moderate steepness indices (left panel above) but that the data sets can be reconciled by adjusting the steepness index to incorporate variations in rock resistance to erosion: the best fit is achieved by treating the Miocene slates as five to ten times weaker than the other rock types exposed in the catchment (right panel above).
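As a rough illustration of that kind of adjustment (not the authors’ actual fitting procedure), the sketch below assumes a simple stream-power scaling, E = K × ksn^n, in which a lithology f times weaker than a reference rock collapses onto the reference erosion-steepness curve once its steepness index is rescaled by f^(1/n); the exponent, erodibilities, and sample values are hypothetical.

```python
# Hypothetical illustration of erodibility-adjusted channel steepness.
# Under stream-power scaling E = K * ksn**n, a lithology f times weaker
# than the reference (K_lith = f * K_ref) falls on the reference curve
# once its steepness index ksn is rescaled by f**(1/n).

N_EXP = 2.0    # assumed slope exponent n
K_REF = 1e-8   # assumed reference erodibility (arbitrary units)

# (lithology, measured steepness index ksn, weakness factor f)
samples = [
    ("reference rock", 120.0, 1.0),
    ("Miocene slate", 60.0, 7.0),  # ~5-10x weaker, echoing the study's best fit
]

for lith, ksn, f in samples:
    erosion = f * K_REF * ksn**N_EXP      # predicted erosion rate
    ksn_adj = ksn * f ** (1.0 / N_EXP)    # erodibility-adjusted steepness
    check = K_REF * ksn_adj**N_EXP        # equals `erosion` by construction
    print(f"{lith:14s} ksn={ksn:6.1f} adjusted={ksn_adj:6.1f} "
          f"E={erosion:.2e} (check {check:.2e})")
```

In this toy version, the weak slate erodes rapidly despite a modest measured steepness, mirroring the paper’s observation that the highest erosion rates coincide with only moderate steepness indices.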

This study bridges a significant gap: it demonstrates that meteoric 10Be can be used to quantify erosion rates in catchments where a range of lithologies is exposed. It also proposes a new framework for quantifying differences in rock resistance to erosion and demonstrates their impact on landscape steepness, with implications for retrieving erosional signals from topographic data.

Citation: Deng, K., Yang, S., von Blanckenburg, F., & Wittmann, H. (2020). Denudation rate changes along a fast‐eroding mountainous river with slate headwaters in Taiwan from 10Be (meteoric)/9Be ratios. Journal of Geophysical Research: Earth Surface, 125, e2019JF005251. https://doi.org/10.1029/2019JF005251

—Mikael Attal, Associate Editor, JGR: Earth Surface

Are Geysers a Signal of Magma Intrusion Under Yellowstone?

Fri, 05/29/2020 - 11:18

Steamboat Geyser in Yellowstone National Park is an enigma. It is the tallest currently active geyser in the world, sometimes blasting superheated water more than 90 meters (300 feet) into the air. Yet unlike the more famous Old Faithful, Steamboat Geyser follows its own rhythm: sometimes the geyser is quiet for decades and then suddenly bursts back to life. Exactly why Steamboat behaves this way remains a mystery, but after a new period of activity started in 2018, we might have more clues about what drives these steam-and-water explosions.

Yellowstone caldera is a geologic wonderland. It is the source of three of the largest explosive eruptions in the past 3 million years. The caldera itself covers over 1,500 square kilometers (580 square miles) in the northwest corner of Wyoming. As Charles Wicks from the U.S. Geological Survey (USGS) puts it, “Yellowstone’s roots seem to extend all the way to the core-mantle boundary. In that dimension, it’s a magmatic system of continental scale.”

The Yellowstone caldera is packed with geothermal features like geysers, hot springs, mud pots, and geothermal pools. This hydrothermal activity is driven by vast reservoirs of heat beneath the Yellowstone area, most of which comes from magma lying many kilometers beneath the caldera. All this heat and water cause the land surface at Yellowstone to rise and fall frequently, which is why Yellowstone is best described as a “restless caldera.”

Modeling a Deformation

A new study by Wicks and colleagues in the Journal of Geophysical Research: Solid Earth examines this restless behavior to try to understand what drives the deformation and how it might be linked to Steamboat Geyser. It turns out that the rising and falling of the land, the behavior of some geysers, and the introduction of new magma underneath Yellowstone could all be linked.

The north rim area and caldera floor at Yellowstone have been deforming for decades. The ability to monitor this deformation has greatly improved with GPS and interferometric synthetic aperture radar techniques. Wicks and others looked at the North Rim Uplift Anomaly (NRUA) and used deformation data from these techniques to model potential sources of these changes.

Magma, volatiles released from magma, or hydrothermal fluids could all cause the surface to rise or fall. Wicks and others used data from 2000 to 2018 to examine how much the NRUA inflated or deflated. They found that the deformation in the NRUA may represent the accumulation of magmatic volatiles from a basaltic intrusion that occurred around 2000.
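For readers unfamiliar with this kind of source modeling, the snippet below illustrates the classic Mogi point-source approximation, which relates a small pressurized volume change at depth to the pattern of surface uplift. It is a generic textbook model with assumed parameters, not necessarily the formulation Wicks and colleagues used.

```python
import math

# Mogi point-source approximation: vertical surface uplift above a small
# pressurized source (e.g., accumulating magmatic volatiles) buried in an
# elastic half-space. All parameter values are assumptions for illustration.

POISSON = 0.25    # Poisson's ratio of the crust
DEPTH = 5000.0    # assumed source depth, m
DVOL = 1.0e6      # assumed volume change of the source, m^3

def uplift_m(r):
    """Vertical displacement (m) at horizontal distance r (m) from the point
    above the source: uz = (1 - nu) * dV * d / (pi * (r^2 + d^2)^1.5)."""
    return (1 - POISSON) * DVOL * DEPTH / (math.pi * (r**2 + DEPTH**2) ** 1.5)

# Uplift peaks directly above the source and decays outward -- the kind of
# bull's-eye pattern that GPS and radar interferometry can map.
for r in (0.0, 2500.0, 5000.0, 10000.0):
    print(f"r = {r/1000:5.1f} km -> uplift = {uplift_m(r)*1000:.2f} mm")
```

Inverting an observed uplift pattern for source depth and volume change is, in essence, how deformation data constrain what is happening beneath the caldera.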

Steamboat Geyser might be a pressure gauge for these intrusions and accumulations of volatiles. The geyser is in the midst of one of its most active periods on record, having erupted 79 times (as of 20 May) since late 2018. This activity appears to correlate with the deformation cycle in the NRUA, which to Wicks suggests a connection between the cause of the deformation (magmatic volatiles) and Steamboat Geyser.

Small Sample Size

Mike Poland, scientist-in-charge of the USGS Yellowstone Volcano Observatory, who was not involved in the study, thinks that the connection is more complicated: “Why would just a single geyser respond to processes beneath Norris [Geyser Basin]? Wouldn’t that impact all the geysers?” One explanation might be that Steamboat Geyser has a more complex and deeper plumbing system than many other geysers at Yellowstone.

Wicks admits that so far, scientists have a limited data set for Steamboat Geyser. “With only two episodes of observed deformation near Norris, the eruption frequency appears to respond to the surface deformation. Of course, that’s a very small sample size, so the apparent connection might be happenstance.” Any connection between Steamboat Geyser’s activity and accumulation of gases near the surface could heighten the risk of hydrothermal explosions, one of the most significant geologic hazards for tourists visiting the national park.

Wicks suggests that continuous geochemical monitoring of the hydrothermal fluids from geysers and hot springs would help scientists create more accurate models: “Continuous monitoring of volatile flux rates are really needed to put the geophysical models in context. If surface inflation is caused by accumulation of pressurized fluids and deflation occurs when those fluids escape to the surface, we might be able to detect changes in volatile flux over time.”

The ability to link deeper processes like magmatic intrusion with surface phenomena is tantalizing for geoscientists monitoring an active caldera like Yellowstone. Wicks and Poland both agree that understanding the connection between the magmatic and hydrothermal systems at Yellowstone is fundamental for better hazard models. They might end up being more directly linked than geoscientists have previously thought.

—Erik Klemetti (@eruptionsblog), Science Writer and Associate Professor of Geosciences, Denison University, Granville, Ohio

1 June 2020: This article has been updated to correct the location of Yellowstone National Park.
