
When Rivers Are Contaminated, Floods Are Only the First Problem

EOS - Fri, 09/10/2021 - 13:28

Dioxins—the class of chemicals infamous as toxic contaminants of Agent Orange—have been banned in the United States since 1979. But that doesn’t mean they’re gone. As in the plot of countless scary movies, dioxins and other banned chemicals are just buried beneath the surface, waiting to be unearthed.

A new perspective paper in Journal of Hazardous Materials calls attention to an understudied area: the remobilization of pollutants buried in riverbeds. Chemicals have a knack for binding to sediments, meaning chemical spills in rivers frequently seep into sediments instead of flowing downstream. Future layers of silt bury the pollutants and hide the problem.

But persistent chemicals in riverbeds are “ticking time bombs,” warned Sarah Crawford, an environmental toxicologist at Goethe University Frankfurt and lead author of the paper. The buried chemicals can easily be remobilized. “It just takes one flood event,” she said.

Little Pockets of Pollution

The paper comes from an interdisciplinary research team based mostly in Germany, a country where catastrophic floods defied comparison this year. As the climate warms, similarly intense storms are expected to become more frequent. Floods cause immediate turmoil, but chemical remobilization can prolong the disaster.

“Cohesive sediments are really stable over long ranges of flow velocities, but at some point the sediment bed just fails,” said Markus Brinkmann, an ecotoxicologist at the University of Saskatchewan and a coauthor of the paper.

When the riverbed fails, the turbulent water fills with sediment. That churning water can spread toxins widely. After Germany’s Elbe River flooded in 2002, for example, hexachlorocyclohexane concentrations in fish were 20 times higher than they were before the floods. In another example, from 2017, Hurricane Harvey flooded or damaged at least 13 Superfund sites in the United States and sent cancer-causing compounds flowing into Galveston Bay in Texas.

“Little pockets of contamination are really easily dispersed by flood events,” Brinkmann said.

The location of these little pockets is uncertain, complicating the problem. Urban areas and agricultural hot spots are obvious starting points for research and remediation, “but we just can’t pinpoint all of them,” said Crawford. “Maybe a farmer in the ’60s was spraying DDT. We don’t have records of that.”

Other questions remain unanswered. How bioavailable are reintroduced chemicals? How toxic are chemicals after decades bound to sediments? What is the economic risk of inaction? “A lot of this hasn’t been studied,” noted Crawford.

The recent paper doesn’t attempt to answer questions about the presence and release of riverbed toxins but tries, rather, to spur interdisciplinary research on the growing threat.

Involving the Community

Interdisciplinary research is essential for such a complex problem. As evidence, the paper’s 16 authors include a mix of toxicologists, economists, microbiologists, chemists, and engineers.

But it’s important that the research expands beyond academia, too. “To really accomplish this, particularly at the scale [at which] it needs to be done, you can’t have grad students collect every sample,” said Ashaki Rouff, an environmental geochemist at Rutgers University–Newark who was not involved in the research. “You really need to engage the public.”

That often means collaborating with marginalized communities. “Issues of climate change and contamination and pollution disproportionately affect communities of color and low-income communities,” Rouff added. Getting residents involved in the research “is a way to empower those vulnerable communities and get them more agency in the environmental health of their community.”

“It’s really important to work with community-based organizations for this type of work, especially in these types of marginalized communities,” agreed Vanessa Parks, an associate sociologist with RAND Corporation who was not involved in the research. Residents of at-risk regions are well aware of the threat next door; excluding them from the conversation can increase the frustration and psychological burden of living near a contaminated site.

“Working with communities and having open dialogue about the risks and about environmental monitoring can help engender trust,” Parks said.

Ticking Time Bombs Get Louder

While the paper is a call for transdisciplinary action, Crawford, Brinkmann, and their colleagues have already helped establish a research network to address the issue. They brought together graduate students from multiple disciplines (engineering, economics, ecotoxicology, and more) at RWTH Aachen University in Germany to research different angles of flood risk and contaminant mobilization. They published an open-access article on their efforts in 2017.

“I really hope to move forward working in an interdisciplinary manner,” said Crawford. “I hope we train this next generation of scientists to be able to communicate across different disciplines.”

It takes only one fast-moving flood to rip up buried toxins and contaminate an entire area. As the climate warms and storms intensify, the ticking time bombs of polluted river sediments are only getting louder.

—J. Besl (@J_Besl), Science Writer

2021 AGU Section Awardees and Named Lecturers

EOS - Fri, 09/10/2021 - 13:28

AGU sections recognize outstanding work within their fields by annually hosting numerous awards and lectures. Individuals are selected as section honorees on the basis of meritorious work or service toward the advancement and promotion of discovery and solution science. Each one of you made tremendous personal sacrifices and selflessly dedicated yourselves to advancing Earth and space sciences. Your discoveries and solutions are simply remarkable.

We hope you take some time to celebrate your well-deserved recognition. We also know that for each one of you there is a group of people who were invaluable to your success. We greatly appreciate the efforts of those mentors, supportive colleagues, friends, and loved ones.

Details of the 2021 awardees:

23 AGU sections gave 78 awards and honors.

5 awards are for students and postdocs.
28 winners are early-career scientists (up to 10 years post-Ph.D.).
17 winners are midcareer (10–20 years post-Ph.D.).
18 winners are senior scientists (experienced and established leaders).
9 awards are given to honorees in the midcareer or senior career stage.
1 award is open to all career stages.
30 awards are for named lectureships, offered by AGU sections to recognize distinguished scientists with proven leadership in their fields of science.

The 30 named lectureships offer unique opportunities to highlight the remarkable accomplishments of the awardees. AGU inaugurated the Bowie Lectures in 1989 to commemorate the 50th presentation of the William Bowie Medal, which is named for AGU’s first president and is the highest honor given by the organization. This year’s Bowie Lectures are marked with an asterisk in the list below. Please add these named lectures to your calendars for #AGU21.

Finally, we are grateful to the nominators, nomination supporters, section leaders, and selection committees for selecting these deserving colleagues. Your volunteer hours are invaluable to our community.

Again, congratulations to the 2021 section awardees and named lecturers!

—Susan Lozier, President, AGU; and LaToya Myles (honors@agu.org), Chair, Honors and Recognition Committee, AGU

 

Atmospheric and Space Electricity Section

Benjamin Franklin Lecture
Donald R. MacGorman, Cooperative Institute for Mesoscale Meteorological Studies

Atmospheric Sciences Section

Atmospheric Sciences Ascent Award
Benjamin John Murray, University of Leeds
Kerri Pratt, University of Michigan
Nicole Riemer, University of Illinois at Urbana-Champaign
Isla Simpson, National Center for Atmospheric Research

James R. Holton Award
Marysa M. Laguë, Coldwater Lab, University of Saskatchewan

Yoram J. Kaufman Outstanding Research and Unselfish Cooperation Award
Oleg Dubovik, University of Lille 1

Jacob Bjerknes Lecture*
Joyce Penner, University of Michigan

Jule Gregory Charney Lecture*
Wayne H. Schubert, Colorado State University

Future Horizons in Climate Science: Turco Lectureship
Yuan Wang, California Institute of Technology

Biogeosciences Section

Thomas Hilker Early Career Award for Excellence in Biogeosciences
Jennifer B. Glass, Georgia Institute of Technology

Sulzman Award for Excellence in Education and Mentoring
Susan Natali, Woodwell Climate Research Center

William S. and Carelyn Y. Reeburgh Lecture
Edward J. Brook, Oregon State University

Cryosphere Sciences Section

Cryosphere Early Career Award
Brooke Medley, NASA Goddard Space Flight Center

John F. Nye Lecture
Andrew Fowler, University of Limerick

Earth and Planetary Surface Processes Section

G. K. Gilbert Award in Surface Processes
David C. Mohrig, University of Texas at Austin

Luna B. Leopold Early Career Award
Mathieu G. A. Lapôtre, Stanford University

Marguerite T. Williams Award
Nicole M. Gasparini, Tulane University

Robert Sharp Lecture
Mathieu G. A. Lapôtre, Stanford University

Earth and Space Science Informatics Section

Greg Leptoukh Lecture
Charles S. Zender, University of California, Irvine

Education Section

Dorothy LaLonde Stout Education Lecture
Emily H. G. Cooperdock, University of Southern California

Geodesy Section

John Wahr Early Career Award
Lin Liu, Chinese University of Hong Kong
Surendra Adhikari, Jet Propulsion Laboratory, California Institute of Technology

William Bowie Lecture*
Glenn A. Milne, University of Ottawa

Geomagnetism, Paleomagnetism, and Electromagnetism Section

William Gilbert Award
Richard J. Blakely, U.S. Geological Survey

Edward Bullard Lecture*
Barbara Maher, Lancaster University

Global Environmental Change Section

Bert Bolin Global Environmental Change Award and Lecture
Thomas L. Delworth, Geophysical Fluid Dynamics Laboratory, NOAA

Global Environmental Change Early Career Award
Alexandra G. Konings, Stanford University
Bin Zhao, Tsinghua University
Kimberly A. Novick, Indiana University Bloomington

Piers J. Sellers Global Environmental Change Mid-Career Award
Charles D. Koven, Lawrence Berkeley National Laboratory

Stephen Schneider Lecture
Alan Robock, Rutgers University

Tyndall History of Global Environmental Change Lecture
Michael D. Dettinger, Scripps Institution of Oceanography

Hydrology Section

Hydrologic Sciences Early Career Award
Laura E. Condon, University of Arizona
Adrian Adam Harpold, University of Nevada, Reno
Scott Jasechko, University of California, Santa Barbara
Pamela L. Sullivan, Oregon State University

Hydrologic Sciences Award
Jiri Simunek, University of California, Riverside

Walter Langbein Lecture*
John W. Pomeroy, University of Saskatchewan

Paul A. Witherspoon Lecture
Junguo Liu, Southern University of Science and Technology

Mineral and Rock Physics Section

Mineral and Rock Physics Early Career Award
Takayuki Ishii, Center for High Pressure Science and Technology Advanced Research

Mineral and Rock Physics Graduate Research Award
Kosuke Yabe, Earthquake Research Institute, University of Tokyo

John C. Jamieson Student Paper Award
Mingda Lv, Argonne National Laboratory

Natural Hazards Section

Natural Hazards Section Award for Graduate Research
Leah Salditch, Northwestern University

Natural Hazards Early Career Award
Chia-Ying Lee, Columbia University
Daniel Wright, University of Wisconsin–Madison

Gilbert F. White Distinguished Award and Lecture
Gerald E. Galloway, University of Maryland

Near-Surface Geophysics Section

Near-Surface Geophysics Early Career Achievement Award
Ryan Smith, Missouri University of Science and Technology

Nonlinear Geophysics Section

Ed Lorenz Lecture
Valerio Lucarini, University of Reading

Ocean Sciences Section

Ocean Sciences Early Career Award
Malte Jansen, University of Chicago

Ocean Sciences Award
Alistair Adcroft, Princeton University and Geophysical Fluid Dynamics Laboratory, NOAA

Rachel Carson Lecture
Andrea G. Grottoli, Ohio State University

Harald Sverdrup Lecture
Verena Tunnicliffe, University of Victoria

Paleoceanography and Paleoclimatology Section

Willi Dansgaard Award
Aradhna Tripati, University of California, Los Angeles

Harry Elderfield Student Paper Award
Jordan T. Abell, University of Arizona

Nanne Weber Early Career Award
Clara L. Blättler, University of Chicago

Planetary Sciences Section

Ronald Greeley Early Career Award in Planetary Sciences
Timothy A. Goudge, University of Texas at Austin

Fred Whipple Award and Lecture
Paul Schenk, Lunar and Planetary Institute

Eugene Shoemaker Lecture*
Athena Coustenis, Laboratoire d’Etudes Spatiales et d’Instrumentation en Astrophysique, Paris Observatory, Centre National de la Recherche Scientifique, Paris Sciences et Lettres University

Seismology Section

Keiiti Aki Early Career Award
Yihe Huang, University of Michigan

Beno Gutenberg Lecture*
Gregory C. Beroza, Stanford University

Space Physics and Aeronomy Section

Eugene Parker Lecture*
James A. Klimchuk, NASA Goddard Space Flight Center

Fred L. Scarf Award
Luisa Capannolo, Boston University

Space Physics and Aeronomy Richard Carrington Education and Public Outreach Award
Keith M. Groves, Institute for Scientific Research, Boston College
Elizabeth MacDonald, NASA Goddard Space Flight Center

Sunanda and Santimay Basu International Early Career Award in Sun-Earth Systems Science
Chao Yue, Peking University

William B. Hanson Lecture*
Jonathan J. Makela, University of Illinois at Urbana-Champaign

James Van Allen Lecture*
Mei-Ching Hannah Fok, NASA Goddard Space Flight Center

Study of the Earth’s Deep Interior Section

Study of the Earth’s Deep Interior Section Award for Graduate Research
Mingda Lv, Argonne National Laboratory

Tectonophysics Section

Jason Morgan Early Career Award
Juliane Dannberg, University of Florida

Francis Birch Lecture*
Taras Gerya, ETH Zurich

Volcanology, Geochemistry, and Petrology Section

Hisashi Kuno Award
Ming Tang, Peking University

Norman L. Bowen Award and Lecture
Catherine Chauvel, Institut de Physique du Globe de Paris
George W. Bergantz, University of Washington

Reginald Daly Lecture*
Anat Shahar, Carnegie Institution for Science

Joint Award: Geodesy, Seismology, and Tectonophysics Sections

Paul G. Silver Award for Outstanding Scientific Service
Richard W. Allmendinger, Cornell University

Joint Lecture: Biogeosciences and Planetary Sciences Sections

Carl Sagan Lecture
Sarah Stewart Johnson, Georgetown University

Joint Lecture: Paleoceanography and Paleoclimatology and Ocean Sciences Sections

Cesare Emiliani Lecture
Baerbel Hoenisch, Lamont-Doherty Earth Observatory, Columbia University

Carbon Capture Can’t Solve the Climate Problem Without Individual Actions

EOS - Thu, 09/09/2021 - 12:41

This is an authorized Spanish translation of an Eos article. A Mandarin Chinese translation of this article was produced by Wiley.

Geoengineering projects focused on reducing greenhouse gas emissions, such as large-scale tree planting, carbon removal, and carbon storage, can mitigate climate change, but not without the widespread adoption of electric cars, according to a new study.

The new research shows that individual decisions will play an important role in helping the world meet the global targets set by the Paris Agreement, which aim to radically reduce greenhouse gas emissions by 2050.

“We need to make these emissions cuts, and individual action can be an important part of that,” said Emily Murray, an undergraduate astrophysics student at Princeton University and lead author of the study, published in Earth’s Future.

Murray and her adviser, Princeton professor Andrea DiGiorgio, wanted to evaluate how one individual action (reducing carbon dioxide emissions from private gasoline-powered vehicles), when adopted at a global scale, can have an effect comparable to the effects of geoengineering projects.

Murray and DiGiorgio conducted a meta-analysis of the available literature examining the projected overall impacts of widespread electric vehicle adoption. They calculated the amount of carbon emissions that would be cut by replacing fossil fuel–burning vehicles with electric cars, taking into account the additional load these vehicles place on power plants and the emissions those plants release.
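
The accounting described above can be illustrated with a rough, back-of-the-envelope sketch. Every number below is an assumption chosen for illustration, not a figure from the study:

```python
# Back-of-the-envelope sketch of the net-emissions logic described above.
# All constants are illustrative assumptions, not values from the study.

GASOLINE_KG_CO2_PER_LITER = 2.3   # tailpipe emissions per liter burned
GRID_KG_CO2_PER_KWH = 0.4         # assumed average grid carbon intensity

def net_annual_reduction_kg(km_per_year: float,
                            liters_per_100km: float,
                            kwh_per_100km: float) -> float:
    """Net CO2 avoided per vehicle per year when a gasoline car is replaced
    by an electric one, counting the extra load placed on power plants."""
    gasoline = km_per_year / 100 * liters_per_100km * GASOLINE_KG_CO2_PER_LITER
    grid = km_per_year / 100 * kwh_per_100km * GRID_KG_CO2_PER_KWH
    return gasoline - grid

# Example: 15,000 km/year, an 8 L/100 km gasoline car vs. an 18 kWh/100 km EV
print(f"{net_annual_reduction_kg(15_000, 8, 18):.0f} kg CO2 avoided per year")
```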

The researchers then examined several geoengineering methods for removing carbon dioxide from the atmosphere. These included planting trees; a technique called enhanced weathering, which uses natural or artificially created minerals to absorb carbon; and direct carbon dioxide removal techniques that involve sucking carbon out of the air. They also looked at the use of bioenergy consumption combined with carbon capture and storage, and biochar, which stores carbon in the form of charcoal.

Murray and DiGiorgio calculated the amount of carbon these techniques would remove if deployed to their maximum by the year 2050. They found that the simplest carbon removal method, afforestation, was the most effective geoengineering technique over that time frame.

Electric car adoption, however, was not far behind afforestation as a carbon mitigation strategy. If all fossil fuel vehicles were replaced by Teslas and other electric cars by 2034, the impact would be greater than that of the two least effective carbon removal methods combined: enhanced weathering and biochar.

Murray said it is unlikely that all the geoengineering techniques studied in the paper will be implemented at a large scale with the best technology. But even if they were, they would not mitigate all carbon emissions.

She added that electric car adoption is a more realistic climate mitigation strategy, as it is already happening, whereas some of the geoengineering techniques are only now being deployed. (Earth’s Future, https://doi.org/10.1029/2020EF001734, 2021)

—Joshua Rapp Learn (@JoshuaLearn1), Science Writer

This translation was made possible by a partnership with Planeteando.

Tropical Climate Change Is a Puzzle—Could Aerosols Be a Piece?

EOS - Thu, 09/09/2021 - 12:41

The climate in the tropical Pacific can be fickle. Alexey Fedorov can attest: When he began his career in climate dynamics in 1997, it was the strongest El Niño year on record. The climatic changes contributed to both massive flooding and droughts worldwide. The expectation at the time was that future El Niño events would become even stronger, but El Niño has been relatively quiet since then.

The eastern tropical Pacific (ETP) stretches from the southern tip of the Baja California Peninsula to northern Peru. Credit: Gabriela Agurto, Karla Belén Jaramillo, Jenny Antonia Rodríguez, and Elizabeth Andrade, CC BY-SA 4.0

Because El Niño is one temporary extreme of the natural oscillation in tropical Pacific water temperatures and atmospheric circulation, scientists have tried to predict how climate change will affect the average conditions in the tropics and, in turn, the extremes. Climate models have projected long-term warming of the eastern tropical Pacific Ocean—a region that stretches from the southern tip of the Baja California Peninsula to northern Peru—and weakening of the atmospheric circulation above it, both of which are characteristics of El Niño. But throughout the satellite era, researchers have observed an opposite trend.

“It’s one thing if the models get the trend right but the magnitude off. But when they are the opposite, it’s not great,” said Ulla Heede, a Ph.D. student at Yale University.

In a paper published in July in Nature Climate Change, Heede and Fedorov, now a professor of ocean and atmospheric sciences at Yale, showed that a combination of atmospheric aerosols and a thermostat-like mechanism might be keeping the eastern tropical Pacific cooler than expected. They also said the effect is temporary.

Investigating an “Ocean Thermostat”

An important feature of the tropical Pacific is the zonal atmospheric airflow called the Walker circulation. “In simple terms, it’s the trade winds,” said Fedorov. The trade winds blow warm surface water west, causing an upwelling of cold water in the eastern tropical Pacific and an east-west temperature gradient. The temperature gradient and Walker circulation are tightly linked.
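
One common way to quantify this coupling is a zonal sea surface temperature (SST) gradient index: average SST over a western and an eastern equatorial box and take the difference. The minimal sketch below uses Python with xarray; the file name, variable name, and box boundaries are illustrative assumptions, not the study’s actual setup:

```python
# Sketch of a zonal SST gradient index, a common proxy for Walker
# circulation strength. File name, variable name, and box definitions
# are assumptions for illustration.
import xarray as xr

ds = xr.open_dataset("model_sst.nc")  # hypothetical CMIP6-style output
sst = ds["tos"]                       # sea surface temperature

# Average over equatorial boxes (illustrative boundaries, degrees east)
west = sst.sel(lat=slice(-5, 5), lon=slice(110, 180)).mean(["lat", "lon"])
east = sst.sel(lat=slice(-5, 5), lon=slice(180, 280)).mean(["lat", "lon"])

gradient = west - east  # warm pool minus cold tongue
# A positive trend in this index implies a strengthening Walker circulation.
trend = gradient.polyfit(dim="time", deg=1)["polyfit_coefficients"].sel(degree=1)
print(float(trend))
```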

“Over the past 30 years or so, if you look at the trends, you can see a very dramatic strengthening of the Walker circulation,” Fedorov said. “And this is not what the models predict.”

Researchers have thought the discrepancy between models and the observed strengthening might be due to natural climate variability or a built-in ocean thermostat that regulates the eastern tropical Pacific temperature. The latter could occur because the upwelled cold water in the eastern Pacific takes longer to warm relative to the warm surface waters in the western Pacific, which would strengthen the temperature gradient and corresponding Walker circulation.

Heede and Fedorov used 40 models from the Coupled Model Intercomparison Project Phase 6 to see whether the described ocean thermostat regulates eastern tropical Pacific temperature. When they simulated an abrupt carbon dioxide (CO2) increase in the atmosphere, they found that several models exhibited the ocean thermostat. “If [the models] had an ocean thermostat, they tended to have less eastern Pacific warming and less slowdown of the tropical circulation,” Heede said.

But when they ran simulations using realistic historical emissions, the projections differed from those of the abrupt-CO2 simulation. The ocean thermostat might be contributing to the Pacific’s response to CO2, but something else in the emissions was having an impact.

Are Aerosols Responsible?

Because emissions consist of both greenhouse gases and aerosols, Heede thought aerosols could be to blame. Anthropogenic aerosols are harmful pollutants emitted as by-products of combustion that can have a cooling effect because they scatter solar radiation away from Earth’s surface. Aerosols dissipate from the atmosphere faster than greenhouse gases, which can last for centuries, so they tend to have the most potent effects close to where they’re produced. The far-reaching impacts of aerosols, likely acting through ocean and atmospheric circulation, are not as well understood. “I have to be honest,” Fedorov said. “Before Ulla started looking at aerosols, I didn’t think about it, because in the tropical Pacific you rarely think about aerosols’ dynamical role.”

With 12 models, the researchers could isolate the effects of greenhouse gases and aerosols. Greenhouse gas–only simulations projected warming in the Pacific similar to the abrupt-CO2 simulation, whereas aerosol-only simulations projected cooling. When mixed, “the aerosols, on average, tend to cancel out the warming that would otherwise have happened in the equatorial region,” Heede said.

Like Fedorov, Aaron Levine, a research scientist at the Cooperative Institute for Climate, Ocean, and Ecosystem Studies at the University of Washington who was not involved in the study, was surprised to see aerosols affecting the tropical Pacific. “I don’t see a lot of them in the Pacific,” said Levine, “but they’re strong in the Atlantic.”

Levine expects that the impact in the Pacific might be related to its connections to other oceans, particularly the Atlantic, and future studies should expand on this research by including global data from the models. “The paper really focused on the tropics,” he said.

“It’s Not Going to Stay Forever”

Although the models responded to aerosols differently, most of them projected eventual eastern tropical Pacific warming over several decades. “Where I think our paper really has something interesting to say is looking into the future,” Heede said. “No matter whether it was aerosols or the ocean thermostat that’s previously canceled out the warming, it’s not going to stay forever.”

As countries worldwide enact clean-air policies, the aerosols in the atmosphere will diminish, and so will their cooling effect. Without aerosols to mask it, warming in the region could intensify, bringing more extreme weather events.

“I think [aerosols are] another piece of the puzzle in terms of understanding future projections of El Niño and how El Niño is going to change,” said Levine.

—Andrew Chapman (@Andrew7Chapman), Science Writer

State-of-the-Art Technology, Serendipity, and Secrets of Stonehenge

EOS - Wed, 09/08/2021 - 12:04

Stonehenge is an iconic monument that has withstood the tests of time.

Its main architecture is composed of sarsen stones, gray megaliths towering more than 6 meters tall and weighing 18 metric tons. Despite their prominence, little is known about the 52 stones that remain of the roughly 80 erected during the mid-third millennium BCE.

But now, new technology and an unexpected stroke of luck have allowed researchers to analyze a puzzle at the heart of the site: What are these stones made of? Published in PLOS One, the study provides a comprehensive characterization of the physical and chemical makeup of Stonehenge’s sarsens.

“What’s exciting about the new study is that [researchers] have…attacked Stonehenge, as it were, with all this [new technology],” said Mike Pitts, an archaeologist and journalist who led excavations at the site in 1979 and 1980. “And they’re able to extract information at a really, really fine level in a way that was impossible until quite recently.”

Stone Surfaces and Serendipity

David Nash, a physical geographer at the University of Brighton in the United Kingdom, led the study. His team began by analyzing the surface of each sarsen over multiple night shifts and one “very early morning shift” when tourists were not around.

https://eos.org/wp-content/uploads/2021/09/new-stonehenge-geochemistry.mp4

Using a portable X-ray fluorescence spectrometer (“it looks like a big sci-fi ray gun,” Nash said), the researchers took five measurements from each of the 52 stones, making sure to hold perfectly still for 2 minutes each time. The team stood in the dark, cold night with headlamps, trying to find patches of stone without lichen cover. Save for a few security guards, there was nobody else around, Nash said. “So, yeah, it’s a bit creepy.”

The team’s measurements, careful though they might have been, could go only so deep: They could not provide information about what lies beneath the surface. And because Stonehenge is strictly protected by the government, the researchers could not take any samples of the stones’ interiors.

But then serendipity struck: As his team was wrapping up the fieldwork at Stonehenge, Nash received an email from the English Heritage Trust, the nonprofit organization that manages Stonehenge and hundreds of other historic sites in Britain.

“They emailed me and said, ‘We understand that you’re doing work on the chemistry of the stones at the moment. Could you give us a ring?’” Nash said. “My immediate reaction was, ‘Oh, God, what have we done wrong?’”

English Heritage shared information about the massive 1958 restoration project at Stonehenge. The project reerected three stones at the site, including Stone 58, a large upright sarsen that had toppled in 1797. To reinforce a fissure, three cores were drilled through Stone 58 to install metal rods.

David Nash of the University of Brighton analyzes a core extracted from Stone 58 at Stonehenge. Credit: Sam Frost/English Heritage

One core was gifted to Robert Phillips, a worker at the drilling company involved in the project. (Part of a second core was later uncovered at nearby Salisbury Museum in a box labeled “Treasure Box.” The location of the third core is still unknown.) Phillips hung the core in a protective tube in his office until his retirement, and kept it through his subsequent moves to New York, Illinois, California, and, finally, Florida. As Phillips approached 90, he sought to return this important artifact and had it delivered to English Heritage in 2018.

Phillips’s Stone 58 core, whose existence was previously unknown to any of the researchers, was lent to Nash’s team, which was able to sample and examine it in detail.

“It’s the first time that we’ve been able to look inside one of the stones at Stonehenge,” Nash said.

“They just did everything imaginable with it,” said Pitts, who was not involved in the study. “I mean, it has to be the most analyzed piece of rock on Earth.”

Remarkably Pure and Incredibly Durable

Scrutinizing the cores with state-of-the-art petrographic, mineralogical, and geochemical techniques revealed a reason why the long-standing sarsen stones at Stonehenge may be so enduring.

The core was 99.7% silica—almost entirely quartz, through and through—purer than any sarsen stone Nash had worked on. Under the microscope, its sand-sized quartz grains were tightly packed together, supporting one another. The grains were then coated in an overgrowth cement—at least 16 different growth layers that could be counted almost like tree rings—which produced an “interlocking mosaic of quartz crystals that bind the stone together,” Nash said.

Cathodoluminescence imaging of a sarsen stone reveals the outlines of sand grains (pale blue, black) and multiple layers of quartz cement (red). Credit: Trustees of the Natural History Museum

“That’s probably why the sarsens were so big and have been so durable,” Nash said. “Because it’s an incredibly well cemented stone.”

The research also indicated that the dull gray Stonehenge we see today is probably not what it looked like when it was first built.

“When the stones were originally raised, they were dressed, they were cleaned up on the outside,” Nash said. “The fresh rock would have looked a creamy white color, and it must have been amazing.”

Data about Stone 58 can be applied to most of the other sarsens and to where they originated: In a 2020 paper published in Science Advances, Nash and his colleagues found that Stone 58 is geochemically similar to and representative of 50 of the remaining 52 sarsens at Stonehenge. These sarsens share geochemical signatures with sarsens in West Woods in Wiltshire, about 25 kilometers north of Stonehenge—the stones’ most probable source.

The large sarsen stones at West Woods in Wiltshire are the probable source of most sarsens used to construct nearby Stonehenge. Credit: Katy Whitaker/Historic England/University of Reading

The new study also lays the groundwork for future research by making all the data open-access.

“We were basically being given access to an absolutely unique sample that was of national importance,” Nash said. “And what we wanted to make sure we did was analyze it using every single modern technique that we could, with the view being that for future studies of Stonehenge, if other people are doing more work…there was a big suite of data that people could use.”

“Having access to this stone, you realize that you’re really privileged to be able to do this work,” he added. “So you want to do it right because you can’t go back.”

—Richard J. Sima (@richardsima), Science Writer

Building a Better River Delta

EOS - Wed, 09/08/2021 - 12:01

The Mississippi River Delta is sinking, its coastal wetlands are disappearing, and its coastal marshes are drowning. The delta has been aggressively engineered for about 200 years, with the building of diversions (like levees or alternate channel paths) to control the flow of water. As with many deltas, those diversions are often built near the coast. But a novel approach has identified a better location: cities.

The strategies behind most river diversions are driven by economic concerns and “really ignore the natural process of the river to want to avulse at a specific place with some average frequency,” said Andrew Moodie, a postdoctoral researcher at the University of Texas at Austin. The process of avulsion, in which a river jumps its banks to flow on a steeper slope, is crucial to flooding and river dynamics. But diversions notwithstanding, that water will eventually go where it wants, which can result in flooding, loss of homes and businesses, and even loss of life.

Moodie and his colleagues’ new research, published in the Proceedings of the National Academy of Sciences of the United States of America, describes a new framework for selecting the optimal placement for river diversions. “Because of the interaction between the river wanting to do what rivers do and a society that wants to minimize damage to itself, there emerges a best location that does both of those things,” said Moodie. “It lets the river do the closest to what it wants to do, but it also minimizes the damage to the society.”

Downtown Diversions

Moodie’s framework combines two models. The first simulates river and sediment movement scenarios to predict the timing of avulsions; the second estimates the societal benefits of diversions by factoring in the cost of flooding damage and the costs of diversion engineering, like buying land and construction, as well as annual revenues (from agriculture, for example).
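
In spirit, coupling the two models amounts to scoring each candidate diversion site by a net benefit that weighs avoided flood damage and protected revenue against construction cost. The sketch below is purely illustrative; the site names, dollar figures, and 50-year horizon are assumptions, not the authors’ model:

```python
# Illustrative net-benefit scoring of candidate diversion sites.
# Numbers and names are made up; this is not the published framework.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    construction_cost: float       # engineering plus land acquisition
    avoidable_flood_damage: float  # damage expected if no diversion is built
    annual_revenue: float          # e.g., agriculture the diversion protects

def net_benefit(site: Site, horizon_years: int = 50) -> float:
    """Societal benefit of a diversion at this site over the horizon."""
    return (site.avoidable_flood_damage
            + site.annual_revenue * horizon_years
            - site.construction_cost)

candidates = [
    Site("rural reach", 1e8, 2e8, 1e6),
    Site("urban reach", 5e8, 3e9, 5e6),
]
best = max(candidates, key=net_benefit)
print(best.name)  # with these invented numbers, the costlier urban site wins
```

With numbers like these, the more expensive urban diversion wins because the damages it prevents dwarf its construction cost, echoing Moodie’s point that protecting dense infrastructure can justify costlier projects.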

Moodie’s framework points to urban areas as often better locations for river diversions than the rural and suburban areas where most river diversions are constructed. This finding counters a prevailing theory that cities are less sustainable and may have to be abandoned as seas continue to rise. Placing diversions closer to cities makes economic sense because losses from flooded farmland may last a season, but floods in urban areas can cause longer-term damage. “It’s actually easier to justify more expensive projects because you have to protect this infrastructure now or you lose so many more benefits from it,” he said.

The biggest challenge to implementing a new river diversion framework is involving different parties and their needs, which are diverse and sometimes conflicting. Stakeholders include rural, suburban, and urban residents and landowners; agricultural businesses; water engineers; municipal, state, and regional governments; environmental organizations; and local community leaders.

“River management has a complex history that we’re trying to fit into,” Moodie said. “We are geomorphologists, we’re scientists, but we recognize and want to stress the importance of contextualizing our work within the societies that it matters to.”

The Mississippi River Delta has been engineered for hundreds of years and is sinking—but a new approach to river diversions outlined by Andrew Moodie (pictured) could help protect it. Credit: Andrew Moodie

Jaap Nienhuis, an associate professor of geosciences at Utrecht University in the Netherlands who was not involved in the new research, said the study is elegant and novel. “It’s one of the first papers that tries to couple human action and also landscape dynamics for river deltas,” he said.

The framework could help river engineers and others responsible for river management by offering a discrete set of parameters, said Nienhuis. “It gives people an overview of things to consider without it getting out of hand in terms of the things you need to worry about.”

Sea Level Rise

Flooding is not, of course, a problem limited to the Mississippi Delta; it’s happening around the world because of sea level rise, storm surge increases, and human interventions, like sea walls, which can make the problems worse.

While land subsides, climate change is contributing to sea level rise. Climate change is also contributing to more frequent and violent coastal storms. Infrastructure like dams and levees protects communities from floods but interrupts sediment flow, which deltas need to form natural wetlands that protect inland regions.

“By protecting us from floods, we’ve also eliminated any natural land area gain or elevation gain in deltas,” Nienhuis said. “So we’re definitely trying to shift the focus to how can we have diversions, for example, that would make deltas gain some land and be more resilient against sea level rise.”

Sea level rise itself is a variable Nienhuis would like to see incorporated into Moodie’s framework. The current models “keep sea level constant, and [researchers] let the delta grow,” he said. “It would be interesting and I think a relatively straightforward follow-up to have sea level rise and see what that does to these findings.”

—Danielle Beurteaux (@daniellebeurt), Science Writer

Glacier Structures: History Written in the Ice

EOS - Wed, 09/08/2021 - 12:00

A consequence of the climate emergency is the melting of glaciers around the world. Melting is revealing glaciers’ internal structures in remarkable detail as snow is removed and the surface is stripped bare. We can now see that ice structures preserve an intricate record of present and past glacier dynamics, one that commonly cannot be ascertained by other means. By combining structural glaciological techniques with satellite remote sensing, the flow characteristics of large and remote ice masses, on Earth and beyond, can be determined, potentially revealing flow histories that extend over centuries to millennia. A recent article published in Reviews of Geophysics gives a synopsis of nearly two centuries’ worth of structural glaciological investigations and highlights the major challenges yet to be tackled.

Why is it important to understand the dynamics of glaciers and ice sheets and how they are changing?

Approximately 10 percent of Earth’s land surface is covered by ice, which has a significant influence on global climate and ocean cycles, as well as on human activities.

However, the behavior of glaciers and ice sheets is fundamentally changing in response to human-induced global heating. Increasing air and ocean temperatures are melting ice masses, reducing their size.

These processes are leading to the collapse of ice shelves, altering how glaciers flow, and increasing ice discharge into the ocean.

It is important to understand these dynamic changes in order to predict potential future impacts on humanity. On a global scale, such impacts include raising of sea level and changing climate and ocean cycles, while on a regional scale changes in water security are an increasing threat to mountain communities and the vast populations that live downstream.

Why is it important that glaciers be studied in four dimensions?

We need to consider the study of glacier structures as a four-dimensional exercise. We observe and map glacier structures spatially in two dimensions, in plan view, to determine the range, pattern, and inter-relationships of the different structures. The third dimension combines recording the orientation of structures at the surface with ice-penetrating radar measurements. The fourth dimension is time, which requires us to look back at historical records revealed in old photographs and satellite imagery and figure out how structural changes take place from what is visible today.

What is the structure of a typical glacier?

All glaciers, regardless of size, are composed of layers of snow that build up in the upper reaches of an ice mass, become buried, and transform into glacier ice as they are increasingly compressed by the overlying column of snow.

We refer to the initial snow layering as stratification, and this represents the ‘building blocks’ of a glacier. The majority of other structures are the result of how the glacier flows under gravity, a process that produces a complex array of structures. Firstly, where the ice deforms in a brittle manner, crevasses (V-shaped clefts in the ice) and a variety of fractures develop, which can be a major hazard for glacier travel. Secondly, ice also deforms in a ductile manner, leading to the development of a variety of folds, foliation (a layered structure commonly visible in glaciers), and a variety of lesser-known structures.

The terminus of Fountain Glacier, Bylot Island, Canadian Arctic, in which a wide range of structures are displayed, the most prominent of which are the traces of former crevasses. Credit: Michael J. Hambrey

How does our understanding of structures found in rocks help us understand the structure and behavior of ice, and vice versa?

We consider glacier ice to be equivalent to a metamorphic rock because it deforms close to its melting point in both a brittle and ductile manner. While the study of structures in rocks has developed into a highly sophisticated, mathematically and physically based discipline, concepts in structural glaciology have constantly lagged behind those in geology.

However, if we apply structural geological principles to glaciers we can advance our understanding of glacier dynamics, notably because the same range of structures occur in both rock and ice.

Conversely, we can observe structures in glaciers forming on human timescales, reflecting the same processes that occur in rocks many kilometers below the surface of the Earth on a timescale of millions of years. Glacier ice deforms up to six orders of magnitude faster than rocks in active mountain belts, and a glacier can therefore be considered an analog of rock deformation. Consequently, glaciers can be used as natural laboratories that enable geologists to directly observe the formation and development of structures.

What does glacier structure tell us about changes on different spatial and temporal scales?

The advent of satellite remote sensing has revolutionized our understanding of inaccessible glaciers and ice sheets. However, satellite measurements only extend back for approximately half a century. As ice structures form in response to how a glacier flows, they preserve a record of glacier dynamics that covers the time that ice follows a path through the glacier system. In comparatively small valley glaciers, ice residence times can span centuries, increasing to millennia or even a few million years in the Antarctic Ice Sheet. Ice structures therefore have the potential to inform scientists about how glaciers have behaved in the past and how they have changed over time.

How might a structural approach be of use in other sub-disciplines of glaciology?

Ice structures have a fundamental, yet often overlooked, influence over many other aspects of glaciology. For example, glacier structures play a large role in the routing of meltwater through a glacier, dictating how water gets to the glacier bed, which in turn controls how quickly the glacier flows. Structures also strongly influence how debris is transported in glaciers, and this has an impact on how many glacial landforms are interpreted. Glacier structures also control the distribution of microbial life living on the surface of glaciers, and structural glaciology is a useful tool for exploring the mechanisms of glacier recession. All of these topics have a great deal of untapped potential for further study.

The corrugated surface of the valley glacier Austre Brøggerbreen in Svalbard (Norwegian High Arctic) in summer 2013. The corrugations are a product of weathering of different ice structures, especially longitudinal foliation and the traces of former crevasses. Credit: Michael J. Hambrey

Where is further research, data gathering, or modeling needed to advance our understanding of glacier dynamics?

The principal challenge for structural glaciologists is to understand how ice structures develop spatially, at depth, and over time. Direct measurement of deformation in valley glaciers goes back over half a century, but it is a laborious, field-based process. Modern remote sensing–based techniques, such as feature tracking and interferometry, have the potential to expand the range of deformation studies over both large and small ice masses.

Some advances have been made in replicating the formation of glacier structures in computer models, especially regarding pervasive structures such as foliation. Modeling other structures, such as fractures, has proved more elusive, but the challenge can be met using more powerful computers.

With the rapidly growing amount and quality of remote sensing data, the scope for structural studies has grown enormously. We have only scratched the surface of the potential research opportunities here, but it is clear that structural principles can be applied to unraveling the history of many inaccessible ice masses on Earth, and even on Mars.

Scaling down to the microscopic level, there also remains the untapped potential for structural glaciology to illuminate the recently emerged field of cryospheric microbiology. These new opportunities demonstrate that there has never been a more exciting time to engage with the field of structural glaciology.

—Stephen J. A. Jennings (stephen.ja.jennings@gmail.com; ORCID: 0000-0003-4255-4522), Polar-Geo-Lab, Department of Geography, Masaryk University, Czech Republic; and Michael J. Hambrey (ORCID: 0000-0003-0662-1783), Centre for Glaciology, Department of Geography and Earth Sciences, Aberystwyth University, United Kingdom

Understanding Aurora Formation with ESA’s Cluster Mission

EOS - Tue, 09/07/2021 - 13:22

Earth’s aurorae form when charged particles from the magnetosphere strike molecules in the atmosphere, energizing or even ionizing them. As the molecules relax to the ground state, they emit photons of visible light in characteristic colors. The colliding particles—largely electrons—are accelerated by localized electric fields parallel to the local magnetic field that occur in a region spanning several Earth radii.

Evidence of these electric fields has been provided by sounding rocket and spacecraft missions dating to as far back as the 1960s, yet no definitive formation mechanism has been accepted. To properly discriminate between a number of hypotheses, researchers need a better understanding of the spatial and temporal distribution and evolution of these fields. When the European Space Agency’s (ESA) Cluster mission lowered its perigee in 2008, these observations became possible.

Cluster consists of four identical spacecraft flying with separations that can vary from tens of kilometers to tens of thousands of kilometers. Simultaneous observations from the four craft enable space physicists to deduce the 3D structure of the electric field.

Marklund and Lindqvist collect and summarize the contributions of Cluster to our understanding of the auroral acceleration region (AAR), the area of space in which the above-described processes take place.

By collecting a large number of Cluster transits through this region, physicists have deduced that the AAR can generally be found somewhere between 1 and 4.4 Earth radii above the surface, with the bulk of the acceleration taking place in the lower third. Despite this relatively broad “statistical AAR,” the acceleration region at any given moment is usually thin; in one observation, for example, the AAR was confined to an altitude range of 0.4 Earth radius, whereas the actual layer was likely much thinner than that. The observations cannot uniquely determine the thickness of the actual layer, which could be as small as the order of 1 kilometer, the authors say. Such structures are observed to remain stable for minutes at a time.

Cluster measurements also have shed light on the connection between the observed shape of the electron acceleration potential and the underlying plasma environment. So-called S-shaped potentials arise in the presence of sharp plasma density transitions, whereas U-shaped ones are related to more diffuse boundaries. However, the dynamic nature of space plasma means that the morphology of a boundary can shift on timescales of minutes, as exemplified by a case study.

In sum, 2 decades of Cluster observations have significantly improved our understanding of the processes—both local and broad—that result in our planet’s beautiful aurorae. With the mission extended through 2022, we can expect more insight in the coming years. (Journal of Geophysical Research: Space Physics, https://doi.org/10.1029/2021JA029497, 2021)

—Morgan Rehnberg, Science Writer

Roadside Ditches Can Effectively Remove Nitrogen

EOS - Tue, 09/07/2021 - 13:22

This is an authorized translation of an Eos article.

Roadside ditches collect the rain that falls on road surfaces, along with runoff from lawns and fields. Although ditches are ubiquitous features of the landscape, they have far more potential than storm drains. In fact, ditches are human-made lowlands that often play the role of wetlands, with fluctuating water levels and abundant vegetation and microbial life.

In these engineered landscapes, microbes and vegetation have the capacity to strip nitrogen from inflowing water, removing it from the system. In the process, nitrogen removal in ditches can reduce the downstream effects of excess nutrients, such as algal blooms and dead zones.

But how effective are ditches at removing nitrogen? Until now, little has been known.

In a new study, Tatariw et al. compared nitrogen removal in ditches adjacent to forests, cities, and farmland, along with the kinds of microbes living in each setting. They looked at three different watersheds near Mobile Bay, Alabama, and sampled 96 different ditches running along two-lane highways. Each watershed represented ditches along forested, developed, or agricultural land.

To characterize the ditches, the team examined plant biomass, inorganic nitrogen levels in the water, and soil characteristics. Because the microbes are too small to identify even with a microscope, the scientists used 16S rRNA genes to identify and analyze the different microbes in each sample.

Finally, the researchers calculated the nitrate removal potential of each sample by collecting soil, adding water, and making slurries of ditch material. They added a stable nitrogen isotope (15N-nitrate) to the slurries to track how much nitrogen the microbes in each sample removed.
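
The removal potential reported below reduces to a simple tracer mass balance: the fraction of the added 15N-labeled nitrate that disappears over the incubation. A minimal sketch, with made-up numbers for illustration:

```python
# Tracer mass balance for nitrate removal potential. The concentrations
# below are invented for illustration, not data from the study.
def nitrate_removal_percent(initial_15no3: float, final_15no3: float) -> float:
    """Percentage of the added 15N-nitrate tracer removed during the
    slurry incubation (concentrations in any consistent unit)."""
    return 100 * (initial_15no3 - final_15no3) / initial_15no3

print(nitrate_removal_percent(50.0, 5.5))  # -> 89.0
```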

The study found that microbes in the ditches had nitrate (NO3−) removal potentials averaging as high as 89%. Although soil characteristics were similar across ditches, the team noted that certain microbes, including ammonia-oxidizing taxa and other groups associated with human-modified landscapes, were more abundant in the urban and agricultural ditches, where human activity is widespread.

Overall, ditches were found to have nitrogen removal potentials comparable to those of many natural ecosystems, such as wetlands and rivers. The new research suggests that roadside ditches may be important sites for removing nitrogen from the environment. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2020JG006115, 2021)

—Sarah Derouin, Science Writer

This translation was made by Wiley.

Recognizing Geology’s Colonial History for Better Policy Today

EOS - Tue, 09/07/2021 - 13:21

At the University of Minnesota (UMN), we are reckoning with the story we tell about geology in our state. In the 19th century, the belief in manifest destiny drove white settlers to expand rapidly into the West, leading to the widespread removal of Indigenous Peoples from their homelands, genocide, and harm to their knowledge systems and lifeways. Geological mapping played a significant role in identifying which lands were profitable for U.S. settlement through gold and other natural resource extraction. The Minnesota Geologic and Natural History Survey was founded in 1872 and restarted in 1911 as the Minnesota Geological Survey (MGS) under UMN’s Newton Horace Winchell School of Earth and Environmental Sciences. The original purpose of the survey was to economically evaluate the “mineral kingdom” of Minnesota; today, the MGS mission is to identify and support stewardship of water, land, and mineral resources. The program was named for and first led by Newton Horace Winchell—a pioneering geologist discussed in most UMN geology courses. Winchell led mapping surveys, sometimes accompanied by the U.S. military, including George Armstrong Custer, into Indigenous land; these surveys directly led to mining explorations, white settlement, and, eventually, U.S. takeover of these lands through violence and coercion.

But geologists—including the ones serving at MGS today—learn about Winchell’s discoveries stripped from the violence that followed or made them possible. Although geologic mapping of the state contributed to advances in the stewardship of natural resources and public health, we must acknowledge the cost of these benefits. As two white geologists in the MGS and the UMN School of Earth and Environmental Sciences, we and our colleagues are making efforts to integrate lessons from our regional history into the policies that govern our relationships with Indigenous Peoples. Not only will this critical reflection acknowledge racist actions and the immense pain caused by them, but it will also allow the MGS to conduct ongoing work more justly, by collaborating with tribal neighbors to decide how and where MGS performs its mapping.

Geologic Mapping and Land Dispossession

First, we must look back at our history, starting with the U.S. government–funded expeditions to map west of the Mississippi River in the early 19th century. These expeditions included naturalists who described the surrounding nature and geology in works credited today for advancing the knowledge of the geography of Minnesota. These surveys inspired settlers to move to Dakota and Ojibwe homelands in what the U.S. government began calling the territory of Minnesota on the basis of the Dakota name, Mni Sóta Maḳoce.

These surveys were not the first scientific understandings of these landscapes. Indigenous Peoples have always had their own knowledge systems and sciences of Earth [see Daniel, 2019; Evans, 2020; Cartier, 2019; Reano and Ridgeway, 2015]. To the Dakota and Ojibwe, some rocks and metals such as pipestone and copper hold deep spiritual importance and animacy. Indigenous place-names and maps also reveal a rich understanding of place [see Brooks, 2018; Smith, 2018]. Generations of European and American explorers relied extensively on Ojibwe and Dakota guides to navigate the waterways and landscapes of Minnesota.

In 1837, the U.S. government began promulgating a series of treaties that forced the Dakota and Ojibwe people to cede most of their land. One concession would lead directly to another. A geological report by David Dale Owen, published in 1852, mapped the mineral potential of land ceded in the 1851 Dakota Land Cession Treaties, as well as land still held by the Dakota and Ojibwe. Owen documented, for example, unceded tracts of bedrock along the north shore of Lake Superior in the Arrowhead region of Minnesota. This report likely aided mining companies in persuading the federal government to acquire the land—which they did 2 years later in 1854.

Minnesota was granted statehood in 1858, and in 1865, the Minnesota legislature appointed Henry H. Eames as the first state geologist. Eames produced two annual reports that focused on the region north of Lake Superior, where he claimed to have found a major gold deposit. In the heady gold rush that followed, a “quasi-military organization” made up of former soldiers recently returned from the Civil War set up shop as the first gold mining company at Lake Vermilion. They settled in unceded lands where the Bois Forte Band of Chippewa resided. To avoid violence, the Bois Forte Band ceded their land surrounding the lake and were forced to accept a smaller reservation farther northwest. But the “second California,” as Eames boasted to potential financiers in New York, never materialized because Eames’s claims of gold turned out to be fraudulent. The damage was done, however, and geologic mapping and mining research around the lake turned to focus on the nearby massive iron ore deposits.

Indigenous land dispossession not only resulted from geological mapping; it actually funded some of the surveys. The Morrill Land-Grant Act handed over 145 square miles (375 square kilometers), 98% of it ceded Dakota territory, to the State of Minnesota, which later assigned the benefits of the act to the University of Minnesota in 1868. Additional lands were granted specifically to fund the Minnesota Geologic and Natural History Survey in 1873. The federal government paid the Dakota pennies on the dollar relative to the land’s appraised value at the time, turning it into wealth that the UMN continues to benefit from today.

Our Stories of Geology

Winchell’s legacy is still discussed in modern textbooks and biographies published as recently as 2020. These works largely excuse or ignore the racism embedded in his views and the ethics of his science, treating them as products of a different time. Michi Saagiig Nishnaabeg scholar, writer, and activist Leanne Simpson points out in her book As We Have Always Done, however, that this type of thinking normalizes the white settler perspective and erases the perspective of Indigenous Peoples. She asks, “Whose historical context and whose standards” are we using to evaluate history? Winchell’s legacy must also be evaluated against the historical context and standards of Indigenous Peoples.

Early in his career as survey director, Winchell and other scientists accompanied George Armstrong Custer in the 1874 Black Hills Expedition, an expedition that advanced the dispossession of Lakota homelands. Sioux scholar Nick Estes explains in his book Our History Is the Future that the Black Hills—He Sapa in the Lakota language—are “the heart of everything that is” to the Lakota people and sacred to more than 50 Indigenous nations. The 1868 Treaty of Fort Laramie banned U.S. citizens from entering the Lakota reservation encompassing the Black Hills except for those employed and authorized by the U.S. government to do so. Custer’s military obtained this authorization by claiming the expedition was for reconnaissance, whereas the geologists, including Winchell, said they were looking for fossils. The expedition’s miners actually struck gold, and the rush of white settlement and U.S. land claims that followed were in clear violation of the treaty. Although Winchell himself may not have been seeking gold, he chose to participate in a military expedition on Lakota territory prominently connected with gold exploration and the drive for U.S. expansion. Yet the ethics of this research are not widely discussed, if at all, in the history of geology presented at UMN or in biographies.

Without this context, Winchell and his research appear unattached to the ongoing genocide and land dispossession of Indigenous Peoples as the U.S. expanded into the Midwest during the 19th century. Geology is not neutral within the politics of colonization, a point further developed and expanded in Kathryn Yusoff’s book A Billion Black Anthropocenes or None. How we tell the story of geology either contributes to the status quo of colonization or challenges these privileged narratives to open up the possibility for a more just future.

Opportunities for Change

In 2019, we at the UMN School of Earth and Environmental Sciences were given an opportunity to better understand our science’s history when one of the Minnesota tribal nations asked for their lands to be excluded from an MGS geologic map in progress. They believed that publicly available geologic information could jeopardize protection of their lands and that the MGS would not inform the tribe of all the potential uses of data collected. At first this request bewildered MGS geologists, who had not been properly taught our discipline’s history, but it then spurred movement within the organization to learn about our region’s treaties and the relationship between tribal sovereignty and geologic mapping.

The MGS researchers learned that each tribal nation within the footprint of Minnesota and throughout the U.S. retains its sovereignty and, along with all Indigenous Peoples, has the right to self-determination. These rights mean that each tribal nation can and should make its own decision on what geologic information the MGS can collect and how geologic mapping might affect its people. With this knowledge, the MGS began to create its first policy for mapping and collecting data on tribal lands. We began by reaching out to the UMN senior director of American Indian Tribal Nation Relations, who advised the MGS to reach out to each tribe’s environmental resources program directors. In these meetings, MGS leadership shared its mission along with the mapping projects that would affect the tribe’s lands. Next, the tribal environmental resources directors asked their tribal governments to vote on their involvement in the MGS project. Some tribes wanted updated geological maps from the MGS, whereas other tribes stated they already had the capacity to make their own maps and thus declined permission for the MGS to survey their land.

The new policy states that MGS will offer opportunities to tribes to consult on surveys and other operational activity that may affect their land, water, or other natural resources, even if that activity is not directly on their land. MGS now requires explicit permission from tribal governments to collect new data on tribal lands and will also, upon request, “gray out” tribal lands on geologic maps and related products. Although the MGS has customized its policies to honor specific requests made by the sovereign Indigenous nations in our region, we modeled them generally on policies of both the U.S. Geological Survey and the Minnesota Pollution Control Agency.

As our experience demonstrates, geologists must understand how the many discoveries in our discipline were actually made; otherwise, our work will continue to further the colonization of Indigenous Peoples. We must learn the complete history of geology’s relationship with colonization, invite reflection from Indigenous Peoples, and require our scientific community to understand and respect tribal sovereignty through our policies. This work must extend to our geoscience classrooms as we train the next generation of Earth and environmental scientists, consultants, and regulators. Students, faculty, staff, and researchers within the MGS and the UMN School of Earth and Environmental Sciences have raised these demands, supported by national efforts calling for this work.

The academic department within the UMN School of Earth and Environmental Sciences is also beginning to take action. The department is in the process of recommending American Indian studies courses for undergraduates and graduate students on our website and in advising meetings. Faculty and students continue to assess whether collections contain specimens that were stolen from Indigenous Peoples and are developing repatriation procedures. Indigenous Earth scientists and scholars are being invited to speak at department geology seminars as well as lunchtime justice, diversity, equity, and inclusion talks. Students and faculty are reviewing and editing department curriculum to ensure that it respectfully includes Indigenous Peoples’ histories and the colonial legacy of geoscience and geoengineering in Minnesota. Faculty and students are working with Indigenous scholars to create a land and water acknowledgment and develop policies for our department to work more ethically on Indigenous homelands. These actions are only the first steps, and we have a lot left to learn on this journey.

MGS geologists have embarked on composing a detailed report on their program’s past and ongoing colonial practices, which will be published on the MGS website. We also are continuing to refine our policies for geologic mapping on tribal lands. Building and maintaining respectful relationships with tribal governments will require time and accountability from the MGS, as another recent UMN-tribal collaboration has demonstrated. We still have many questions to discuss together: What should the MGS do with historical maps, data, and projects published without tribal consent? Can the MGS provide services to tribes in stewarding their land, mineral, and groundwater resources? How has UMN geological research supported mining industries at the expense of Indigenous Peoples’ treaty rights, economic prosperity, self-determination, and access to sacred sites? How can the MGS redress harms and incorporate the perspectives and knowledges of Indigenous Peoples? Fundamentally, how can the MGS and the geological community develop and follow ethics that account for and disrupt this discipline’s entanglement in colonization?

Reimagining Our Science Together

Colonialism is ingrained within much of geology’s foundation. Geologic institutions must be forthright in recognizing the role scientists in our field have played—and continue to play—in dispossessing Indigenous Peoples of their lands around the world and, consequently, in disrupting their lifeways and knowledge systems. We also have an opportunity to make the practice of our science more just, as well as deeper and more expansive, by critically understanding our past, stopping and redressing harms, and building respectful relationships with Indigenous Peoples for an equitable exchange of knowledges. Our work at the MGS and the UMN Earth and Environmental Sciences Department is far from done, but we look forward to reimagining our science and how it can support the well-being and self-determination of Indigenous Peoples.

Acknowledgments

This paper would not have been possible without the guidance, support, and feedback from Mike Dockry (UMN forestry professor), Margaret Watkins (Grand Portage Band of Lake Superior Chippewa water quality specialist), Kari Hedin (Fond du Lac Band of Lake Superior Chippewa watershed specialist), Tony Runkel (MGS lead geologist), Hannah Jo King (UMN natural resources, sciences, and management graduate student), Crystal Ng (UMN hydrology professor), Laura Paynter (UMN public policy graduate student), Erick Moore (head of UMN Archives), the UMN American Indian and Indigenous Studies writing workshop, the UMN Institute for Advanced Studies Land-Grant/Land-Grab Fellowship cohort, the Earth and Environmental Sciences Department Unlearning Racism in Geoscience (URGE) Pods, and the Kawe Gidaa-naanaagadawendaamin Manoomin project team.

Tibetan Plateau Lakes as Heat Flux Hot Spots

EOS - Fri, 09/03/2021 - 11:59

The largest alpine lake system in the world sits atop the Qinghai-Tibet Plateau, commonly known as the Tibetan Plateau, which is the highest and largest plateau in the world. Researchers know the lakes influence the transfer of heat between the land and atmosphere, affecting regional temperatures and precipitation. But little is known about the physical properties and thermal dynamics of Tibetan lakes, especially during the winter months when the lakes are covered in ice.

In a new study, Kirillin et al. looked at China’s Ngoring Lake—the largest freshwater lake (610 square kilometers) on the plateau—which is typically covered in ice from December through mid-April. The team moored temperature, pressure, and radiation loggers in one of the deepest parts of the lake in September 2015. They observed an anomalous warming trend after the lake surface froze over, as solar radiation at the surface warmed the upper water layers under the ice. Strong convective mixing left Ngoring Lake fully mixed down to its mean depth within a month of full ice cover.

In most ice-topped lakes, water temperatures typically remain below the temperature of maximum density, but here the authors found that the water temperature exceeded the temperature of maximum freshwater density (about 4°C) by the middle of the ice season, which accelerated the ice melt at the end of the winter season. As the ice broke up, water temperature dropped by nearly 1°C, releasing some 500 watts per square meter of heat into the atmosphere in just 1 or 2 days.
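As a rough consistency check (not a calculation from the study), the reported flux follows from a simple heat budget: cooling a mixed layer of depth $h$ by $\Delta T$ over a time $\Delta t$ releases

$$Q \approx \frac{\rho c_p\,\Delta T\,h}{\Delta t} = \frac{(1000~\mathrm{kg\,m^{-3}})(4186~\mathrm{J\,kg^{-1}\,K^{-1}})(1~\mathrm{K})(10~\mathrm{m})}{86{,}400~\mathrm{s}} \approx 480~\mathrm{W\,m^{-2}},$$

on the order of the ~500 watts per square meter reported. The 10-meter mixed-layer depth and 1-day timescale here are illustrative assumptions, not values from the paper; a deeper layer or a 2-day release shifts the estimate proportionally.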

The study demonstrates that lakes do not lie dormant under ice. But the impacts extend beyond local lake effects; taken together, the thousands of lakes across the plateau could be heat flux hot spots after ice melt, releasing the heat absorbed from solar radiation and driving changes in temperatures, convection, and water mass flux with potential impacts at even global scales. (Geophysical Research Letters, https://doi.org/10.1029/2021GL093429, 2021)

—Kate Wheeling, Science Writer

Satellites Allow Scientists to Dive into Milky Seas

EOS - Fri, 09/03/2021 - 11:57

For centuries, sailors have reported sightings of large patches of glowing oceans, stretching like snowfields from horizon to horizon. The ephemeral phenomenon, incidents of which can grow larger than some states, has long evaded close examination by scientists. But now, thanks to a little assistance from space, researchers may finally be able to dive into these milky seas.

Milky seas are associated with bioluminescence, light created by living organisms using biochemical triggers. Most well-known examples of bioluminescence are short-lived flashes, like those emitted by fireflies. But milky seas last for days or even weeks, a steady glow of light in the dark ocean visible only on moonless nights. Scientists suspect tiny, bioluminescent bacteria are responsible, but because glimpses of milky seas are so fleeting, researchers have had virtually no opportunity to directly examine the phenomenon.

Hunting for milky seas from space in near-real time may change that. Researchers using two NOAA satellites—the Suomi National Polar-orbiting Partnership (NPP) and the Joint Polar Satellite System (JPSS)—have developed the ability to quickly identify milky seas, potentially opening the possibility for study before the glow disappears.

“Now we have a way of proactively identifying these candidate areas of milky seas,” said Steve Miller, a senior research scientist at Colorado State University and lead author of the new study, which was published in Scientific Reports. “If we do have assets in the area, the assets could be forward-deployed in a SWAT-team-like response.”

Rapid observations of the fleeting phenomena could help answer several lingering mysteries around milky seas, including how and why they form and why they are so rare.

“We really want to get out to one of these things and sample it and understand the structure,” Miller said.

Turning On the Lights

Milky seas have been described by sailors for more than 200 years. Reports characterize them as having a pale glow, and travel through them is likened to moving across a snowfield or cloud tops. Ships’ propellers create a dark wake as they move through the glowing water. The glow is so faint that moonlight renders it invisible to the human eye. The unusual waters seem more like the stuff of science fiction than science; indeed, they played a role in the Jules Verne novel Twenty Thousand Leagues Under the Seas.

Scientists experienced the spectacle only once, when R/V Lima chanced upon glowing waters in the Arabian Sea in 1985. Water samples from the ship identified algae covered with the luminous bacteria Vibrio harveyi, leading scientists to hypothesize that milky seas are associated with large collections of organic material.

Small groups of V. harveyi and other similar bacteria lack the faint shimmer found in milky seas. But once the population grows massive enough, the bacteria switch on their luminescence by the process of quorum sensing. Each individual bacterium seeds the water with a chemical secretion known as an autoinducer. Only after the emissions reach a certain density do the bacteria begin to glow.

“You know when you see these lights that there are a lot of luminescent bacteria there,” said Kenneth Nealson, who along with Woody Hastings identified the phenomenon in the 1960s and was not a part of the new study. Nealson, a professor emeritus at the University of Southern California, estimated it would take around 10 million bacteria per milliliter of water to turn on the lights.

Gathering so many bacteria in one part of the ocean requires a significant source of food, and scientists suspect the bacteria are feasting on the remains of massive algal blooms. “If you give them something good to eat, they’ll double about every half hour,” Nealson said. “It doesn’t take more than a day for them to have well over 10 million per milliliter.”
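Nealson’s one-day figure is consistent with simple exponential growth. Assuming, purely for illustration, a seed population of $10^3$ cells per milliliter (a number not given in the article), the time to reach the $10^7$ quorum threshold at a doubling time $T = 0.5$ hour is

$$t = T \log_2\!\frac{N}{N_0} = (0.5~\mathrm{h})\,\log_2\!\left(\frac{10^7}{10^3}\right) \approx 6.6~\mathrm{h},$$

comfortably within a day even for much smaller starting populations.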

Unlike the algal blooms that drive phenomena like red tides, which tend to drive fish away, milky seas may work to attract fish. Fish eat the bacteria as well as the dying algae, and being eaten doesn’t end the bacteria’s life cycle.

“For [the bacteria], the inside of a fish’s stomach is a favorable environment,” said Steve Haddock, a biologist at Monterey Bay Aquarium Research Institute in California and one of the authors of the new research. “They can live inside [a fish’s] stomach just like bacteria live inside our bodies.”

Seas from Space

This isn’t Miller’s first foray into using satellites to hunt for milky seas. After colleagues questioned whether bioluminescent activity could be detected from space, Miller wondered what sort of ocean activity might be visible. He found a report from Lima that listed its coordinates and the date and time of the 3-day-long encounter. Using this information, he hunted through archival data collected by the U.S. Defense Meteorological Satellite Program, a constellation of polar-orbiting satellites surveying Earth in visible and near-infrared light. In 2005, he and Haddock, along with two other researchers, reported the first detection of a milky sea from space.

“It was really difficult to find that milky sea in that older generation of data,” Miller said. He attributed the success to the clear records kept by Lima. “There was no way to pick it out on my own independently.” It turned out the ship had navigated through only a small part of the 15,400-square-kilometer glowing sea, which stretched to roughly the size of the state of Connecticut.

Encouraged by his success, Miller turned his attention to the newly launched Suomi NPP and its Day/Night Band (DNB) instrument, which breaks down light into gradients. Suomi NPP can sift through lights from cities, wildfires, and the atmospheric glow caused as ultraviolet light breaks apart molecules. Finding the faint light from milky seas required looking for dim seas and pulling out the short-lived events.

“It was a decade of learning,” Miller said of the time spent culling transient events in search of milky seas.

After determining that most historical sightings of the glowing bacteria over the past 200 years occurred in the Indian Ocean and around Indonesia, researchers concentrated their hunt on that region. Moonlit nights were eliminated because they were too bright. Ultimately, Miller and his coauthors identified a dozen milky sea incidents between 2012 and 2021.

The largest milky sea satellite spotting occurred south of Java in 2019. The DNB detected a dimly lit sea on 26 July and continued to track it until 9 August, when the Moon once again drowned out the bacteria. Imagery confirmed that the luminescent sea spanned more than 100,000 square kilometers. Estimates place the number of bacteria involved in the event at more than 10 sextillion (a sextillion is 10²¹, or a billion trillion), making it the largest event on record.
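That count is roughly what the quorum threshold implies for a glowing patch of this extent. As an order-of-magnitude sketch, and assuming the luminous layer occupies only about the top centimeter of water (an assumption; as discussed below, the glow’s depth is unknown),

$$N \approx n\,A\,d = (10^7~\mathrm{cells\,mL^{-1}}) \times (10^{15}~\mathrm{cm^2}) \times (1~\mathrm{cm}) = 10^{22}~\mathrm{cells},$$

using Nealson’s ~$10^7$ cells per milliliter (1 milliliter is 1 cubic centimeter) and 100,000 square kilometers = $10^{15}$ square centimeters.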

“This is just an inconceivable number of bacteria participating in that event,” Haddock said.

Satellite observations also allowed researchers to take stock of the conditions of the ocean when milky seas are present. The new research measured details like water temperature and the amount of chlorophyll present.

“There’s no doubt that there’s a connection between a high level of chlorophyll and milky seas,” Nealson said. “Nobody’s been closer to an answer for the phenomena than [Miller, Haddock, and their colleagues]; they did a really wonderful job.”

Biologist Peter Herring, a retired professor at the National Oceanography Centre in Southampton, U.K., agreed. “Almost all of the information on milky seas up to the 1990s was anecdotal from people on ships,” Herring said. “Now we have remote observations from satellites showing exactly where these phenomena are happening and how they change with time. That’s a major step forward.”

Diving into the Seas

Although satellite imagery is an important tool, Miller hopes that the project will eventually lead to real-time observations. There are a lot of unanswered questions about milky seas, some quite basic. For instance, scientists aren’t sure whether the bacteria form a thin film on the surface or stretch deeper beneath the water. Nor are they certain that algal blooms are the primary food source for the bacteria.

“If you were in the middle of one of these blooms, a lot of the things that we talk about would become obviously right or wrong,” Nealson said. “That’s very unusual in science, that you could get such a clear answer.”

But real-time, in-person study may continue to prove elusive. There are no major ocean facilities near the region where milky seas seem to be most prevalent, and the seas are rife with pirates and other dangers that keep many research vessels away.

Nor have photos or videos ever reliably captured milky seas. The closest attempt was in 2010, when a crew tried to take a photo of the glowing sea using a flash, which promptly washed out the dim phenomenon. Miller hopes more commercial crews can be equipped with cameras specially designed to photograph bioluminescence.

In the meantime, Miller hopes to one day experience the fleeting mystery in person.

“I’ve always wanted to dive into a milky sea and see if it’s still glowing under the surface,” he said.

—Nola Taylor Tillman (@NolaTRedd), Science Writer

Longer Days Likely Boosted Earth’s Early Oxygen

EOS - Fri, 09/03/2021 - 11:57

Gregory Dick, a professor of Earth and environmental sciences at the University of Michigan, had just completed a public lecture on a Saturday morning in 2016 when a colleague asked him a question.

Dick and postdoctoral researcher Judith Klatt were studying the role of cyanobacteria in oxygenating Earth’s atmosphere billions of years ago. Klatt, now at the Max Planck Institute for Marine Microbiology, had found that mats of photosynthetic cyanobacteria begin to release oxygen into the water only after a long lag early in the day as they compete with other microbes for sunlight.

Brian Arbic, a Michigan oceanographer who was attending the lecture, asked if the changing length of Earth’s day could have affected photosynthesis and hence the amount of oxygen released into the atmosphere. “I hadn’t ever heard about the changing day length,” recalled Dick. “We got excited about it, and Judith and I started working on the problem.”

Five years later, Klatt, Arbic, Dick, and colleagues reported their results in Nature Geoscience: The increase in the length of the day could have played a key role in allowing cyanobacterial mats to pump oxygen into the air, setting the stage for the development of complex life.

“One of the most enduring questions in the Earth sciences is how Earth became the oxygen-rich planet that we could evolve on,” said Klatt, the lead author of the study. “It is clear that photosynthesis is, and always has been, the only significant source of oxygen on our planet. And oxygenic photosynthesis was ‘invented’ by cyanobacteria.”

Although fossil records indicate that cyanobacteria first appeared on Earth at least 3.5 billion years ago, atmospheric oxygen didn’t begin to appear until about 2.4 billion years ago. Scientists have wondered why there was such a long lag between the two milestones.

Combining Experiments, Modeling

Klatt and her colleagues probed that question through a combination of laboratory experiments and modeling.

For the experiments, they collected samples of bacterial mats from the bottom of Middle Island Sinkhole, a collapsed limestone cave in Lake Huron, about 3 kilometers off the northeastern coast of Michigan’s lower peninsula. Conditions in the sinkhole approximate those of shallow coastal waters all across Earth billions of years ago. Competing layers of microbes jockey for position during the day, with purple oxygen-producing cyanobacteria rising to the top layer during the morning hours.

Similar mats “might have dominated Earth’s coasts for much of its history and were likely the arena for oxygenic photosynthesis,” said Klatt. “Nowadays, we only find microbial mats in ‘extreme’ environments….One such place is the Middle Island Sinkhole.”

“This is one of those rare places that we might call a process analog,” said Woodward Fischer, a professor of geobiology at the California Institute of Technology who wasn’t involved in the research. “They’re not looking at an environment that’s an exact reproduction [of ancient mats], but it’s a place where processes are playing out that remind us of that.”

In Dick’s laboratory, the team exposed the mats to day-night cycles of different lengths, corresponding to the length of Earth’s day at different times in its past.

Judith Klatt scrapes a sample of a Middle Island Sinkhole microbial mat from a sediment core into a jar for study. Credit: Jim Erickson, University of Michigan News

Shortly after the formation of the Moon, more than 4 billion years ago, the day was just 6 hours long. Lunar tides, however, function as a brake, slowing Earth’s rotation and making the days longer. (To balance the books, the Moon moves farther from Earth; it’s currently receding at about 3.8 centimeters per year.)

By 2.4 billion years ago—a time that corresponds to the Great Oxidation Event, the first big pulse of oxygen into the atmosphere—the day had extended to about 21 hours. It stayed at that length (perhaps, Arbic said, because of a counteracting thermal atmospheric tide that was unrelated to the lunar tides) for more than a billion years. At the end of that period, the lunar tides regained supremacy, and the day lengthened again, to about 24 hours. That increase corresponds to the second big jump in atmospheric oxygen, the Neoproterozoic Oxygenation Event, about 600 million years ago.

The Length of the Day Matters

The experiments showed that although total oxygen production by the photosynthetic cyanobacteria was about the same regardless of day length, the physics of diffusion limited the amount of oxygen that entered the water for up to several hours after sunrise. Short days left little time for that process to play out—by the time the oxygen factory had shifted into high gear, the Sun was setting, and it was time to shut down for the night. With a longer day, though, more oxygen diffused into the water.

“The total amount of sunlight is the same whether the day is 16 hours long or 24 hours or whatever,” said Arbic. “It’s just that with a shorter day, you turn it off and on more often. Since there’s a lag in the process, that’s why it matters if you have a longer day.”
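The interaction between the diffusive lag and day length can be illustrated with a toy calculation. The sketch below is not the authors’ model: it simply assumes the oxygen export flux ramps toward a maximum with an arbitrary 2-hour time constant after sunrise, is zero at night, and is integrated over one day-night cycle for several day lengths.

```python
import numpy as np

def mean_oxygen_export(day_length_h, tau_h=2.0, f_max=1.0):
    """Mean export rate over one day-night cycle for a toy lagged-flux model.

    During daylight (assumed to be half the cycle), the export flux ramps
    up as f(t) = f_max * (1 - exp(-t / tau)), with t hours after sunrise;
    at night the flux is zero.
    """
    lit = day_length_h / 2.0
    # Closed-form integral of the ramping flux from sunrise to sunset.
    total = f_max * (lit - tau_h * (1.0 - np.exp(-lit / tau_h)))
    return total / day_length_h  # average over the full cycle

for day in (12, 18, 21, 24):
    print(f"{day:2d}-hour day: mean export = {mean_oxygen_export(day):.3f} x f_max")
```

Although every cycle is half daylight, the mean export rate rises with day length because the slow post-sunrise ramp-up repeats less often; with these illustrative numbers, a 24-hour day exports roughly 20% more oxygen than a 12-hour day.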

The researchers then extended their work by developing a general model of mat systems and changing daylight conditions. The model considered the physiology of various mat systems, differing photosynthesis rates, and conditions in the water column, among other factors. “All scenarios showed a dependency of net productivity on day length,” Klatt said.

“The modeling was really exciting because it showed that this mechanism [for producing oxygen] doesn’t have anything to do with the particular behaviors of the organisms in one site versus another or in the modern world versus the ancient world,” said Dick. “We think this is a really robust effect that should operate anywhere you have oxygen being produced in these microbial mats.”

“This is part of the picture, but not the whole picture,” of Earth’s oxygenation history, Fischer said. “This mechanism extends just to mats living on the seafloor, and we don’t have a perfect geological record. But it’s absolutely part of the picture.”

—Damond Benningfield (damond5916@att.net), Science Writer

The Challenges of Forecasting Small, But Mighty, Polar Lows

EOS - Fri, 09/03/2021 - 11:53

Sailors in Scandinavian countries have told tales about dangerous encounters with small, intense storms since time immemorial. These maritime storms, known as polar lows, are believed to have claimed many small boats in North Atlantic waters [Rasmussen and Turner, 2003]. In a recent case in October 2001, strong winds associated with a polar low that developed near the Norwegian island of Vannøya capsized a boat, causing the death of one of its two crew members.

Polar lows are not only found in the North Atlantic but also are common in the North Pacific and in the Southern Ocean. In Japan, for example, tragedy struck in December 1986, when strong winds from a polar low caused a train crossing the Amarube Bridge to derail and fall from the tracks onto a factory below, killing six people [Yanase et al., 2016].

Forecasting these systems remains challenging because of their relatively small size, rapid formation, and short duration (most last less than 2 days). However, as global warming and receding sea ice make the Arctic more accessible and increase the vulnerability of coastal populations and ecosystems, it will become increasingly important to forecast these dangerous storms accurately. Studying the effects of a warming climate on where these storms form, as well as on their frequency, lifetime, and intensity, is also vital because this work will help determine which regions will be the most affected by polar lows in the future.

Dangerous High-Latitude Storms

Polar lows are a little-known part of the wider family of polar cyclones, which include polar mesoscale cyclones less than 1,000 kilometers in diameter as well as larger, synoptic-scale cyclones. With diameters between 200 and 1,000 kilometers—and most often 300–400 kilometers—polar lows are a subset of mesoscale cyclones.

The relatively small storms differ from other polar mesoscale cyclones in that they develop over the ocean and are especially intense. Polar lows are often associated with severe weather like heavy snow showers and strong winds that can reach hurricane force. Thus, they sometimes lead to poor visibility, large waves, and snow avalanches in mountainous coastal regions. Changes in meteorological conditions can be abrupt, with winds increasing from breeze to gale force in less than 10 minutes, for example. Such severe weather can force affected countries to close roads and airports.

Polar lows can even cause the formation of rare, extreme storm waves known as rogue waves. One such wave, named the Draupner wave, was observed in the North Sea in 1995 and reached a height of 25.6 meters [Cavaleri et al., 2016].

With their high winds and waves, polar lows threaten many communities and ecosystems with extreme weather as well as potential coastal erosion and effects on ocean primary productivity. They also pose significant risks to marine-based industries, such as fishing and onshore and offshore resource extraction. Roughly 25% of the world’s natural gas and 10% of its oil are produced in the Arctic, and despite the strong influence of fossil fuel use on continuing climate change, interest in further extraction of offshore resources in this region is growing.

In addition, as summer sea ice extent decreases because of climate change, shipping seasons will become longer, and new shipping routes will open up, making the Arctic more accessible and potentially increasing the likelihood of storm-related accidents. The possibility of shipping accidents or other disasters causing oil spills in the Arctic is particularly concerning because the lack of infrastructure in this remote region means that it could take a long time to respond to spills. With so many concerns and at-risk communities, there is a pressing need to improve forecasting of polar lows and other extreme Arctic weather to reduce risk.

Where Do Polar Lows Form?

Polar lows are predominantly a cold season phenomenon, developing near the sea ice edge and the coasts of snow-covered continents during cold air outbreaks, when very cold air over the ice or landmass flows out over the relatively warm ocean.

The area around Tromsø, Norway, seen here, is affected by polar lows during the Northern Hemisphere winter. Credit: Marta Moreno Ibáñez

Southern Hemisphere polar lows, which have received less attention from researchers, develop mainly near the Antarctic sea ice edge, far from human settlements, and they tend to be less intense than their northern counterparts. Northern Hemisphere polar lows develop above about 40°N, thus affecting several Arctic countries. They are more frequent in the North Atlantic than in the North Pacific [Stoll et al., 2018], mainly forming in the Nordic Seas, the Denmark Strait, the Labrador Sea, and Hudson Bay. Every year, some of the polar lows that develop in the Nordic Seas make landfall on the coast of Norway, affecting its coastal population.

In the North Pacific, polar lows primarily form over the Sea of Okhotsk, the Sea of Japan, the Bering Sea, and the Gulf of Alaska. Densely populated areas of Japan are especially vulnerable when marine cold air outbreaks in the Sea of Japan lead to polar lows.

An Elusive Phenomenon

The origins and characteristics of polar lows largely remained a mystery until the beginning of the satellite era in the 1960s. With resolution in atmospheric models being too coarse to capture the storms until relatively recently, satellite infrared images have been key to identifying polar lows. These images have shown that some polar lows are shaped like commas, similar to midlatitude synoptic-scale (i.e., extratropical) cyclones, whereas others are spiraliform like hurricanes (i.e., tropical cyclones; Figure 1).

Fig. 1. These satellite infrared images show (a) a comma-shaped polar low over the Norwegian Sea (which made landfall in Norway), captured by the Advanced Very High Resolution Radiometer, and (b) a polar low with a spiraliform signature over the Barents Sea (which made landfall in Novaya Zemlya, Russia), captured by the Moderate Resolution Imaging Spectroradiometer. The blue outlining represents the coastline. Source: Moreno-Ibáñez et al. [2021], CC BY-NC 4.0

How polar lows develop was long debated among researchers. Some argued that polar lows resembled small versions of synoptic-scale cyclones, which develop because of baroclinic instabilities arising from strong horizontal temperature gradients in the atmosphere. Others claimed they were akin to hurricanes, which intensify as a result of convection and are typically about 500 kilometers in diameter. Today, the research community agrees that development mechanisms of polar lows are complex and include some processes involved in the formation of synoptic-scale cyclones and some involved in hurricane formation. Among these processes are transfers of sensible heat from the ocean surface to the atmosphere through the effects of turbulent air motion, which play roles in the formation and intensification of polar lows.

In general, weather forecasting in polar regions remains challenging because atmospheric models still struggle to correctly represent certain key processes, such as air-sea interactions, in these regions. Because of their small size and short lifetimes, polar lows are particularly hard to forecast compared with larger polar cyclones. Compounding the challenge is the fact that these systems develop over the ocean at high latitudes, where conventional observations (e.g., from surface weather stations, buoys, and aircraft) are scarce.

With the advent of high-resolution nonhydrostatic atmospheric models with grid meshes of less than about 10 kilometers (which started to be implemented for weather forecasting in the 2000s), however, polar low forecasts have improved notably. Unlike models that assume hydrostatic conditions, nonhydrostatic models do not assume balance between the vertical pressure gradient force, which results from the decrease of atmospheric pressure with altitude, and the force of gravity—a balance that does not occur in intense small-scale systems. Compared to coarser models, high-resolution models better represent processes that occur near the surface (e.g., the influence of topography on wind) as well as convection, which play important roles in polar low development. Moreover, high-resolution models can better resolve the structure of polar lows (e.g., strong wind gradients).
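Concretely, hydrostatic models impose the balance

$$\frac{\partial p}{\partial z} = -\rho g,$$

where $p$ is pressure, $\rho$ is air density, and $g$ is gravitational acceleration, whereas nonhydrostatic models retain the vertical acceleration term in the vertical momentum equation. This is textbook atmospheric dynamics rather than anything specific to the models cited here, but it is what allows kilometer-scale simulations to represent the strong vertical motions of convection explicitly.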

Nevertheless, model improvements are still needed to forecast the trajectories and intensities of polar lows accurately [Moreno-Ibáñez et al., 2021]. For instance, the parameterization of turbulence is based on approximations that are not valid at the kilometer scale. In addition, more conventional observations of atmospheric variables at high latitudes, such as winds and temperatures at different levels of the atmosphere, are required to improve the initial conditions fed into the models.

Several major scientific questions about these storms also remain unanswered: What are the best objective criteria (e.g., size, intensity, lifetime) for identifying and tracking polar lows using storm tracking algorithms? What is the main trigger for polar low development? And, most intriguing, what is the role of polar lows in the climate system?

Actors in the Climate System

Little is known about how polar lows contribute to Earth’s climate system. A few studies have analyzed the effects of polar lows on the ocean, but results so far are inconclusive. On the one hand, the large sensible heat fluxes—which can reach more than 1,000 watts per square meter—from the ocean surface to the atmosphere that favor the development of these cyclones lead to cooling of the ocean surface [e.g., Føre and Nordeng, 2012]. On the other hand, the strong winds of polar lows induce upper-ocean mixing, which can warm the ocean surface in regions where sea surface temperatures are colder than underlying waters [Wu, 2021].

The overall warming or cooling effect of polar lows on the ocean surface may influence the formation rate of deep water, a major component of Earth’s global ocean circulatory system. In one study, researchers found that polar mesoscale cyclones increase ocean convection and stretch convection to deeper depths [Condron and Renfrew, 2013]. However, this study used only a coupled ocean–sea ice model, relying on a parameterization to represent the effects (e.g., winds) of polar mesoscale cyclones in the ocean-ice model rather than explicitly resolving the cyclones. Therefore, the interactions between the ocean and the atmosphere, which are relevant for deepwater formation, were not represented. This tantalizing, but hardly definitive, result highlights the need for further study of polar lows’ interactions with the ocean and climate.

Polar Lows in a Warmer Climate

The continuing decreases in Arctic sea ice extent and snow cover on land projected to occur with global warming, as well as increases in sea surface temperatures, will undoubtedly affect the climatology of polar lows. In the North Atlantic, polar lows have been projected to decrease in frequency, and the regions where they develop are expected to shift northward as sea ice retreats [Romero and Emanuel, 2017]. This shift means that newly opened Arctic shipping routes will not be spared from these storms.

We do not know yet what will happen in other regions because research investigating climate change impacts on the frequency, lifetime, intensity, and genesis areas of polar lows is still at an incipient stage. The few studies conducted so far have used dynamical or statistical downscaling methods to produce high-resolution information about the relatively small, localized phenomenon of polar lows from low-resolution data (e.g., from global climate models)—approaches that require far fewer computing resources than performing global, high-resolution climate simulations.

Unfortunately, current coarse-grained global climate models cannot resolve small-scale phenomena like polar lows. The typical resolution of the models included in the Coupled Model Intercomparison Project Phase 5 (CMIP5), endorsed by the World Climate Research Programme in 2008, was 150 kilometers for the atmosphere and 1° (i.e., 111 kilometers at the equator) for the ocean. As part of CMIP6, a High Resolution Model Intercomparison Project has been developed [Haarsma et al., 2016], including models with grid meshes ranging from 25 to 50 kilometers for the atmosphere and 10 to 25 kilometers for the ocean. These resolutions are fine enough to enable study of some mesoscale eddies in the atmosphere and the ocean [Hewitt et al., 2020], and important weather phenomena, such as tropical cyclones, can also be simulated [e.g., Roberts et al., 2020].

Nevertheless, atmospheric models at this resolution are still too coarse to resolve most polar lows. Moreover, the resolution of these ocean models is not high enough to resolve mesoscale eddies that develop poleward of about 50° latitude [Hewitt et al., 2020], so some mesoscale air-sea interactions cannot be adequately represented. Mesoscale air-sea interactions also affect sea ice formation, which influences where polar lows form. The recent Intergovernmental Panel on Climate Change report indicates that there is low confidence in projections of future regional evolution of sea ice from CMIP6 models.

Interdisciplinary Research Needed

Considering the interactions among the atmosphere, ocean, and sea ice involved in polar low development, the importance of interdisciplinary collaboration in polar low research cannot be overstated. Close cooperation among atmospheric scientists, oceanographers, and sea ice scientists is needed to enable a complete understanding of polar lows and their role in the climate system.

Improving forecasts and longer-term projections of polar lows requires coupling of high-resolution atmosphere, ocean, and sea ice models. High-resolution coupled model forecasts of polar lows are already practicable. With continuing increases in computational capabilities, it may become feasible to use coupled high-resolution regional climate models and variable-resolution global climate models to better study how polar low activity may change in a warming climate and the impact of polar lows on ocean circulation. Such interdisciplinary research will also help us better anticipate and avoid damaging effects of these small, but mighty, polar storms on people and productivity.

Acknowledgments

The author thanks René Laprise and Philippe Gachon, both affiliated with the Centre for the Study and Simulation of Regional-Scale Climate, University of Quebec in Montreal, for their constructive comments, which helped improve this article.

Telling the Stories Behind the Science

EOS - Thu, 09/02/2021 - 12:30

AGU journals and books have captured research in Earth and space science for over a century, providing a documented record of scientific discovery. There is another history, however, which has not been as well documented: the stories of how that scientific research was accomplished. These are the stories that might be told in a department coffee room or recounted after-hours at a scientific meeting, often passed down informally from one generation of scientists to the next.

AGU launched a new journal, Perspectives of Earth and Space Scientists, to capture these stories. Perspectives is a collection of memoirs, essays, and insights by AGU Fellows and other invited authors reflecting on important scientific discoveries, advances, and events in Earth and space science, focusing on the process of scientific discovery. All articles are open access and are intended to be read and understood by the wider geosciences community and the science-interested public, both as documentation of the past history of our fields and as inspiration for future scientists.

Scientific journals tend to record the what of research and often skip over the why or how, yet these are often the most important aspects for young scientists. These stories address how challenges were met, how obstacles were overcome, and how funding was obtained. Research papers tend to avoid an author’s personality, except obliquely, but Perspectives stories revel in the personalities, revealing how deals were made, alliances forged, and sometimes how conflicts were resolved. This is often what young scientists want and need to know to succeed in their research.

Although Perspectives articles do not focus solely on the scientific research, they are still very clearly scientific articles, and as such they cite past research, contain reference lists, are peer-reviewed, and are themselves citable publications.

Because Perspectives articles contain a balance of both scientific and personal history, blended in an engaging story, authors often find these articles more difficult to write than a traditional paper. As we have all encountered, good storytelling is a challenging art form, often mastered only after years of practice. A paper may therefore go through a few rounds of revision before its story becomes sufficiently impactful.

I don’t use the word “story” casually. Humans evolved to learn effectively through the telling of stories, as cultural memory was passed down within societies through storytelling for thousands of years before the written word was invented. Within the most recent pedagogical advances in science education, as exemplified by the phenomenon-based learning of the K-12 Next Generation Science Standards, student understanding is best attained when the science is structured around engaging storylines that address relevant and observable phenomena.

In the context of Perspectives, a good story is not a biography or Wikipedia entry or prosaic retelling of one’s CV. A story needs a thesis, a plotline, and usually some degree of character development (i.e., of the author). This story should be truthful and fair and have a takeaway message that can be of use to future scientists. Beyond that, however, there are many different approaches that an author can take. How were you drawn to your chosen field? Were there critical events or turning points in your career? What obstacles did you overcome? How did a particular research field or scientific program evolve? What are some highlights and reflections on the current status of your field, and where do you think it is going? Some degree of humor is often welcome but not required. Humility is always a necessity. The articles published thus far span a wide range of formats.

The first round of Perspectives articles was solicited from AGU Fellows, but we now seek further submissions from a broad diversity of author identities, backgrounds, and career pathways to capture the full diversity of the current AGU membership and to inspire researchers from all backgrounds.

Articles are published by invitation only, but we welcome proposals. If you feel that you have a personal scientific story to document and disseminate, please send an article proposal for consideration by the Editorial Board. There are no charges for publishing articles in Perspectives, and all articles are published with an open access license.

—Michael Wysession (mwysession@wustl.edu; 0000-0003-4711-3443), Editor in Chief, Perspectives of Earth and Space Scientists, and Department of Earth and Planetary Sciences, Washington University in St. Louis

How the “Best Accidental Climate Treaty” Stopped Runaway Climate Change

EOS - Thu, 09/02/2021 - 12:29

The international treaty that phased out the production of ozone-depleting chemicals has prevented between 0.65°C and 1°C of global warming, according to research.

The study also showed that carbon stored in vegetation through photosynthesis would have dropped by 30% without the treaty, which came into force in 1989.

Researchers from the United Kingdom, New Zealand, and the United States wrote in Nature that the Montreal Protocol was essential in protecting carbon stored in plants. Studies in the polar regions have shown that high-energy ultraviolet rays (UVB) reduce plant biomass and damage DNA. Forests and soil currently absorb 30% of human carbon dioxide emissions.

“At the ends of our simulations, which we finished around 2100, the amount of carbon which is being taken up by plants is 15% the value of our control world where the Montreal Protocol is enacted,” said lead author and atmospheric scientist Paul Young of Lancaster University.

In the simulation, the UVB radiation is so intense that plants in the midlatitudes stop taking up a net increase in carbon.

Plants in the tropics fare better, but humid forests would have 60% less ozone overhead than before, a state much worse than was ever observed in the Antarctic ozone hole.

A “World Avoided”

The study used a chemistry climate model, a weather-generating tool, a land surface model, and a carbon cycling model. It links ozone loss with declines in the carbon sink in plants for the first time.

Chlorofluorocarbons (CFCs), ozone-depleting chemicals phased out by the Montreal Protocol, are potent greenhouse gases. The study estimated that CFCs would warm the planet an additional 1.7°C by 2100. Taken together, the damage from UVB radiation and the greenhouse effect of CFCs would add an additional 2.5°C warming by the century’s end. Today, the world has warmed, on average, 1.1°C at the surface, leading to more frequent droughts, heat waves, and extreme precipitation.

Carbon dioxide levels in the atmosphere would also reach 827 parts per million by the end of the century, roughly double the amount of carbon dioxide today (~412 parts per million).

The work analyzed three different scenarios: The first assumes that ozone-depleting substances stayed below 1960 levels when massive production kicked in. The second assumes that ozone-depleting chemicals peaked in the late 1980s before tapering off. The last assumes that ozone-depleting chemicals increase in the atmosphere every year by 3% through 2100.

The last scenario, called the “World Avoided,” assumes not only that the Montreal Protocol never happened but also that humans had no idea CFCs were harming ozone, even when the effects would become clear in the 2040s. The models also assume one kind of UVB damage to all vegetation, when in reality, plants react differently.

“Change Is Possible”

The ozone layer over Antarctica has stabilized and is expected to recover this century. Credit: Amy Moran/NASA Goddard Space Flight Center

“The Montreal Protocol is regarded as one of the most successful global environmental treaties,” said University of Leeds atmospheric scientist Martyn Chipperfield, who was not involved in the research. “CFCs and other ozone-depleting substances are potent greenhouse gases, and the Montreal Protocol is known for having real benefits in addressing climate change by removing previous levels of high CFCs from the atmosphere.”

The Kigali Amendment to the Montreal Protocol in 2016 brought climate change to the forefront. Countries agreed to gradually phase out hydrofluorocarbons (HFCs), which are used in applications such as air conditioning and fire extinguishing systems. HFCs originally replaced hydrochlorofluorocarbons (HCFCs) and CFCs because they do not harm ozone. Yet HFCs are potent greenhouse gases.

The Montreal Protocol was the “best accidental climate treaty,” said Young. “It is an example of where science discovered there was a problem, and the world acted on that problem.”

Injecting sulfate aerosols into the stratosphere has been proposed as one geoengineering solution to slow global warming. “People are seriously talking about this because it’s one of the most plausible geoengineering mechanisms, yet that does destroy ozone,” Young said. Calculating the harm to the carbon cycle is “the obvious follow-up experiment for us.”

The research highlights the importance of the U.N. Climate Change Conference of the Parties (COP26) this fall, which will determine the success of worldwide climate targets.

Immediate and rapid reductions in greenhouse gases are necessary to stop the most damaging consequences of climate change, according to the Intergovernmental Panel on Climate Change.

—Jenessa Duncombe (@jrdscience), Staff Writer

Heat Pumps Can Lower Home Emissions, but Not Everywhere

EOS - Thu, 09/02/2021 - 12:25

In 1855, engineer Peter von Rittinger was concerned with salt production. He was building a device that could evaporate water from brine more efficiently than available methods. Later iterations of this device, the heat pump, would become tools to slow climate change. Today heat pumps aim to replace a home’s in situ oil or gas consumption with cleaner electricity use.

Researchers recently found that wider installation of residential heat pumps for space heating could lower greenhouse gas emissions. The results, published in Environmental Research Letters, showed that heat pumps would reduce emissions for two thirds of households and financially benefit a third of U.S. homeowners.

But only around 10% of homes use heat pumps, which pump heat out of the house in summer and into the house during winter. “The majority of heating in buildings, as well as hot water and cooking, relies on fossil fuels burned on site,” said Michael Waite, associate research scientist at Columbia University who was not involved in the new study. To reduce emissions, homeowners need to replace such heating systems. “The only direct way of doing that is through electrification of those uses,” said Waite.

Pros and Cons

But wide-scale heat pump adoption may have unintended, undesirable consequences. Thomas Deetjen, a research associate at the University of Texas at Austin, and his coauthors wanted to see which circumstances make heat pumps a wise choice for homeowners and society.

Using tools from the National Renewable Energy Laboratory (NREL), they simulated outcomes of widespread heat pump adoption. They modeled 400 locally representative single-family homes in each of 55 cities. To model the electric grid, the researchers assumed moderate decarbonization of the grid (a 45% decline in emissions over the 15-year lifetime of a heat pump).

Researchers evaluated the effects on homeowners, comparing the costs of heat pump installation with energy cost savings. They also analyzed changes in carbon dioxide emissions and air pollutants, putting a dollar amount on climate and health damages. Climate damages included costs associated with climate change–driven natural hazards such as flooding and wildfire. Health damages included premature deaths due to air pollution.
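The per-home accounting described above can be summarized in a small sketch. Every number below is a placeholder rather than a value from the paper; only the structure (netting annualized installation costs against energy savings and adding monetized climate and health damages) follows the study’s description.

```python
def annual_net_benefit(install_cost, lifetime_yr, fuel_cost_saved,
                       elec_cost_added, climate_damages_avoided,
                       health_damages_avoided):
    """Annual net benefit (dollars) of a heat pump for one home.

    Avoided-damage terms are positive when the heat pump reduces damages
    and negative when added electricity generation increases them.
    """
    annualized_install = install_cost / lifetime_yr  # discounting ignored
    private_benefit = fuel_cost_saved - elec_cost_added - annualized_install
    public_benefit = climate_damages_avoided + health_damages_avoided
    return private_benefit + public_benefit

# A hypothetical household (all values illustrative only):
print(annual_net_benefit(install_cost=6000, lifetime_yr=15,
                         fuel_cost_saved=900, elec_cost_added=500,
                         climate_damages_avoided=120,
                         health_damages_avoided=40))
```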

“The key finding is that for around a third of the single-family homes in the U.S., if you installed the heat pump, you would reduce environmental and health damages,” said Parth Vaishnav, an assistant professor at the School for Environment and Sustainability at the University of Michigan and a coauthor of the paper. Installing heat pumps would avoid $600 million in health damages and $1.7 billion in climate damages each year. It would also directly save homeowners money on energy costs. They also found that for all homes, assuming moderate electric grid decarbonization, heat pump use cut greenhouse gas emissions.

But heat pump installation did have other consequences. “Heat pumps are not necessarily a silver bullet for every house,” said Deetjen.

Although homeowners may trade a furnace for a heat pump, for example, the electricity for that pump could still come from a plant burning fossil fuels. The cost of generating electricity may be more than the cost of in situ fossil fuel use. “There are some houses that if they get a heat pump, it’s actually worse for the public,” said Deetjen. “They end up creating more pollution.”

Heat pump benefits also depend on climate. Heat pumps operate less efficiently in the cold, running up electricity costs. In 24 of the studied cities, mostly in colder climates, peak residential electricity demand increased by over 100% if all houses adopted heat pumps, which would require grid upgrades.
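The mechanism behind that winter peak is easy to see in a toy model: an air-source heat pump’s coefficient of performance (COP) falls as outdoor temperature drops, so each unit of delivered heat takes more electricity. The linear COP fit and all constants below are illustrative assumptions, not figures from the study.

    # Toy model: COP falls with outdoor temperature, so delivering the same
    # heat takes more electricity in cold weather. Constants are assumptions.

    def cop(outdoor_temp_c):
        """Rough linear fit: about 3.5 at 10 degrees C, about 1.8 at -15 degrees C."""
        return max(1.0, 2.82 + 0.068 * outdoor_temp_c)

    def electricity_kwh(heat_demand_kwh, outdoor_temp_c):
        """Electricity needed to deliver a given amount of heat."""
        return heat_demand_kwh / cop(outdoor_temp_c)

    for t in (10, 0, -10, -20):
        print(f"{t:>4} C: {electricity_kwh(100, t):5.1f} kWh per 100 kWh of heat")

Under these toy numbers, the same 100 kWh of delivered heat takes more than twice the electricity at -20°C as at 10°C, which is the shape of the peak-demand problem the study describes.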

“It could be challenging to meet that increase of winter peaking, because our system is not built that way,” said Ella Zhou, a senior modeling engineer at NREL not involved with this study. “We need to think about both the planning and operation of the grid system in a more integrated fashion with future use.”

Consequences of Widespread Electrification

The new research supports converting 32% of single-family homes to heat pumps. More widespread adoption would come at much higher financial and health costs. If all U.S. houses adopted heat pumps, the study found, the switch would yield $6.4 billion in climate benefits. However, it would also cost homeowners $26.7 billion, and pollutants from the added electricity generation would cause $4.9 billion in health damages from illnesses and premature deaths.
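A back-of-envelope combination of those cited totals shows why universal adoption fares worse than partial adoption, though the study’s own accounting may net the figures differently.

    # Quick check of the all-homes scenario, in billions of U.S. dollars.
    climate_benefits, homeowner_costs, health_damages = 6.4, 26.7, 4.9
    net = climate_benefits - homeowner_costs - health_damages
    print(f"net societal outcome: {net:+.1f} billion USD")  # -25.2, a net loss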

There is some uncertainty surrounding these findings. The study didn’t consider the cost of potential grid upgrades or what complete decarbonization would mean for heat pump adoption. Waite pointed out that as the grid evolves, future research should also determine whether renewable energy could even meet the demands of high electricity loads.

—Jackie Rocheleau (@JackieRocheleau), Science Writer

When Deep Learning Meets Geophysics

EOS - Wed, 09/01/2021 - 14:06

As artificial intelligence (AI) continues to develop, geoscientists are interested in how new AI developments could contribute to geophysical discoveries. A new article in Reviews of Geophysics examines one popular AI technique, deep learning (DL). We asked the authors some questions about the connection between deep learning and the geosciences.

How would you describe “deep learning” in simple terms to a non-expert?

Deep learning (DL) optimizes the parameters in a system, a so-called “neural network,” by feeding it a large amount of training data. “Deep” means the system consists of a structure with multiple layers.

DL can be understood from several angles. In terms of biology, DL is a biologically inspired approach that imitates the neurons of the human brain; a computer can acquire knowledge and draw inferences much as a human does. In terms of mathematics, DL is a high-dimensional nonlinear optimization problem: DL constructs a mapping from input samples to output labels. In terms of information science, DL extracts useful information from a large set of redundant data.
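To make the “multiple layers” idea concrete, here is a minimal sketch in Python and NumPy: a two-layer network whose parameters are fit to training data by gradient descent. The toy mapping (a sine curve) and all sizes are illustrative choices, not anything from the article.

    # A tiny two-layer neural network fit by gradient descent. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 1))      # input samples
    y = np.sin(3 * X)                     # labels: a nonlinear mapping to learn

    W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)   # layer 1 parameters
    W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)    # layer 2 parameters

    lr = 0.1
    for step in range(2000):
        h = np.tanh(X @ W1 + b1)          # hidden layer
        pred = h @ W2 + b2                # output layer
        err = pred - y                    # mismatch between output and labels
        # Backpropagation: mean-squared-error gradients for each parameter.
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        gh = (err @ W2.T) * (1 - h**2)
        gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final mean squared error:", float((err**2).mean()))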

How can deep learning be used by the geophysical community?

Deep learning–based geophysical applications. Credit: Yu and Ma [2021], Figure 4a

DL has the potential to be applied to most areas of geophysics. Given a large database, you can train a DL architecture to perform geophysical inference. Take earthquake science as an example. The historical records of seismic stations contain useful information, such as the waveforms of an earthquake and the corresponding source locations. The waveforms and locations therefore serve as the input and output of a neural network, and the network’s parameters are optimized to minimize the mismatch between its output and the true locations. The trained neural network can then predict the locations of newly recorded seismic events. DL can be used in other fields in a similar manner.
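A hedged sketch of that workflow, using synthetic stand-ins for the waveform features and source locations (scikit-learn’s MLPRegressor substitutes here for whatever architecture a real study would choose):

    # Sketch of the waveforms-in, locations-out workflow. Data are synthetic
    # placeholders; a real catalog of station records would replace them.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    waveforms = rng.normal(size=(1000, 128))        # stand-in waveform features
    locations = waveforms[:, :3] * [0.5, 0.3, 0.2]  # synthetic lat, lon, depth

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    net.fit(waveforms[:800], locations[:800])       # minimize output/label mismatch

    # The trained network then predicts locations of newly recorded events.
    pred = net.predict(waveforms[800:])
    rmse = float(np.sqrt(((pred - locations[800:]) ** 2).mean()))
    print("held-out RMSE:", rmse)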

What advantages does deep learning have over traditional methods in geophysical data processing and analysis?

Traditional methods suffer from inaccurate modeling and computational bottlenecks in large-scale, complex geophysical systems; DL can help on both counts. First, DL handles big data naturally where it would create a computational burden for traditional methods. Second, DL can draw on historical data and experience that traditional methods usually ignore. Third, DL does not require an accurate description of the physical model, which is useful when the model is only partially known. Fourth, once training is complete, DL offers high computational efficiency, enabling characterization of Earth at high resolution. Fifth, DL can be used to discover physical concepts, such as the fact that the solar system is heliocentric, and may even yield discoveries that are not yet known.

In your opinion, what are some of the most exciting opportunities for deep learning applications in geophysics?

DL has already provided some surprising results in geophysics. For instance, on the Stanford earthquake data set, earthquake detection accuracy improved to 100 percent, compared with 91 percent for the traditional method.

In our review article, we suggest a roadmap for applying DL to different geophysical tasks, divided into three levels:

1. Tasks for which traditional methods are time-consuming and require intensive human labor and expert knowledge, such as first-arrival selection and velocity selection in exploration geophysics.
2. Tasks for which traditional methods face difficulties and bottlenecks; for example, geophysical inversion requires good initial values and high-accuracy modeling and suffers from local minima.
3. Tasks that traditional methods cannot handle, such as multimodal data fusion and inversion.

What are some difficulties in applying deep learning in the geophysical community?

Despite the success of DL in some geophysical applications, such as earthquake detectors or pickers, its use as a tool for most practical geophysics is still in its infancy.

The main difficulties include a shortage of training samples, low signal-to-noise ratios, and strong nonlinearity. The lack of training samples in geophysical applications, compared with other industries, is the most critical of these challenges. Though the volume of geophysical data is large, available labels are scarce. Also, in certain geophysical fields, such as exploration geophysics, data are not shared among companies. Further, geophysical tasks are usually much more difficult than those in computer vision.

What are potential future directions for research involving deep learning in geophysics?

Future trends for applying deep learning in geophysics. Credit: Yu and Ma [2021], Figure 4b

In terms of DL approaches, several advanced methods may overcome the difficulties of applying DL in geophysics, among them semi-supervised and unsupervised learning, transfer learning, multimodal DL, federated learning, and active learning. For example, in practical geophysical applications, obtaining labels for a large data set is time-consuming and can even be infeasible, so semi-supervised or unsupervised learning is needed to reduce the dependence on labels.
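As one concrete member of that family, the sketch below uses pseudo-labeling, a common semi-supervised recipe: train on the few available labels, keep only confident predictions on unlabeled data, and retrain. It illustrates the general idea, not a method from the review; all data here are synthetic.

    # Pseudo-labeling: stretch scarce labels with confident model predictions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    X_lab = rng.normal(size=(50, 20))             # the few labeled samples
    y_lab = (X_lab.sum(axis=1) > 0).astype(int)   # synthetic binary labels
    X_unlab = rng.normal(size=(5000, 20))         # plentiful unlabeled data

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
    model.fit(X_lab, y_lab)                       # 1. train on the few labels

    proba = model.predict_proba(X_unlab)          # 2. predict on unlabeled data
    keep = proba.max(axis=1) > 0.95               # 3. keep confident predictions
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba.argmax(axis=1)[keep]])

    model.fit(X_aug, y_aug)                       # 4. retrain on the larger set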

We would like to see research on DL in geophysics focus on cases that traditional methods cannot handle, such as simulating the atmosphere or imaging Earth’s interior at large spatial and temporal scales with high resolution.

—Jianwei Ma (jwm@pku.edu.cn, 0000-0002-9803-0763), Peking University, China; and Siwei Yu, Harbin Institute of Technology, China

Forecast: 8 Million Energy Jobs Created by Meeting Paris Agreement

EOS - Wed, 09/01/2021 - 14:05

Opponents of climate policy say curbing fossil fuel emissions will kill jobs, but a new study showed that switching to renewables would actually create more jobs than a fossil fuel–heavy future will. The tricky part will be ensuring that laid-off workers have access to alternative employment.

Globally, jobs in the energy sector are projected to increase from 18 million today to 26 million in 2050 if the world cuts carbon to meet the well-below 2°C target set by the Paris Agreement, according to a model created by researchers in Canada and Europe. Renewables will make up 84% of energy jobs in 2050, primarily in wind and solar manufacturing. The new study was published earlier this summer in One Earth.

In contrast, if we don’t limit global warming to below 2°C, 5 million fewer energy jobs will be created.

The Future Is Looking Bright for Solar and Wind

The Intergovernmental Panel on Climate Change’s latest physical science assessment predicted that the climate will be 1.5°C warmer than preindustrial levels by the 2030s unless there are strong, rapid cuts to greenhouse gases in the coming decades. Such cuts will necessitate a greater reliance on sustainable energy sources.

In 2020, renewables and nuclear energy supplied less than a quarter of global energy, according to BP’s 2021 report.

This number is expected to rise, however, in part because solar photovoltaics and wind are now cheaper than fossil fuels per megawatt-hour and because many countries have set aggressive emissions-cutting goals.

According to the new study, many regions will gain energy jobs in the transition, including countries throughout Asia (except for China), North Africa, and the Middle East, as well as the United States and Brazil. Although fossil fuel extraction jobs will largely disappear, “massive deployment of renewables leads to an overall rise in jobs,” wrote the authors.

But not all countries will be so lucky: Fossil fuel–rich China, Australia, Canada, Mexico, South Africa, and sub-Saharan African countries will likely lose jobs overall.

Only jobs directly associated with energy industries, such as construction or maintenance, were included in the study. Other reports have included adjacent or induced jobs, such as those in fuel transport, government oversight, and the service industry.

Previous studies estimated a larger increase in energy jobs, using numbers compiled from the Organisation for Economic Co-operation and Development.

The new study instead compiled data from primary sources by mining fossil fuel company reports, trade union documents, government reports, national databases, and other sources that cover 50 countries representing all major players in fossil fuels and renewables. Lead study author Sandeep Pai ran the numbers through an integrated assessment model housed at the European Institute on Economics and the Environment. The model calculates job growth projections under different climate policies and social and economic factors. Pai is a lead researcher at the Just Transition Initiative supported by the nonprofit policy research organization the Center for Strategic and International Studies and the Climate Investment Funds.
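Models of this kind commonly rest on employment factors, that is, jobs per unit of energy capacity built, operated, or supplied. The sketch below shows the shape of that calculation; the factors and capacities are illustrative assumptions, not numbers from the study.

    # Employment-factor job accounting. All values are illustrative assumptions.
    employment_factors = {         # jobs per megawatt (hypothetical)
        "solar_manufacturing": 6.0,
        "wind_manufacturing": 4.0,
        "coal_extraction": 0.1,    # per MW-equivalent of fuel supplied
    }
    capacity_change_mw = {         # negative values mean capacity retired
        "solar_manufacturing": 500,
        "wind_manufacturing": 300,
        "coal_extraction": -400,
    }

    jobs = {k: employment_factors[k] * capacity_change_mw[k]
            for k in employment_factors}
    print(jobs, "net change:", sum(jobs.values()))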

Calls for Just Transitions

Crucially, the study found that nearly 8 million of the 26 million jobs (31%) in 2050 are “up for grabs,” said study author Johannes Emmerling, a scientist at the European Institute on Economics and the Environment.

These jobs in renewable manufacturing aren’t tied to a particular location, unlike coal mining.

Pai concurred. “Any country with the right policies and incentives has the opportunity to attract between 3 [million and] 8 million manufacturing jobs in the future.”

Recently, countries have begun putting billions of dollars toward “just transition,” a loose framework describing initiatives that, among other things, seek to minimize harm to workers in the fossil fuel industry. Concerns include lost salaries, lost local revenue, and labor exploitation.

What could be done? Just transition projects may include employing fossil fuel workers to rehabilitate old coal mines or orphan oil wells, funding community colleges to train workers with new skills, supporting social services like substance abuse centers, and incentivizing local manufacturing.

“The just transition aspect is quite critical,” Pai said. “If [countries] don’t do that, this energy transition will be delayed.”

LUT University energy scientist Manish Thulasi Ram, who was not involved in the study, thinks the latest research underestimates the job potential of the energy transition. Using a different approach, Ram forecasts that close to 10 million jobs could be created from battery storage alone by 2050—a sector not considered in the latest analysis.

—Jenessa Duncombe (@jrdscience), Staff Writer

Does the Priming Effect Happen Underwater? It’s Complicated

EOS - Wed, 09/01/2021 - 14:03

In microbiology, the priming effect is the observation that the decomposition rate of organic material often changes when fresh organic matter is introduced. Depending on the context, the effect can be an increase or a reduction in microbial consumption, with a corresponding change in emitted carbon dioxide.

Although the mechanism isn’t fully understood, several contributing processes have been proposed. They include shifts by some specialist microbes toward consuming only fresh or only older material, as well as increased decomposition of stable (older) matter as microbes seek specific nutrients needed to sustain the growth enabled by fresh material.

The priming effect has been well established in terrestrial soils, but experimental evidence has appeared more mixed in aquatic environments. Both the magnitude and the direction (i.e., increase versus decrease) of the effect have been contradictory in a variety of studies conducted in the laboratory and the field.

Sanches et al. performed a meta-analysis of the literature in an attempt to resolve these difficulties. The authors identified 36 prior studies that published a total of 877 results matching their experimental criteria. Of the subset that directly estimated priming, about two thirds concluded that there was no priming effect, with the majority of the remainder indicating an acceleration in decomposition. However, these past studies used a wide variety of metrics and thresholds to define the priming effect. Many others did not directly calculate the magnitude of the effect.

To overcome the range of methodologies, the researchers defined a consistent priming effect metric that can be calculated from the reported data. With this metric, they found support for the existence of a positive priming effect: the addition of new organic material increases decomposition by 54% on average, with a 95% confidence interval of 23%–92%. They attribute this divergence from the aggregated conclusions described above to a significantly larger data set (because they could calculate their metric even when the original authors did not), which increased the statistical power of the analysis.
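One standard way to build such a metric is as a percent change in decomposition between treated and control incubations, pooled across studies. The sketch below illustrates that shape with made-up numbers; it may differ from the authors’ exact definition.

    # A response-ratio-style priming metric, pooled with a bootstrap interval.
    # Numbers are invented; real meta-analyses also weight by study variance.
    import numpy as np

    def priming_effect(decomp_treated, decomp_control):
        """Percent change in decomposition after fresh organic matter is added."""
        return 100.0 * (decomp_treated - decomp_control) / decomp_control

    pairs = [(1.8, 1.2), (0.9, 1.0), (2.6, 1.5), (1.1, 0.8)]  # per-study rates
    effects = np.array([priming_effect(t, c) for t, c in pairs])

    boot = np.random.default_rng(3).choice(effects, (10_000, len(effects)))
    ci = np.percentile(boot.mean(axis=1), [2.5, 97.5])
    print(f"mean effect: {effects.mean():.0f}%, 95% CI: {ci.round(0)}")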

The meta-analysis also indicated which experimental factors were most correlated with an observed priming effect. One key factor was the proxy chosen for microbial activity, as well as the addition of any other nutrients, such as nitrogen or phosphorus. Finally, the authors noted that other recent meta-analyses using differing methodologies have reported no priming effect; they concluded that the umbrella term “priming effect” may be better split into several terms describing related, but distinct, processes. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2020JG006201, 2021)

—Morgan Rehnberg, Science Writer
