Eos: Science News by AGU

Simple Actions Can Help People Survive Landslides

Tue, 10/27/2020 - 12:56

Certain actions increase the chance of surviving a devastating landslide, and simple behavioral changes could save more lives than expensive engineering solutions, according to a new study in AGU’s journal GeoHealth.

In the study, Pollock and Wartman compiled and analyzed a data set of landslide events from around the world that affected occupied buildings, with most of the data coming from the United States. The results showed behavioral factors (such as having knowledge of local landslide hazards and moving to a higher floor during a landslide) had the strongest association with survival, regardless of the size or the intensity of landslide events. The authors also found that stories from landslide survivors provide general strategies for reducing the risk of death.

“There are, in fact, some really simple, cost-effective measures…that can dramatically improve the likelihood that one will survive a landslide,” said Joseph Wartman, a geotechnical engineer at the University of Washington in Seattle and senior author of the new study.

Worldwide, landslides cause over 4,000 deaths per year, on average. In the United States, they are estimated to kill 25 to 50 people each year. In March 2014, the Oso landslide in Washington became the deadliest landslide event in U.S. history, resulting in 43 deaths and destroying 49 homes and structures. Yet scientists had not previously analyzed why some people survive landslides and others don’t.

“We really found there was very little information out there,” said William Pollock, a geotechnical engineer at the University of Washington and lead author of the new study. He and Wartman dug through the scientific literature, newspaper articles, medical examiner reports, and other documents to produce a detailed catalog of fatalities caused by landslides hitting occupied buildings. The data set includes information from 38 events that occurred from 1881 to 2019 in Asia, North America, and Oceania. Their analysis showed factors like the distance of the building to the landslide slope, one’s gender, and one’s age were less associated with survival than behavioral factors like moving away from the direction of the threat, rather than getting closer out of curiosity.

In particular, the researchers found some behaviors, despite being performed by only a small number of people in a community, often save lives. These actions are much simpler and may be more effective than expensive engineering solutions, according to the study authors. Specifically, individuals can do the following to increase their chance of survival during a landslide.

Before an Event

Be informed about potential hazards, and talk to people who have experienced them. Move areas of high occupancy, such as bedrooms, upstairs or to the downhill side of a home.

During an Event

Escape vertically—this includes moving upstairs and even on top of counters to avoid being swept away. Identify and relocate to interior, unfurnished areas. Open downhill doors and windows to let debris escape.

After an Event

If caught in landslide debris, continue to make noise and motion to alert rescuers.

The authors have produced the most robust analysis of interactions between landslides and humans yet, said Dave Petley, vice president for innovation at the University of Sheffield in the United Kingdom and author of the Landslide Blog. Although the new study provides practical advice for people living in landslide-prone areas, the analysis was limited to people in buildings, added Petley, who was not involved in the research. Further analyses of other scenarios, such as people who encounter landslide events on roads or out in the open, may provide additional findings.

The scientific results in the study offer the possibility of markedly decreasing the number of lives lost to landslides in the United States, Wartman said, especially if included in community awareness programs.

“This is a message of hope,” Wartman said. (GeoHealth, https://doi.org/10.1029/2020GH000287, 2020)

—Jack J. Lee (@jackjlee), Science Writer

Urbanization, Agriculture, and Mining Threaten Brazilian Rivers

Tue, 10/27/2020 - 12:55

Researchers in Brazil and the United States have found that agriculture, urbanization, and mining pose major threats to water quality in Brazilian rivers. These land uses are important sources of pollution but often remain unaccounted for in analyses of water quality.

Published in September in the Journal of Environmental Management, the study is the first review of land use impacts on water quality across all Brazilian regions. The authors noted that human transformation of native vegetation alters runoff, infiltration, and water evaporation in river catchments, “which affect streamflow, flow dynamics, and nutrient, sediment, and toxic loads to water bodies.”

Lead author Kaline de Mello, from the Institute of Biosciences at the University of São Paulo, said that expansion of the agricultural frontier in regions like the Amazon and the Cerrado—and now the Pantanal—is of great concern. “Deforestation is one of the main causes for river degradation, because forests protect rivers from runoffs of chemicals from agriculture or urban areas, for example.” Fertilizers in pasture regions often cause toxic algal blooms and eutrophication as they are washed into rivers.

The uncontrolled growth of cities is also a point of concern for riverine water quality. Almost half of Brazil’s 212 million citizens have no access to proper sanitation, and urban rivers receive considerable quantities of organic waste. Incorrectly disposed of garbage and road toxins also end up contaminating rivers, especially after rain. “Together, urbanization and agriculture are the main sources of pollution to Brazilian rivers,” De Mello noted.

Another source of river pollution is mining. “Even if it takes considerably less area than agriculture and urbanization, it still has an important impact on river water quality,” De Mello said. Mining sites produce wastewater with high concentrations of metals such as aluminum, zinc, and lead, which are harmful for human and animal life. Mine tailings can be flushed into rivers by dam bursts near mining sites, such as those in Mariana in 2015 and Brumadinho in 2019.

“Some other studies show that even in a scenario of universal access to sanitation in Brazil, water quality would still be compromised by this type of pollution,” said Guilherme Marques, a professor at the Institute of Hydraulic Research at the Federal University of Rio Grande do Sul who was not a part of the study.

Diffuse Pollution

Marques said that water pollution from industry and sewage treatment plants is more visible and easier to measure, as it comes from a single source. Pollutants that arrive in small amounts from many widely dispersed sources—called diffuse pollution—are much harder to quantify and observe. Diffuse pollution may include heavy metals washed from city asphalt by rain or runoff from agricultural fertilizers, for example.

The slope of a river and the type of soil and vegetation around it interact with diffuse pollution and make its impacts harder to quantify and analyze than point source pollution. “Besides being more distributed, there’s a myriad of physical, chemical, and biological processes that happen to the pollutants and sediments even before they fall into a river,” Marques said.

Sounding the Alarm

Jean Minella, a professor in the Soils Department at Brazil’s Federal University of Santa Maria who did not participate in the study, said that these studies are undertaken all over the world and each river basin behaves differently. “It is very complex to study this scenario in Brazil with its soil, climate, and vegetation variation. [A single model] cannot give answers—but this review really sounds the alarm,” he said.

More fine-grained data are needed to aid the design of specific policies for each Brazilian river basin, Minella said. He added that the Brazilian National Water Agency has a monitoring network, but it samples water quality at a very low frequency (“something in the order of four a year”).

“There’s little use in enhancing mathematical models when there is so little data to analyze,” Minella said. “There’s a need for heavy investments in hydrological monitoring in Brazil—universities do not have enough resources to do research at the scale and frequency the Brazilian society needs.”

To Marques, it is of utmost importance to act on the information already available. River pollution is, after all, a matter of water security, he said. “Policymakers should be aware of the costs water shortages can bring to their cities. Present benefits cannot make up for future costs.”

Viviane Buchianeri, an agronomy engineer at the state Secretariat for Infrastructure and Environment in São Paulo, pointed to the Nascentes Program as an example of how policymakers are attempting to address this balance between present benefits and future costs. Producers in São Paulo can have fines for environmental infractions converted into environmental services such as reforestation. More than 35 million tree seedlings have been planted as a result of the program.

But all efforts will be insufficient if complex issues such as housing and sanitation are not solved, Buchianeri said. “We will be walking in circles unless access to housing and sanitation is universal. This is why the first of the U.N.’s Sustainable Development Goals is ending poverty. You can only work effectively on the other problems starting from there.”

—Meghie Rodrigues (@meghier), Science Writer

Torrential Rains and Poor Forecasts Sink Panama’s Infrastructure

Tue, 10/27/2020 - 12:53

December 2010 was the wettest month on record in Panama. So much rain fell so quickly that flooding was widespread across the country. One storm on 7–8 December of that year affected water intake at the Chilibre water treatment plant, leaving Panama City—the capital and largest city in the country—without clean water for 50 days. It also caused the closing of the Panama Canal for just the third time in history. This event, locally called La Purísima, produced more rainfall than any previously observed heavy-rain event in the Panama Canal Watershed [Murphy et al., 2014], costing upward of $150 million in damage.

In late November 2016, another heavy-rain event struck Panama, wreaking havoc. As Hurricane Otto crossed from the Caribbean to the Pacific, it dropped a month’s worth of rain in a day on the country, causing nine deaths, destroying hundreds of homes, closing schools, and interrupting activities at Tocumen International Airport, the international airport of Panama City. To maintain canal operations and lower the water level, the Panama Canal Authority opened 13 of the 14 gates of the Gatún Dam.

In both extreme rainfall events, Panama’s national weather forecast system failed to forecast the areas affected by, or the substantial impacts of, these meteorological phenomena.

The public began clamoring for the development of a more accurate high-resolution precipitation forecast system for Panama. That was the main motivation for a research project proposed by the Centro del Agua del Trópico Húmedo para América Latina y el Caribe (CATHALAC) and the Instituto de Meteorología de Cuba (INSMET) to the Secretaría Nacional de Ciencia, Tecnología e Innovación de Panamá (SENACYT). In June 2018, the project, titled “Análisis del Modelo Numérico WRF-ARW para la predicción de lluvia a escala de cuencas en Panamá” (“Analysis of the WRF-ARW numerical model for basin-scale rainfall prediction in Panama”), was approved and funded by SENACYT. And in February 2020, many of the people working on this project met to share and highlight its outcomes and progress so far and to assess directions for continuing work, discussions we summarize here.

Weather Forecasting in Panama

Despite its relatively small size, Panama, located at the eastern end of the Central American isthmus, plays crucial roles in regional and global economies. It hosts the Panama Canal, through which hundreds of millions of dollars’ worth of cargo pass each year, and is the international flight hub of Copa Airlines, for example. Activities along the canal and at Tocumen International Airport, as well as those related to the lives of almost 3 million inhabitants in Panama City, have been seriously affected by rainfall events that were not well forecast.

Two major circulation systems trigger convective activity near Panama: the Intertropical Convergence Zone, along with its associated disturbances, and cold fronts penetrating from higher northern latitudes. The isthmus’s long, narrow shape, its complex orography, and its position between two large bodies of water—the Caribbean Sea and the Pacific Ocean—that both contribute large amounts of moisture to the atmosphere all promote the unstable convective activity that leads to cloud formation and strong rainfall. Because of the atmospheric instability in this region, however, rainfall amounts are very difficult to predict over short timescales.

Panama City is especially prone to damage from significant rainfall because its population growth has outpaced the growth of services infrastructure like stormwater disposal systems. So when heavy rains hit, they can cause services to be paralyzed for long periods, bringing subsequent economic and social damage. Panama’s problems with severe flooding thus arise from a combination of heavy rains, poor infrastructure, and the lack of quality forecasting.

There are two main reasons why weather forecasts fail in Panama. First, Panama’s operational forecasting system prior to the new project consisted of just two daily simulations with relatively coarse spatial resolutions. Second, and more important, was the lack of sensitivity studies used to determine the best physics and atmospheric dynamics parameters to describe the formation of storms over Panama.

Empresa de Transmisión Eléctrica (ETESA), Panama’s state electrical transmission company, oversees weather forecasts in the country through its meteorology and hydrology unit (HYDROMET). HYDROMET was supposed to rely on the next-generation Weather Research and Forecasting (WRF) model. When we started this project in 2018, though, HYDROMET’s WRF model for Panama was inactive. HYDROMET also did not have a platform by which model outputs could be disseminated, so even if the WRF model had been active, the public had no way to get the information. The other group that used the WRF model for weather forecasting for Panama was CATHALAC.

The Panama Precipitation Prediction Project

The goal of our project is to create an effective forecasting system for Panama using Advanced Research WRF (ARW) models that can identify with sufficient lead time the occurrence of extreme precipitation events, particularly over Panama City. The project, which involves researchers from the Center for Atmospheric Physics at the Institute of Meteorology of Cuba, CATHALAC, ETESA, and the Autoridad del Canal de Panama, comprises three stages.

The first stage involved studying how various dynamic factors directly influence the form (i.e., from what type of cloud it falls) and amount of precipitation over Panama using computational experiments and relying on data from many weather stations, as well as weather radar and upper air sounding instrumentation. Such dynamic factors include cloud microphysics (i.e., the necessary conditions for the formation of drops of precipitation), cumulus parameterization (the way the numerical models simulate clouds and their interactions with the environment), and boundary layer parameterization (the way the atmospheric layer closest to Earth’s surface interacts with the layer where the clouds form).

We examined the combination of cumulus and microphysics parameterization schemes that best reproduce rainfall events in Panama and which model spatial scales yielded good representations of such events. Then we looked at how to implement an efficient and robust forecasting system with available technological and human resources and how to build the key institutional arrangements, research activities, and capacity-building processes to ensure further development of the implemented system.

The second stage of the project has focused on determining which of the parameterization schemes studied in stage 1 has the greatest potential to be implemented [Sierra-Lorenzo et al., 2020] and to provide accurate forecasts in enough time to alert users to the presence of severe weather in the study area, given current computational capacity. We have also been creating a system that gives users the ability to access the output data from these precipitation forecasts in real time. For this step, the project working group decided to use an immediate forecast system tool called Sistema de Pronostico Inmediato, or simply SisPi, which was developed in Cuba and has been used successfully in other Caribbean regions. SisPi allows for the visualization of numerical weather model outputs, providing a series of additional products with which forecasters can quickly assess a specific meteorological situation.

The third and final stage involves determining the necessary requirements for the forecasting system to be used at maximum capacity and figuring out what technological challenges must be overcome to achieve the best forecast in the fastest possible time.

After 18 months of research, including analyzing 150 case studies representing different seasons and 10 synoptic conditions over Central America, we found that no individual model configuration was able to accurately predict all intense rain events with acceptable skill. So we selected the three most skillful WRF-ARW model configurations and incorporated them within the weather forecast system under development. We also decided to build a Panama-specific version of SisPi (SisPi-Panama) within CATHALAC’s facilities and to run it automatically four times a day. This system will be the first in Central America that is based on a robust sensitivity assessment of its configuration, and it will be implemented to share different meteorological products widely through the SisPi-Panama web platform.
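Selecting the "most skillful" configurations typically means ranking them with a verification score computed over the case studies. As a purely illustrative sketch (the article does not specify the project's actual metrics, and the configuration names and contingency counts below are invented), the widely used Equitable Threat Score for rain/no-rain forecasts could rank candidate configurations like this:

```python
# Hedged illustration: ranking hypothetical model configurations by Equitable
# Threat Score (ETS) over rain/no-rain contingency counts. All numbers invented.

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS: hits adjusted for those expected by chance, normalized to [-1/3, 1]."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0

# Hypothetical verification counts (hits, misses, false alarms, correct negatives)
# for three candidate WRF-ARW configurations over 150 cases:
configs = {
    "config_A": (62, 18, 25, 45),
    "config_B": (55, 25, 15, 55),
    "config_C": (40, 40, 10, 60),
}

ranked = sorted(configs, key=lambda c: equitable_threat_score(*configs[c]), reverse=True)
print(ranked)  # config_B scores highest with these invented counts
```

The top-scoring configurations would then be the ones retained in an ensemble, analogous to the three configurations the project selected.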

Where We Go from Here

During the first 18 months of this project, we fulfilled the first two objectives originally proposed. But to improve upon the work so far and meet the third objective, we submitted a second project proposal to SENACYT. The proposal included an idea from the workshop earlier this year to create an application for cell phones that would show the results of weather forecasts in real time and allow feedback from app users. This feedback should enable us to evaluate the skill of our system objectively and, when the feedback sample size becomes large enough, to evaluate statistics that may offer us the chance to perform bias correction in the models.
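The bias correction mentioned here can, in its simplest form, amount to subtracting the mean forecast-minus-observation error estimated from accumulated feedback. A minimal sketch with invented numbers (the project's eventual method may be far more sophisticated, stratifying by season, location, and lead time):

```python
# Minimal additive bias correction from matched forecast/observation pairs.
# All values are hypothetical placeholders, not project data.

def mean_bias(forecasts, observations):
    """Average of forecast minus observation over matched pairs."""
    return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

# Accumulated app feedback: forecast rainfall vs. user-reported rainfall (mm):
past_forecasts = [12.0, 30.0, 8.0, 22.0]
past_observed = [10.0, 24.0, 6.0, 20.0]

bias = mean_bias(past_forecasts, past_observed)   # systematic over-forecast of 3 mm
corrected = [f - bias for f in [15.0, 40.0]]      # apply correction to new forecasts
print(bias, corrected)  # 3.0 [12.0, 37.0]
```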

At the workshop, participants discussed new ideas for meeting the third project objective that should be taken into account in future efforts. Recommendations for ways to continue to improve forecasts included using emerging methods like machine learning and neural networks and conducting additional experiments in sensitivity studies. Participants were also keen on the possibilities of developing and producing real-time forecasts specific for solar and wind energy potential. We are also looking to partner with a United Nations agency like the World Meteorological Organization or the International Renewable Energy Agency to improve our computing capabilities.

Next steps for the project involve our partner institutions simulating combinations of multiple ARW model ensembles to test which combination most robustly forecasts heavy-rain events in Panama and to improve the SisPi-Panama web tool so that the information is available to all users. These steps will account for the recommendations that came out of the workshop, including developing a cell phone app. Further efforts will be needed to implement a system to assimilate existing high-quality data from satellite images, radar, upper air sounding measurements of atmospheric variables, and weather station data to improve the nowcasting scheme.

With what has been achieved in this project already, Panama is now equipped with a more robust forecasting system to predict severe rainfall events. This system will benefit the country’s economy and help safeguard human lives.


This research was funded through project 2018-4-IOMA17-011, “Análisis del Modelo Numérico WRF-ARW para la predicción de lluvia a escala de cuencas en Panamá,” which was financed by SENACYT and coordinated by CATHALAC.

Wildfires Threaten West Coast’s Seismic Network

Mon, 10/26/2020 - 12:56

As climate change increases the threat of wildfires, U.S. states are battling historic blazes. On the West Coast, the fires have put at risk several hundred seismic stations tasked with protecting citizens from earthquakes—nonseasonal but ever-present scourges.

The network of seismic stations informs ShakeAlert, an earthquake early-warning system designed to give people enough time to drop, cover, and hold on before an earthquake’s waves roll through. Eliminating stations risks slowing these alerts.

“There is no one person tracking all seismic stations that may be affected by the fires,” said Kasey Aderhold, a seismologist with the Incorporated Research Institutions for Seismology (IRIS). Instead, several organizations oversee subsets of the network, monitoring the health of their charges by watching real-time data streams. “If data [are] coming in,” Aderhold said, “we are good. If the data connection flatlines, we investigate.”


Wildfires attack seismic stations both directly and indirectly by excising them from the rest of the network. The sensors and electronics that record the quakes often withstand direct assaults, although Paul Bodin, a seismologist and network manager of the Pacific Northwest Seismic Network, noted that “if a fire wants to eat your station, it’ll find a way to eat your station.”

Often, the stations’ most vulnerable hardware—communications and power—may end up scorched. For example, newer stations have solar panels necessarily exposed to both sky and flame. Fire disables these stations until repairs can commence, explained Peggy Hellweg, an operations manager at the Berkeley Seismological Laboratory.

When wildfire indirectly incapacitates stations, “the telemetry,” said Bodin, “is particularly fragile.” Telemetry refers to the equipment that communicates a station’s data in real time—by Ethernet, satellite, cell, or radio. Fires can cause cell tower outages, temporarily decommissioning any connected stations by amputating the data feed. In such a scenario, stations typically come back online when power is restored.

If an off-line station doesn’t reappear in the data stream, said Hellweg, “we have to visit to see what the details are.” However, sending field personnel into hazardous situations like a wildfire, especially in the COVID-19 era, is not a good option, explained Bodin.

Fewer Stations, Less Coverage

“Sometimes stations are set up to daisy-chain or wheel-and-spoke back to a communication hub, often through low-cost radio connections,” explained Aderhold. “If a key data connection is severed…then it can be problematic for seismic monitoring.”

In 2015, this scenario played out in California’s Butte Fire, where, in addition to burned stations, a swath of stations lost their hub, said Corinne Layland-Bachmann, a seismologist at Lawrence Berkeley National Laboratory. Shortly thereafter, at the request of the U.S. Geological Survey, Layland-Bachmann calculated how the loss of these stations affected the health of the seismic network using a probability-based method that determines whether the network can detect small earthquakes. She concluded that by removing these 28 stations from the network, the fire noticeably decreased the network’s ability to see tiny temblors, particularly in the wildfire-affected region.
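The underlying idea can be illustrated with a toy calculation (this is an invented sketch, not Layland-Bachmann's actual method, and the per-station detection probabilities are hypothetical): if locating a small earthquake requires detections at some minimum number of stations, the probability of achieving that drops when stations are removed from the network.

```python
# Toy model of network detection capability (hypothetical numbers, not the
# study's method): probability that at least `min_stations` stations detect a
# small earthquake, given independent per-station detection probabilities.

def network_detection_prob(station_probs, min_stations=4):
    """Poisson-binomial tail probability: P(at least min_stations detections)."""
    # dp[k] = probability that exactly k stations detect the event
    dp = [1.0]
    for p in station_probs:
        nxt = [0.0] * (len(dp) + 1)
        for k, v in enumerate(dp):
            nxt[k] += v * (1.0 - p)   # this station misses the event
            nxt[k + 1] += v * p       # this station detects it
        dp = nxt
    return sum(dp[min_stations:])

# Eight healthy stations vs. five remaining after a fire removes three:
full = network_detection_prob([0.9] * 8)
reduced = network_detection_prob([0.9] * 5)
print(round(full, 4), round(reduced, 4))  # detection probability drops with fewer stations
```

In the real analysis, this kind of degradation is expressed as an increase in the magnitude of completeness, the smallest earthquake the network can reliably see.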

This image shows how the magnitude of completeness, a measure of how sensitive the existing seismic network is, changed after removal of 28 seismic stations (white triangles) because of wildfires. California’s state boundary is shown by a black line. Credit: Corinne Layland-Bachmann

For the Pacific Northwest, Bodin said, “I’m not worried about earthquake early warning and fires at this point.” He explained that in Washington, fires tend to rage in the east, which is less seismically active. Also, when stations receive upgrades, they “are armored against fire.” For example, replacing plants with gravel removes fuel for encroaching fires.

California, however, hosts fires that regularly cross active faults enveloped in dense instrumentation. “Every station missing in the network is a problem for earthquake early-warning [systems] because it will take longer to detect an earthquake with fewer stations,” said Hellweg. She argued that even small, undetected earthquakes matter. “Every measurement we make of an earthquake brings us another step forward in terms of understanding how they happen, why they happen, and when they happen and will help us in our ability to forecast earthquakes.”

For now, both Bodin and Hellweg agreed that they’ve been lucky, considering the historic infernos. Hellweg estimated that five to six of the stations she manages have been burned. She said, “Stations from other networks in the state have also been affected.” Likewise, Bodin guessed that between two and 10 stations of the several hundred under his watch have been affected by fire. “It’s a dynamic situation,” he said.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

How Long Does Iron Linger in the Ocean’s Upper Layers?

Mon, 10/26/2020 - 12:52

Iron is in our blood, our buildings, and our biomes. In our oceans, iron has helped regulate global climate by sustaining carbon-catching phytoplankton. However, current environmental models have difficulty pinning down the relationship between climate and marine iron cycling because they have little data to go on.

In new research, Black et al. conducted an extensive observational study as part of the international GEOTRACES program to investigate iron residence times in the top 250 meters of the ocean and to close gaps in previous results.

Iron in the ocean comes in various forms, depending on its bonding with other materials. Particulate iron may coat dust grains suspended in the water column, for example, and is largely inaccessible to marine life because it typically sinks out of the upper ocean before it dissolves and becomes available for organisms. Dissolved molecular iron, meanwhile, is more bioaccessible. Because these and other forms of iron behave differently in marine environments, their residence times will differ.

Whereas previous studies have estimated residence times ranging from days to years, the new research narrows the window, finding that in most cases, the residence time for total iron is between 10 and 100 days. Subantarctic regions are an exception. As there is very little iron in these areas, residence times depend on local events like infusions of iron from atmospheric dust or from eddies and vary from just a day up to decades.
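These residence time estimates follow the standard stock-over-flux definition: the standing inventory of iron divided by the rate at which it is removed. A minimal sketch with invented numbers (illustrative placeholders, not values from the study) shows the arithmetic:

```python
# Residence time = standing inventory / removal flux (steady-state approximation).
# The numbers below are hypothetical, not data from Black et al.

def residence_time_days(inventory, removal_flux_per_day):
    """Both arguments in the same units, e.g., micromoles Fe per square meter."""
    return inventory / removal_flux_per_day

iron_inventory = 50.0  # µmol Fe m^-2 integrated over the top 250 m (hypothetical)
iron_export = 1.0      # µmol Fe m^-2 day^-1 removed by sinking particles (hypothetical)

print(residence_time_days(iron_inventory, iron_export))  # 50.0 days, within the 10-100 day window
```

The subantarctic exception follows from the same formula: with a tiny inventory, a brief dust pulse or eddy can dominate the numerator, so the ratio swings from about a day to decades.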

The researchers also found that dissolved iron has inconsistent residence times that vary in month- to yearlong cycles depending on local conditions. Organisms may respond to seasonal or other changes in their environment to take up more or less dissolved iron.

All told, the residence times of iron indicated in the new study are shorter than what previous studies have estimated. The researchers suggest that the new data and results should help to develop improved biogeochemical models that better predict carbon sequestration in the ocean. (Global Biogeochemical Cycles, https://doi.org/10.1029/2020GB006592, 2020)

—Elizabeth Thompson, Science Writer

Zero-valent Iron in the Oxidizing Atmosphere?

Mon, 10/26/2020 - 11:30

The presence of iron in the atmosphere has significant implications for global nutrient delivery, human health, and several chemical cycles. Globally, the primary source of iron in the atmosphere is wind-blown dust from arid environments. However, in urban areas, iron is also produced via various combustion processes such as power generation and vehicle exhaust.

Salazar et al. [2020] present field data from the Platte River Air Pollution and Photochemistry Experiment (PRAPPE). They compare atmospheric particulate matter samples collected during the summer and winter at urban, agricultural, and mixed sites on the Eastern Colorado plains. As metallic iron has not been found in atmospheric dust, the pervasive presence of metallic iron across all their sites (although concentrated in the rural site) raises intriguing new questions about the origin of iron in the atmosphere. Also, the previously unobserved presence of atmospheric iron found in conjunction with organic matter introduces the possibility that combustion-derived iron is more prevalent than scientists had initially considered.

Citation: Salazar, J. R., Pfotenhauer, D. J., Leresche, F., Rosario‐Ortiz, F. L., Hannigan, M. P., Fakra, S. C., & Majestic, B. J. [2020]. Iron speciation in PM2.5 from urban, agriculture, and mixed environments in Colorado, USA. Earth and Space Science, 7, e2020EA001262. https://doi.org/10.1029/2020EA001262

—Jonathan H. Jiang, Editor, Earth and Space Science

Laike Mariam Asfaw (1945–2020)

Fri, 10/23/2020 - 13:08
Laike Mariam Asfaw. Credit: Atalay Ayele

Prof. Laike Mariam Asfaw, a renowned seismologist and geophysicist and a mentor to many at Addis Ababa University in Ethiopia, where he led the Geophysical Observatory from 1978 to 2008, passed away following an accident this past March.

Laike graduated with a bachelor of science in mathematics from Addis Ababa University (formerly Haile Selassie I University) in 1968. He then completed a master of science degree in applied mathematics in 1971 and a Ph.D. in 1975 in continuum mechanics at the University of Liverpool in England. Upon his return to Ethiopia, he became an assistant professor of geophysics at the Geophysical Observatory, which had been established during the International Geophysical Year (1957–1958) as a geomagnetic observatory. A seismometer (station AAE) was installed soon after in 1959 through the World-Wide Standardized Seismograph Network.

A Life of Science and Service

Laike managed to keep the Geophysical Observatory operational during the military junta and social upheaval of the mid-1970s, despite temporary closure of the university and close scrutiny of international collaborations. Through his mentorship of early-career scientists and his encouragement of international collaboration, he made the institute an excellent research hub for Earth and space scientists. Laike was the Ethiopian project leader for the 2001–2004 Ethiopia Afar Geoscientific Lithospheric Experiment, which involved some 80 Ethiopian and overseas scientists, including nearly every geophysicist in Ethiopia thanks to Laike and his team.

Owing in large part to his leadership and vision, the Geophysical Observatory in 2005 became the Institute of Geophysics, Space Science and Astronomy, which incorporated other units within Addis Ababa University’s faculty to become a truly interdisciplinary research center. Laike left a legacy of productive international collaborations that involved both strong commitments and contributions from the Ethiopian side. As a result, several successful overseas projects were conducted in recent years that helped to improve our understanding of the Afar region and the Main Ethiopian Rift and to mitigate earthquake, volcano, landslide, and fissuring hazards. Overall, he advised more than 20 national and international projects about earthquake and related hazards.

Laike also authored or coauthored more than 60 papers published in international peer-reviewed journals, focusing mainly on seismicity, seismology, geodesy, and volcanic hazards. His solo-authored 1982 paper titled “Development of Earthquake-Induced Fissures in the Main Ethiopian Rift” and another he coauthored in 1992 titled “Recent Inactivity in African Rift,” both published in Nature, motivated decades of field and modeling studies investigating East Africa’s rift features. They certainly inspired our own research decisions and directions.

Laike was an emblematic ambassador for African geosciences—and for Ethiopia. He served as the first president of the Ethiopian Association of Seismology and Earthquake Engineering. He also participated in numerous committees, including the International Association of Seismology and Physics of the Earth’s Interior Committee for Developing Countries, the International Association of Geodesy (IAG) working group on the application of geodetic studies for earthquake prediction, the working group on verification technology for the Comprehensive Nuclear-Test-Ban Treaty, and the International Union of Geodesy and Geophysics/IAG working group on dynamic isostasy.

Laike became an associate of the United Kingdom’s Royal Astronomical Society in 2007, an honor awarded for his long-standing service as director of the Geophysical Observatory. He was also a member of the Ethiopian Geosciences and Mineral Engineering Association and of AGU. In 2008, Laike was the second recipient of AGU’s International Award, given for his ability to help acquire high-quality data and his commitment to furthering the careers of younger colleagues.

A Legacy of Generosity and Support

Prof. Laike Mariam Asfaw was a modest giant in geophysics and in science in general. He will be remembered by his colleagues and students for his selfless and kind personality and for his commitment over more than 50 years to institutional service and to mentoring generations of geoscientists. He will also be remembered for his Volkswagen Beetle, which was perpetually parked in front of the observatory and was kept company by a giant land tortoise that kept the grass tidy.

In a recent note to us, Ian Bastow of Imperial College London wrote about his extended stays at the Geophysical Observatory during his doctoral studies: “With Atalay, [Asfaw] built an observatory that became a magnet for researchers worldwide. In 2019, when I returned to Ethiopia to deploy a new seismic network, I recall a day when I needed a signature, but many observatory staff were away at a conference in Hawassa. But who was there in his office, still reading, as he had been 15 years earlier? Dr. Laike, of course. Future generations of Ethiopian scientists need look no further than Dr. Laike for a blueprint for running an observatory. His love for, and selfless dedication to, learning seemed to drive everything he did. As a visiting scientist, one could not possibly ask for a better mentor and scientific colleague.”

Laike’s intellectual curiosity and generosity enriched the careers of the many Ethiopian and international students and researchers whose work he encouraged and supported. The two of us, as well as many others, would not have succeeded in field studies without his wisdom, support, and kindness. Laike Mariam Asfaw’s legacy endures at the Institute of Geophysics, Space Science and Astronomy at Addis Ababa University.

Author Information

Atalay Ayele, Addis Ababa University, Ethiopia; and Cynthia Ebinger (cebinger@tulane.edu), Tulane University, New Orleans, La.

Rising Seas and Agriculture Created Wetlands Along the U.S. East Coast

Fri, 10/23/2020 - 13:06

Most of the tidal marshes along the eastern coast of the United States formed within the past 6,000 years, according to new research published in Geophysical Research Letters.

Braswell et al. used new and existing data to assess the age of coastal wetland areas. The new study found evidence for two bouts of tidal marsh formation, corresponding to a decline in the rate of sea level rise after the last ice age and the arrival and settlement of Europeans in North America.

Tidal marshes mark the boundary between the saltwater of oceans and the freshwater of upland river systems, where tides create a partially flooded, saline landscape habitable by only a few highly specialized plants, such as cordgrass, rushes, and mangroves.

“Coastal wetlands are important for a myriad of reasons,” said Anna Braswell, an assistant professor at the University of Florida and lead author of the new study. “They act as nurseries for fish, provide filtration of nutrients, and they’re huge carbon stores.”

Marshes Old and Young

Braswell and her colleagues collected 97 core samples from five marshes along the Atlantic coast, from Georgia to Massachusetts; performed radiocarbon dating to estimate their age; and scoured the literature for previously dated material, pulling data from over 43 scientific publications to use in their final analyses.

According to their results, most of the tidal marshes along the North American Atlantic coast began forming 6,000 years ago, following a period of rapidly rising sea levels that had gradually begun to slow.

But some salt marshes are much younger. The new study also found evidence for a pulse of marsh formation corresponding with European colonization starting 400 years ago, suggesting that agriculture and deforestation associated with early colonization released large amounts of sediment that was subsequently transported by streams and rivers to the coast.

Previous research had shown a correlation between marsh formation and colonial agricultural activity at Plum Island in Massachusetts and the bird-foot delta along the Mississippi River, but the current study indicates that this trend was much more widespread than previously assumed, occurring along the northeastern and southeastern parts of the U.S. Atlantic coast.

“This paper provides the strongest evidence we have today to show that European colonization led to a second period of marsh formation in some parts of the U.S.,” said Christopher Craft, a wetland scientist at Indiana University who was unaffiliated with the study.

“There have been many small-scale studies done on this topic over the last several decades, and what we need to do now is pull together existing data to look for broad patterns, such as the ones uncovered here.”

Ancient Ice Sheets and Sea Level Rise

The formation of coastal marshes is often a race between the rate of sea level rise and the rate at which rivers transport sediment downstream. As these sediment-laden waters make their way to the sea, they create sandbars and tidal flats in deltas and estuaries, leading to the formation of tidal marshes.

In addition, as water flows through already existing wetland areas, the fibrous stems and leaves of the plants that grow there facilitate the deposition of suspended particles, thus increasing the size and elevation of already existing marshes.

If sea level rise is slow enough (over the course of thousands of years), sediment is trapped by plants in large-enough quantities to keep them from drowning. However, during periods of rapid sea level rise, marsh plants are unable to establish islands quickly enough, and the transition zone between ocean and mainland contains few wetlands.
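The drowning race described above can be captured in a toy mass balance: a marsh persists as long as sediment accretion keeps its surface within reach of the tide. Below is a minimal sketch of that idea; all rates and the drowning threshold are chosen purely for illustration and are not taken from the study.

```python
def marsh_survives(slr_rate_mm_yr, accretion_rate_mm_yr, years=1000, drown_depth_mm=500):
    """Toy mass balance for marsh elevation relative to sea level.

    The marsh 'drowns' if its surface falls more than drown_depth_mm
    below the rising sea surface. All parameters are illustrative.
    """
    relative_elevation = 0.0  # mm relative to sea level at t = 0
    for _ in range(years):
        relative_elevation += accretion_rate_mm_yr - slr_rate_mm_yr
        if relative_elevation < -drown_depth_mm:
            return False  # marsh plants can no longer keep up
    return True

# Slow post-glacial rise (~1 mm/yr) vs. typical accretion: marsh persists.
print(marsh_survives(slr_rate_mm_yr=1.0, accretion_rate_mm_yr=3.0))   # True
# Rapid rise (~10 mm/yr) outpaces sediment trapping: marsh drowns.
print(marsh_survives(slr_rate_mm_yr=10.0, accretion_rate_mm_yr=3.0))  # False
```

The sketch makes the article's point concrete: the outcome hinges entirely on the sign of accretion minus sea level rise.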

Sea levels have risen and fallen several times over the past few hundred thousand years during the Pleistocene ice ages, making the timing of origin for the coastal marshes that exist today unclear.

At the end of the last ice age about 10,000 years ago, the Laurentide Ice Sheet that covered much of North America began to melt. For the next 4,000 years, this large block of ice covering millions of square kilometers dissipated as meltwater, causing rapid sea level rise.

But with the final dissolution of the Laurentide Ice Sheet and the stabilization of the nearby Greenland Ice Sheet, sea level rise slowed markedly some 6,000 years ago, which—according to the new study—is when tidal marshes began to appear along ocean margins.

Increased Sedimentation due to Colonial Agriculture

When the first European settlers began colonizing North America along the coasts, they cut down trees for timber, ship masts, and fuel and cleared away land for agriculture. This activity caused an increase in the amount of sediment carried away by nearby rivers and streams, according to the new study. This would have led directly to an increase in the rate of tidal marsh formation.

Those marshes most strongly associated with the onset of European colonization were located primarily in the northeastern United States, the new study found.

“This signal could possibly be due to the fact that the coastal plain is shorter in the northeast than in the southeast,” Braswell said. “This means sediment transport from erosion to the marsh would have occurred faster and have had less of a chance of getting trapped in a watershed somewhere on its way to the coast.”

The northeastern portion of the present-day United States also had some of the earliest established colonies. Consequently, deforestation would have occurred in this region before the construction of large dams.

“Dams trap sediment and prevent it from being distributed downstream,” Braswell said. “This may be one of the reasons we found a stronger signal for colonial marsh formation in the northeast.” (Geophysical Research Letters, https://doi.org/10.1029/2020GL089415)

—Jerald Pinson (@jerald_pinson), Science Writer

Radar Observations of a Tornado Associated with Typhoon Hagibis

Fri, 10/23/2020 - 11:30

Tropical cyclones not only bring strong winds and heavy rains but can also generate tornadoes. Even though tropical cyclone tornadoes occur relatively frequently, worldwide documentation is sparse and studies have been limited. Put simply, the processes of tornado formation, or “tornadogenesis,” in tropical cyclone tornadoes are not well understood, especially in comparison to those of the more frequently studied classic supercell tornadoes.

Adachi and Mashiko [2020] used rapid-scan radar at close range to examine the genesis of a tornado that struck Chiba Prefecture in Japan during Typhoon Hagibis in 2019. They found that coupling between a preexisting large-scale vortex and a smaller-scale transient vortex was crucial to the tornado’s formation. The high temporal and spatial resolution of their unique observational data makes it possible to examine the details of tornadogenesis associated with tropical cyclones and provides new insights.

Citation: Adachi, T., Mashiko, W. [2020]. High temporal‐spatial resolution observation of tornadogenesis in a shallow supercell associated with Typhoon Hagibis (2019) using phased array weather radar. Geophysical Research Letters, 47, e2020GL089635. https://doi.org/10.1029/2020GL089635

—Suzana Camargo, Editor, Geophysical Research Letters

To Save Low-Lying Atolls, Adaptive Measures Need to Start Now

Thu, 10/22/2020 - 12:57

Low-lying reef islands like the Pacific Ocean’s Marshall Islands could become unstable by midcentury if measures to adapt to rising sea levels are not implemented, according to new research.

Coral reef atoll islands are home to thousands of people around the world, but researchers still don’t agree on how sea level rise will impact these islands and their communities. Conflicting messages can slow or hinder the abilities of local communities to develop effective plans to protect their islands’ livability in the coming century.

In a new study, Kane and Fletcher used an integrated model that incorporates the 5,000-year geological history of the Marshall Islands with updated emissions and sea level rise projections to understand what will happen to atoll islands in the coming decades. The results suggest higher tides and more destructive waves will inundate the Marshall Islands and deteriorate their freshwater sources and forests as soon as midcentury.

Islanders can proactively bolster their islands’ natural ability to adapt and change, however, by preserving and restoring reef ecosystems that protect coasts and provide new sediment to the structure of the island. Without such measures, the islands could be permanently lost as soon as 2080.

Young, Dynamic Islands

Reef atolls are low-lying islands made up of sand and coral that support lush forests and narrow bands of rain-fed freshwater aquifers. Researchers have debated how these islands will be affected by climate change–driven sea level rise because they have been subjected to rising seas before. Most of these islands formed less than 5,500 years ago, but even in that short time, they have been subjected to sea levels 1 to 2 meters higher than current levels.

Some scientists argue that this means reef islands could be resilient to human-caused sea level rise, but most such studies consider islands and their people to be separate entities, said Haunani Kane, a coastal geologist with the University of Hawai‘i at Hilo and lead author of the new study.

“I found that as really being disconnected from the way that we live on islands,” she said. “This study eliminates some of the confusion and the gaps in knowledge related to how sea level rise will impact low-lying islands because it considers the impacts upon and resilience of both the place and the people.”

Modeling Future Change

To address this disconnect, Kane developed a model that treats islands and their people as inseparable components. The model uses a mixture of fossil data, historical photographs, and modern observations of tide and wave events to understand the geological processes of the Marshall Islands over 5,000 years. Using this information about how the islands have grown and responded to past changes over time, she projected each island’s ability to adapt to the next century’s increasing rate of sea level rise.

Kane found that individual islands will respond slightly differently depending on their shape and location. “Even within one nation, there are differences in how islands will respond,” she said.

Broadly, however, she found that the rate of sea level rise will be at least 10 times faster than what the islands were exposed to in the geologic past, and by 2080, sea levels will be higher than anything the islands have experienced in their lifetimes. Through analysis of fossil reef cores and sediment, Kane realized the islands’ structure was largely made up of just one species of foraminifera, single-celled organisms with hard shells. Currently, no new foraminifera are being deposited on the islands. As sea levels rise, waves could wash more of these island-building organisms ashore, but the processes that encourage such growth involve big waves and a dynamic coastline.

“That can be a good thing for the islands, but it could make it difficult to live on an island,” Kane said. Encouraging the islands’ natural ability to grow and change could save them, but living on such a dynamic landmass comes with trade-offs. Kane stresses that she does not seek to tell islanders what to do with her study, only to arm them with information based on the latest science. “The people of that place have the ultimate authority over the decisions that they think are best,” she said. (Earth’s Future, https://doi.org/10.1029/2020EF001525, 2020)

—Rachel Fritts (@rachel_fritts), Science Writer

The Legacy of Nitrogen Pollution

Wed, 10/21/2020 - 13:20

Kim Van Meter jokes that she can trace her career back to a family road trip across the Midwest she took as a child, when an aunt promised a restless Van Meter a penny for each cow she counted along the way. Van Meter dutifully tallied up thousands of cows in pastures along the roadside, collecting all of her aunt’s poker money at the end of the trip.

More recently, as an assistant professor of ecohydrology at the University of Illinois at Chicago, Van Meter found herself counting cattle again, this time to help create an unprecedented database of nitrogen inputs and outputs across the United States dating back to the 1930s, which could help researchers and policymakers understand nitrogen pollution and how to address it.

Nitrogen is not like most other environmental pollutants, in that we need it to feed the world. “Half of the food that we eat wouldn’t be around if it weren’t for synthetic fertilizer,” said David Kanter, an assistant professor of environmental studies at New York University who was not involved in the study. “It’s essential, but at the same time our inefficient use of it makes it one of the major environmental problems of our time.”

Nitrogen-rich runoff from farms and urban areas has poisoned groundwater and led to oxygen-deprived dead zones and toxic algae in water bodies. Nitrogen also forms a potent greenhouse gas that adds to global warming. And although agriculture is the dominant source of nitrogen pollution, it’s not the only one. Indeed, nitrogen has such a large number of sources and environmental impacts that it’s been a major challenge for both researchers and policymakers to get a full picture of the issue.

In the new study, published recently in Global Biogeochemical Cycles, Van Meter and her coauthors tallied up 8 decades of nitrogen inputs (from fertilizer, biological processes, manure, and human waste) and uptake by crops to determine just how much excess nitrogen may be present in counties across the United States.
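The bookkeeping behind the study's surplus estimate is a simple balance: inputs minus crop uptake, per county per year. A sketch of that calculation, using the input categories named above but with entirely hypothetical numbers and field names:

```python
def nitrogen_surplus(county):
    """County-level nitrogen surplus (kg N): sum of inputs minus crop uptake.

    Input categories follow the article (fertilizer, biological fixation,
    manure, human waste); the values and keys here are made up for
    illustration, not drawn from the study's data set.
    """
    inputs = (county["fertilizer"] + county["bio_fixation"]
              + county["manure"] + county["human_waste"])
    return inputs - county["crop_uptake"]

example_county = {  # hypothetical values, kg N per year
    "fertilizer": 5_000_000,
    "bio_fixation": 1_200_000,
    "manure": 2_500_000,
    "human_waste": 300_000,
    "crop_uptake": 7_800_000,
}
print(nitrogen_surplus(example_county))  # prints 1200000 (kg N surplus)
```

Repeating this balance for every county and every census year is what yields the 8-decade, county-scale picture the article describes.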

“The fact that they manage to collect such a multiplicity of sources at such a fine-grain scale, and particularly over such a long time period, is impressive,” said Kanter.

Much of the data were drawn from the U.S. Department of Agriculture’s agricultural census and transferred into spreadsheets to get the numbers into a form the team could work with.

“We had an army of undergrads, going line by line transcribing all the numbers, every small detail for orange trees, cows, chickens,” said Danyka Byrnes, a doctoral student at the University of Waterloo and first author of the study. It was daunting and, at times, incredibly tedious work—but it wasn’t mindless. Over the years, the census categories evolved as the purpose of the census, the leadership, and the country itself changed. “It wasn’t just pulling the numbers, but thinking about how the census changed and reported data in different ways over different decades and years,” said Nandita Basu, an associate professor of water sustainability and ecohydrology at Waterloo and a study author.

The Case of the Missing Cows

The team focused on county-level data because that’s the level at which most agricultural data were reported. But not every category of information, such as the number of cattle or the atmospheric deposition of nitrogen, was available at that fine-grain level for the entire study period, from 1930 through 2012. Often, data resolution increases over time, as the methods and technologies researchers use to measure variables become more precise. But the team encountered the opposite trend for livestock data.

The authors had watched livestock numbers rise steadily after the 1930s and 1940s in some counties across the United States. Then, very suddenly, the cattle disappeared from the county-level data.

The researchers were baffled, until they learned that the agricultural census sometimes suppresses data to protect the privacy of individual farmers. As livestock farming shifted from many small-scale operations to a small number of farmers raising large numbers of cattle or chickens, the census stopped reporting those data at the county level.

“We had to find ways to not let that suppressed data escape our detection, so we would look at state-scale data or even national-scale data to help us find the missing cows or the missing chickens,” said Van Meter. “It’s like putting puzzle pieces together.”
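One plausible way to "find the missing cows" from coarser data is to distribute the gap between a state total and the sum of reported counties among the suppressed counties, in proportion to their shares in an earlier, unsuppressed census year. The sketch below illustrates that idea; it is not the study's exact reconciliation method, and all county names and numbers are hypothetical.

```python
def fill_suppressed(county_counts, state_total, prior_shares):
    """Impute suppressed county livestock counts from a state total.

    county_counts: dict of county -> reported count, with None where the
    census suppressed the value. The unreported remainder (state total
    minus reported counties) is split among suppressed counties in
    proportion to prior_shares, e.g. each county's share in the last
    census year before suppression. Illustrative sketch only.
    """
    reported = sum(v for v in county_counts.values() if v is not None)
    remainder = state_total - reported
    suppressed = [c for c, v in county_counts.items() if v is None]
    share_sum = sum(prior_shares[c] for c in suppressed)
    filled = dict(county_counts)
    for c in suppressed:
        filled[c] = remainder * prior_shares[c] / share_sum
    return filled

counts = {"Adams": 40_000, "Benton": None, "Clay": None}  # hypothetical
shares = {"Adams": 0.4, "Benton": 0.45, "Clay": 0.15}     # hypothetical
print(fill_suppressed(counts, state_total=100_000, prior_shares=shares))
# Benton receives 45000.0 and Clay 15000.0 of the 60000 remainder
```

The proportional split preserves the state total exactly, which is the "puzzle pieces" constraint Van Meter describes.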

When the group looked at various parts of the country with widely variable characteristics, they found that nitrogen had a homogenizing impact. The team compared three counties in Washington, Iowa, and Southern California—regions with widely variable climates, crops, and natural landscapes—and found that the nitrogen surplus numbers were roughly the same. “It makes clear how these artificial inputs of nitrogen, these commercial nitrogen fertilizers, they can completely homogenize landscapes that in all other ways would look completely different from each other,” Van Meter said. “That is the power of the nitrogen.”

But it’s not yet clear how nitrogen pollution affects those various regions, according to Kanter. Do nitrogen surpluses translate to more nitrogen pollution in the atmosphere, reducing air quality, or does nitrogen wind up in rivers and lakes, affecting water quality? Those are questions that this data set might help answer going forward. “This is a really useful long-term data set, but what needs to be done now is to connect it to the potential impacts, because that’s ultimately what policymakers and politicians care about,” Kanter said.

A Legacy of Pollution

The data set also helps to quantify the legacy impacts of nitrogen, which can stick around for decades in the environment, frustrating policymakers who have spent significant time and resources trying to reduce nitrogen runoff from farms, with little to no improvement in water and air quality.

“There’s this frustration that keeps growing because we’ll reduce the amount of fertilizer we use, we’ll put in cover crops, and things don’t improve, or they don’t improve as quickly as we think they’re going to,” Van Meter said. “This lack of immediate response is getting more people to realize that there are these legacy effects. The water quality we’re seeing today is not caused by what we’re doing this year or even what we’ve done the last 5 years. There’s decades of history behind what we’re seeing today.”

Understanding the legacy impacts of nitrogen will change the way we work to address nitrogen pollution, broadening our focus from the farm to include the areas where the nitrogen ends up.

The biggest benefits of addressing nitrogen pollution are local improvements to water and air quality, but doing so also has important climate implications. Nitrous oxide is a greenhouse gas some 300 times more potent than carbon dioxide. “What’s difficult about convincing a national or global population when dealing with climate change is that the benefits of reducing greenhouse gas emissions are shared globally, where the costs are felt locally, so you don’t necessarily see much bang for your buck at home,” Kanter said. The fact that the local benefits of addressing nitrogen pollution outweigh the global benefits makes it “one of the most politically feasible climate mitigation strategies today.”
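The climate accounting implied here is a single multiplication: tonnes of nitrous oxide times its potency relative to carbon dioxide. A minimal sketch using the approximate 300x figure cited above (a rounded 100-year global warming potential; the example quantity is hypothetical):

```python
# Approximate multiplier relative to CO2, as cited in the article.
N2O_GWP = 300

def n2o_to_co2e(n2o_tonnes):
    """Convert tonnes of N2O to tonnes of CO2-equivalent."""
    return n2o_tonnes * N2O_GWP

# A hypothetical 1,000 tonnes of N2O equals 300,000 tonnes of CO2-equivalent.
print(n2o_to_co2e(1_000))  # prints 300000
```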

—Kate Wheeling (@katewheeling), Science Writer

Social Media Help Reveal the Cause of the 2018 Tsunami in Indonesia

Wed, 10/21/2020 - 13:18

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

On 28 September 2018, a magnitude 7.5 earthquake struck the island of Sulawesi in Indonesia and triggered a tsunami that hit Palu, the provincial capital. The earthquake and the resulting tsunami, with waves exceeding 5 meters in Palu Bay, caused major damage on one of Indonesia’s largest and most populous islands. In all, 4,340 people died in the tsunami, and thousands of buildings were damaged or destroyed.

The region is no stranger to large earthquakes. Eastern Indonesia is characterized by complex tectonics, and Sulawesi sits atop the Palu-Koro fault, a major fault extending 220 kilometers. Over the past century, the area has experienced 15 earthquakes larger than magnitude 6.5.

The 2018 tsunami was a mystery, however. There was no obvious mechanism by which a strike-slip earthquake could generate such a massive tsunami, and simulations repeatedly underestimated the inland run-up of the waves reported in post-tsunami surveys. Researchers proposed two possible explanations, displacement of the seafloor by the earthquake and submarine landslides, but sparse instrumental data prevented firm conclusions about the tsunami’s source.

Borrowing an approach used in studies of the 2004 Indonesia and 2011 Japan tsunamis, Sepúlveda et al. supplemented tide gauge data from Palu Bay with a compilation of 43 videos from social media sites such as Twitter and YouTube, as well as from local closed-circuit television channels. In previous work, the researchers geotagged the videos’ specific locations by matching visible features to Google maps and then used the videos to pinpoint the tsunami’s timing and the corresponding water level at each location. The video-derived sea level time histories served as pseudo-observations where tide gauge data were missing.

By including the social media-derived data, as well as satellite interferometric synthetic aperture radar data, which measure changes in the altitude of the land surface, the study’s authors tested the two hypothesized causes of the tsunami using earthquake models. They found that seafloor deformation played a minor role. Instead, a handful of landslides in Palu Bay proved to be the main cause of the tsunami.

According to the authors, the 2018 Palu event highlights the shortcomings of relying solely on tide gauges to document tsunami events and challenges conventional models of tsunami hazards from strike-slip earthquakes, revealing that landslides triggered by strike-slip earthquakes can produce deadly tsunamis. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2019JB018675, 2020)

—Aaron Sidder (@sidquan), Science Writer

Machine Learning for Magnetics

Wed, 10/21/2020 - 11:30

Interpretation of (aero)magnetic anomaly maps typically involves a series of steps after optimization of the raw flight data. Nurindrawati and Sun [2020] explore the performance of machine learning methods for this kind of problem. They test various convolutional neural network (CNN) designs and train the two optimal ones (one for declination and one for inclination) to predict magnetization direction directly from input total field maps. No reduction to the pole or similar data pretreatment is required. Classical magnetic anomaly interpretation, by contrast, often depends critically on constraints on the depth and shape of the anomaly’s source body.

Importantly, the authors illustrate that their CNN approach is less influenced by source body shape, provided the center of the source body is reasonably well located, for example as inferred from existing depth-estimation methods. The CNN approach is anticipated to become a welcome addition to the existing tool kit for magnetic anomaly interpretation.
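The trained networks in the paper are far larger, but the basic mapping, a gridded total-field map in and two angles (declination, inclination) out, can be sketched with a toy forward pass. Everything below (layer sizes, the 16x16 field, and the random untrained weights) is illustrative and not from the paper:

```python
import random

def conv2d(grid, kernel):
    """Valid-mode 2D cross-correlation (a conv layer as in most ML frameworks)."""
    n, m, k = len(grid), len(grid[0]), len(kernel)
    return [[sum(grid[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(m - k + 1)]
            for i in range(n - k + 1)]

def global_avg_pool(fmap):
    """Collapse a feature map to a single scalar feature."""
    vals = [v for row in fmap for v in row]
    return sum(vals) / len(vals)

def predict_direction(total_field, kernels, weights, biases):
    """Toy CNN: conv -> ReLU -> global average pool -> linear head mapping
    pooled features to (declination, inclination) in degrees."""
    feats = []
    for kern in kernels:
        fmap = conv2d(total_field, kern)
        relu = [[max(0.0, v) for v in row] for row in fmap]
        feats.append(global_avg_pool(relu))
    return tuple(sum(w * f for w, f in zip(ws, feats)) + b
                 for ws, b in zip(weights, biases))

random.seed(0)
field = [[random.gauss(0, 50) for _ in range(16)] for _ in range(16)]   # nT
kernels = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(4)]                       # four 3x3 filters
weights = [[random.gauss(0, 0.1) for _ in range(4)] for _ in range(2)]  # untrained
biases = [0.0, 45.0]                                # placeholder offsets
dec, inc = predict_direction(field, kernels, weights, biases)
print(f"predicted declination={dec:.1f}, inclination={inc:.1f} degrees")
```

With untrained weights the printed angles are meaningless; the point is only the data flow: map in, two angles out, with no reduction-to-pole step in between.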

Citation: Nurindrawati, F., Sun, J. [2020]. Predicting magnetization directions using convolutional neural networks. Journal of Geophysical Research: Solid Earth, 125, e2020JB019675. https://doi.org/10.1029/2020JB019675

—Mark J. Dekkers, Associate Editor, JGR: Solid Earth

Finding Value in the Margins to Build a Bioeconomy

Tue, 10/20/2020 - 12:08

Transitioning from the use of fossil fuels to biofuels—particularly in the transportation sector, which is a major source of direct greenhouse gas emissions—is one of the key means envisioned to reach emissions reduction targets outlined in the Paris Agreement. Growth rates of biofuel production are lagging, however.

Providing sustainably sourced biomass that supports biofuel production but does not displace food crops or biodiversity conservation is a challenge. To avoid such displacements, farmers often use marginal agricultural lands, but this can come with economic disadvantages. One way to balance the checkbook is to pay farmers for environmental services—that is, for benefits proffered by cultivated landscapes, like increased pollination, carbon sequestration, and flood control, among others. (Ecosystem services refer to the contributions of native landscapes.)

Monetizing environmental services can guide public subsidies for biofuel crops and incentivize agricultural producers to put their marginal lands to work. In a new study, Von Cossel et al. calculated the value of environmental services provided by Miscanthus Andersson, a promising biofuel feedstock, in the agricultural region of Brandenburg, Germany. Native to East Asia, the perennial grass can be used to produce isobutanol, a replacement for ethanol, and it delivers high yields in varied environments. Research has shown that the plant also reduces erosion, improves soil fertility, and protects groundwater; however, it remains underused in the United States and Europe.

The researchers drew on previous work to assign monetary values to the environmental services associated with Miscanthus. In total, the authors found that Miscanthus cultivation is annually worth between about $1,400 and $4,900 (€1,200–€4,183) per hectare, 3 times more than the value of the raw material for biofuel. The results showed that Miscanthus annually provides up to $900 (€771) per hectare for flood control and almost $60 (€50) for pollination, for instance.
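The valuation amounts to summing individually monetized services per hectare. In the sketch below, only the flood control (~€771/ha) and pollination (~€50/ha) figures come from the article; the remaining entries are hypothetical placeholders chosen so the total falls inside the reported €1,200–€4,183 range.

```python
# Per-hectare annual values of environmental services (EUR). Flood control
# and pollination are from the article; the rest are hypothetical.
services_eur_per_ha = {
    "flood_control": 771,
    "pollination": 50,
    "carbon_sequestration": 900,    # hypothetical
    "erosion_reduction": 400,       # hypothetical
    "groundwater_protection": 300,  # hypothetical
}

total = sum(services_eur_per_ha.values())
print(f"Total environmental services: €{total}/ha/yr")  # prints €2421/ha/yr
```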

The authors say that analyses such as this one are critical components in the transition to a bioeconomy. And they suggest that monetizing environmental services can help pave the way for the world to reach established biofuel targets, such as that set by the International Energy Agency, which envisions biofuels with a 10% share in the transportation sector by 2030. (Earth’s Future, https://doi.org/10.1029/2020EF001478, 2020)

—Aaron Sidder, Science Writer

Biggest Risk to Surface Water After a Wildfire? It’s Complicated

Tue, 10/20/2020 - 12:07

California’s CZU Lightning Complex Fire is 100% contained, but it’s still smoldering almost 2 months after it started and people were evacuated. For those residents of Santa Cruz County who returned to find their homes still standing, the sense of relief soon turned back to anxiety as they received mixed messages about how safe their water was.

Weeks after the fire was contained, some residents were still under “do not drink/do not boil” orders—unable to use their tap water to even bathe their children, said Hannah Hageman, a journalist living in Santa Cruz. Others for whom such orders have been lifted still don’t know whether to trust their water, Hageman said. “It’s been a traumatic couple of months for residents” of the coastal community.

Typically, the biggest concern for water managers and water researchers after a wildfire is sediment, as erosion contributes to sedimentation in surface water everywhere a wildfire burns.

After the Burn

In Santa Cruz County, however, benzene, a known carcinogen, and other volatile organic compounds (VOCs) were found in drinking water samples. VOCs and semivolatile organic compounds are especially worrisome, said Andrew Whelton, an environmental engineer at Purdue University in Indiana who coauthored a study in the July/August issue of AWWA Water Science. Whelton and his colleagues noted that benzene identified in Santa Rosa, Calif.’s water after the Tubbs Fire in 2017 and in Paradise, Calif., after the Camp Fire in 2018 was found in the water distribution network where plastic pipes and houses burned, not in the source water.

For source water, immediate concerns in the wake of a fire include microbes or bacteria like E. coli; mercury and other metals that might be volatilized by a fire and rained down or deposited by ash; ash itself; increased sediment; and disinfection by-product precursors. Disinfection by-products are compounds formed when water treatments like chlorine or other disinfectants react with dissolved organic matter (DOM) in water.

All of these water problems can arise with any wildfire, said Chauncey Anderson, a water quality specialist at the U.S. Geological Survey (USGS) Oregon Water Science Center. Each watershed will experience a discrete set of primary risks for a fire, he said. Topography, including slope, affects how much erosion occurs. The type of vegetation (or structures) burned and what was in the soil affect what ends up in surface water. The fire’s mechanics, like how hot the fire burned and how much of the watershed burned, affect what’s in the runoff. The hydrology of the area, the climate of the area, and how the watershed supplies people also matter, Anderson said. Thus, the exact “effects of these fires on drinking water are going to be highly variable.”

Sedimentation: A Wide-Reaching Threat

Sedimentation, however, is the first and arguably biggest threat to surface water supplies, Anderson said. By denuding landscapes of their erosion-controlling vegetation, wildfires leave them primed for erosion. When rains hit, tremendous amounts of ash and sediment wash into rivers and reservoirs, causing physical disruptions in two forms. In the first, large-scale runoff (in the form of landslides and debris flows) clogs intake pipes at water treatment facilities and fills reservoirs with sediment. In the second, smaller-scale runoff increases the amount of fine sediment suspended in water.

Sedimentation also decreases the water quality itself by changing the amount and type of DOM, said Alex Chow, a biogeochemist at Clemson University and coauthor of a study in Water Research about DOM, disinfection by-products, and nitrogen after fire. As more and different organic carbon is released into surface water after a fire, it is more difficult for municipalities to use conventional treatment processes without accidentally creating disinfection by-products. Nitrogen compounds are another issue, Chow said. Fire releases nitrogen stored in plants and trees. “In the first year after a fire, we mainly see ammonium. In the second year, we see more nitrates released,” he said. Chow’s research has shown that watersheds can be disrupted for up to 15 years after a fire.

Research also indicates that the presence of phosphorus and some metals may increase in water after a fire, said Charlie Alpers, a USGS research chemist. The 2015 Rocky and Jerusalem Fires in Northern California, for example, burned an area affected by historical mercury mining in Cache Creek. The fire caused mercury levels in burnt soils to decrease because the mercury was volatilized and transported elsewhere. But the increased quantity of suspended sediment for a given flow offset the lower mercury content of particles eroded from those soils, he said; thus, the overall amount of particulate mercury transported downstream was similar to prefire levels.

Johanna Blake, a geochemist at the USGS New Mexico Water Science Center, has also found that different trees release different metals, in different concentrations, when burned. When one type of tree, say, a ponderosa pine, burns, it releases whatever metals it has absorbed from the soil and air, like iron and manganese. When another type of tree, like an aspen or spruce, burns, it might emit more vanadium, lead, magnesium, or copper, Blake said. “But we’ve barely scratched the surface” on questions like “what’s going to cause a [metal’s] release from the ash and what kind of geochemical reactions the ash and sediment will cause in the water.”

Sedimentation is also dependent on precipitation. For instance, Alpers said, the first year after the Rocky Fire was pretty dry, so the effect of higher than usual suspended sediment concentration (for a given flow) persisted in the river for more than a year. If there’s a drought, he said, the rain is not there to flush out the system. On the other hand, if there’s a lot of rain, sedimentation may be extremely high immediately but taper off quickly, said Kevin Bladon, a forest hydrologist at Oregon State University.

If It Burns, It Will Run Off

Because whatever is in the sediment of a burned area will likely end up in surface water—whether it’s contaminants from paints and pesticides or ash from vegetation—scientists and water managers want to know what is in the sediment. Alpers and Anderson, among others, are often in the field sampling soils and water even before fires are completely extinguished.

After the Holiday Farm Fire in Oregon this fall, Anderson and his USGS colleagues scrambled to install a new stream gauge and real-time water quality monitoring station on the McKenzie River after the one upstream of water treatment facilities burned. “We talked to our partner agencies, to water quality managers, to find out what they needed: They needed basic infrastructure back up and running so they could see what’s coming in real time,” Anderson said.

In California, Alpers and his USGS colleagues sampled ash and soils near Lake Berryessa, a surface water reservoir whose watershed burned during the LNU Lightning Complex Fire. Over a gradient of burn severity, the team is evaluating mercury, nitrogen, phosphorus, and carbon content. They are working with the Bureau of Reclamation to use these constituents to build a model for water managers to use after a fire. The model would ideally let water managers know “here’s how long you can expect water quality to be different, how long will it take to recover depending on how many storms and what magnitude they are, and how much sediment’s going to come into your reservoir,” Alpers said.

USGS teams sampled ash and soils near Lake Curry, which burned during the LNU Lightning Complex Fire in August and September 2020. Credit: USGS

“There’s a new push to do more research related to wildfires and water quality—especially after this year,” Blake noted. Part of the reason is that most of the postfire water quality research is based on data from California fires, Bladon said. But a place like Oregon has a vastly different precipitation regime, plus different soils and geology, than California or elsewhere, Bladon said. So it’s important to capture as much data from Oregon—and everywhere—as possible. After all, he said, these fires are going to keep happening.

And, Anderson said, they are leaving lasting effects: “In Oregon, we haven’t had many prior situations where significant drinking water supplies were so seriously and directly affected by wildfire,” he said. “The overall effect is that a very large percentage of Oregon’s population is likely going to have [long-term] effects that range from increased treatment costs to potential health effects.”

—Megan Sever (@MeganSever4), Science Writer

Disseminating Scientific Results in the Age of Rapid Communication

Tue, 10/20/2020 - 12:01

Earlier this year, as the virus that causes COVID-19 spread around the world, countries responded by imposing lockdown measures one by one. Scientists, seeing the subsequent satellite observations, rushed to publish papers about improved air quality, many of which appeared in short order on preprint servers. These preprints, which are often posted in tandem with their submission to a peer-reviewed journal, spawned a host of press releases and news articles that were spread on social media.

We’re living in a time when calamitous current events are escalating the need for more information. In response, scientists and media both are making leaps between observation and conclusion. If we want to make sure that assertions are accurate before they are widely disseminated, our rigorous and necessary review systems must be modernized so they aren’t circumvented. We must also make sure that the move toward open data is made with the means for nonexperts to understand the context and limitations of that information.

The Leap to Conclusions

The rush to print by both scientists and the press can raise questions about the validity of the research conclusions. One study published on a preprint server in early April by Harvard University researchers [Wu et al., 2020] was widely circulated on social media. The authors, who simultaneously submitted it to the New England Journal of Medicine, claimed in their paper that an increase in long-term exposure to particle pollution of 1 microgram per cubic meter can lead to a 15% increase in mortality from COVID-19, leading to many news stories on the correlation of air pollution and death rates from the virus.

Two epidemiologists reviewed the preprint and concluded that its assertions were not robust. For example, to specify fine-particle (PM2.5) exposure levels, Wu et al. averaged particle concentration estimates across the United States from satellite observations and models covering a 17-year period at a spatial resolution of 0.01° × 0.01° (about 1 × 1 kilometer). They then mapped these results to county levels by spatial averaging. But assigning a single particle concentration value on the basis of a 17-year mean to a large region is problematic; such coarse representation does not capture the variability of human exposure to particle concentrations in space and time. Several weeks after the preprint was published, the researchers revised their mortality estimate of 15% down to 8%.
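The scale mismatch the reviewers flagged can be illustrated with a toy calculation (synthetic numbers, not the study's data): collapsing a fine-resolution pollution grid to one county-wide average discards the within-county spread that actually drives individual exposure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical county on a fine (~1 x 1 km) grid: a low rural background
# with one small urban hot spot. Values are illustrative only.
county = rng.normal(6.0, 0.5, size=(50, 50))  # rural PM2.5, ~6 ug/m^3
county[20:25, 20:25] += 12.0                  # urban cluster, ~18 ug/m^3

# A county-level analysis sees only the single spatial mean...
county_mean = county.mean()
# ...while the exposure range across the county is far wider.
within_spread = county.max() - county.min()

print(f"county mean: {county_mean:.1f} ug/m^3")
print(f"within-county range: {within_spread:.1f} ug/m^3")
```

The hot spot barely moves the county mean, yet residents there experience roughly triple the background concentration, which is exactly the variability a county-averaged regression cannot see.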

The Tropospheric Monitoring Instrument (TROPOMI) is a sensor aboard the European Space Agency’s (ESA) Sentinel-5 Precursor satellite that collects atmospheric composition data at a relatively high spatial resolution (3.5 × 5.5 kilometers) at daily intervals, providing greater insights into urban-scale changes in air quality. Its data can be found on the mission’s data hub. TROPOMI observations of nitrogen dioxide (NO2) have been widely used by the media as indicators of urban- to regional-scale economic activity and how it is changing during the pandemic. In principle, NO2, a pollutant produced by high-temperature combustion and a precursor for photochemical smog, is ideal for this application because it has a short chemical lifetime and remains near its source.

However, satellite data often come with caveats, quality flags, and recommendations on how to use or not use the data (e.g., accounting for transport by winds or screening for clouds, if they are present). Scientists routinely access this information in users’ guides and algorithm theoretical basis documents, but nonexpert users either don’t know to look for these resources or may not have access to them. There is a potential for misinterpretation of the results when these data are used without properly heeding these recommendations. Researchers at the Copernicus Atmosphere Monitoring Service illustrated those issues after several news outlets used TROPOMI data to illustrate impacts of COVID-19 lockdowns on reduced traffic and improved air quality.
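As one concrete example of heeding those caveats: the TROPOMI NO2 product carries a per-pixel qa_value field, and the product documentation recommends keeping only pixels with qa_value above 0.75, which screens out cloud-covered and otherwise problematic retrievals. A minimal sketch of that screening, using small synthetic arrays in place of a real granule (field names follow the product convention; real data would be read from the NetCDF files on the data hub):

```python
import numpy as np

# Synthetic stand-ins for two fields of a TROPOMI NO2 granule.
no2 = np.array([1.2e-5, 8.0e-6, 2.5e-5, 4.0e-6, 1.9e-5])  # column NO2, mol/m^2
qa_value = np.array([0.9, 0.4, 0.8, 0.95, 0.3])            # per-pixel quality flag

# Recommended screening: retain only high-quality retrievals.
good = qa_value > 0.75
screened_no2 = np.where(good, no2, np.nan)

print("pixels kept:", int(good.sum()))          # 3 of 5
print("mean of kept pixels:", np.nanmean(screened_no2))
```

Skipping this step and averaging all pixels, cloudy ones included, is precisely the kind of misuse that can turn a real air quality signal into a weather artifact.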

Travails of the Peer Review Process

Scientists, of course, regularly present ongoing work and discuss it with peers at conferences. These presentations are opportunities to obtain feedback and revise work in preparation for publishing in a peer-reviewed journal. The peer review process has provided a successful mechanism for legitimizing research and informing the world of new science discoveries. Yet it is fraught with issues, including “peer review rings,” in which a feature that allows authors to suggest reviewers is misused to fast-track a paper into publication without real scrutiny. There’s also the argument that the peer review process itself remains unvalidated.

Finally, the sometimes exasperating delays associated with the peer review process have led to efforts to circumvent it. Researchers risk losing their claim to a discovery if another paper reporting the same discovery is published by a speedier journal. Scientists have thus begun turning to other avenues to get eyes on their results. Publishing a preprint and sharing it on social media allow authors to get attention and feedback, as well as put a date stamp on their work. By the end of September, a search for “COVID-19 and air quality” called up 1,445 preprints on the medRxiv server; medRxiv (pronounced “med archive”) is a free online archive and distribution server for complete but unreviewed manuscripts in the medical, clinical, and related health sciences.

Along with preprint servers, predatory publishing outlets continue to operate around the world. They practice “pay to publish” under the guise of open access practices. Though the community continually works to identify and blacklist these fraudulent journals [Strinzel et al., 2019], threats of lawsuits often restrict sharing of those lists. As long as scientists’ recognition is tied to their number of publications, the race to publish—and the opportunities for those journals to take advantage of the system—will continue.

Rapid Dissemination but with Rigorous Scientific Analysis

What options are available to scientists who want to disseminate their findings quickly but still operate under the safeguards of rigorous review? In June, the MIT Press responded to the deluge of pandemic-related papers being published before peer review by launching Rapid Reviews: COVID-19, an open access journal that offers accelerated reviews of preprints.

Other journals are developing more open review processes. Authors who submit to AGU Advances are encouraged to publish in the Earth and Space Science Open Archive, ESSOAr, while undergoing peer review. The European Geosciences Union’s Copernicus journals publish manuscripts that pass a rapid peer review in open access discussion forums to solicit comments from the community; those comments are considered when the paper undergoes formal peer review.

Meanwhile, agencies that are responsible for freely available data can adapt in several ways. First, they can offer accessible documents and manuals with information on data quality and its limitations. They can also ensure that the data are analysis ready and packaged for convenient use by nonexperts. And they can ensure that all these offerings are taken advantage of through regular training for media and other nonexpert users. ESA, for example, has developed a dedicated service to provide analysis-ready NO2 data sets from TROPOMI to the public. Space agencies are launching user-friendly dashboards with Earth-observing data and even coronavirus-specific data. Additionally, NASA, NOAA, and other institutions are developing training materials for media.

The COVID-19-related demand for environmental data caught most research institutions by surprise. As we continue to embrace FAIR (findable, accessible, interoperable, and reusable) data policies that offer free and open access to the observations, we must also embrace policies that encourage best practices for use of those data. We must also, as scientists, find better ways for our community to expedite the sharing of results while still ensuring proper scrutiny of our methods. By implementing these safeguards, we reduce the desire to circumvent the system completely and ensure that the public can quickly get much-needed scientific information that they can rely on.

Modeling the Cascading Infrastructure Impacts of Climate Change

Mon, 10/19/2020 - 12:27

With wildfires burning across the West, hurricanes intensifying in the Southeast, and sea levels rising along nearly every coastline, it has become clear that the United States must adapt its infrastructure to handle increasingly frequent and intense climate shocks. But adaptation is complicated by the fact that links between different infrastructure systems can lead to cascading disruptions.

Such impacts are readily apparent along shorelines, where protections against sea level rise in one location may lead to more flooding elsewhere along the coast and where flooding of major roadways can disrupt traffic well beyond the inundation zone. A new study looks at this disruptive cascade scenario in the San Francisco Bay Area, where officials are already battling sea level rise.

Hummel et al. integrated coastal flooding and traffic models for three counties in the Bay Area—Alameda, San Mateo, and Santa Clara—to find out how shoreline adaptation decisions can lead to local and regional changes in traffic flows. The researchers used the U.S. Geological Survey’s Coastal Storm Modeling System to model the hydrodynamic impacts of shoreline protections for each county and simulated potential traffic impacts on the basis of the current roadway infrastructure and existing commuter data. They also evaluated how one county’s decision to protect or not protect its shoreline would affect travel delays in the others.

In most cases, when one county decided not to protect its shoreline, allowing its roads to flood, the result was cascading traffic delays in the other counties, forcing commuters to spend as much as 10.7% more time on the roads. On the other hand, when one county took action to protect its shores, the others experienced increases in flooded areas and roadways that also caused travel delays, although the magnitude of the hydrodynamic impacts did not always scale with the traffic impacts. When Santa Clara’s shoreline was protected in the simulation, for example, roadways in San Mateo that otherwise would not have flooded did flood, but this flooding had little effect on travel times. But when Alameda protected its shoreline and Santa Clara flooded, vehicle hours traveled increased by 7.2%.

The results reveal the inherent complexity in predicting impacts in interconnected infrastructure systems, the authors say, and they highlight the importance of holistic and coordinated strategies for policymakers hoping to adapt to the changing climate. (Earth’s Future, https://doi.org/10.1029/2020EF001652, 2020)

—Kate Wheeling, Science Writer

Dune Universe Inspires Titan’s Nomenclature

Mon, 10/19/2020 - 12:27

Frank Herbert’s Dune tells the story of Paul Atreides, a son of a noble family sent to the hostile desert planet Arrakis to oversee the trade of a mysterious drug called melange (nicknamed “spice”), which gives its consumers supernatural abilities and longevity. Betrayal, chaos, and political infighting ensue.

Imagine standing on Arrakis, surrounded by an ocean of sand. The air is unbreathable, the sky hazy, the landscape mysterious. Sand for miles, as far as the eye can see. You know that several hundred kilometers away is a vast network of canyons that, from above, look like they could have been carved by massive worms.

Before you get too excited, it’s important to know that this isn’t the notorious desert planet featured in the Dune novels.

No, this Arrakis is closer to our own world.

This Arrakis is only about 1 billion kilometers from Earth, on a world orbiting Saturn.

We’ve even landed a spacecraft near there.

If you haven’t already guessed, this Arrakis—officially called Arrakis Planitia—belongs to the second-largest moon in our solar system, Titan. Arrakis is a vast, undifferentiated plain of sand, but not sand as we know it. Titan’s sand is made of large organic molecules, which would make it softer and stickier, said Mike Malaska, a planetary scientist at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, Calif.

All of the features on Titan (here photographed in ultraviolet and infrared by the Cassini orbiter) are named after places in Frank Herbert’s Dune novels. Credit: NASA/JPL/SSI

Malaska likes to imagine that Titan’s hydrocarbon sand, which is actually referred to as tholin, or complex organic gunk, could double as the infamous spice at the center of Dune’s expansive narrative arc.

In the Dune books, spice smells like cinnamon, whereas tholin on Titan “probably smells like bitter almonds…and death,” Malaska said.

Arrakis isn’t the only name from the Dune novels that adorns Titan’s geological features. All of Titan’s named undifferentiated plains and labyrinths (canyon-like features carved into the surface) are named after planets from the Dune series. There’s Buzzell Planitia, named after the “punishment planet” used by an ancient order of women with supernatural abilities. There’s Caladan Planitia, named after the home planet of Dune’s main hero, Paul Atreides. There’s Salusa Labyrinthus, named after a prison planet. And more.

“I am just amazed [at] how much Titan resembles the Arrakis description,” Malaska said. In addition to the vast plains of hydrocarbon sands that stretch across Titan’s surface, the moon’s complex climate of storms and methane rain feel Dune-like. “Titan is Dune.”

And, of course, there are the dunes. Titan’s dune fields circle the moon’s 16,000-kilometer-long equator. The moon has more dunes than Earth has deserts.

Rosaly Lopes, another planetary scientist at JPL, was one of the first people to see Titan’s dunes. She and other Cassini team members were analyzing images from one of the spacecraft’s first flybys of Titan, back in 2005, and they saw weird, curved features on the surface.

“When we first saw the dunes, we didn’t know they were dunes,” Lopes said. It wasn’t until a subsequent Cassini flyby that they confirmed that Titan sported dunes wrapped around its equator.

In fact, Lopes was the first to suggest naming Titan’s plains and labyrinths after planets in the Dune universe back in 2009, although she doesn’t remember exactly how the idea came up. She said it just made sense, considering Titan’s dunes.

Planetary scientists don’t name features until there’s a scientific need for them, Lopes said. A theme must first be chosen, whether it’s mythical birds for interesting areas on the asteroid Bennu, or gods of fire for volcanoes on Jupiter’s moon Io (Lopes named two of these, Tupan and Monan, after deities of indigenous cultures in her home country of Brazil). There are other literary features across the solar system, like Mercury’s craters named after famous artists and writers.

Although Herbert was originally inspired by sand dunes of the Oregon coast, Malaska imagines that Herbert—and his many readers—could have also been imagining Mars, the only desert-like planet we knew of around the time Dune was published, in 1965. In fact, that same year, NASA made its first successful flyby of Mars with its Mariner 4 spacecraft and humanity got its first close-up look of the Red Planet.

But Titan’s dune fields are unique in the solar system, and it’s only fitting that this mysterious moon bear the name of a revolutionary science fiction universe.

—JoAnna Wendel (@JoAnnaScience), Science Writer

Unmixing Magnetic Components – An Experimental Twist

Mon, 10/19/2020 - 11:30

A rock’s magnetic properties can be a sensitive probe of past geological and environmental conditions; they are captured in a (pretty diverse) set of magnetic parameters. However, the interpretation of those parameters is often non-unique and may be biased by the interpreter’s preferences. To alleviate these drawbacks, several unmixing approaches have been developed over the last 25 years or so, based mathematically on either forward or inverse methods.

He et al. [2020] put these approaches to the test by investigating how well they unmix known experimental mixtures of two magnetically quite distinct endmembers: an andesite rock and a magnetotactic bacterium strain. They show that inverse approaches perform well, provided that the full data variability is captured during data collection. When the endmembers are known, a hybrid mixing approach is favored. A collection of the most meaningful endmembers would be amenable to machine learning; i.e., the search for such endmembers is on!
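In the simplest two-endmember case, the inverse problem being benchmarked reduces to least squares: a measured remanence curve M is modeled as p·A + (1 − p)·B for endmember curves A and B, and p has a closed-form solution. A schematic sketch with synthetic acquisition curves (this is an illustration of the general idea, not the paper's data or method):

```python
import numpy as np

def unmix_two_endmembers(mixture, em_a, em_b):
    """Least-squares proportion p of endmember A under the model
    mixture = p * em_a + (1 - p) * em_b."""
    diff = em_a - em_b
    p = np.dot(mixture - em_b, diff) / np.dot(diff, diff)
    return float(np.clip(p, 0.0, 1.0))

# Synthetic acquisition curves on a common field grid (arbitrary units).
fields = np.linspace(0, 1, 100)
em_a = 1 - np.exp(-8 * fields)     # magnetically "soft" component, saturates quickly
em_b = fields**2                   # "hard" component, keeps acquiring

mixture = 0.3 * em_a + 0.7 * em_b  # known 30/70 laboratory-style mixture
p = unmix_two_endmembers(mixture, em_a, em_b)
print(f"recovered proportion of endmember A: {p:.2f}")
```

With noiseless curves the 30% proportion is recovered exactly; the paper's harder question is how well such recovery survives measurement noise and imperfectly known endmembers.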

Citation: He, K., Zhao, X., Pan, Y., Zhao, X., Qin, H., & Zhang, T. [2020]. Benchmarking component analysis of remanent magnetization curves with a synthetic mixture series: Insight into the reliability of unmixing natural samples. Journal of Geophysical Research: Solid Earth, 125, e2020JB020105. https://doi.org/10.1029/2020JB020105

—Mark J. Dekkers, Associate Editor, JGR: Solid Earth

Megadrought Caused Yellowstone’s Old Faithful to Run Dry

Fri, 10/16/2020 - 21:14

Yellowstone’s most famous geyser, Old Faithful, has erupted with an unusual degree of regularity since records were first taken, in the 1870s, discharging thousands of gallons of boiling, silica-rich water over 30 meters into the air an average of 16 times a day.

Scientists studying fossilized wood samples buried and preserved by the geyser have now confirmed that Old Faithful is likely thousands of years old, according to a new study by Hurwitz et al. They also found that the famous geyser was dormant for several decades during the 13th century due to a megadrought that gripped much of western North America. Finally, with warmer temperatures and extended droughts now expected to increase in the region due to climate change, researchers expect longer intervals between Old Faithful’s eruptions.

Old Faithful’s Medieval Forest

Although it is one of the most popular and recognizable geologic features on Earth, scientists know surprisingly little about Old Faithful, such as how old it is or what its eruption patterns looked like before recorded history.

As geysers erupt over the course of hundreds to thousands of years, dissolved silicate minerals in the discharged water slowly build up around their bases, forming a slightly elevated landscape known as a sinter mound. Plant growth is conspicuously absent near these mounds due to high soil temperatures, an alkaline pH, and a high concentration of silica—so any fossilized plants found buried within the mound likely grew at a time when the geyser was inactive.

Scientists have known there was fossilized plant material buried near Old Faithful since at least the early 1950s, when a geologist working for the U.S. Geological Survey (USGS) used what was then the newly developed method of radiocarbon dating on a single wood sample retrieved from Old Faithful’s sinter mound in an attempt to determine the geyser’s age.

Radiocarbon dating indicated the wood was about 730 years old, with an estimated error of 200 years. This meant that sometime around the 12th or 13th century, Old Faithful ran dry.
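For reference, a conventional radiocarbon age is derived from the measured fraction of modern carbon-14 using the Libby mean life of 8,033 years, t = −8033 · ln F. A quick sketch of the conversion in both directions (illustrative only; it omits the calibration against tree ring records that converts radiocarbon years to calendar years):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, conventional radiocarbon constant

def radiocarbon_age(fraction_modern):
    """Conventional 14C age (years BP) from the sample's 14C content
    relative to the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def fraction_remaining(age_years):
    """Inverse: expected fraction of modern 14C at a given age."""
    return math.exp(-age_years / LIBBY_MEAN_LIFE)

# A sample dated to ~730 years still retains about 91% of its original 14C.
f = fraction_remaining(730)
print(f"fraction modern: {f:.3f}")
print(f"age recovered: {radiocarbon_age(f):.0f} years")
```

The small fraction of decay over mere centuries is why early 1950s instruments carried error bars of centuries, while modern accelerator mass spectrometry can resolve the same samples far more tightly.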

A Sudden Disappearance

Now, for the first time in over 50 years, scientists have collected additional fossil wood samples from the geyser’s sinter mound in an attempt to determine what may have caused Old Faithful’s dry spell. Their radiocarbon results matched almost perfectly with the date obtained more than a half century ago.

“The radiocarbon dating methods we have today are nothing close to what they had back in the early 50s, so when we got back our first batch of results, I thought it was almost too good to believe,” said Shaul Hurwitz, a research hydrologist with the USGS at the California Volcano Observatory and lead author of the new study.

The new study found that the wood fossils were 750 years old. They were located near the top of the mound, which, according to Hurwitz, indicates that Old Faithful was already hundreds, if not thousands, of years old by the time the trees grew there.

By taking radiocarbon dates from the oldest and youngest parts of a single wood sample, Hurwitz and his team determined that trees grew on Old Faithful’s sinter mound for about 100 years during the 13th and 14th centuries, which coincides with the tail end of what’s become known as the Medieval Climate Anomaly.

The Medieval Climate Anomaly

During the Medieval Climate Anomaly, which began around 900 CE and ended some 400 years later, Earth underwent a slight warming period that resulted in severe droughts in western North America.

By using tree ring data taken from long-lived juniper trees—some of which are more than 1,400 years old—in the northern part of Yellowstone National Park, scientists can infer approximately how much water was available during a given year.

During the 13th century, Yellowstone was experiencing a prolonged period of severe drought. By combining this information with the radiocarbon dates obtained from the geyser’s sinter mound, scientists concluded that without enough rain to recharge groundwater reserves, Old Faithful—likely along with many of the surrounding geysers—went dry.

“We know from paleoclimate studies that this was a time of more fires in Yellowstone, which is usually associated with warm temperatures and drought,” said Cathy Whitlock, a paleoecologist at Montana State University in Bozeman who was not involved in the new study.

Looking to the Future

This isn’t the first time environmental conditions have been shown to affect the timing of Old Faithful’s eruptions. A series of earthquakes in the area starting in 1959 caused the intervals between eruptions to increase by several minutes. The intervals were lengthened further by the Turn of the Century Drought that struck the western United States between 2000 and 2010, the worst dry spell in the region since the Medieval Climate Anomaly.

“This pattern could play out again in the future as global warming creates drier conditions,” Whitlock said. “Old Faithful already seems to be going off less frequently, which suggests that climate is affecting the water and underground plumbing.” (Geophysical Research Letters, https://doi.org/10.1029/2020GL089871, 2020)

—Jerald Pinson (@jerald_pinson), Science Writer
