Eos: Science News by AGU

Earth’s Continents Share an Ancient Crustal Ancestor

Mon, 08/23/2021 - 13:34

The jigsaw fit of Earth’s continents, which long intrigued map readers and inspired many theories, was explained about 60 years ago when the foundational processes of plate tectonics came to light. Topographic and magnetic maps of the ocean floor revealed that the crust—the thin, rigid top layer of the solid Earth—is split into plates. These plates were found to shift gradually around the surface atop a ductile upper mantle layer called the asthenosphere. Where dense oceanic crust abuts thicker, buoyant continents, the denser crust plunges back into the mantle beneath. Above these subduction zones, upwelling mantle melt generates volcanoes, spewing lava and creating new continental crust.

From these revelations, geologists had a plausible theory for how the continents formed and perhaps how Earth’s earliest continents grew—above subduction zones. Unfortunately, the process is not that simple, and plate tectonics has not always functioned this way. Research since the advent of plate tectonic theory has shown that subduction and associated mantle melting provide only a partial explanation for the formation and growth of today’s continents. To better understand the production and recycling of crust, some scientists, including our team, have shifted from studying the massive moving plates to detailing the makeup of tiny mineral crystals that have stood the test of time.

Starting in the 1970s, geologists from the Greenland Geological Survey collected stream sediments from all over Greenland, sieving them to sand size and chemically analyzing them to map the continent-scale geochemistry and contribute to finding mineral occurrences. Unbeknownst to them at the time, tiny grains of the mineral zircon contained in the samples held clues about the evolution of Earth’s early crust. After decades in storage in a warehouse in Denmark, the zircon grains in those carefully archived bottles of sand—and the technology to analyze them—were ready to reveal their secrets.

This cathodoluminescence image shows the internal structure of magnified zircons analyzed by laser ablation. Credit: Chris Kirkland

Zircon occurs in many rock types in continental crust, and importantly, it is geologically durable. These tiny mineral time capsules preserve records of the distant past—as far back as 4.4 billion years—which are otherwise almost entirely erased. More than just recording the time at which a crystal grew, zircon chemistry records information about the magma from which it grew, including whether the magma originated from a melted piece of older crust, from the mantle, or from some combination of these sources. Through the isotopic signatures in a zircon grain, we can track its progression, from the movement of the magma up from the mantle, to its crystallization, to the grain’s uplift to the surface and its later erosion and redeposition.

The Past Is Not Always Prologue

New continental crust is formed above subduction zones, but it is also destroyed at subduction zones [e.g., Scholl and von Huene, 2007]. Formation and destruction occur at approximately equal rates in a planetary-scale yin and yang [Stern and Scholl, 2010; Hawkesworth et al., 2019]. Crust formation above subduction zones therefore cannot satisfactorily account for growth of the continents.

What’s more, plate tectonics did not operate during Earth’s early history the way it does today. Although there are indications that subduction may have occurred early on (at least locally), many geochemical, isotopic, petrological, and thermal modeling studies of crust formation processes suggest that plate tectonics started gradually and didn’t begin operating as it does today until about 3 billion years ago, after more than a quarter of Earth’s history had already passed [e.g., McLennan and Taylor, 1983; Dhuime et al., 2015; Hartnady and Kirkland, 2019]. Because the mantle was much hotter at that time, more of it melted than it does now, producing large amounts of oceanic crust that was both too thick and too viscous to subduct.

Nonetheless, although subduction was apparently not possible on a global scale before about 3 billion years ago, geochemical and isotopic evidence shows that a large volume of continental crust had already formed by that time [e.g., Hawkesworth et al., 2019; Condie, 2014; Taylor and McLennan, 1995].

If subduction didn’t generate the volume of continental crust we see today, what did?

How Did Earth’s Early Crust Form?

The nature of early Earth dynamics and how and when the earliest continental crust formed have remained topics of intense debate, largely because so little remains of Earth’s ancient crust for direct study. Various mechanisms have been proposed.

Perhaps plumes of hot material rising from the mantle melted the oceanic crustal rock above [Smithies et al., 2005]. If dense portions of this melted rock “dripped” back into the mantle, they could have stirred convection cells in the upper mantle. These drips might have also added water to the mantle, lowering its melting point and producing new melts that ascended into the crust [Johnson et al., 2014].

Or maybe meteorite impacts punched through the existing crust into the mantle, generating new melts that, again, ascended toward the surface and added new crust [Hansen, 2015]. Another possibility is that enough heat built up at the base of the thick oceanic crust on early Earth that parts of the crust remelted, with the less dense, buoyant melt portions then rising and forming pockets of continental crust [Smithies et al., 2003].

By whichever processes Earth’s first continental crust formed, how did the large volume of continental crust we have now build up? Our research helps resolve this question [Kirkland et al., 2021].

Answers Hidden in Greenland Zircons

We followed the histories of zircon crystals through the eons by probing the isotopes preserved in grains from the archived stream sediment samples from an area of west Greenland. These isotopes were once dissolved within molten mantle before being injected into the crust by rising magmas that crystallized zircons and lifted them up to the surface. Eventually, wind and rain erosion released the tiny crystals from their rock hosts, and rivulets of water tumbled them down to quiet corners in sandy stream bends. There they rested until geologists gathered the sand, billions of years after the zircons formed inside Earth.

In the laboratory, we extracted thousands of zircon grains from the sand samples. These grains—mounted inside epoxy resin and polished—were then imaged with a scanning electron microscope, revealing pictures of how each zircon grew, layer upon layer, so long ago.

Researchers used the laser ablation mass spectrometer at Curtin University to study isotopic ratios in zircon crystals. Credit: Chris Kirkland

In a mass spectrometer, the zircons were blasted with a laser beam, and a powerful magnetic field separated the resulting vapor into isotopes of different masses. We determined when each crystal formed using the measured amounts of radioactive parent uranium and daughter lead isotopes. We also compared the hafnium isotopic signature in each zircon with the signatures we would expect in the crust and in the mantle on the basis of the geochemical and isotopic fractionation of Earth through time. Using these methods, we determined the origins of the magma from which the crystals grew and thus built a history of the planet from grains of sand.
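
For readers unfamiliar with how a crystallization age is extracted from uranium and lead measurements, here is a minimal sketch of the standard 206Pb/238U age equation. It is a generic illustration with a hypothetical ratio, not the authors' analysis pipeline.

```python
import math

# Decay constant of 238U in 1/year (Jaffey et al., 1971).
LAMBDA_238U = 1.55125e-10

def u_pb_age_years(pb206_u238_ratio):
    """Return the 206Pb/238U age in years from a measured atomic ratio,
    assuming all 206Pb is radiogenic (no initial lead):
    D/P = exp(lambda * t) - 1  =>  t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U

# Hypothetical example: a ratio of 0.59 corresponds to roughly 3 billion years.
print(f"{u_pb_age_years(0.59) / 1e9:.2f} Ga")
```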

Our analysis revealed that the zircon crystals varied widely in age, from 1.8 billion to 3.9 billion years old—a much broader range than what’s typically observed in Earth’s ancient crust. Because of both this broad age range and the high geographic density of the samples in our data set, patterns emerged in the data.

In particular, some zircons of all ages had hafnium isotope signatures that showed that these grains originated from rocks that formed as a result of the melting of a common 4-billion-year-old parent continental crust. This common source implied that early continental crust did not form anew and discretely on repeated occasions. Instead, the oldest continental crust might have survived to serve as scaffolding for successive additions of younger continental crust.

In addition to revealing this subtle, but ubiquitous, signature of Earth’s ancient crust in the Greenland samples, our data also showed something very significant about the evolution of Earth’s continental crust around 3 billion years ago. The hafnium signature of most of the zircons from that time that we analyzed showed a distinct isotopic signal linked to the input of mantle material into the magma from which these crystals grew. This strong mantle signal in the hafnium signature showed us that massive amounts of new continental crust formed in multiple episodes around this time by a process in which mantle magmas were injected into and melted older continental crust.

Geologists work atop a rock outcrop in the Maniitsoq region of western Greenland. Credit: Julie Hollis

The idea that ancient crust formed the scaffolding for later growth of continents was intriguing, but was it true? And was this massive crust-forming event related to some geological process restricted to what is now Greenland, or did this event have wider significance in Earth’s evolution?

A Global Crust Formation Event

To test our hypotheses, we looked at data sets of isotopes in zircons from other parts of the world where ancient continental crust is preserved. As with our Greenland data, these large data sets all showed evidence of repeated injection of mantle melts into much more ancient crust. Ancient crust seemed to be a prerequisite for growing new crust.

Moreover, the data again showed that these large volumes of mantle melts were injected into older crust everywhere at about the same time, between 3.2 billion and 3.0 billion years ago, timing that coincides with the estimated peak in Earth’s mantle temperatures. This “hot flash” in the deep Earth may have enabled huge volumes of melt to rise from the mantle and be injected into existing older crust, driving a planetary continent growth spurt.

The picture that emerges from our work is one in which buoyant pieces of the oldest continental crust melted during the accrual and trapping of new mantle melts in a massive crust-forming event about 3 billion years ago. This global event effectively, and rapidly, built the continents. With the onset of the widespread subduction that we see today, these continents have since been destroyed, remade, and shifted around the surface like so many jigsaw pieces in perpetuity through the eons.

Index Suggests That Half of Nitrogen Applied to Crops Is Lost

Mon, 08/23/2021 - 13:33

Nitrogen use efficiency, an indicator that describes how much fertilizer reaches a harvested crop, has decreased by 22% since 1961, according to new findings by an international group of researchers who compared and averaged global data sets.

Excess nitrogen from fertilizer and manure pollutes water and air, eats away ozone in the atmosphere, and harms plants and animals. Excess nitrogen can also react to become nitrous oxide, a greenhouse gas that is 300 times more potent than carbon dioxide.

Significant disagreements remain about the exact value of nitrogen use efficiency, but current estimates are used by governments and in international negotiations to regulate agricultural pollution.

“If we don’t deal with our nitrogen challenge, then dealing with pretty much any other environmental or human health challenge becomes significantly harder,” David Kanter, an environmental scientist at New York University and vice-chair of the International Nitrogen Initiative, told New Scientist in May. Sri Lanka and the United Nations Environment Programme called for countries to halve nitrogen waste by 2030 in the Colombo Declaration.

Whereas the global average shows a decline, nitrogen fertilizing has become more efficient in developed economies thanks to new technologies and regulations. Results out last month from the University of Minnesota and field trials by the International Fertilizer Development Center are just two examples of ongoing research to limit nitrogen pollution without jeopardizing yield.

Too Much of a Good Thing

Nitrogen is an essential nutrient for plant growth: It is a vital component of amino acids for proteins, chlorophyll for photosynthesis, DNA, and adenosine triphosphate, a compound that releases energy.

Chemist Fritz Haber invented an industrial process for making nitrogen fertilizer in the early 20th century, work recognized with the 1918 Nobel Prize in Chemistry, and the practice spread. Since the 1960s, nitrogen inputs on crops have quadrupled. By 2008, food grown with nitrogen fertilizer fed about half the world’s population.

Yet nitrogen applied to crops often ends up elsewhere. Fertilizer placed away from a plant’s roots means that some nitrogen gets washed away or converts into a gas before the plant can use it. Fertilizer applied at an inopportune moment in a plant’s growth cycle goes to waste. At a certain point, adding more fertilizer won’t boost yield: There’s a limit to how much a plant can produce based on nitrogen alone.

“One of the things that is evident in nitrogen management, generally, is that there seems to be a tendency to avoid the risk of too low an application rate,” said Tony Vyn, an agronomist at Purdue University.

In many parts of the world, cheap subsidized fertilizer is critical for producing enough food. But left unchecked, subsidies incentivize farmers to apply more than they need. And according to plant scientist Rajiv Khosla at Colorado State University, who studies precision agriculture, in probably 90% of cases farmers do not manage to apply just the right amount of fertilizer.

The 90% Efficiency Goal

According to an average of 13 global databases from 10 data sources, in 2010, 161 teragrams of nitrogen were applied to agricultural crops, but only 73 teragrams of nitrogen made it to the harvested crop. A total of 86 teragrams of nitrogen was wasted, perhaps ending up in the water, air, or soil. The new research was published in the journal Nature Food in July.

Globally, nitrogen use efficiency is 46%, but the ratio should be much closer to 100%, said environmental scientist Xin Zhang at the University of Maryland, who led the latest study. The crops with the lowest nitrogen efficiency are fruits and vegetables, at around 14%, said Zhang. In contrast, soybeans, which are natural nitrogen fixers, have a high efficiency of 80%.
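
As a back-of-the-envelope check, nitrogen use efficiency is the nitrogen that reaches the harvested crop divided by the nitrogen applied. The sketch below assumes that simple definition; the published 46% figure comes from averaging many databases, so dividing the rounded totals gives a slightly different number.

```python
# Rounded global totals for 2010 quoted above, in teragrams of nitrogen.
n_applied = 161    # nitrogen applied to agricultural crops
n_harvested = 73   # nitrogen that made it into the harvested crop

nue = n_harvested / n_applied
print(f"Nitrogen use efficiency = {nue:.0%}")  # ~45%, close to the reported 46%
```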

The European Union Nitrogen Expert Panel recommended a nitrogen use efficiency of around 90% as an upper limit. The EU has reduced nitrogen waste over the past several decades, though progress has stagnated.

The United States has similarly cut losses by improving management and technology. For instance, even though the amount of nitrogen fertilizer per acre applied to cornfields was stable from 1980 to 2010 in the United States, the average crop grain yields increased by 60% in that period, said Vyn. Those gains can be hidden in broad-stroke indices like global nitrogen use efficiency.

“The most urgent places will be in China and India because they are two of the top five fertilizer users around the world,” Zhang said. In 2015, China set a target of zero growth in fertilizer use, a policy that has shown promising early results.

Cultivating Solutions

New research from the University of Minnesota using machine learning–based metamodels suggested that fertilizer amount can be decreased without hurting the bottom line.

Just a 10% decrease in nitrogen fertilizer led to only a 0.6% yield reduction and cut nitrous oxide emissions and nitrogen leaching. “Our analysis revealed hot spots where excessive nitrogen fertilizer can be cut without yield penalty,” said bioproducts and biosystems engineer Zhenong Jin at the University of Minnesota.

Applying fertilizer right where plants need it could help too: A technique developed by the International Fertilizer Development Center, called urea deep placement, achieved efficiencies as high as 80% in field studies around the world. The method buries cheap nitrogen fertilizer in the soil near the plant, feeding nitrogen directly to the roots and reducing losses.

Many other initiatives and technologies, from giving farmers better data to nitrogen-fixing bacteria, also show promise. Even practices as simple as installing roadside ditches can help.

Meanwhile, Vyn said researchers must focus on sharpening scientific tools to measure nitrogen capture. The differences in nitrogen inputs in the databases analyzed by the latest study were as high as 33% between the median values and the outliers.

“The nitrogen surplus story is sometimes too easily captured in a simple argument of nitrogen in and nitrogen off the field,” Vyn said. “It’s more complex.” His research aims to improve nitrogen recovery efficiency by understanding plant genotypes and management factors.

One of Zhang’s next research steps is to refine the quantification of nitrogen levels in a crop, which is currently based on simplistic measurements. “There has been some scattered evidence that as we’re increasing the yield, the nitrogen content is actually declining. And that also has a lot of implications in terms of our calculated efficiency,” Zhang said.

—Jenessa Duncombe (@jrdscience), Staff Writer

Lightning Tames Typhoon Intensity Forecasting

Fri, 08/20/2021 - 13:58

Each year, dozens of typhoons initiate in the tropics of the Pacific Ocean and churn westward toward Asia. These whirling tempests pose a problem for communities along the entire northwestern Pacific, with the Philippines bearing the brunt of the battering winds, waves, and rain associated with typhoons. In 2013, for instance, deadly supertyphoon Haiyan directly struck the Philippines, killing more than 6,000 people.

“On average,” said Mitsuteru Sato, a professor at Hokkaido University, “the Philippines will be hit [by] about 20 typhoons a year.”

Throughout the Pacific, the intensity of typhoons and torrential rainfall has been increasing, said Yukihiro Takahashi of Hokkaido University. “We need to predict that.”

Though scientists have improved typhoon tracking over the past 20 years, errors for intensity-related metrics, like pressure and wind speed, have counterintuitively increased, Sato said. In other words, scientists today forecast intensity with less certainty than in decades past, in spite of sophisticated models and expensive meteorological satellites.

Studies suggest that by measuring the lightning occurrence number (the number of strokes from cloud to ground or cloud to ocean), scientists may be able to forecast just how intense a typhoon might be about 30 hours before a storm reaches its peak intensity, said Sato. Today Sato and colleagues are using an inexpensive, innovative network of automated weather stations that more accurately measure a typhoon’s lightning occurrence number to convert that information into a prediction of how intense an incoming storm might be.

Philippine-Focused Forecasting

As a typhoon’s thunderclouds rise high into the atmosphere, its water-rich updrafts force ice and water particles to collide and become either positively or negatively charged, explained Kristen Corbosiero, an atmospheric scientist at the University at Albany. Lightning is nature’s attempt to neutralize the charge differences.

Lightning is controlled by a typhoon’s track, seawater temperature, and other variables, said Sato. Determining how these factors affect the complex interplay between lightning and storm intensity is the goal of the new Pacific-centric lightning monitoring system designed to detect and geolocate much weaker lightning than detected by existing global lightning monitoring networks.

This system includes six stations residing on Philippine islands and five stations distributed throughout the northwestern Pacific region. Each station, which monitors an area about 2,000 kilometers across, comes equipped with several sensors that measure rain, very low frequency signals produced by lightning, and other weather-related phenomena. The off-grid stations use solar power, storing energy in batteries for overcast days. An internal computer sends data over 3G cellular networks. The cost for each station totals about $10,000, substantially less expensive than meteorological satellites.

Because this system should more accurately measure the number of lightning flashes, Corbosiero said, “it does certainly have potential to improve forecasts.”

“If We Can Make the Precise Predictions, We Can Save Lives”

Sato, Takahashi, and their colleagues in the Philippines hope to refine and begin applying the lightning detection and forecasting system within the next 1–2 years, as data arrive from the nascent network of stations.

The network’s focus on the Philippines is key to its value: The Philippines’ furious rainy season means more data. More data contribute to more precise forecasts about a storm’s strength at landfall. More accurate forecasts will give emergency managers the information they need to inform the public about the risks of rain, storm surges, and wind. Combined, rainfall and storm surges can cause more damage than winds alone, said Corbosiero.

Perhaps more important, improving the accuracy of forecasts will help people believe that a storm is coming, said Takahashi. “In many, many cases, the people don’t believe” forecasts, he said. “They don’t want to evacuate.”

An additional consideration, said Takahashi, is integrating alert systems with lightning monitoring and forecasting. In developed and developing countries alike, everyone has a smartphone. With smartphones, “we can distribute this precise information directly to the people,” he said, “and precise information is the necessary condition to make [the people] believe.”

“If we can make the precise predictions,” Takahashi said, “we can save [lives].”

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Swipe Left on the “Big One”: Better Dates for Cascadia Quakes

Fri, 08/20/2021 - 13:58

The popular media occasionally warns of an impending earthquake—the “big one”—that could devastate the U.S. Pacific Northwest and coastal British Columbia, Canada. Although ranging into hyperbole at times, such shocking coverage provides wake-up calls that the region is indeed vulnerable to major earthquake hazards courtesy of the Cascadia Subduction Zone (CSZ).

The CSZ is a tectonic boundary off the coast that has unleashed massive earthquakes and tsunamis as the Juan de Fuca Plate is thrust beneath the North American Plate. And it will do so again. But when? And how big will the earthquake—or earthquakes—be?

The last behemoth earthquake on the CSZ, estimated at magnitude 9, struck on 26 January 1700. We know this age with such precision—unique in paleoseismology—because of several lines of geologic proxy evidence that coalesce around that date, in addition to Japanese historical records describing an “orphan tsunami” (a tsunami with no corresponding local earthquake) on that particular date [Atwater et al., 2015]. Indigenous North American oral histories also describe the event. Geoscientists have robust evidence for other large earthquakes in Cascadia’s past; however, deciphering and precisely dating the geologic record become more difficult the farther back in time you go.

Precision dating of and magnitude constraints on past earthquakes are critically important for assessing modern CSZ earthquake hazards. Such estimates require knowledge of the area over which the fault has broken in the past; the amount of displacement, or slip, on the fault; the speed at which slip occurred; and the timing of events and their potential to occur in rapid succession (called clustering). The paucity of recent seismicity on the CSZ means our understanding of earthquake recurrence there primarily comes from geologic earthquake proxies, including evidence of coseismic land level changes, tsunami inundations, and strong shaking found in onshore and marine environments (Figure 1). Barring modern earthquakes, increasing the accuracy and precision of paleoseismological records is the only way to better constrain the size and frequency of megathrust ruptures and to improve our understanding of natural variability in CSZ earthquake hazards.

Fig. 1. Age ranges obtained from different geochronologic methods used for estimating Cascadia Subduction Zone megathrust events are shown in this diagram of preservation environments. At top is a dendrochronological analysis comparing a tree killed from a megathrust event with a living specimen. Here 14C refers to radiocarbon (or carbon-14), and “wiggle-match 14C” refers to an age model based on multiple, relatively dated (exact number of years known between samples) annual tree ring samples. Schematic sedimentary core observations and sample locations are shown for marsh and deep-sea marine environments. Gray probability distributions for examples of each 14C method are shown to the right of the schematic cores, with 95% confidence ranges in brackets. Optically stimulated luminescence (OSL)-based estimates are shown as a gray dot with error bars.

To discuss ideas, frontiers, and the latest research at the intersection of subduction zone science and geochronology, a variety of specialists attended a virtual workshop about earthquake recurrence on the CSZ hosted by the U.S. Geological Survey (USGS) in February 2021. The workshop, which we discuss below, was part of a series that USGS is holding as the agency works on the next update of the National Seismic Hazard Model, due out in 2023.

Paleoseismology Proxies

Cascadia has one of the longest and most spatially complete geologic records of subduction zone earthquakes, stretching back more than 10,000 years along much of the 1,300-kilometer-long margin, yet debate persists over the size and recurrence of great earthquakes [Goldfinger et al., 2012; Atwater et al., 2014]. The uncertainty arises in part because we lack firsthand observations of Cascadia earthquakes. Thus, integrating onshore and offshore proxy records and understanding how different geologic environments record past megathrust ruptures remain important lines of inquiry, as well as major hurdles, in CSZ science. These hurdles are exacerbated by geochronologic data sets that differ in their precision and usefulness in revealing past rupture patches.

One of the most important things to determine is whether proxy evidence records the CSZ rupturing in individual great events (approximately magnitude 9) or in several smaller, clustered earthquakes (approximately magnitude 8) that occur in succession. A magnitude 9 earthquake releases about 30 times the energy of a magnitude 8 event, so misinterpreting the available data can result in a substantial misunderstanding of the seismic hazard.
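
The factor of roughly 30 follows from the standard Gutenberg-Richter energy-magnitude relation, log10 E = 1.5M + 4.8 (E in joules): each unit of magnitude corresponds to a factor of 10^1.5, or about 32, in radiated energy. A minimal check:

```python
def energy_ratio(m_large, m_small):
    """Ratio of radiated seismic energy between two magnitudes, using
    log10(E) = 1.5*M + 4.8; the constant cancels in the ratio."""
    return 10 ** (1.5 * (m_large - m_small))

print(round(energy_ratio(9.0, 8.0), 1))  # 31.6, commonly rounded to "about 30"
```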

Geologic proxies of megathrust earthquakes are generated by different aspects of the rupture process and can therefore inform us about specific rupture characteristics and hazards. Some of the best proxy records for CSZ earthquakes lie onshore in coastal environments. Coastal wetlands, for example, record sudden and lasting land-level changes in their stratigraphy and paleoecology when earthquakes cause the wetlands’ surfaces to drop into the tidal range (Figure 1) [Atwater et al., 2015]. The amount of elevation change that occurs during a quake, called “coseismic deformation,” can vary along the coast during a single event because of changes in the magnitude, extent, and style of slip along the fault [e.g., Wirth and Frankel, 2019]. Thus, such records can reveal consistency or heterogeneity in slip during past earthquakes.

Tsunami deposits onshore are also important proxies for understanding coseismic slip distribution. Tsunamis are generated by sudden seafloor deformation and are typically indicative of shallow slip, near the subduction zone trench (Figure 1) [Melgar et al., 2016]. The inland extent of tsunami deposits, and their distribution north and south along the subduction zone, can be used to identify places where an earthquake caused a lot of seafloor deformation and can tell generally how much displacement was required to create the tsunami wave.

Offshore, seafloor sediment cores show coarse layers of debris flows called turbidites that can also serve as great proxies for earthquake timing and ground motion characteristics. Coseismic turbidites result when earthquake shaking causes unstable, steep, submarine canyon walls to fail, creating coarse, turbulent sediment flows. These flows eventually settle on the ocean floor and are dated using radiocarbon measurements of detrital organic-rich material.

Geochronologic Investigations

Fig. 2. These graphs show the age range over which different geochronometers are useful (top), the average record length in Cascadia for different environments (middle), and the average uncertainty for different methods (bottom). Marine sediment cores have the capacity for the longest records, but age controls from detrital material in turbidites have the largest age uncertainties. Radiocarbon (14C) ages from bracketing in-growth position plants and trees (wiggle matching) have much smaller uncertainties (tens of years) but are not preserved in coastal environments for as long. To optimize the potential range of dendrochronological geochronometers, the reference chronology of coastal tree species must be extended further back in time. The range limit (black arrow) of these geochronometers could thus be extended with improved reference chronologies.

To be useful, proxies must be datable. Scientists primarily use radiocarbon dating to put past earthquakes into temporal context. Correlations in onshore and offshore data sets have been used to infer the occurrence of up to 20 approximately magnitude 9 earthquakes on the CSZ over the past 11,000 years [Goldfinger et al., 2012], although uncertainty in the ages of these events ranges from tens to hundreds of years (Figure 2). These large age uncertainties allow for varying interpretations of the geologic record: Multiple magnitude 8 or magnitude 7 earthquakes that occur over a short period of time (years to decades) could be misidentified as a single huge earthquake. It’s even possible that the most thoroughly examined CSZ earthquake, in 1700, might have comprised a series of smaller earthquakes, not one magnitude 9 event, because the geologic evidence providing precise ages of this event comes from a relatively short stretch of the Cascadia margin [Melgar, 2021].

By far, the best geochronologic age constraints for CSZ earthquakes come from tree ring, or dendrochronological, analyses of well-preserved wood samples [e.g., Yamaguchi et al., 1997], which can provide annual and even seasonal precision (Figure 2). Part of how scientists arrived at the 26 January date for the 1700 quake was by using dendrochronological dating of coastal forests in southwestern Washington that were killed rapidly by coseismic saltwater submergence. Some of the dead western red cedar trees in these “ghost forests” are preserved with their bark intact; thus, they record the last year of their growth. By cross dating the dead trees’ annual growth rings with those in a multicentennial reference chronology derived from nearby living trees, it is evident that the trees died after the 1699 growing season.
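
Cross dating can be pictured as sliding the undated tree's ring-width series along a dated reference chronology and keeping the alignment with the strongest correlation; the outermost dated ring then gives the last year of growth. The sketch below is a simplified, hypothetical illustration of that idea, not the workflow used in the cited studies, and real dendrochronology adds detrending, replication, and statistical checks.

```python
import numpy as np

def cross_date(sample_rings, reference_rings, reference_end_year):
    """Return the calendar year of the sample's outermost ring and the
    correlation at the best-fitting alignment against a dated reference
    ring-width chronology. Purely illustrative."""
    n = len(sample_rings)
    ref_start_year = reference_end_year - len(reference_rings) + 1
    best_corr, best_offset = -np.inf, 0
    for offset in range(len(reference_rings) - n + 1):
        window = reference_rings[offset:offset + n]
        corr = np.corrcoef(sample_rings, window)[0, 1]
        if corr > best_corr:
            best_corr, best_offset = corr, offset
    outermost_ring_year = ref_start_year + best_offset + n - 1
    return outermost_ring_year, best_corr
```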

The ghost forest, however, confirms only that coseismic submergence in 1700 occurred along the 90 kilometers of the roughly 1,300-kilometer-long Cascadia margin where these western red cedars are found. The trees alone do not confirm that the entire CSZ fault ruptured in a single big one.

Meanwhile, older CSZ events have not been dated with such high accuracy, in part because coseismically killed trees are not ubiquitously distributed and well preserved along the coastline and because there are no millennial-length, species-specific reference chronologies with which to cross date older preserved trees (Figure 2).

Advances in Dating

At the Cascadia Recurrence Workshop earlier this year, researchers presented recent advances and discussed future directions in paleoseismic dating methods. For example, by taking annual radiocarbon measurements from trees killed during coseismic coastal deformation, we can detect dated global atmospheric radiocarbon excursions in these trees, such as the substantial jump in atmospheric radiocarbon between the years 774 and 775 [Miyake et al., 2012]. This method allows us to correlate precise dates from other ghost forests along the Cascadian coast from the time of the 1700 event and to date past megathrust earthquakes older than the 1700 quake without needing millennial-scale reference chronologies [e.g., Pearl et al., 2020]. Such reference chronologies, which were previously required for annual age precision, are time- and labor-intensive to develop. With this method, new data collections from coastal forests that perished in, or survived through, CSZ earthquakes can now give near-annual dates for both inundations and ecosystem transitions.
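
The logic of anchoring an undated ring sequence to the 774-775 radiocarbon excursion can be sketched as follows. This is a simplified, hypothetical illustration; published studies match the full shape of the excursion and propagate measurement uncertainties rather than relying on a single year-to-year jump.

```python
def anchor_rings_to_excursion(delta14c_per_ring, excursion_year=775):
    """Assign calendar years to consecutive annual rings by assuming the
    largest single-year rise in radiocarbon marks the 774-775 CE event.
    delta14c_per_ring: annual delta-14C values, oldest ring first.
    Returns one calendar year per ring. Illustrative only."""
    rises = [b - a for a, b in zip(delta14c_per_ring, delta14c_per_ring[1:])]
    spike_ring = max(range(len(rises)), key=rises.__getitem__) + 1
    first_ring_year = excursion_year - spike_ring
    return [first_ring_year + i for i in range(len(delta14c_per_ring))]
```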

Numerous tree rings are evident in this cross section from a subfossil western red cedar from coastal Washington. Patterns in ring widths give clues about when the tree died. Credit: Jessie K. Pearl

Although there are many opportunities to pursue with dendrochronology, such as dating trees at previously unstudied sites and trees killed by older events, we must supplement this approach with other novel geochronological methods to fill critical data gaps where trees are not preserved. Careful sampling and interpretation of age results from radiocarbon-dated material other than trees can also provide tight age constraints for tsunami and coastal submergence events.

For example, researchers collected soil horizons below (predating) and overlying (postdating) a tsunami deposit in Discovery Bay, Wash., and then radiocarbon dated leaf bases of Triglochin maritima, a type of arrowgrass that grows in brackish and freshwater marsh environments. The tsunami deposits, bracketed by well-dated pretsunami and posttsunami soil horizons, revealed a tsunamigenic CSZ rupture that occurred about 600 years ago on the northern CSZ, perhaps offshore Washington State and Vancouver Island [Garrison-Laney and Miller, 2017].

Multiple bracketing ages can dramatically reduce uncertainty that plagues most other dated horizons, especially those whose ages are based on single dates from detrital organic material (Figure 2). Although the age uncertainty of the 600-year-old earthquake from horizons at Discovery Bay is still on the order of several decades, the improved precision is enough to conclusively distinguish the event from other earthquakes dated along the margin.

Further advancements in radiocarbon dating continue to provide important updates for dating coseismic evidence from offshore records. Marine turbidites do not often contain materials that provide accurate age estimates, but they are a critically important paleoseismic proxy [Howarth et al., 2021]. Turbidite radiocarbon ages rely on correcting for both global and local marine reservoir ages, which are caused by the radiocarbon “memory” of seawater. Global marine reservoir age corrections are systematically updated by experts as we learn more about past climates and their influences on the global marine radiocarbon reservoir [Heaton et al., 2020]. However, samples used to calibrate the local marine reservoir corrections in the Pacific Northwest, which apply only to nearby sites, are unfortunately not well distributed along the CSZ coastline, and little is known about temporal variations in the local correction, leading to larger uncertainty in event ages.

Jessie Pearl samples a subfossil tree in a tidal mudflat in coastal Washington State in summer 2020. This and other nearby trees are hypothesized to have died in a massive coseismic subsidence event about 1,000 years ago. Researchers are using the precise ages of the trees to determine if past land level changes can be attributed to earthquakes on the Cascadia Subduction Zone or on shallower, more localized faults. Credit: Wes Johns

These local corrections could be improved by collecting more sampled material that fills spatial gaps and goes back further in time. At the workshop, researchers presented the exciting development that they were in the process of collecting annual radiocarbon measurements from Pacific geoduck clam shells off the Cascadian coastline to improve local marine reservoir knowledge. Geoducks can live more than 100 years and have annual growth rings that are sensitive to local climate and can therefore be cross dated to the exact year. Thus, a chronology of local climatic variation and marine radiocarbon abundance can be constructed using living and deceased specimens. Annual measurements of radiocarbon derived from marine bivalves, like the geoduck, offer new avenues to generate local marine reservoir corrections and improve age estimates for coseismic turbidity flows.

Putting It All Together        

An imminent magnitude 9 megathrust earthquake on the CSZ poses one of the greatest natural hazards in North America and motivates diverse research across the Earth sciences. Continued development of multiple geochronologic approaches will help us to better constrain the timing of past CSZ earthquakes. And integrating earthquake age estimates with the understanding of rupture characteristics inferred from geologic evidence will help us to identify natural variability in past earthquakes and a range of possible future earthquake hazard scenarios.

Useful geochronologic approaches include using optically stimulated luminescence to date tsunami sand deposits (Figure 1) and determining landslide age estimates on the basis of remotely sensed land roughness [e.g., LaHusen et al., 2020]. Of particular value will be focuses on improving high-precision radiocarbon and dendrochronological dating of CSZ earthquakes, paired with precise estimates of subsidence magnitude, tsunami inundation from hydrologic modeling, inferred ground motion characteristics from sedimentological variations in turbidity deposits, and evidence of ground failure in subaerial, lake, and marine settings. Together, such lines of evidence will lead to better correlation of geologic records with specific earthquake rupture characteristics.

Ultimately, characterizing the recurrence of major earthquakes on the CSZ megathrust—which have the potential to drastically affect millions of lives across the region—hinges on the advancement and the integration of diverse geochronologic and geologic records.

Acknowledgments

We give many thanks to all participants in the USGS Cascadia Recurrence Workshop, specifically J. Gomberg, S. LaHusen, J. Padgett, B. Black, N. Nieminski, and J. Hill for their contributions.

Ten Years on from the Quake That Shook the Nation’s Capital

Fri, 08/20/2021 - 13:58

Ten years ago, early in the afternoon of 23 August 2011, millions of people throughout eastern North America—from Florida to southern Canada and as far west as the Mississippi River Valley—felt shaking from a magnitude 5.8 earthquake near the central Virginia town of Mineral [Horton et al., 2015a]. This is one of the largest recorded earthquakes in eastern North America, and it was the most damaging earthquake to strike the eastern United States since a magnitude 7 earthquake near Charleston, S.C., in 1886. Considering the population density along the East Coast, more people may have felt the 2011 earthquake than any other earthquake in North America, with perhaps one third of the U.S. population having felt the shaking.

The earthquake caused an estimated $200 million to $300 million in damages, which included the loss of two school buildings near the epicenter in Louisa County, substantial damage 130 kilometers away at the National Cathedral and the Washington Monument, and minor damage to other buildings in Washington, D.C., such as the Smithsonian Castle. Significant damage to many lesser-known buildings in the region was also documented. Shaking led to falling parapets and chimneys, although, fortunately, there were no serious injuries or fatalities. Rockfalls were recorded as far as 245 kilometers away, and water level changes in wells were recorded up to 550 kilometers away.

Fig. 1. Red dots denote epicenters of earthquakes of magnitude 3.5 or greater recorded since 1971 and indicate that earthquakes occur across large areas of eastern North America. The epicenter of the 2011 Mineral, Va., event is shown by the yellow star. Credit: USGS

The intraplate Mineral earthquake (meaning it occurred within a tectonic plate far from plate boundaries) is the largest eastern U.S. earthquake recorded on modern seismometers, which allowed for accurate characterization of the rupture process and measurements of ground shaking. Technologies developed in the past few decades, together with an evolving understanding of earthquake sources, created opportunities for comprehensive geological and geophysical studies of the earthquake and its seismic and tectonic context. These research efforts are providing valuable new understanding of such relatively infrequent, but damaging, eastern North American earthquakes (Figure 1), as well as intraplate earthquakes generally, but they have also highlighted perplexing questions about these events’ causes and behaviors.

Revealing New Faults

The Central Virginia Seismic Zone has a long history of occasional earthquakes [Chapman, 2013; Tuttle et al., 2021]. The largest recorded event before 2011 was an earthquake that damaged buildings in and near Petersburg, Va., in 1774, but larger events are evident in the geologic record from studies of ground liquefaction. These are natural earthquakes, with no evidence that they are caused by human activity such as injection or withdrawal of fluids in wells.

Postearthquake studies of the area around the Mineral earthquake’s epicenter involved geologic mapping, high-resolution topographic mapping with lidar, airborne surveys of Earth’s magnetic field, seismic methods to examine subsurface structure, and detailed examination of faults using trenching. The earthquake began about 8 kilometers underground on a northeast–southwest trending fault that dips to the southeast. The rupture progressed upward and to the northeast across three major areas of slip [Chapman, 2013], thrusting rocks southeast of the fault upward relative to rocks to the northwest, although the fault rupture did not break the surface. This previously unknown fault has been named the Quail fault [Horton et al., 2015b].

U.S. Geological Survey (USGS) scientists explore a trench dug near Mineral, Va., for signs of deformation related to the 2011 earthquake. Credit: Stephen L. Snyder, USGS

The relationship between the Quail fault and ancient faults in the region is debated. The simple hypothesis is that the many faults along which modern earthquakes concentrate represent long-term zones of weakness. However, over their long geologic history, some of these old faults have gone through metamorphic events, in which exposure to high pressures and temperatures may have hardened or healed the faulted rocks. Magnetic data collected after the 2011 earthquake are consistent with rocks having different magnetic properties on each side of the Quail fault, suggesting the earthquake ruptured an older fault juxtaposing different rock types [Shah et al., 2015]. Yet a corresponding fault is not apparent in seismic reflection data from several kilometers south of the earthquake, indicating either that the fault terminates not far south of the 2011 epicenter or that there is a different explanation for the magnetic anomalies [Pratt et al., 2015].

As with faults associated with many earthquakes in eastern North America, the Quail fault does not appear to extend upward to any previously mapped faults at the surface, making past ruptures on such faults difficult to study. Clearly, there is still much to learn about the complicated relationships between modern seismicity and the locations and orientations of older intraplate faults, raising questions about the common assumption that future earthquakes will reactivate older faults and creating uncertainties in regional hazard assessments.

Shaking Eastern North America

The crust of the eastern North American plate is older, thicker, colder, and stronger than younger crust near the plate’s active margins, allowing efficient energy transmission that results in higher levels of shaking reaching much greater distances than is typical for earthquakes in the western part of the continent. Such intraplate settings also cause earthquakes to be relatively energetic (with high “stress drop”) for their size [McNamara et al., 2014a; Wu and Chapman, 2017], which results in relatively high frequency shaking that can cause strong ground accelerations and damage to built structures. The strong accelerations from the Mineral earthquake, for example, caused a temporary shutdown of the reactors at the North Anna Nuclear Generating Station in Louisa County, although damage was minimal.

Within minutes of the Mineral earthquake, it was evident that the event had the largest felt area of any eastern U.S. earthquake in more than 100 years. The U.S. Geological Survey (USGS) Did You Feel It? (DYFI?) website, where people can report and describe earthquake shaking, received entries from throughout the eastern United States and southeast Canada (Figure 2). Seismometers indeed showed ground shaking extending to far greater distances than for similarly sized earthquakes in the western United States, as had been observed in smaller earthquakes. These seismic readings provide an important data set for accurately determining the attenuation of seismic energy with distance across eastern North America, which is valuable information for estimating potential extents of damage in future earthquakes.

Fig. 2. Comparison of felt reports in the USGS Did You Feel It? (DYFI?) system from the 2011 Mineral earthquake and the similarly sized 2014 Napa earthquake in California. Shaking during the Mineral earthquake was reported over a much larger area. Considering the modern population density in the eastern United States, the Mineral earthquake was probably felt by more people than any other earthquake in U.S. history. Credit: USGS

Ground shaking during the Mineral earthquake was decidedly stronger to the northeast of the epicenter [McNamara et al., 2014b]. This variation with azimuth was found to be mostly due to the more efficient transmission of energy parallel to the Appalachians and the edge of the continent, indicating the strong influence that crustal-scale geology has on ground shaking. A similar pattern was recently seen in the magnitude 5.1 Sparta, N.C., earthquake in 2020.

Also notable during the Mineral earthquake was the enhanced strength of shaking in Washington, D.C., which was quickly recognized in the DYFI? reports [Hough, 2012]. This localized area of stronger shaking was primarily caused by amplification of seismic energy in the Atlantic Coastal Plain sediments that underlie much of the city. Seismometer recordings and modeling of ground shaking using soil profiles have shown that this effect can be severe in the eastern part of the continent where the transition from extremely hard bedrock to soft overlying sediments amplifies shaking and efficiently traps seismic energy through internal reflections in the sediments [Pratt et al., 2017]. Similar amplification effects by sediments have concentrated damage in other earthquakes, for example, in the Marina District of San Francisco during the 1989 Loma Prieta earthquake and in Mexico City during the 1985 Michoacán earthquake.

The large amplification during the Mineral earthquake by the Atlantic Coastal Plain sediments, which cover coastal areas from New York City to Texas, has given impetus to recent studies characterizing how this effect is influenced by sediment layers of different thicknesses and different frequencies of shaking. Personnel from the USGS National Seismic Hazard Model project are evaluating results from these studies for production of more accurate national-level hazard maps on which many building codes are based.

Renewed Interest in Intraplate Earthquakes

Earthquakes within plate interiors generally receive less attention than the more frequent events at active plate boundaries, including those of western North America. With its extensive affected area, the Mineral earthquake led to increased interest in the causes of intraplate earthquakes. Earthquakes at plate boundaries occur largely because of differential motion between adjacent plates. Earthquakes within relatively stable tectonic plates are more difficult to understand. Hypotheses to explain their occurrence include plate flexing due to long-term glacial unloading (melting) or erosion of large areas of rock and sediment, drag caused by mantle convection below a plate, residual tectonic stress from earlier times that has not been released, gravitational forces caused by heavy crustal features such as old volcanic bodies, and stress transmitted from plate edges. The question of what causes these earthquakes remains unresolved, and the answer may differ for different earthquakes.

Studies of the Mineral earthquake have offered new understanding and insights into intraplate earthquakes, such as the behavior and duration of aftershock sequences following eastern North American earthquakes. Portable seismometers deployed following the earthquake were used to identify nearly 4,000 aftershocks in the ensuing months [McNamara et al., 2014a; Horton et al., 2015b; Wu et al., 2015]. Of about 1,700 well-located aftershocks, the majority occurred in a southeast dipping zone forming a donut-like ring around the main shock rupture area. The aftershocks show a variety of fault orientations over a relatively wide area, indicating rupture of small secondary faults. Aftershocks from the Mineral earthquake continue today, providing information to better forecast aftershock probabilities and durations in future eastern North American earthquakes.

Detailed geologic studies of the earthquake’s epicentral region also led to the recognition of fractures, or joints, in the bedrock that trend northwest–southeast. These joints have orientations similar to some of the fault planes determined from aftershocks but significantly different from the main shock fault plane. This observation indicates that some aftershocks are occurring on small faults trending at sometimes large angles to the Quail fault, with some being parallel or nearly parallel to Jurassic dikes mapped in the region. These aftershocks and joints are indicative of the influence of older deformation of multiple ages on modern seismicity.

Although seismometers throughout the region provided important recordings of ground motions during the Mineral earthquake, the earthquake was also a “missed opportunity” for studying infrequent intraplate earthquakes because of the relatively small number of seismometers operating in the eastern United States at the time. Far more information could have been obtained had there been more instruments, especially near-source instruments to record strong shaking. To increase the density of seismometers in the central and eastern United States and prepare for studies of future earthquakes in the region, in 2019 the USGS assumed operations of the Central and Eastern United States N4 network, comprising stations retained from the EarthScope Transportable Array.

Reminder of Risk

Scaffolding was erected around the Washington Monument, seen here from inside the Lincoln Memorial, to help repair damage caused by shaking from the 2011 earthquake. Credit: Thomas L. Pratt

The Mineral earthquake offered a startling reminder for many people that eastern North America is not as seismically quiet as it might seem. Damaging earthquakes off Cape Ann, Mass., in 1755 and near Petersburg, Va., in 1774 demonstrated this fact more than 2 centuries ago, and the 1886 Charleston earthquake showed just how damaging such events can be. Recent events, including the recent magnitude 4.1 earthquake in Delaware in 2017 and the 2020 Sparta, N.C., earthquake, have shown that earthquakes can still strike unexpectedly across much of the eastern United States (Figure 1).

Geologic evidence of large earthquakes in the ancient past in eastern North America is clear. A magnitude 6 to 7 earthquake, likely in the past 400,000 years, ruptured a shallow fault beneath present-day Washington, D.C., for example, with the fault now visible in a road cut near the National Zoo.

In any given area of the eastern United States, however, these damaging earthquakes are infrequent on human timescales and are commonly followed by decades of quiescence, so their impacts tend to fade from memory. Nonetheless, such earthquakes can cause severe and widespread damage exceeding that from more frequent severe storms and floods. Also, unlike many natural hazards, earthquakes provide virtually no advance warning. Even if an earthquake early-warning system like that operating along the West Coast is eventually installed in the central and eastern United States, there would be, at most, seconds to tens of seconds of notice before strong shaking from a large earthquake.

The rarity of earthquakes in central and eastern North America presents challenges for studying their causes and effects and for planning mitigation efforts to reduce damage and loss of life in future earthquakes. Yet the potential consequences of not taking some mitigation measures can be extreme. Many older buildings across this vast region were constructed without regard for earthquakes, with unreinforced masonry buildings being particularly vulnerable. These older construction practices, combined with the high efficiency of energy transmission and potential local amplification of ground shaking by sediments, create potential risk for eastern North American cities to sustain extensive damage during an earthquake.

For example, damage seen in Washington, D.C., from the Mineral earthquake shows that a repeat of the ancient magnitude 6 to 7 earthquake on a fault directly beneath the city would be devastating to the city and could severely affect federal government operations. The last earthquake on the fault beneath D.C. is thought to have occurred tens to hundreds of thousands of years ago—that is, in the geologically recent past—suggesting the fault could still be active. Its next rupture may not occur for thousands of years or more, yet there is also the remote chance that it could happen much sooner.

The Mineral earthquake showed clearly that seismic risks in eastern North America cannot be ignored, as there will inevitably be more earthquakes that cause damage in this part of the country. It was only in the second half of the past century that probabilistic seismic hazard assessments, like the U.S. National Seismic Hazard Model, were developed to quantify the ground shaking that buildings may experience in a given time frame. These shaking forecasts provide guidelines for constructing buildings and other infrastructure to suitable levels for seismic safety.

Retrofitting vulnerable structures and raising awareness of the earthquake risks—and of simple, inexpensive mitigation actions like keeping an emergency preparedness kit on hand and making contingency plans (e.g., for family separations)—are important societal steps to help safeguard the population. Meanwhile, continued scientific research that builds on the work done since the Mineral earthquake and explores past and present earthquakes elsewhere in eastern North America will improve seismic hazard assessments to better estimate and mitigate ground shaking expected during earthquakes to come.

First Report of Seismicity That Initiated in the Lower Mantle

Thu, 08/19/2021 - 13:40

On 30 May 2015, a magnitude 7.9 earthquake took place beneath Japan’s remote Ogasawara (Bonin) Islands, located about 1,000 kilometers south of Tokyo. The seismic activity occurred over 660 kilometers below Earth’s surface, near the transition between the upper and lower mantle. The mechanism of deep-focus earthquakes, like the 2015 quake, has long been mysterious—the extremely high pressure and temperature at these depths should result in rocks deforming, rather than fracturing as in shallower earthquakes.

By using a 4D back-projection method, Kiser et al. traced the path of the 2015 earthquake and identified, for the first time, seismic activity that initiated in the lower mantle. They relied on measurements by the High Sensitivity Seismograph Network, or Hi-net, a network of seismic stations distributed across Japan. The data captured by these instruments are analogous to ripples in a pond produced by a dropped pebble: By calculating how seismic waves spread, the researchers were able to pinpoint the path of the deep-focus quake.
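For readers curious about the mechanics, back-projection essentially asks, for every candidate source location, how coherently the network's waveforms stack after being shifted by the travel times predicted from that location. The Python sketch below illustrates the idea on purely synthetic data under simplifying assumptions (a 2D grid, a constant wave speed, a single source); it is not the authors' 4D Hi-net implementation.

```python
import numpy as np

# Minimal 2D back-projection sketch on synthetic data. The real analysis
# back-projected Hi-net waveforms through a 3D Earth model and tracked the
# rupture through time ("4D"); here we only locate one synthetic source
# with a constant wave speed.

rng = np.random.default_rng(0)
v = 8.0                        # assumed wave speed, km/s
fs = 20.0                      # sampling rate, Hz
t = np.arange(0.0, 120.0, 1.0 / fs)

stations = rng.uniform(-500.0, 500.0, size=(30, 2))   # station x, y in km
true_source = np.array([100.0, -50.0])                # hidden source, km
origin_time = 10.0                                    # s

def travel_time(src, sta):
    """Straight-ray travel time at constant speed (an idealization)."""
    return np.linalg.norm(src - sta) / v

# Synthetic records: a Gaussian pulse arriving at each station's travel time.
records = np.array([
    np.exp(-0.5 * ((t - origin_time - travel_time(true_source, sta)) / 0.5) ** 2)
    for sta in stations
])
records += 0.05 * rng.standard_normal(records.shape)   # add noise

# Back-projection: shift each record by the travel time predicted for a
# candidate grid point and stack; the true source stacks most coherently.
xs = np.linspace(-300.0, 300.0, 61)
ys = np.linspace(-300.0, 300.0, 61)
stack_power = np.zeros((xs.size, ys.size))
for ix, x in enumerate(xs):
    for iy, y in enumerate(ys):
        cand = np.array([x, y])
        stack = sum(
            np.interp(t + travel_time(cand, sta), t, rec, left=0.0, right=0.0)
            for sta, rec in zip(stations, records)
        )
        stack_power[ix, iy] = stack.max()

best = np.unravel_index(np.argmax(stack_power), stack_power.shape)
print("recovered source:", (xs[best[0]], ys[best[1]]), "true:", tuple(true_source))
```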

The team found that the main shock initiated at a depth of 660 kilometers, then propagated to the west-northwest for at least 8 seconds while decreasing in depth. Analyses of the 2 hours following the main shock identified aftershocks between depths of 624 and 751 kilometers. A common model for deep-focus earthquakes is transformational faulting; in other words, instability causes the transition of olivine in a subducting slab into a denser form, spinel. The aftershocks below 700 kilometers, however, occurred outside the zone where this transition occurs. The authors propose that the deep seismicity may have resulted from stress changes caused by settling of a segment of subducting slab in response to the main shock, although the hypothesis requires future investigation. (Geophysical Research Letters, https://doi.org/10.1029/2021GL093111, 2021)

—Jack Lee, Science Writer

A First Look at Weathering at the Angstrom Scale

Thu, 08/19/2021 - 13:40

This is an authorized translation of an Eos article.

Both sedimentary rocks and water are abundant at Earth's surface, and over long periods of time their interactions turn mountains into sediment. Researchers have long known that water weathers sedimentary rocks both physically, by facilitating the abrasion and movement of rocks, and chemically, through dissolution and recrystallization. But these interactions had never before been observed in situ at the angstrom scale.

In a new study, Barsotti et al. used environmental transmission electron microscopy to capture dynamic images of water vapor and droplets interacting with samples of dolomite, limestone, and sandstone. Using a custom fluid injection system, the team exposed the samples to distilled water and monitored the water's effects on pore size over the course of 3 hours. Physical weathering was readily observable in the water vapor experiments, whereas the chemical processes of dissolution and recrystallization were more pronounced in the experiments with liquid-phase water.

The researchers were able to observe an adsorbed water layer that formed on the micropore walls of all three rock types. They found that as water vapor was added, pore size shrank by as much as 62.5%. After 2 hours, when the water was removed, pore sizes increased. Overall, relative to the initial size, the final pore size of the dolomite decreased by 33.9%, whereas pore size increased by 3.4% and 17.3% in the limestone and sandstone, respectively. The team suggests that these changes in pore size were caused by adsorption-induced strain. The liquid-phase experiments revealed that dissolution rates were highest in the limestone, followed by the dolomite and the sandstone.

The study supports previous work suggesting that dissolution and recrystallization can alter the size and shape of pores in sedimentary rocks. It also provides the first direct evidence from an in situ experiment that adsorption-induced strain is a source of weathering. Ultimately, these changes in pore geometry could lead to changes in rock properties, such as permeability, that influence water flow, erosion, and elemental cycling at larger scales. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB021043, 2021)

—Kate Wheeling, Science Writer

This translation by Daniela Navarro-Pérez (@DanJoNavarro) of @GeoLatinas and @Anthnyy was made possible by a partnership with Planeteando.

Cosmological Tool Helps Archaeologists Map Earthly Tombs

Wed, 08/18/2021 - 13:13

Thousands of ancient tombs dot the semiarid landscape of eastern Sudan.

Compared to their more famous cousins in Egypt, these tombs are neither well-known nor well studied because they lie off the beaten path. Just a single road leads to the city of Kassala, and an additional 5 hours of off-roading are required to get to the funerary sites, said Stefano Costanzo, a Ph.D. student in archaeology at the University of Naples “L’Orientale” in Italy.

But the journey is worth it.

“I didn’t know I was interested before seeing them. I didn’t even know they were there. But because I went and saw them, you know, [my] interest just shot up,” Costanzo said. Viewed from the field, “the place was stunning,” he said.

As described in a new study published in the open-access journal PLoS ONE, satellite images of these tombs revealed an even more intriguing observation: Not only were the funerary sites numerous; they were also clustered in groups of up to thousands of structures.

Clusters of Qubbas

The study is the first to apply a statistical method created for cosmology to the more grounded field of archaeology to quantitatively describe the immense number of tombs and how their locations were scattered across the landscape.

Using satellite imagery and information gathered in field surveys he conducted over the prior 3 years, Costanzo was able to map the locations of the funerary structures. It took 6 months to draw the map at a resolution high enough to permit statistical analysis.

“I literally drew single boulders sometimes,” said Costanzo, the lead author of the study.

The funerary structures in eastern Sudan came in two flavors: tumuli, which are simpler raised structures made of earth or stone, and qubbas, which are square shrines or tombs constructed with flat slabs of foliated metamorphic rock standing about 2 meters tall and 5 meters wide. Most of the site's tumuli date to the first millennium CE, whereas the qubbas are associated with medieval Islam and were built in the area from the 16th century up to the 20th.

In all, the data set contained 783 tumuli and 10,274 qubbas in the roughly 6,475-square-kilometer (2,500-square-mile) region.

Viewed from the sky, the qubbas were clustered along foothills or atop ridges. However, topography was not enough to completely explain where the qubbas were located—there seemed to be another force that drew them close to one another.

Neyman-Scott Cluster Process

Study coauthor Filippo Brandolini, an archaeologist and environmental scientist at Newcastle University in the United Kingdom, extensively reviewed different methods for analyzing spatial statistics before coming across the Neyman-Scott cluster process. First used to study the spatial pattern of galaxies, the process has since been used in ecology and biology research but never before in the field of archaeology.

“It’s actually refreshing to see people use some of these methods, even though they might not identify themselves as statisticians, in a pretty sound way,” said Tilman Davies, a statistician at the University of Otago in New Zealand.

In fact, many statistical tools like the Neyman-Scott process were not originally created by statisticians, said Davies, who was not involved in the study. “They were sort of formulations that were derived by practitioners or applied scientists working in a particular field and thinking ‘This particular mathematical construction might be useful for my particular application.’”

In Sudan, the researchers found that many of the tombs were clustered around environmental features but also toward each other. Just as there’s a natural tendency for galaxies or stars to group together, there seems to be “a sort of gravitational attraction, which is actually social-cultural” for the patterns among the qubbas, possibly involving tribes or families, Costanzo said.
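To make the idea concrete, the short Python sketch below simulates a Neyman-Scott cluster process in its common Thomas-process form: "parent" cluster centers follow a homogeneous Poisson process, and each parent spawns a Poisson number of offspring points scattered around it with a Gaussian spread. All parameter values are invented for illustration and are not those fitted in the study.

```python
import numpy as np

# Minimal sketch of a Neyman-Scott (Thomas) cluster process on a square
# study window. Offspring points stand in for clustered monuments; the
# parameters below are illustrative only.

rng = np.random.default_rng(42)
region = 80.0          # side length of the square window, km
parent_rate = 0.01     # parent (cluster center) intensity, per km^2
mean_offspring = 50    # mean number of points per cluster
sigma = 0.5            # Gaussian cluster spread, km

n_parents = rng.poisson(parent_rate * region**2)
parents = rng.uniform(0.0, region, size=(n_parents, 2))

points = [np.empty((0, 2))]
for px, py in parents:
    n_off = rng.poisson(mean_offspring)
    offspring = rng.normal(loc=(px, py), scale=sigma, size=(n_off, 2))
    # keep only offspring that fall inside the study window
    inside = np.all((offspring >= 0.0) & (offspring <= region), axis=1)
    points.append(offspring[inside])

points = np.vstack(points)
print(f"{n_parents} clusters, {len(points)} simulated monuments")
```

Fitting such a model to the qubba locations then amounts to estimating the parent intensity, mean cluster size, and spread that best reproduce the observed clustering, optionally with environmental covariates modulating the intensity.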

Green dots mark the sites of 1,195 clustered qubbas around and on top of a rocky outcrop in eastern Sudan. Credit: Stefano Costanzo

“They are attempting to describe the appearance of these points, both in terms of environmental predictors and in terms of some random mechanism that could explain this aggregation,” Davies said. “So they’re attempting to combine two things, which is often a very, very difficult thing to do.”

Similar statistical tools and models may be a boon for archaeology as a whole.

Many locations are remote and hard to get to on the ground. Using satellite and remote sensing—in addition to methods that permit quantification—could allow for rigorous archaeology from the desk.

“We discovered the applicability of this tool; we didn’t invent it. But we found out that it is useful in archaeological terms,” Costanzo said. “It has the potential to help many other research expeditions in very remote lands.”

—Richard J. Sima (@richardsima), Science Writer

Undertaking Adventure to Make Sense of Subglacial Plumes

Wed, 08/18/2021 - 13:13

“This is impossible” was our first reaction 5 years ago when Shin Sugiyama and I first heard the idea from Naoya Kanna, a postdoctoral scholar at Hokkaido University’s Arctic Research Center in Japan at the time [Kanna et al., 2018, 2020]. What was his “impossible” proposal? Kanna, now a postdoctoral scholar at the University of Tokyo, wanted to deploy oceanographic equipment into the water to study a turbulent glacial plume issuing from the calving front of a Greenland glacier.

If we could make that idea work, our instruments would record the first extended look at the chaotic region where fresh glacial meltwater streams out into the salt water of the ocean. The data would span a few weeks, in contrast to the snapshots from previous expeditions.

Collecting data near a crevasse-riven glacial terminus is challenging for several reasons. Instruments can disappear or become impossible to retrieve when chunks of ice break from the edge of a glacier. This type of event, called calving, can generate tsunami-like splashes as high as about 20 meters. Ice mélanges, disorderly mixes of sea ice and icebergs commonly seen where glaciers meet the sea, can block access to fjords or tug on cables attached to oceanographic instruments. Adding to the difficulties, anything that is deployed on a glacier's surface melts out during the summer.

However, as ice loss and discharges of meltwater and sediment from coastal glaciers around the world continue to increase, it is important to understand exchanges of heat, meltwater, and nutrients between marine-terminating glaciers and their surroundings. These factors affect sea level, ocean circulation and biogeochemistry, marine ecosystems, and the communities that rely on marine ecosystems [Straneo et al., 2019].

Unfortunately, the data needed to fill key gaps in our understanding of ice-ocean interactions have been in short supply [Straneo et al., 2019]. This situation is starting to change, however, with the emergence of pioneering new research efforts, including an expedition at the calving front of Greenland’s Bowdoin Glacier in July 2017 in which colleagues and I participated.

The Challenges and Serendipity of Installing Instruments

Our expedition faced its first challenge early on. Because of a 7-day flight delay getting to northwest Greenland, we had only one night to prepare our expedition supplies in Qaanaaq Village (77.5°N) before our chartered helicopter’s scheduled departure. After rushing to collect the necessary food and gear for our week-and-a-half trip, we made our flight the next morning, which took us about 30 kilometers northwest to Bowdoin Glacier on 6 July. We had a lot of work ahead of us in the limited time available, not the least of which involved carrying our geophysical instruments (see sidebar) on our backs to their deployment sites at various locations on and around the glacier.

The discharge site studied (greenish area at center of left image), where meltwater traveling beneath Bowdoin Glacier reaches the ocean and pushes aside the floating ice mélange, is seen in this 7 July 2017 photo taken by an uncrewed aerial vehicle. On 8 July, the glacier calved an iceberg along the crevasse running from the top center of the photo down and to the right, creating a new glacier front where scientists then deployed instruments into the water. At right, the author stands beside a crevasse that blocked access to the calving front of Bowdoin Glacier on 6 July 2017. Credit: left, Eef van Dongen/Podolskiy et al. [2021a], CC BY 4.0; right, Lukas E. Preiswerk/van Dongen et al., 2021, CC BY 4.0

A major crevasse, some 2 meters wide, initially blocked our direct access to the calving front, but the path was cleared after the glacier serendipitously calved a kilometer-scale iceberg on 8 July. We now had access to the fresh 30-meter-high ice cliff, but the evidence, or expression, of the plume on the water's surface was gone.

Surface expressions of glacial discharge plumes typically appear as semicircular areas of either turbid water of a color different from surrounding seawater or—as was the case upon our arrival to Bowdoin in 2017—water that’s mostly ice free surrounded by sea ice and iceberg-laden water (or sometimes both of these things). These waters provide clear indications of the locations and timing of discharge plumes. Without the surface expression, we could make only an educated guess that the plume should still be there under the water.



Finally, on 13 July, we deployed our oceanographic instrumentation, hanging sensors from the calving front to collect continuous pressure, temperature, and salinity measurements at roughly 5- and 100-meter depths in the salty water of the fjord. Such a feat is trickier than it might sound. As we well knew—and had been reminded of a few days earlier with the large calving event—glacial termini are mobile, treacherous environments. We had to deploy a few hundred meters of cables, instrumentation, and protective pipes at the crevassed calving front, all while trying not to damage the equipment with our crampons and securing each other with ropes. Meanwhile, our Swiss colleagues were remotely operating uncrewed aerial vehicles (UAVs) over the calving front, providing near-real-time safety support and guidance on crevasse development.

Our expedition required more than just down parkas and warm socks. If you take along the following items, you will be all set to study a subglacial discharge plume:

hundreds of meters of cables connecting a data logger to oceanographic sensors
four time-lapse cameras
a seismometer for on-ice deployment
pressure and temperature sensors
a conductivity-temperature-depth profiler
a helicopter
a boat
two uncrewed aerial vehicles
an ice drill, ice screws, and other mountaineering gear
supporting tidal and air temperature data sets (we got ours from Thule and Qaanaaq, respectively)
an incredible team
an unbelievable amount of luck
a chocolate bar in your pocket
a bottle of Champagne (chilled in a water-filled crevasse) to celebrate a successful deployment

The same evening of the deployment, a strange chain of events started to unfold near our tent camp, 2 kilometers up glacier from the plume. A huge chunk of dead ice (a stationary part of the glacier) collapsed into a large ice-dammed lake and triggered a small tsunami, which displaced the pressure and temperature sensors we had deployed in the lake. The sensors weren’t the only things that moved—the sound of this collapse was so loud that everyone in our camp ran out of their tents to see what was happening.

The next evening, while we were enjoying dinner, we realized that the ice-dammed lake was draining in front of our eyes! The bed of this lake, which had so recently held enough water to fill about 120 Olympic swimming pools, was now exposed. Unlike the simple, idealized glacier lake bed structures used in some models, what we saw in this chasm resembled limestone cave structures: a complex jumble of spongelike features and ponds. We were very fortunate to be present for this event, with all our instruments deployed and collecting data.

Soon after, we got a radio call from Martin Funk, a glaciologist from ETH Zürich (who has since retired) who was working in front of the glacier terminus. Funk, who was encamped with his colleagues on top of a protruding mountain ridge (called a nunataq) in front of the glacier, had a front-row seat for the event. Through binoculars, he saw that the discharge plume had resurfaced right where our oceanographic instruments were set up. The water in the lake had drained under the glacier via the plume we were monitoring. Funk’s team used radar, time-lapse photography, and UAVs to capture as many remote observations as possible. We have since confirmed these observations using analysis of our oceanographic data, time-lapse photographs, UAV images, and a nearby seismometer that recorded the drainage event.

The ice-dammed marginal lake (top) is seen here from a helicopter on 6 July 2017, before it drained abruptly on 14 July. The chasm left behind by the drained lake (bottom) was photographed on 16 July. Credit:  Evgeny A. Podolskiy

The helicopter retrieved us from the glacier, and we returned to Qaanaaq on 17 July. While we flew along the calving front, I was amazed to see that the cable connected to our deep sensor and its ballast were inclined away from the ice cliff, likely because of the strong current generated by the plume.

Some team members returned to the field area by boat on 1 August, climbing onto and traversing the glacier to collect the data and instrumentation at the calving front. They found the unsupervised cables dangling from a bent aluminum stake. Originally, the stake had been drilled 2 meters into the ice to secure the cables, but by 1 August, it had melted halfway out of the ice. The retrieval was timely: Later that same day, a several-hundred-meters-long section of the terminus near the equipment setup and data logger calved into the water.

Dealing with Difficult Data

Bringing home our hard-earned data set seemed like a major achievement; however, it proved to be just the beginning of our work on this project. The complexity of the retrieved data was a nightmare. Colleagues of ours were eager to plug the oceanographic observations into their models of plumes and subaqueous melt, but we were hesitant to share this unique data set until we could begin to understand it ourselves.

We eventually recognized that our data captured the physical turbulence of water near the calving front of Bowdoin Glacier—this chaotic behavior in fluids has fascinated scientists, including Leonardo da Vinci, for half a millennium. Dealing with turbulence in our data was already a daunting task, but other complications added to the challenge of making sense of it.

For example, the instrumentally observed pressure variations, which represent water depth, indicated that the plume occasionally “spit out” our sensor that had been anchored 100 meters below the surface, pushing it outward from where it had been deployed, and icebergs then pulled it up to the surface. This highly unconventional observation eventually yielded remarkable results. We realized that our sensor was traveling with the turbulent water rather than taking data at a single location as expected, thus producing Lagrangian time series data.

This 17 July 2017 helicopter photo shows how the subglacial discharge plume’s current pushed an underwater sensor away from the face of the ice cliff, causing the sensor’s cable to angle outward from the cliff. The horizontal undercut strip near the water surface is a tidal melt notch, caused by melting below the water’s surface. Credit: Shin Sugiyama

In contrast to previous studies, which obtained infrequent, single profiles of the water column, this time-lapse profiling of the water column documented a dramatic shift in fjord stratification over a span of a few days. For example, on 16 July, a cold layer of water began a 3-day descent, and the water column profile we obtained on 24 July bore no resemblance to conditions just 1 week earlier.

It took a few years of demanding detective work to understand every wiggle in the data recorded during those 12 days of observations and their causal connections with ice-ocean dynamics at the calving front. We learned a lot simply by analyzing the nearly 1 terabyte of time-lapse photography we and Funk's team had shot. But our main target, the plume, was mostly hiding underwater, obscured by the low-salinity, relatively warm water at the surface near the calving front. Nevertheless, various lines of evidence provided fingerprints of the plume and details of its activity. For example, the seismic records collected near where the plume exited the glacial front revealed a low-frequency seismic tremor signal that lasted 8.5 hours during the drainage of the lake.

After applying all the classical signal-processing methods commonly applied in oceanographic, statistical, and seismic analyses, I realized we needed a radically different approach to comprehend the subsurface water pressure, salinity, and temperature data from within the plume.

The time series of these oceanographic data started to remind me of the nonlinear cardiograms I used to see at home when I was younger, courtesy of my mother, a cardiologist. The heart and many other dynamical systems can generate irregular and chaotic patterns even when their underlying dynamics are totally deterministic, without requiring any random, or stochastic, fluctuations in the system. In our case, the irregularities in the data came from the occasional turbulence caused by the plume and by icebergs, which repeatedly pushed the sensors through different water masses.

There are many linear methods that can be used to understand the evolution of time series. For example, the well-known Fourier transform mathematically converts time domain data to the frequency domain. But such methods neither statistically characterize the observed dynamics of a system nor detect the existence of so-called attractors (states to which a system tends to converge or return, like a pendulum that eventually comes to rest at the center of its arcing swings)—both capabilities we needed to make sense of our complex data. Mapping different states (i.e., water masses) visited by moving sensors collecting scalar, or point, measurements required us to move beyond linear analytical methods.

In the arsenal of nonlinear methods, there is a technique called time delay embedding, which projects data from the time domain to the state space domain [Takens, 1981]. This technique reveals structure in a system graphically by using delay coordinates, which combine sequential scalar measurements in a time series, with each measurement separated by a designated time interval (the delay time), into multidimensional vectors. This technique helps reveal attractors and identify transitions in a time series (e.g., distinguishing between signals from the lake water and the fjord water). We applied this (almost) magical approach in combination with other, more conventional methods to decipher our oceanographic records and comprehend the observed dynamics of the plume [Podolskiy et al., 2021a].
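As an illustration of the general technique (not the authors' specific analysis), the Python sketch below builds delay vectors from a scalar series; plotted in two or three dimensions, such vectors can trace out loops and other attractor structure that the raw series hides. The embedding dimension and delay used here are arbitrary choices for the example.

```python
import numpy as np

# Minimal sketch of time delay embedding (Takens-style reconstruction).
# A scalar series x(t) is mapped to vectors of time-shifted copies
# [x(t), x(t+tau), ..., x(t+(m-1)*tau)], which can reveal attractor
# structure that the raw series hides. Parameters are illustrative.

def delay_embed(x, dim=3, delay=10):
    """Return an (N, dim) array of delay vectors built from the 1D series x."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("series too short for this dimension and delay")
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# Example: a noisy, tidally periodic "pressure" record traces a closed loop
# (an attractor) in the reconstructed state space.
t = np.linspace(0.0, 60.0, 6000)          # hours
pressure = np.sin(2 * np.pi * t / 12.4)    # semidiurnal-like oscillation
pressure += 0.05 * np.random.default_rng(1).standard_normal(t.size)
vectors = delay_embed(pressure, dim=3, delay=25)
print(vectors.shape)   # (5950, 3)
```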

New Discoveries and Problems Still to Solve

Our analyses revealed previously undocumented high-frequency dynamics in the glacier-fjord environment. These dynamics extended well beyond anything we had imagined, and they could be important for understanding submarine melting, water mixing, and circulation, as well as biogeochemistry near glacial termini. For example, sudden stratification shifts may imply the descent of a cold water layer and the corresponding thickening of a warmer layer near the surface as well as the occurrence of enhanced melting that undercuts ice cliffs and leads to calving.

Observed ripples in the water surface and detected seismicity may offer innovative ways to monitor abrupt, large discharge events even when there is no surface expression of the plume. These events may occur more often in the future as surface waters increasingly freshen (and become less dense), thereby blocking plumes from reaching the surface [De Andrés et al., 2020]. Also, the tidal modulation of water properties, such as temperature, that we found highlights that there are still processes not accounted for in models estimating submarine glacial melting in Greenland.

Previous efforts to study this melting were limited to providing episodic views of the ice-ocean interface and have not monitored the ocean and glacier simultaneously. Our first-of-their-kind observations could thus be informative for constraining predictive ice-ocean models. Abrupt stratification changes in a fjord, the intermittent nature of glacial plumes, tidal forcing, and sudden drainages of marginal lakes are all very complex processes to model. It is possible these processes could be parameterized to make modeling them easier or even ignored if they're found to be insignificant in long-term predictions of ice-ocean interactions. Nevertheless, our work shows that we may need to rethink how we model and monitor discharge plumes to obtain clarity on these processes, particularly on short timescales.

Our analyses may be helpful for interpreting future records of glacial discharge and other phenomena. The methods developed and applied here are not necessarily limited to oceanography because deciphering time series is an ever-present task across the geosciences. On the other hand, our novel and customized observational approach was challenging and is suited for only short-term campaigns. We hope that ongoing developments in sea bottom lander technology and remotely operated deployments at calving fronts [e.g., Nash et al., 2020; Podolskiy et al., 2021b] will pave the way for continuous, year-round observations in these critical environments, providing further insights to help scientists understand the effects of increasing glacial melting on ocean dynamics and marine ecosystems.

Acknowledgments

This work was part of Arctic Challenge for Sustainability research projects (ArCS and ArCS II; grants JPMXD1300000000 and JPMXD1420318865, respectively) funded by the Ministry of Education, Culture, Sports, Science and Technology of Japan.

Ejecta Discovered Near Site of Ancient Meteorite Impact

Tue, 08/17/2021 - 16:16

A meteorite impact is a colossal disruption—think intense ground shaking, sediments launching skyward, and enormous tsunamis. But evidence of all that mayhem can be erased by erosion over time. Scientists have now relied on clever geological sleuthing to discover impact ejecta near South Africa’s Vredefort impact structure, the site of a massive meteorite strike roughly 2 billion years ago. These ejecta might hold clues about the composition of the object that slammed into Earth during the Precambrian, the researchers suggest.

After more than 2 billion years of erosion, features of the crater created by the massive meteorite that impacted what is now Vredefort, South Africa, are barely discernible. Credit: NASA

The Vredefort impact structure, near Johannesburg, is estimated to be between 180 and 300 kilometers in diameter—it’s believed to be the largest impact structure on Earth. But it doesn’t look at all like a crater. It’s far too old—and therefore too eroded—to have preserved that characteristic signature of an impact.

What’s visible instead is an arc of uplifted sediments. That material is part of the “peak ring” that formed within the original crater. Such uplifted material is the calling card of a massive impact, said Matthew S. Huber, a geologist at the University of the Western Cape in Cape Town, South Africa. “If there’s a sufficiently large impact, there will be a rebound.”

Standing at the Roots

But even these uplifted sediments were buried far below Earth’s surface at the time of the impact, said Huber. “This [area] has experienced at least 10 kilometers of erosion. We’re at the deep structural roots.”

Because of all that erosion, there’s no hope of finding impact ejecta—sediments launched during an impact, which have often been altered by high temperatures and pressures—within the impact structure itself, said Huber. “It’s all eroded away. It’s gone.”

However, nearby sites—located within a few radii of the Vredefort impact structure—might still contain impact ejecta, Huber and his colleagues reasoned. (In previous studies, Huber and his collaborators had found millimeter-sized Vredefort ejecta much farther afield, in Greenland and Russia.)

To search for this so-called proximal ejecta, Huber and his colleagues looked a few hundred kilometers to the west. They focused on a swath of the Kaapvaal Craton, a geologic feature that, like other cratons around the world, preserves particularly ancient sediments.

A Violent Event, Told Through Rocks

The researchers collected material from a pair of sediment cores originally drilled by mining companies exploring the region for iron and manganese. Huber and his collaborators homed in on sediments dated to be 1.9 billion to 2.2 billion years old and assembled several thin sections of the rocks to analyze. The sediments exhibited telltale signs of a violent event, the team found.

To begin with, the researchers noticed bull’s-eye-looking features up to a few centimeters in diameter. These structures, called accretionary lapilli, form within clouds of ash. Much as hailstones grow via the addition of layers of ice, accretionary lapilli grow spherically as successive layers of ash are deposited on their outer surface. They’re associated with both volcanic eruptions and meteorite impacts.

Huber and his colleagues also spotted parallel lines running through grains of quartz. These lines, known as planar deformation features, represent broken atomic bonds in the quartz's crystal lattice. Ordinary geologic processes like earthquakes or volcanic eruptions are rarely powerful enough to create these features, said Huber. “These grains were subjected to a shock wave.”

Planar deformation features are “unequivocal evidence” of impact material, said Elmar Buchner, a geologist at the Neu-Ulm University of Applied Sciences in Neu-Ulm, Germany, not involved in the research. “There’s no doubt that it is impact ejecta.”

These results were presented today at the 84th Annual Meeting of the Meteoritical Society in Chicago.

There’s a lot more to learn from these ejecta, Huber and his collaborators suggest. The team next plans to analyze their samples for “impact melt,” material preserved from the time of the impact that’s sometimes a chemical amalgam of the impactor and the surrounding target rocks. Such ejecta could help reveal the composition of the object responsible for creating the Vredefort impact structure, the researchers suggest. “We are already planning our next analyses,” said Huber. “There is a lot of work to be done.”

—Katherine Kornei (@KatherineKornei), Science Writer

Magnetic Record of Early Nebular Dynamics

Tue, 08/17/2021 - 14:00

The early solar nebula was probably subject to strong magnetic fields, which influenced its dynamics and thus the rate at which the Sun and planets grew. Fortunately, some meteorites were forming during this epoch, and thus provide the potential to characterize these ancient nebular fields. Fu et al. [2021] make careful measurements of one particular meteorite and conclude that the fields recorded are an order of magnitude larger than fields recorded by other meteorites of similar ages. This result suggests that the nebula experienced either strong temporal variations in field strength, or strong spatial variations (for instance, because of the presence of gaps cleared by growing planets). As highlighted by Nichols [2021] in a companion Viewpoint, an important next step is to understand in more detail the chemical process(es) by which magnetization was acquired; so too is removing the lingering possibility that this field was due to an internal dynamo, rather than an external nebular field.

Citation: Fu, R., Volk, M., Bilardello, D., Libourel, G., Lesur, G., & Ben Dor, O. [2021]. The fine-scale magnetic history of the Allende meteorite: Implications for the structure of the solar nebula. AGU Advances, 2, e2021AV000486. https://doi.org/10.1029/2021AV000486

—Francis Nimmo, Editor, AGU Advances

Satellite Sensor EPIC Detects Aerosols in Earth’s Atmosphere

Tue, 08/17/2021 - 13:27

Aerosols are tiny solid particles or liquid droplets suspended in Earth's atmosphere. These minuscule motes may be any of a number of diverse substances, such as dust, pollution, and wildfire smoke. By absorbing or scattering sunlight, aerosols influence Earth's climate. They also affect air quality and human health.

Accurate observations of aerosols are necessary to study their impact. As demonstrated by Ahn et al., the Earth Polychromatic Imaging Camera (EPIC) sensor on board the Deep Space Climate Observatory (DSCOVR) satellite provides new opportunities for monitoring these particles.

Launched in 2015, DSCOVR’s orbit keeps it suspended between Earth and the Sun, so EPIC can capture images of Earth in continuous daylight—both in the visible-light range and at ultraviolet (UV) and near-infrared wavelengths. The EPIC near-UV aerosol algorithm (EPICAERUV) can then glean more specific information about aerosol properties from the images.

Like other satellite-borne aerosol sensors, EPIC enables observation of aerosols in geographic locations that are difficult to access with ground- or aircraft-based sensors. However, unlike other satellite sensors that can take measurements only once per day, EPIC’s unique orbit allows it to collect aerosol data for the entire sunlit side of Earth up to 20 times per day.

To demonstrate EPIC’s capabilities, the researchers used EPICAERUV to evaluate various properties of the aerosols it observed, including characteristics known as optical depth, single-scattering albedo, above-cloud aerosol optical depth, and ultraviolet aerosol index. These properties are key for monitoring aerosols and their impact. The analysis showed that EPIC’s observations of these properties compared favorably with those from ground- and aircraft-based sensors.
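Of these quantities, the ultraviolet aerosol index is perhaps the least self-explanatory. One common formulation, written here with EPIC's 340- and 388-nanometer channels as an assumption, compares the measured spectral contrast with that calculated for a purely molecular, aerosol-free atmosphere:

$$\mathrm{UVAI} = -100\,\log_{10}\!\left[\frac{\left(I_{340}/I_{388}\right)_{\mathrm{measured}}}{\left(I_{340}/I_{388}\right)_{\mathrm{calculated}}}\right]$$

Strongly positive values indicate UV-absorbing aerosols such as smoke and dust, even when they sit above clouds, whereas values near zero indicate clouds or non-absorbing aerosols.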

The research team also used EPIC to evaluate the characteristics of smoke plumes produced by recent wildfires in North America, including extensive fires in British Columbia in 2017, California's 2018 Mendocino Complex Fire, and numerous North American fires in 2020. In 2017, EPIC contributed to the observational evidence that smoke plumes can self-loft across the tropopause through diabatic heating driven by solar absorption. EPIC observations successfully captured these huge aerosol plumes, and the derived plume characteristics aligned closely with ground-based measurements.

This research suggests that despite coarse spatial resolution and potentially large errors under certain viewing conditions, EPIC can serve as a useful tool for aerosol monitoring. Future efforts will aim to improve the EPICAERUV algorithm to boost accuracy. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD033651, 2021)

—Sarah Stanley, Science Writer

Steady but Slow Progress on the Long Road Towards Gender Parity

Mon, 08/16/2021 - 19:04

Diversity among scientists expands the questions our science asks, the approaches it takes, and the quality and impact of its products. Unfortunately, the geosciences has one of the worst records of diversity among its ranks. Progress is being made to include more women in the geosciences, but Ranganathan et al. [2021] show that, assuming equity in hiring and retention going forward, gender parity in the geosciences at U.S. universities will not be reached until 2028, 2035, and 2056, for assistant, associate, and full professors, respectively. Women of color and all minoritized groups face a longer road to inclusion. In an accompanying Viewpoint, Hastings [2021] shares the policies, institutional support, and community support that helped her overcome several obstacles in her career. These data and personal stories show that actions have and will make a difference, but institutions and their leaders need to pick up the pace to make the geosciences more inclusive and equitable.

Citation: Ranganathan, M., Lalk, E., Freese, L. et al. [2021]. Trends in the representation of women among geoscience faculty from 1999-2020: the long road towards gender parity. AGU Advances, 2, e2021AV000436. https://doi.org/10.1029/2021AV000436

—Eric Davidson, Editor, AGU Advances

Predictive Forensics Helps Determine Where Soil Samples Came From

Mon, 08/16/2021 - 13:10

In the very first appearance of Sherlock Holmes, 1887’s A Study in Scarlet, Dr. Watson jots down notes on the famous detective’s incredible powers of observation. Holmes “tells at a glance different soils from each other. After walks, has shown me splashes upon his trousers and told me by their colour and consistence in what part of London he had received them.” Holmes finds footprints in a claylike soil in the story and uses his knowledge of geology in several subsequent mysteries.

Today, scientists are trying to expand the envelope of forensic geology, the science of using unique characteristics of geological materials in criminal investigations, through more refined techniques.

Better Tools for Soil Sleuths

Law enforcement agencies often try to piece together the tracks of criminals by analyzing soil and dust samples left on items such as clothing and vehicles. The concept has been around at least since the time of Arthur Conan Doyle, who popularized it.

In a study published in the Journal of Forensic Sciences, however, researchers in Australia outline their efforts to develop a more powerful tool for detectives. In a nutshell, they subjected soil samples to advanced analytical methods and concluded that the “empirical soil provenancing approach can play an important role in forensic and intelligence applications.”

The researchers looked at 268 previously collected soil samples, each from its own square kilometer in an area of North Canberra measuring some 260 square kilometers, more than 4 times the size of Manhattan Island. The geochemical survey data were interpolated to map what the chemical and physical properties should be between measured points, along with the uncertainty for each grid cell. Comparative samples were analyzed using Fourier transform infrared spectroscopy, magnetic susceptibility, X-ray fluorescence, and inductively coupled plasma–mass spectrometry.

The researchers were given three samples from the surveyed area and challenged to identify the 250- × 250-meter cells they came from. Using the method, they were able to eliminate 60% of the area under consideration. In a real investigation, that would mean spending less of law enforcement’s time and money on areas that won’t yield any results.
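The exclusion logic can be pictured with a toy calculation: compare an evidentiary sample against each grid cell's predicted composition and uncertainty, and rule out cells that disagree too strongly. Everything in the Python sketch below (the grid values, uncertainties, and the 2-sigma rule) is invented for illustration and is not the study's data or its actual decision criterion.

```python
import numpy as np

# Toy sketch of the exclusion step in predictive soil provenancing. A grid of
# cells has an interpolated value and uncertainty per analyte; an evidentiary
# sample is compared with every cell, and cells that disagree by more than
# 2 sigma in any analyte are excluded. All numbers are invented.

rng = np.random.default_rng(7)
n_cells = 260 * 16        # ~260 km^2 mapped at 250 m x 250 m resolution
n_analytes = 5            # e.g., a handful of element concentrations

predicted = rng.normal(loc=50.0, scale=10.0, size=(n_cells, n_analytes))
uncertainty = np.full((n_cells, n_analytes), 5.0)   # 1-sigma per cell/analyte

# Pretend the evidentiary sample came from one (unknown) cell, plus noise.
true_cell = 1234
evidence = predicted[true_cell] + rng.normal(0.0, 2.0, n_analytes)

# Exclude a cell if its worst-matching analyte differs by more than 2 sigma.
z_scores = np.abs(evidence - predicted) / uncertainty
excluded = z_scores.max(axis=1) > 2.0

print(f"excluded {excluded.mean():.0%} of cells; "
      f"true cell wrongly excluded: {bool(excluded[true_cell])}")
```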

“We can use fairly standard analytical methods and achieve degrees of exclusion generally of the order of 60% to 90% of the investigated area,” said study lead author Patrice de Caritat, principal research scientist at Geoscience Australia. “This is extremely useful in forensic investigations as it allows [investigators] to prioritize resources to the most promising parts of the area. The greatest challenge was, predictably, the natural heterogeneity of soils. Even when characterized empirically at a density of one sample per 1 square kilometer, it is always possible that the evidentiary sample is uncharacteristic of that 1 square kilometer.”

De Caritat said predictive provenancing using existing digital soil maps has never been put forward before in a forensic application and offers an “effective desktop method of ruling out vast areas of territory for forensic search” as soon as a soil analysis is available. The research builds on an earlier paper he coauthored in which the method reduced the search area for soil samples by up to 90%.

“Using soil in forensic cases is an old technique,” added de Caritat. “Originally, bulk properties such as color were used and really useful, particularly to exclude matches between soil samples. But what has changed now is the breadth and depth of techniques brought to bear on the question.”

Hunt for Suspects

Lorna Dawson, head of the Soil Forensics Group at The James Hutton Institute in Aberdeen, Scotland, said the research was well carried out and is a good proof of concept but unrealistic because sampling at such a high resolution is not affordable.

“Often, the sample recovered from the car, tool, shoe, etc., is too small to allow elemental profile analysis to be carried out, so the methods described in Patrice’s research would not work in every country, and to test it would be prohibitively expensive,” said Dawson, who was not involved in the study.

“But if funding could be made available by some international donor, we as practitioners could work with key researchers such as Patrice, and we would be delighted to carry out the research to set up the appropriate databases and models to link with currently available soil databases,” Dawson added. “That level of detail would certainly help in many serious crime investigations such as fakes, frauds, precious metals, etc.”

De Caritat and colleagues have received a Defence Innovation Partnership grant from the South Australia government to apply their provenancing work to soil-derived dust and to include X-ray diffraction mineralogical and soil genomic information to increase specificity. The project is a collaboration involving the University of Canberra, the University of Adelaide, Flinders University, the Australian Federal Police, and Geoscience Australia. The research may be used for counterterrorism, where “environmental DNA” on the clothing or other personal effects of a suspect may prove valuable.

Jennifer Young, a lecturer in the College of Science and Engineering at Flinders University not involved in the new research, said last year that the technology could help provide “evidence of where a person of interest might have traveled based on the environmental DNA signature from dust on their belongings.”

—Tim Hornyak (@robotopia), Science Writer

Multicellular Algae Discovered in an Early Cambrian Formation

Mon, 08/16/2021 - 13:08

The Cambrian period, which occurred around 541–485 million years ago, is known for its explosive biological diversification. In warm oceans, the planet’s earliest eukaryotes began to thrive and diversify. A major contributing factor to the acceleration of life and the development of early metazoans is thought to be an increasingly efficient food web, created largely by algae. These new photosynthetic creatures allowed for easier nutrient transfer between species than their more ancient equivalents, the cyanobacteria.

A new study by Zheng et al. characterizes large, multicellular algae from a formation known as Kuanchuanpu. The site, located in southern Shaanxi Province in China, contains a famous collection of metazoan fossils from the Cambrian era. Using a combination of scanning electron microscopy and X-ray tomographic analysis, the authors reveal an organism with external membranes and cell walls. The cells in the specimens are organized into large spatial patterns, specifically, an inner and outer area, which the researchers refer to as a cortex and a medulla. These features lead the scientists to conclude that the fossil shows organized, multicellular algae enclosed in a membrane rather than a group of cyanobacteria or metazoan embryos.

The team also hypothesizes that the cortex-medulla organization seen in the specimens suggests an asexual life cycle wherein the organism grows from a single round ball of cells into a globular collection, where each lobe contains its own cortex-medulla organization inside. If their analysis is correct, these multicellular algae from the Kuanchuanpu Formation appear consistent with specimens found in the Ediacaran Weng’an biota of south China. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2020JG006102, 2021)

—David Shultz, Science Writer

Ice Lenses May Cause Many Arctic Landslides

Fri, 08/13/2021 - 11:39

Climate change is driving periods of unusually high temperature across large swaths of the planet. These heat waves are especially detrimental in the Arctic, where they can push surface temperatures in regions of significant permafrost past the melting point of ice lenses. Melting ice injects liquid water into the soil, reducing its strength and increasing the likelihood of landslides. In populated areas, these events can cause economic damage and loss of life.

Mithan et al. investigate a shallow-landslide formation mechanism called active layer detachment (ALD), in which the upper, unfrozen—or active—layer of soil separates from the underlying solid permafrost base. They analyze the topography in the vicinity of ALD landslides spread over a 100-square-kilometer region of Alaska to characterize the factors that govern such events. This region experienced many ALD landslides after a period of unusually high temperature in 2004.

The authors identified 188 events in the study area using satellite imagery and established the local topography using a U.S. Geological Survey digital elevation model. To analyze the relationship between ALD landslides and topography, they simulated such events using a set of common software tools.

Because many Arctic regions have relatively shallow slopes, the authors' modeling finds that the simple flow of water is generally unable to generate sufficient water pressure between soil grains to kick-start a landslide. Rather, a major factor in ALD events appears to be the presence of ice lenses, concentrated bodies of ice that grow underground. When a heat wave pushes the thaw front in the permafrost down to the depth of these ice accumulations, their melting sharply raises the local pore water pressure, creating the conditions for a landslide.
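One rough way to see why a melting ice lens matters is the classic infinite-slope factor of safety, a textbook formulation rather than necessarily the one used by Mithan et al.:

$$FS = \frac{c' + \left(\gamma z \cos^{2}\beta - u\right)\tan\phi'}{\gamma z \sin\beta \cos\beta}$$

Here c' is the effective cohesion, γ the soil unit weight, z the depth of the potential failure plane (in this setting, roughly the top of the permafrost), β the slope angle, φ' the friction angle, and u the pore water pressure; sliding becomes likely as FS approaches 1. On a gentle slope the driving stress in the denominator is small, so ordinary seepage rarely lowers FS enough, but a sudden jump in u from a melting ice lens can.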

As ice lens formation is governed by local topography, the authors propose that it may be possible to construct a mechanism for predicting locations likely to be susceptible to ALD landslides using only simple surface observations. As permafrost increasingly thaws in the face of a warming planet, such predictions are likely to take on greater importance in the coming decades. (Geophysical Research Letters, https://doi.org/10.1029/2020GL092264, 2021)

—Morgan Rehnberg, Science Writer

Lava from Bali Volcanoes Offers Window into Earth’s Mantle

Fri, 08/13/2021 - 11:39

Volcanoes along the 5,600-kilometer-long Sunda Arc subduction zone in Indonesia are among the most active and explosive in the world—and given the population density on the islands of the archipelago, some of the most hazardous.

“The most dangerous volcanoes are in subduction zones,” said Frances Deegan, a researcher in the Department of Earth Sciences at Uppsala University in Sweden.

In a new study published in Nature Communications, Deegan and her colleagues shed more light on the magma systems beneath four volcanoes in the Sunda Arc: Merapi in Central Java, Kelut in East Java, and Batur and Agung on Bali. Using relatively new technology to measure oxygen isotopes in crystals in lava samples from the four volcanoes, the researchers established a baseline measurement of the oxygen isotopic signal of the mantle beneath Bali and Java. That baseline can be used to measure how much the overlying crust or subducted sediments influence magmas as they rise toward the surface.

Researchers tested lava samples from the volcanoes. Credit: Frances Deegan

Researchers used the Secondary Ion Mass Spectrometer (SIMS) at the Swedish Museum of Natural History in Stockholm. Credit: Frances Deegan

Volcano Forensics

In the past, volcano forensic studies have relied on technologies such as conventional fluorination or laser fluorination to measure isotopes and minerals; these techniques analyze pulverized lava samples but often capture unwanted contaminants as well. In the new study, the researchers made use of the Secondary Ion Mass Spectrometer (SIMS) at the Swedish Museum of Natural History. “It allows you to do in situ isotope analysis of really small things like meteorites,” Deegan said, “things that are really precious where you can’t really mash them up and dissolve them.”

SIMS also allows for targeting of portions of individual crystals as small as 10 microns, which allowed the researchers to avoid the unwanted contamination sometimes found within an individual crystal, according to Terry Plank, a volcanologist at Columbia University who was not involved in the study. “The ion probe lets you avoid that and really analyze the pristine part of the crystal,” she said, “so we can see, in this case, its original oxygen isotope composition.”

New Measurements

Researchers can use SIMS to measure oxygen isotope ratios (18O to 16O)—expressed as a δ18O value, which normalizes the ratios to a standard—in various samples. On the basis of previous measurements for mid-ocean ridge basalts, Earth’s mantle is believed to have a δ18O value of around 5.5‰, according to Deegan. “The crust is very variable and very heavy, so it can be maybe 15‰ to 20‰ to 25‰,” she said. “If you mix in even just a little bit of crust with this very heavy oxygen isotope signal, it’s going to change the 5.5‰—it’s going to go up.”
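For readers unfamiliar with delta notation, the conventional definition is the per mil deviation of a sample's isotope ratio from that of a reference standard (for oxygen, typically Vienna Standard Mean Ocean Water, an assumption here since the article does not name the standard):

$$\delta^{18}\mathrm{O} = \left(\frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1\right)\times 1000\ \text{‰}$$

A mantle-like value of 5.5‰ therefore means the sample's 18O/16O ratio is 0.55% higher than the standard's, and mixing in crustal material with values of 15‰ to 25‰ pulls the measured value upward.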

Deegan and colleagues used SIMS to determine δ18O values from the mineral clinopyroxene in samples from the four volcanoes. In lavas from the Sunda Arc, clinopyroxene is a common mineral phase and can potentially shed light on source compositions and magmatic evolution. The results showed that the average δ18O values for each volcano decreased as the researchers moved east, with Merapi in Central Java measuring 5.8‰, Kelut in East Java measuring 5.6‰, and the Bali volcanoes Batur and Agung measuring 5.3‰ and 5.2‰, respectively.

“What really surprised me the most was finding this really pristine mantle signature under Bali,” Deegan said. Researchers already knew that the crust grows thinner as you move east from Java to Bali, but Deegan expected to find more evidence of ocean sediment in the measurements under Bali—seafloor material that melts along with the Indo-Australian plate as it slides beneath the Eurasian plate at the Sunda Arc. “We didn’t see that. We actually have a really clean mantle signature, which is unusual to find in a subduction zone,” she said.

The researchers also measured magma crystallization depths of each of the four volcanoes and found that most of the sampled clinopyroxene from the two Java volcanoes formed in the middle to upper crust, while the crystallization occurred closer to the crust–mantle boundary beneath the Bali volcanoes. “I think that we have found a special view on the Indonesian mantle at Bali,” Deegan said. “Agung volcano on Bali seems to be the best mirror of mantle compositions in the whole region.”

These findings could help scientists better understand what happens when magma leaves its sources and moves toward the surface. It’s theorized that magma interaction with volatile components in the crust could be a driver of more explosive eruptions, Deegan said, and so having a clean, contained mantle baseline for the Sunda Arc region could aid future research.

Crust or Sediment?

Although Plank was excited by the measurements of uncontaminated, unaltered oxygen isotope baselines in the paper, she wondered whether the differences in δ18O values are really explained by thicker crust under Java. “The averages for each volcano almost overlap within 1 standard deviation, so there are more high ones at Merapi than at Agung, but they all have the same baseline,” she said. “[The authors] argue that’s crustal contamination, but I wonder if there are other processes that can cause that.” It’s not always so easy, geochemically speaking, to distinguish between crustal contamination and material from subducted seafloor, Plank added. “The crust erodes and goes into the ocean, and then that material on the seafloor gets subducted and comes back up again,” she said. “It’s the same stuff.”

As more research is conducted with SIMS, Plank would like to see work similar to Deegan’s done on samples from Alaskan volcanoes, which exhibit low δ18O values—like the Bali volcanoes—as well as on more shallow magma systems, like the Java volcanoes.

“Any improvement in our knowledge of these volcanoes [in subduction zones] will help us be better prepared when they erupt,” Deegan said.

—Jon Kelvey (@jonkelvey), Science Writer

Wildfires Are Threatening Municipal Water Supplies

Thu, 08/12/2021 - 11:52

In recent decades, wildfire conflagrations have increased in number, size, and intensity in many parts of the world, from the Amazon to Siberia and Australia to the western United States. The aftereffects of these fires provide windows into a future where wildfires have unprecedented deleterious effects on ecosystems and the organisms, including humans, that depend upon them—not the least of which is the potential for serious damage to municipal water supplies.

In 2013, the Rim Fire—at the time, the third-largest wildfire in California’s history—burned a large swath of Stanislaus National Forest near Hetch Hetchy Reservoir, raising concerns about the safety of drinking water provided from the reservoir to San Francisco.

The 2018 Camp Fire not only burned vegetation but also torched buildings and the water distribution system for the town of Paradise in north central California, leaving piles of charred electronics, furniture, and automobiles scattered amid the ruins. Postfire rainstorms flushed debris and dissolved toxicants from these burned materials into nearby water bodies and contaminated downstream reaches. Residents relying on these sources complained about smoke-tainted odors in their household tap water [Proctor et al., 2020]. And in some cases, water utilities had to stop using water supplies sourced from too near the wildfire and supply alternative sources of water to customers.

Climate change is expected to increase the frequency and severity of wildfires, resulting in new risks to water providers and consumers. Water exported from severely burned watersheds can have greatly altered chemistry and may contain elevated levels of contaminants and other undesirable materials that are difficult to remove. For example, excess nutrients can fuel algal blooms, and sediment particles from soil erosion can clog water filters. Are water utilities in wildfire-affected areas prepared for these changes?

Our research team has conducted field studies after several severe wildfires to sample surface waters and investigate the fires’ effects on downstream water chemistry. In California, we have looked at the aftereffects of the 2007 Angora Fire, 2013 Rim Fire, 2015 Wragg Fire, 2015 Rocky-Jerusalem Fire, 2016 Cold Fire, 2018 Camp Fire, and 2020 LNU Lightning Complex Fire. We also studied the 2016 Pinnacle Mountain Fire in South Carolina and the long-term effects of the 2002 Hayman Fire in Colorado. These campaigns often involve hazardous working conditions, forcing researchers to wear personal protective gear such as respirator masks, heavy boots and gloves, and sometimes full gowns for protection from ash and dust. We also had to monitor the weather so as not to be surprised by the unpredictable and dangerous flash floods, debris flows, and landslides that can occur following fires.

The devastation of these burned landscapes is stunning and amplifies the urgency to better understand the fallout of fires on ecosystems and humans. Our field studies have provided important new insights about how surface water chemistry and quality are affected after fires—information useful in efforts to safeguard water treatment and water supplies in the future.

Wildfire Impacts on Water

Wildfires have well-documented effects on the quality of surface waters. Fires contaminate the rivers, streams, lakes, and reservoirs that supply public drinking water utilities, adding sediments, algae-promoting nutrients, and heavy metals [Bladon et al., 2014]. However, few researchers have addressed water treatability—the ease with which water is purified—or the quality of drinking water treated following wildfires.

Contaminants are mobilized in the environment as a result of forest fires, which volatilize biomass into gases like carbon dioxide while producing layers of loose ash on the soil surface. Dissolved organic matter (DOM) leached from this burned, or pyrogenic, material (PyDOM) has appreciably different chemical characteristics compared with DOM from the unburned parent materials [Chen et al., 2020]. Although wildfires can destroy forest ecosystems within days, changes in DOM quantity and composition can persist in burned landscapes for decades [Chow et al., 2019].

DOM itself is not a contaminant with direct impacts on human health, but it creates problems for water treatment. It can cause off colors and tastes, serve as a substrate for unwanted microbial growth, and foul membranes and adsorption processes. DOM also increases treatment costs and chemical demand, that is, the amount of added chemicals, like chlorine and ferric iron, required to disinfect water and remove DOM. In addition, treatment efforts can introduce unintended side effects: Disinfecting DOM-laden water can form a variety of carcinogenic disinfection by-products (DBPs), such as chloroform, some of which are regulated by the U.S. Environmental Protection Agency (EPA).

The characteristics, treatability, and duration of PyDOM from burned watersheds are poorly understood and require more study, but it is clear that this material poses several major challenges and health concerns related to municipal water supplies in wildfire-prone areas. In particular, it negatively affects treatability while increasing the likelihood of algal blooms and toxic chemical releases (Figure 1).

Fig. 1. Threats to drinking water supplies from wildfires include releases of toxic chemicals from burned infrastructure, electronics, plastics, cars, and other artificial materials (left); releases of pyrogenic dissolved organic matter and toxic chemicals from ash deposits into source water supplies (middle); and postfire eutrophication and algal blooms in water supplies because of increased nutrient availability (right). Credit: Illustration, Wing-Yee Kelly Cheah; inset photos, Alex Tat-Shing Chow

Treatability of Pyrogenic Dissolved Organic Matter

Postfire precipitation can easily promote the leaching of chemicals from burned residues, and it can also transport lightweight ash to nearby surface waters. This deposition raises levels of DOM and total suspended solids, increases turbidity, and lowers dissolved oxygen levels in the water, potentially killing aquatic organisms [Bladon et al., 2014; Abney et al., 2019].

Our controlled laboratory and field studies demonstrated that DOM concentrations in leached water depend on fire severity. Burned residuals could yield DOM concentrations up to 6–7 times higher than those in leached water from the unburned parent biomass. DOM concentrations in stream water from a completely burned watershed were 67% higher than concentrations in water from a nonburned watershed in the year following a severe wildfire [Chen et al., 2020; Uzun et al., 2020].

High levels of total suspended solids complicate drinking water treatment by increasing chemical demand and reducing filtration efficiency. We observed that PyDOM—which had a lower average molecular weight but greater aromatic and nitrogen content than nonpyrogenic DOM—was removed from water with substantially lower efficiency (20%–30% removal) than nonpyrogenic DOM (generally 50%–60% removal or more) [Chen et al., 2020].

Elevated levels of PyDOM in water mean that higher chemical dosages are needed for treatment, and higher levels of DBPs are likely to be formed during treatment. PyDOM is also more reactive, which promotes the formation of potentially harmful oxygenated DBPs. For example, chlorinating water that contains PyDOM produces haloacetic acids, whereas chloramination, another form of disinfection, produces N-nitrosodimethylamine [Uzun et al., 2020]. In addition, increased levels of bromide, another DBP precursor, released from burned vegetation and soils have been observed in postfire surface runoff, especially in coastal areas. This bromide may enhance the formation of more toxic brominated DBPs (e.g., by converting chloroform to bromoform) [Wang et al., 2015; Uzun et al., 2020].

Only a small fraction (less than 30%) of the total DBPs generated from DOM has been identified in chlorinated or chloraminated waters. The unique chemical characteristics of PyDOM generated by wildfires may give rise to DBPs that do not typically occur in water treatment and have not been identified or studied.

Postfire Nutrient Releases and Algal Blooms

After wildfires, burned biomass, fire retardants, and suppression agents like ammonium sulfate and ammonium phosphate often release nutrients, including inorganic nitrogen and phosphorus, into source waters. Wildfire runoff is often alkaline (pH > 9), in part because of its interactions with wood ash and dissolved minerals. Under these conditions, high ammonia and ammonium ion concentrations can cause acute ammonia toxicity in aquatic organisms, especially in headwater streams where these contaminants are not as diluted as they become farther downstream.
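
A rough calculation helps explain why alkalinity matters here: the share of total ammonia present as the more toxic, un-ionized NH3 rises steeply with pH. The short Python sketch below assumes an NH4+/NH3 acid-dissociation pKa of about 9.25 at 25°C; actual runoff chemistry (temperature, ionic strength) will shift these numbers somewhat.

# Sketch: fraction of total ammonia present as un-ionized NH3 at a given pH.
# Assumes the NH4+/NH3 pKa is ~9.25 at 25°C; real runoff chemistry will differ.
def unionized_ammonia_fraction(pH, pKa=9.25):
    """Return the fraction of total ammonia (NH3 + NH4+) present as NH3."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (7.0, 8.0, 9.0, 9.5):
    frac = unionized_ammonia_fraction(pH)
    print(f"pH {pH:.1f}: ~{100 * frac:.0f}% of total ammonia is un-ionized NH3")

The fraction jumps from roughly 1% at neutral pH to more than a third above pH 9, which is why alkaline, ash-influenced runoff can push headwater streams toward acute ammonia toxicity even at moderate nitrogen loads.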

Freshwater aquatic ecosystems tend to be phosphorus limited, meaning algal growth is naturally kept under control. But large phosphorus loads originating from burned watersheds, particularly phosphorus associated with sediments, can induce eutrophication (nutrient enrichment) and harmful algal blooms, particularly in lentic (still-water) ecosystems where nutrients accumulate. Blooms of cyanobacteria (blue-green algae) like Microcystis aeruginosa are especially hazardous for drinking water supplies because they produce neurotoxins and peptide hepatotoxins (liver toxins) such as microcystin and cyanopeptolin.

Algal organic matter is also nitrogen rich and contributes to the formation of a variety of carbonaceous and nitrogenous DBPs during drinking water disinfection [Tsai and Chow, 2016]. Although copper-based algicide treatments are options for controlling algal blooms, copper ions themselves catalyze DBP formation during drinking water disinfection [Tsai et al., 2019].

Releases of Toxic Chemicals

When forest vegetation burns, it can generate and directly release a variety of potentially toxic chemicals, including polycyclic aromatic hydrocarbons [Chen et al., 2018], mercury [Ku et al., 2018], and heavy metals [Bladon et al., 2014]. In addition, fires such as California’s 2017 Tubbs Fire and 2018 Camp Fire, which extended to the interfaces between wildlands and urban areas, have generated residues from burned infrastructure, electronics, plastics, cars, and other artificial materials, contributing a variety of toxic chemicals to source waters.

Fires can also burn plastic water pipelines in homes that are connected to municipal water distribution systems, potentially releasing hazardous volatile organic compounds into the larger water system. For example, up to 40 milligrams per liter of benzene, a known carcinogen, was reported in water distribution lines following the Tubbs Fire in an urban area of California [Proctor et al., 2020]. Benzene is one of many organic chemicals found in damaged water distribution networks, and experts worry that many other toxic chemicals could be released from burned pipes over time. Because these damaged pipes are downstream from the treatment facility, the best remediation option may be to replace the pipes entirely.

Climate Change and Wildfires Alter Watershed Hydrology

Our recent research demonstrates that the degree of water quality impairment increases markedly with increasing wildfire severity and with the proportion of the watershed area burned [Chow et al., 2019; Uzun et al., 2020]. Hence, as wildfires burn hotter and consume more fuel in future climates, water quality will progressively degrade.

Severe wildfires consume vegetative and soil cover and often cause soils to become more water-repellent, which greatly increases surface runoff at the expense of soil infiltration. In turn, these changes lead to enhanced soil erosion and sediment transport—carrying associated pollutants directly to downstream waters—and to reduced filtration of water in the soil profile [Abney et al., 2019].

Although water quality in riverine systems may recover quickly following successive storm-flushing events, pollutants can accumulate in lakes and reservoirs, where water is retained far longer, degrading water quality for decades as pollutants cycle between the water column and sediments.

Other factors are also likely to influence postfire runoff, erosion, and contamination transport amid changing climates. For example, many forested watersheds today still receive much of their precipitation as snowfall, which is much less erosive than a comparable volume of rainfall. But as the climate warms and more precipitation falls as rain, postfire surface runoff and erosion and water quality impairment will increase considerably. In addition, as extreme weather events are expected to be more prevalent in the future, more intense rainfall could greatly increase postfire pollutant transport.

Burned trees line the banks of a creek in the aftermath of the 2018 Camp Fire. A warming climate is expected to severely degrade water quality by contributing to larger burned areas and more severely burned watersheds. Credit: Alex Tat-Shing Chow

At present, portions of a watershed not burned during a wildfire serve to mitigate water pollution by providing fresh water that dilutes contaminants coming from burned areas. As wildfire sizes grow larger, this dilution will diminish. Furthermore, vegetation takes longer to recover after more severe wildfires, delaying recoveries in water quality.
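
A simple two-end-member mixing sketch, shown below, illustrates how this dilution effect shrinks as the burned fraction of a watershed grows. The concentrations and flow fractions are hypothetical placeholders for illustration, not field measurements.

# Sketch: flow-weighted mixing of runoff from burned and unburned portions of a
# watershed. All concentrations and flow fractions are hypothetical illustrations.
def mixed_concentration(f_burned, c_burned, c_unburned):
    """Flow-weighted downstream concentration from burned and unburned areas."""
    return f_burned * c_burned + (1.0 - f_burned) * c_unburned

C_BURNED = 10.0   # e.g., DOM in runoff from the burned area, arbitrary units
C_UNBURNED = 2.0  # DOM in runoff from the unburned area, arbitrary units

for f in (0.2, 0.5, 0.8):
    c = mixed_concentration(f, C_BURNED, C_UNBURNED)
    print(f"{int(100 * f)}% of flow from burned area -> downstream concentration ~{c:.1f}")

In reality, burned areas also shed more runoff per unit area, so the downstream burden can rise even faster than this simple flow-weighted average suggests.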

Stream runoff dynamics will also be altered as increased surface runoff reduces soil water storage and groundwater recharge, leading to higher peak flows during storms and lower base flows between them. The slow regeneration of vegetation, which reduces the uptake and transpiration of water by plants, will also lead to greater runoff following wildfires [Chow et al., 2019].

A warming climate is expected to severely degrade water quality by contributing to larger burned areas and more severely burned watersheds. The damage will be exacerbated as more precipitation falls as rain rather than snow and as extreme storms enhance surface runoff and erosion.

Proactive and Prescribed Solutions

Wildfires can cause press (ongoing) and pulse (limited-duration) perturbations in forested watersheds, altering watershed hydrology and surface water quality and, consequently, drinking water treatability. Mitigating wildfire impacts on drinking water safety requires effective, proactive management as well as postdisaster rehabilitation strategies from the forestry and water industries.

Water quality impairment increases exponentially with increasing burn intensity and area burned, so reducing forest fuel loads is critical. From a forestry management perspective, forest thinning and prescribed fire are both well-established, effective fuel reduction techniques. However, the operational costs of thinning are usually high, and the residual foliar litter it produces can cause increased DOM concentrations in source waters.

By comparison, prescribed fire is an economical management practice for reducing forest litter loads and wildfire hazard. These low-severity fires reduce the quantity of DOM (and DBP precursors) potentially released to waterways while not appreciably affecting its composition or treatability in source water [Majidzadeh et al., 2019]. Establishing landscape-scale firebreaks is another effective management strategy, providing defensible corridors within forests to limit the rapid spread of fire and reduce the size of burned areas.

Water utilities, particularly those in fire-prone areas, should develop risk analyses and emergency response plans that combine multiple approaches. Such approaches include identifying alternative source waters, conducting extensive long-term monitoring of postfire source water quality, and modifying treatment processes and operations, for example, by using adsorbents and alternative oxidants that reduce taste and odor problems, remove specific contaminants, and decrease the formation of regulated DBPs.

Other research and preventive efforts should be encouraged as well. These include studies of the fates and effects of fire retardants in source waters, of how postfire rehabilitation practices (e.g., mulching) affect water chemistry, and of the use of different pipeline and construction materials in newly developed housing near the wildland-urban interface. Furthermore, a collaborative system and an effective communication network between the forestry and water industries, linking forest management to municipal water supplies, will be critical in assessing and addressing wildfire impacts on drinking water safety.

Acknowledgments

The fieldwork efforts described in this article were supported by National Science Foundation RAPID Collaborative Proposal 1917156, U.S. EPA grant R835864 (National Priorities: Water Scarcity and Drought), and National Institute of Food and Agriculture grant 2018-67019-27795.

Is Earth’s Albedo Symmetric Between the Hemispheres?

Wed, 08/11/2021 - 14:41

The planetary albedo, the portion of insolation (sunlight) reflected by the planet back to space, is fundamentally important in setting how much the planet warms or cools. Previous studies noted the intriguing feature that, on average, the albedo is essentially identical in the two hemispheres despite their very different surface properties. Building upon earlier studies, Datseris and Stevens [2021] support the hemispheric albedo symmetry using advanced time series analysis techniques and the latest release of the CERES datasets. Because of differences in land-sea fraction, the clear-sky albedo is greater in the northern than in the southern hemisphere. However, this clear-sky albedo asymmetry is offset by an asymmetry in cloudiness, especially over the extratropical oceans. In search of a symmetry-establishing mechanism, the authors analyze the temporal variability and find substantial decadal trends in hemispheric albedo that are identical for both hemispheres. The results hint at a symmetry-enforcing mechanism that operates on large spatiotemporal scales.
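
To make the quantity at issue concrete, the sketch below shows one way to compute area-weighted hemispheric mean albedos from gridded top-of-atmosphere shortwave fluxes such as those in the CERES products. The array names, shapes, and synthetic values are illustrative assumptions, not the authors’ code or data.

import numpy as np

# Sketch: area-weighted hemispheric mean albedo from gridded top-of-atmosphere
# reflected shortwave and incoming solar flux. Names and shapes are illustrative.
def hemispheric_albedo(lat, reflected_sw, incoming_sw):
    """Return (NH, SH) mean albedo, weighting grid cells by cos(latitude).

    lat          : latitudes in degrees, shape (nlat,)
    reflected_sw : reflected shortwave flux, shape (nlat, nlon)
    incoming_sw  : incoming solar flux, shape (nlat, nlon)
    """
    weights = np.cos(np.deg2rad(lat))[:, None]  # grid cell area scales with cos(lat)
    albedos = []
    for mask in (lat >= 0, lat < 0):            # Northern, then Southern Hemisphere
        reflected = np.sum(reflected_sw[mask] * weights[mask])
        incoming = np.sum(incoming_sw[mask] * weights[mask])
        albedos.append(reflected / incoming)    # albedo = reflected / incoming energy
    return tuple(albedos)

# Synthetic check: identical hemispheres yield identical albedos (~0.29, ~0.29).
lat = np.linspace(-89.5, 89.5, 180)
incoming = np.full((180, 360), 340.0)           # W/m^2, illustrative constant insolation
reflected = 0.29 * incoming
print(hemispheric_albedo(lat, reflected, incoming))

Applying the same weighting separately to all-sky and clear-sky fields would, in principle, reproduce the clear-sky asymmetry and its cloud-driven compensation described above.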

Citation: Datseris, G. & Stevens, B. [2021]. Earth’s albedo and its symmetry. AGU Advances, 2, e2021AV000440. https://doi.org/10.1029/2021AV000440

—Sarah Kang, Editor, AGU Advances

Specifically Tailored Action Plans Combat Heat Waves in India

Wed, 08/11/2021 - 11:56

Unprecedented heat waves swept through Canada, the United States, and northern India this year, claiming hundreds of lives. These heat waves are not new: In India, 17,000 deaths have occurred because of heat waves since the 1990s.

A recent global study found that India had the highest burden of mortality associated with high temperatures between 2000 and 2019. In the summer of 2010, temperatures rose to 46.8°C in Ahmedabad, a metropolitan city in the state of Gujarat, causing an excess of 1,300 deaths in just 1 month.

Polash Mukerjee, the lead for Air Quality and Climate Resilience at the Natural Resources Defense Council (NRDC) India Program, said, “The heat wave in 2010 was devastating. People were caught unawares; they didn’t know they were experiencing symptoms of heat. In one hospital, there were 100 neonatal deaths.”

The deaths in Ahmedabad prompted scientists from NRDC; the Indian Institute of Public Health (IIPH), Gandhinagar; the India Meteorological Department (IMD); and officials from the Amdavad Municipal Corporation to develop, in 2013, India’s first heat action plan specifically tailored for a city.

The plan includes early-warning systems, color-coded temperature alerts, community outreach programs, capacity-building networks among government and health professionals to improve preparedness and reduce exposure, and staggered or reduced hours for schools and factories. The Amdavad Municipal Corporation also appointed a nodal officer to coordinate the heat action plan with various agencies.

“The nodal officer sends out alerts: orange (very hot) if it is more than 40°C, red (extremely hot) for more than 45°C. Messages are then sent to the public through various media, to take precautions and not go out. Hospitals are made ready to receive heat stroke cases,” explained Dileep Mavalankar, director of IIPH, Gandhinagar.

A study conducted to assess the effectiveness of the Ahmedabad heat action plan found that an estimated 2,380 deaths were avoided during the summers of 2014 and 2015 as a result of the plan’s implementation.

Built to Scale

In 2015, India decided to scale up heat action plans based on the Ahmedabad model. In 2016, IMD started issuing 3-month, 3-week, and 5-day heat forecasts. India’s National Disaster Management Authority (NDMA) along with IMD and NRDC started developing heat action plans for other cities and states.

Anup Kumar Srivastava, a senior consultant at NDMA, said, “We have developed city-specific threshold assessments based on temperature and mortality data from last 30 years. We analyzed at what temperatures mortalities happened, for example, how many deaths were there at 35°C and at 40°C. Based on this data, we send out area-specific advisory for the temperature forecasted.” He added that of the 23 heat-prone states, 19 have heat action plans in place.

Action plans vary by region. The temperature thresholds for the coastal city of Bhubaneswar, for example, take into account the city’s relative humidity. “Yellow alert is issued at 35.9°C, orange at 41.5°C and red at 43.5°C for Bhubaneswar,” Srivastava said. In contrast, the thresholds for Nagpur, an interior city with an arid climate, are 43°C (orange) and 45°C (red).
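
As a minimal sketch of how such area-specific, color-coded advisories can be encoded, the snippet below maps a forecast temperature to an alert level using the city thresholds quoted above. The data structure and function are hypothetical illustrations, not the actual NDMA or IMD system.

# Sketch: map a forecast temperature to a color-coded heat alert using
# city-specific thresholds (degrees Celsius, as quoted in this article).
# Illustrative only; not the actual NDMA/IMD implementation.
CITY_THRESHOLDS = {
    # city: (alert, threshold temperature), checked from most to least severe
    "Ahmedabad":   [("red", 45.0), ("orange", 40.0)],
    "Bhubaneswar": [("red", 43.5), ("orange", 41.5), ("yellow", 35.9)],
    "Nagpur":      [("red", 45.0), ("orange", 43.0)],
}

def heat_alert(city, forecast_temp_c):
    """Return the alert color for a forecast temperature, or None if below all thresholds."""
    for alert, threshold in CITY_THRESHOLDS.get(city, []):
        if forecast_temp_c >= threshold:
            return alert
    return None

print(heat_alert("Bhubaneswar", 42.0))  # -> "orange"
print(heat_alert("Nagpur", 42.0))       # -> None: below Nagpur's thresholds

An operational plan would also fold in humidity, forecast duration, and local vulnerability factors, as the officials quoted here describe.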

In addition, because of higher humidity, a coastal city feels hotter than a noncoastal city at the same temperature, Mukerjee explained. “So the coastal city heat action plan may include awareness components for heat-related illnesses associated with dehydration, whereas noncoastal cities will focus more on heat exhaustion and heat stroke,” he added.

Current Plans Are Just a Start

Not all experts agree on the efficacy of the heat plans. Abinash Mohanty, program lead for the Council on Energy, Environment and Water, Delhi, who was not involved in developing the heat action plans, said that studies indicate that the frequency of heat waves in India has increased since the 1990s, with a 62% increase in mortality rate.

“The numbers are an indication of the extremities yet to come, and the current limitations of heat action plans (lack of empirical evidence, limited adaptive capacity, and impact-based early warning) make them inadequate to mitigate heat waves,” he said.

Mohanty clarified that only a few cities, like Ahmedabad, have full-fledged heat action plans that identify and characterize climate actions focused on mitigating heat wave impacts. “Many cities lack climatological-led empirical evidence on metrics, such as the number of heat wave days [and] seasonal variability, that are imperative for effective heat wave management,” he explained.

Mohanty added that though the current heat plans are a good starting point, they need to include factors such as the heat wave index, wet-bulb temperature, and updated heat wave definitions across varied geographies.

Kamal Kishore, a member of NDMA, said, “The heat plans are based on vulnerability factors for each city—people working outside (such as farmers, [those working in] open shops, and traffic police), type of dwellings (type of walls and roofs), access to drinking water, nutritional factors, etc.”

He added that preparedness workshops are planned well in advance of the season, and they constantly revise guidelines based on previous years’ lessons.

Mitigative Approaches

Cool roofs, in which light-colored paint reflects sunlight away from the surface of a building, are one way Indian cities are combating heat waves with infrastructure strategies. Credit: Mahila Housing Trust – Natural Resources Defense Council

Mukerjee added that heat action plans are moving from reactionary to more mitigative approaches. These approaches include the cool-roof initiative, which is a low-cost method to reduce indoor temperatures and the corresponding health impact. “Cool-roof paints reflect the sunlight away from the surface of a building. This has potential to benefit the most vulnerable section of society, [such as] migrant workers, women, and children in low-income neighborhoods,” he said.

Thirty-five percent of India’s urban population lives in low-income housing known as slums. These low-rise buildings trap heat under tin roofs, exacerbating the urban heat island effect.

The cities of Bhopal, Surat, and Udaipur have pioneered the use of cool roofs for the past 2 years, Mukerjee said. “Ahmedabad included cool roofs in 2017; the neonatal unit has benefitted. Telangana has a state level policy now wherein all new building plans must include cool roofs.”

Mohanty said that tackling heat waves should be a national imperative and a more robust and granular picture of heat wave impact should be mapped to the productivity of citizens.

He said, “2021 will be remembered as a year of heat wave anomalies ravaging lives and livelihoods across Indian states. Tackling heat waves calls for heat wave–proofed urban planning, revival and restoration of natural ecosystems that act as natural shock absorbers against extreme heat wave events.”

—Deepa Padmanaban (@deepa_padma), Science Writer
