Eos
Science News by AGU

Satellites Allow Scientists to Dive into Milky Seas

Fri, 09/03/2021 - 11:57

For centuries, sailors have reported sightings of large patches of glowing oceans, stretching like snowfields from horizon to horizon. The ephemeral phenomenon, incidents of which can grow larger than some states, has long evaded close examination by scientists. But now, thanks to a little assistance from space, researchers may finally be able to dive into these milky seas.

Milky seas are associated with bioluminescence, light created by living organisms using biochemical triggers. The best-known examples of bioluminescence are short-lived flashes, like those emitted by fireflies. But milky seas last for days or even weeks, a steady glow of light in the dark ocean visible only on moonless nights. Scientists suspect tiny, bioluminescent bacteria are responsible, but because glimpses of milky seas are so fleeting, researchers have had virtually no opportunity to directly examine the phenomenon.

Hunting for milky seas from space in near-real time may change that. Researchers using two NOAA satellites—the Suomi National Polar-orbiting Partnership (NPP) and the Joint Polar Satellite System (JPSS)—have developed the ability to quickly identify milky seas, potentially making it possible to study them before the glow disappears.

“Now we have a way of proactively identifying these candidate areas of milky seas,” said Steve Miller, a senior research scientist at Colorado State University and lead author of the new study, which was published in Scientific Reports. “If we do have assets in the area, the assets could be forward-deployed in a SWAT-team-like response.”

Rapid observations of the fleeting phenomena could help answer several lingering mysteries around milky seas, including how and why they form and why they are so rare.

“We really want to get out to one of these things and sample it and understand the structure,” Miller said.

Turning On the Lights

Milky seas have been described by sailors for more than 200 years. Reports characterize them as having a pale glow, and travel through them is likened to moving across a snowfield or cloud tops. Ships’ propellers create a dark wake as they move through the seas. The glow is so faint that moonlight renders it invisible to the human eye. The unusual waters seem more like the stuff of science fiction than science; indeed, they played a role in the Jules Verne novel Twenty Thousand Leagues Under the Seas.

Scientists have directly sampled the spectacle only once, when R/V Lima chanced upon glowing waters in the Arabian Sea in 1985. Water samples collected by the ship revealed algae covered with the luminous bacterium Vibrio harveyi, leading scientists to hypothesize that milky seas are associated with large collections of organic material.

Small groups of V. harveyi and other similar bacteria lack the faint shimmer found in milky seas. But once the population grows massive enough, the bacteria switch on their luminescence through a process called quorum sensing. Each individual bacterium seeds the water with a chemical secretion known as an autoinducer. Only after these secretions reach a certain concentration do the bacteria begin to glow.

“You know when you see these lights that there are a lot of luminescent bacteria there,” said Kenneth Nealson, who along with Woody Hastings identified the phenomenon in the 1960s and was not a part of the new study. Nealson, a professor emeritus at the University of Southern California, estimated it would take around 10 million bacteria per milliliter of water to turn on the lights.

Gathering so many bacteria in one part of the ocean requires a significant source of food, and scientists suspect the bacteria are feasting on the remains of massive algal blooms. “If you give them something good to eat, they’ll double about every half hour,” Nealson said. “It doesn’t take more than a day for them to have well over 10 million per milliliter.”
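
As a quick check of that arithmetic, here is a minimal sketch in Python. The seed density of 1,000 cells per milliliter is an arbitrary illustrative assumption, not a number from the study; only the doubling time and glow threshold come from Nealson's comments above.

import math

# Back-of-envelope growth estimate: exponential doubling every 30 minutes
# (per Nealson) from an illustrative, hypothetical seed density.
seed_density = 1_000            # cells per milliliter, assumed starting point
target_density = 10_000_000     # ~10 million cells/mL to "turn on the lights"
doubling_time_hours = 0.5       # "double about every half hour"

doublings = math.log2(target_density / seed_density)
hours = doublings * doubling_time_hours
print(f"about {doublings:.1f} doublings, or {hours:.1f} hours")
# Output: about 13.3 doublings, or 6.6 hours -- comfortably under a day,
# consistent with Nealson's estimate.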

Unlike the algal blooms that drive phenomena like red tides, which are thought to drive fish away, milky seas may work to attract fish. Fish eat the bacteria as well as the dying algae, and consumption doesn’t end the bacteria’s life cycle.

“For [the bacteria], the inside of a fish’s stomach is a favorable environment,” said Steve Haddock, a biologist at Monterey Bay Aquarium Research Institute in California and one of the authors of the new research. “They can live inside [a fish’s] stomach just like bacteria live inside our bodies.”

Seas from Space

This isn’t Miller’s first foray into using satellites to hunt for milky seas. After a conversation in which colleagues questioned whether bioluminescent activity could be detected from space, Miller wondered what sort of ocean activity might be visible. He found a report from Lima that listed its coordinates and the date and time of the 3-day-long encounter. Using this information, he hunted through archival data collected by the U.S. Defense Meteorological Satellite Program, a constellation of polar-orbiting satellites surveying Earth in visible and near-infrared light. In 2005, he and Haddock, along with two other researchers, reported the first detection of a milky sea from space.

“It was really difficult to find that milky sea in that older generation of data,” Miller said. He attributed the success to the clear records kept by Lima. “There was no way to pick it out on my own independently.” It turned out the ship had navigated through only a small part of the 15,400-square-kilometer glowing sea, roughly the size of the state of Connecticut.

Encouraged by his success, Miller turned his attention to the newly launched Suomi NPP and its Day/Night Band (DNB) instrument, which breaks down light into gradients. Suomi NPP can sift through lights from cities, wildfires, and the atmospheric glow caused as ultraviolet light breaks apart molecules. Finding the faint light from milky seas required looking for dim seas and pulling out the short-lived events.

“It was a decade of learning,” Miller said of the time spent culling transient events in search of milky seas.

After determining that most historical sightings of the glowing bacteria over the past 200 years occurred in the Indian Ocean and around Indonesia, researchers concentrated their hunt on that region. Moonlit nights were eliminated because they were too bright. Ultimately, Miller and his coauthors identified a dozen milky sea incidents between 2012 and 2021.

The largest milky sea spotted by satellite occurred south of Java in 2019. The DNB detected a dimly lit sea on 26 July and continued to track it until 9 August, when the Moon once again drowned out the bacteria. Imagery confirmed that the luminescent sea spanned more than 100,000 square kilometers. Estimates place the number of bacteria involved in the event as exceeding 10 sextillion (a sextillion is a billion trillion, or 10^21), making it the largest event on record.
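
As a rough plausibility check (back-of-envelope arithmetic of ours, not a calculation from the study): Nealson’s glow threshold of roughly $10^7$ cells per milliliter is $10^{13}$ cells per cubic meter, and a glowing area of $10^5$ square kilometers is $10^{11}$ square meters, so the total count is

\[ N \approx c\,A\,d \approx (10^{13}\ \mathrm{m^{-3}})\,(10^{11}\ \mathrm{m^{2}})\,d = 10^{24}\,d\ \ \mathrm{cells}, \]

where $d$ is the depth of the luminous layer in meters. Even a layer on the order of a centimeter thick ($d \approx 10^{-2}$ m) yields roughly $10^{22}$ cells, consistent with the 10 sextillion estimate.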

“This is just an inconceivable number of bacteria participating in that event,” Haddock said.

Satellite observations also allowed researchers to take stock of the conditions of the ocean when milky seas are present. The new research measured details like water temperature and the amount of chlorophyll present.

“There’s no doubt that there’s a connection between a high level of chlorophyll and milky seas,” Nealson said. “Nobody’s been closer to an answer for the phenomena than [Miller, Haddock, and their colleagues]; they did a really wonderful job.”

Biologist Peter Herring, a retired professor at the National Oceanography Centre in Southampton, U.K., agreed. “Almost all of the information on milky seas up to the 1990s was anecdotal from people on ships,” Herring said. “Now we have remote observations from satellites showing exactly where these phenomena are happening and how they change with time. That’s a major step forward.”

Diving into the Seas

Although satellite imagery is an important tool, Miller hopes that the project will eventually lead to real-time observations. There are a lot of unanswered questions about milky seas, some quite basic. For instance, scientists aren’t sure whether the bacteria form a thin film on the surface or stretch deeper beneath the water. Nor are they certain that algal blooms are the primary food source for the bacteria.

“If you were in the middle of one of these blooms, a lot of the things that we talk about would become obviously right or wrong,” Nealson said. “That’s very unusual in science, that you could get such a clear answer.”

But real-time, in-person study may continue to prove elusive. There are no major ocean facilities near the region where milky seas seem to be most prevalent, and the seas are rife with pirates and other dangers that keep many research vessels away.

Nor have photos or videos ever reliably captured milky seas. The closest attempt was in 2010, when a crew tried to take a photo of the glowing sea using a flash, which promptly washed out the dim phenomenon. Miller hopes more commercial crews can be equipped with cameras specially designed to photograph bioluminescence.

In the meantime, Miller hopes to one day experience the fleeting mystery in person.

“I’ve always wanted to dive into a milky sea and see if it’s still glowing under the surface,” he said.

—Nola Taylor Tillman (@NolaTRedd), Science Writer

Longer Days Likely Boosted Earth’s Early Oxygen

Fri, 09/03/2021 - 11:57

Gregory Dick, a professor of Earth and environmental sciences at the University of Michigan, had just completed a public lecture on a Saturday morning in 2016 when a colleague asked him a question.

Dick and postdoctoral researcher Judith Klatt were studying the role of cyanobacteria in oxygenating Earth’s atmosphere billions of years ago. Klatt, now at the Max Planck Institute for Marine Microbiology, had found that mats of photosynthetic cyanobacteria begin to release oxygen into the water only after a long lag early in the day as they compete with other microbes for sunlight.

Brian Arbic, a Michigan oceanographer who was attending the lecture, asked if the changing length of Earth’s day could have affected photosynthesis and hence the amount of oxygen released into the atmosphere. “I hadn’t ever heard about the changing day length,” recalled Dick. “We got excited about it, and Judith and I started working on the problem.”

Five years later, Klatt, Arbic, Dick, and colleagues reported their results in Nature Geoscience: The increase in the length of the day could have played a key role in allowing cyanobacterial mats to pump oxygen into the air, setting the stage for the development of complex life.

“One of the most enduring questions in the Earth sciences is how Earth became the oxygen-rich planet that we could evolve on,” said Klatt, the lead author of the study. “It is clear that photosynthesis is, and always has been, the only significant source of oxygen on our planet. And oxygenic photosynthesis was ‘invented’ by cyanobacteria.”

Although fossil records indicate that cyanobacteria first appeared on Earth at least 3.5 billion years ago, atmospheric oxygen didn’t begin to appear until about 2.4 billion years ago. Scientists have wondered why there was such a long lag between the two milestones.

Combining Experiments, Modeling

Klatt and her colleagues probed that question through a combination of laboratory experiments and modeling.

For the experiments, they collected samples of bacterial mats from the bottom of Middle Island Sinkhole, a collapsed limestone cave in Lake Huron, about 3 kilometers off the northeastern coast of Michigan’s lower peninsula. Conditions in the sinkhole approximate those of shallow coastal waters all across Earth billions of years ago. Competing layers of microbes jockey for position during the day, with purple oxygen-producing cyanobacteria rising to the top layer during the morning hours.

Similar mats “might have dominated Earth’s coasts for much of its history and were likely the arena for oxygenic photosynthesis,” said Klatt. “Nowadays, we only find microbial mats in ‘extreme’ environments….One such place is the Middle Island Sinkhole.”

“This is one of those rare places that we might call a process analog,” said Woodward Fischer, a professor of geobiology at the California Institute of Technology who wasn’t involved in the research. “They’re not looking at an environment that’s an exact reproduction [of ancient mats], but it’s a place where processes are playing out that remind us of that.”

In Dick’s laboratory, the team exposed the mats to day-night cycles of different lengths, corresponding to the length of Earth’s day at different times in its past.

Judith Klatt scrapes a sample of a Middle Island Sinkhole microbial mat from a sediment core into a jar for study. Credit: Jim Erickson, University of Michigan News

Shortly after the formation of the Moon, more than 4 billion years ago, the day was just 6 hours long. Lunar tides, however, function as a brake, slowing Earth’s rotation and making the days longer. (To balance the books, the Moon moves farther from Earth; it’s currently receding at about 3.8 centimeters per year.)

By 2.4 billion years ago—a time that corresponds to the Great Oxidation Event, the first big pulse of oxygen into the atmosphere—the day had extended to about 21 hours. It stayed at that length (perhaps, Arbic said, because of a counteracting thermal atmospheric tide that was unrelated to the lunar tides) for more than a billion years. At the end of that period, the lunar tides regained supremacy, and the day lengthened again, to about 24 hours. That increase corresponds to the second big jump in atmospheric oxygen, the Neoproterozoic Oxygenation Event, about 600 million years ago.

The Length of the Day Matters

The experiments showed that although total oxygen production by the photosynthetic cyanobacteria was about the same regardless of day length, the physics of diffusion limited the amount of oxygen that entered the water for up to several hours after sunrise. Short days left little time for that process to play out—by the time the oxygen factory had shifted into high gear, the Sun was setting, and it was time to shut down for the night. With a longer day, though, more oxygen diffused into the water.

“The total amount of sunlight is the same whether the day is 16 hours long or 24 hours or whatever,” said Arbic. “It’s just that with a shorter day, you turn it off and on more often. Since there’s a lag in the process, that’s why it matters if you have a longer day.”
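
A toy calculation makes the lag argument concrete. This is a minimal sketch, not the authors’ model; the 4-hour lag, the constant export rate, and the assumption that half of each cycle is lit are all illustrative placeholders.

# Toy model of the diffusion-lag mechanism: net oxygen export begins only
# after a fixed lag following sunrise, so longer days export more oxygen
# even though total sunlight per hour of cycle is identical.
LAG_HOURS = 4.0    # assumed lag before net oxygen release (illustrative)
RATE = 1.0         # arbitrary oxygen units exported per post-lag hour

def export_per_cycle_hour(day_length: float) -> float:
    daylight = day_length / 2                    # half of each cycle is lit
    exported = max(0.0, daylight - LAG_HOURS) * RATE
    return exported / day_length                 # normalize by cycle length

for hours in (12, 18, 21, 24):
    print(f"{hours:>2}-hour day: {export_per_cycle_hour(hours):.3f}")
# Export per hour of day-night cycle rises with day length:
# 0.167 (12 h), 0.278 (18 h), 0.310 (21 h), 0.333 (24 h).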

The researchers then extended their work by developing a general model of mat systems and changing daylight conditions. The model considered the physiology of various mat systems, differing photosynthesis rates, and conditions in the water column, among other factors. “All scenarios showed a dependency of net productivity on day length,” Klatt said.

“The modeling was really exciting because it showed that this mechanism [for producing oxygen] doesn’t have anything to do with the particular behaviors of the organisms in one site versus another or in the modern world versus the ancient world,” said Dick. “We think this is a really robust effect that should operate anywhere you have oxygen being produced in these microbial mats.”

“This is part of the picture, but not the whole picture,” of Earth’s oxygenation history, Fischer said. “This mechanism extends just to mats living on the seafloor, and we don’t have a perfect geological record. But it’s absolutely part of the picture.”

—Damond Benningfield (damond5916@att.net), Science Writer

The Challenges of Forecasting Small, But Mighty, Polar Lows

Fri, 09/03/2021 - 11:53

Sailors in Scandinavian countries have told tales about dangerous encounters with small, intense storms since time immemorial. These maritime storms, known as polar lows, are believed to have claimed many small boats in North Atlantic waters [Rasmussen and Turner, 2003]. In a recent case in October 2001, strong winds associated with a polar low that developed near the Norwegian island of Vannøya capsized a boat, causing the death of one of its two crew members.

Polar lows are not only found in the North Atlantic but also are common in the North Pacific and in the Southern Ocean. In Japan, for example, tragedy struck in December 1986, when strong winds from a polar low caused a train crossing the Amarube Bridge to derail and fall from the tracks onto a factory below, killing six people [Yanase et al., 2016].

Forecasting these systems remains challenging because of their relatively small size, rapid formation, and short duration (most last less than 2 days). However, as global warming and receding sea ice make the Arctic more accessible and increase the vulnerability of coastal populations and ecosystems, it will become increasingly important to forecast these dangerous storms accurately. Studying the effects of a warming climate on where these storms form, as well as on their frequency, lifetime, and intensity, is also vital because this work will help determine which regions will be the most affected by polar lows in the future.

Dangerous High-Latitude Storms

Polar lows are a little-known part of the wider family of polar cyclones, which include polar mesoscale cyclones less than 1,000 kilometers in diameter as well as larger, synoptic-scale cyclones. With diameters between 200 and 1,000 kilometers—and most often 300–400 kilometers—polar lows are a subset of mesoscale cyclones.

The relatively small storms differ from other polar mesoscale cyclones in that they develop over the ocean and are especially intense. Polar lows are often associated with severe weather like heavy snow showers and strong winds that can reach hurricane force. Thus, they sometimes lead to poor visibility, large waves, and snow avalanches in mountainous coastal regions. Changes in meteorological conditions can be abrupt, with winds increasing from breeze to gale force in less than 10 minutes, for example. Such severe weather can force affected countries to close roads and airports.

Polar lows can even cause the formation of rare, extreme storm waves known as rogue waves. One such wave, named the Draupner wave, was observed in the North Sea in 1995 and reached a height of 25.6 meters [Cavaleri et al., 2016].

With their high winds and waves, polar lows threaten many communities and ecosystems with extreme weather as well as potential coastal erosion and effects on ocean primary productivity. They also pose significant risks to marine-based industries, such as fishing and onshore and offshore resource extraction. Roughly 25% of the natural gas and 10% of the oil produced worldwide are produced in the Arctic, and despite the strong influence of fossil fuel use on continuing climate change, interest in further extraction of offshore resources in this region is growing.

In addition, as summer sea ice extent decreases because of climate change, shipping seasons will become longer, and new shipping routes will open up, making the Arctic more accessible and potentially increasing the likelihood of storm-related accidents. The possibility of shipping accidents or other disasters causing oil spills in the Arctic is particularly concerning because the lack of infrastructure in this remote region means that it could take a long time to respond to spills. With so many concerns and at-risk communities, there is a pressing need to improve forecasting of polar lows and other extreme Arctic weather to reduce risk.

Where Do Polar Lows Form?

Polar lows are predominantly a cold season phenomenon, developing near the sea ice edge and the coasts of snow-covered continents during cold air outbreaks, when very cold air over the ice or landmass flows out over the relatively warm ocean.

The area around Tromsø, Norway, seen here, is affected by polar lows during the Northern Hemisphere winter. Credit: Marta Moreno Ibáñez

Southern Hemisphere polar lows, which have received less attention from researchers, develop mainly near the Antarctic sea ice edge, far from human settlements, and they tend to be less intense than their northern counterparts. Northern Hemisphere polar lows develop above about 40°N, thus affecting several Arctic countries. They are more frequent in the North Atlantic than in the North Pacific [Stoll et al., 2018], mainly forming in the Nordic Seas, the Denmark Strait, the Labrador Sea, and Hudson Bay. Every year, some of the polar lows that develop in the Nordic Seas make landfall on the coast of Norway, affecting its coastal population.

In the North Pacific, polar lows primarily form over the Sea of Okhotsk, the Sea of Japan, the Bering Sea, and the Gulf of Alaska. Densely populated areas of Japan are especially vulnerable when marine cold air outbreaks in the Sea of Japan lead to polar lows.

An Elusive Phenomenon

The origins and characteristics of polar lows largely remained a mystery until the beginning of the satellite era in the 1960s. With resolution in atmospheric models being too coarse to capture the storms until relatively recently, satellite infrared images have been key to identifying polar lows. These images have shown that some polar lows are shaped like commas, similar to midlatitude synoptic-scale (i.e., extratropical) cyclones, whereas others are spiraliform like hurricanes (i.e., tropical cyclones; Figure 1).

Fig. 1. These satellite infrared images show (a) a comma-shaped polar low over the Norwegian Sea (which made landfall in Norway), captured by the Advanced Very High Resolution Radiometer, and (b) a polar low with a spiraliform signature over the Barents Sea (which made landfall in Novaya Zemlya, Russia), captured by the Moderate Resolution Imaging Spectroradiometer. The blue outlining represents the coastline. Source: Moreno-Ibáñez et al. [2021], CC BY-NC 4.0

How polar lows develop was long debated among researchers. Some argued that polar lows resembled small versions of synoptic-scale cyclones, which develop because of baroclinic instabilities arising from strong horizontal temperature gradients in the atmosphere. Others claimed they were akin to hurricanes, which intensify as a result of convection and are typically about 500 kilometers in diameter. Today, the research community agrees that development mechanisms of polar lows are complex and include some processes involved in the formation of synoptic-scale cyclones and some involved in hurricane formation. Among these processes are transfers of sensible heat from the ocean surface to the atmosphere through the effects of turbulent air motion, which play roles in the formation and intensification of polar lows.

In general, weather forecasting in polar regions remains challenging because atmospheric models still struggle to correctly represent certain key processes, such as air-sea interactions, in these regions. Because of their small size and short lifetimes, polar lows are particularly hard to forecast compared with larger polar cyclones. Compounding the challenge is the fact that these systems develop over the ocean at high latitudes, where conventional observations (e.g., from surface weather stations, buoys, and aircraft) are scarce.

With the advent of high-resolution nonhydrostatic atmospheric models with grid meshes of less than about 10 kilometers (which started to be implemented for weather forecasting in the 2000s), however, polar low forecasts have improved notably. Unlike models that assume hydrostatic conditions, nonhydrostatic models do not assume balance between the vertical pressure gradient force, which results from the decrease of atmospheric pressure with altitude, and the force of gravity—a balance that does not occur in intense small-scale systems. Compared to coarser models, high-resolution models better represent processes that occur near the surface (e.g., the influence of topography on wind) as well as convection, which play important roles in polar low development. Moreover, high-resolution models can better resolve the structure of polar lows (e.g., strong wind gradients).
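
For reference (our notation, summarizing the description above), the hydrostatic balance that such models relax can be written as

\[ \frac{\partial p}{\partial z} = -\rho g, \]

where $p$ is atmospheric pressure, $z$ is altitude, $\rho$ is air density, and $g$ is gravitational acceleration. Rather than assuming this balance, nonhydrostatic models retain the vertical acceleration term in the vertical momentum equation, which matters in intense small-scale systems such as polar lows.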

Nevertheless, model improvements are still needed to forecast the trajectories and intensities of polar lows accurately [Moreno-Ibáñez et al., 2021]. For instance, the parameterization of turbulence is based on approximations that are not valid at the kilometer scale. In addition, more conventional observations of atmospheric variables at high latitudes, such as winds and temperatures at different levels of the atmosphere, are required to improve the initial conditions fed into the models.

Several major scientific questions about these storms also remain unanswered: What are the best objective criteria (e.g., size, intensity, lifetime) for identifying and tracking polar lows using storm tracking algorithms? What is the main trigger for polar low development? And, most intriguing, what is the role of polar lows in the climate system?

Actors in the Climate System

Little is known about how polar lows contribute to Earth’s climate system. A few studies have analyzed the effects of polar lows on the ocean, but results so far are inconclusive. On the one hand, the large sensible heat fluxes—which can reach more than 1,000 watts per square meter—from the ocean surface to the atmosphere that favor the development of these cyclones lead to cooling of the ocean surface [e.g., Føre and Nordeng, 2012]. On the other hand, the strong winds of polar lows induce upper-ocean mixing, which can warm the ocean surface in regions where sea surface temperatures are colder than underlying waters [Wu, 2021].

The overall warming or cooling effect of polar lows on the ocean surface may influence the formation rate of deep water, a major component of Earth’s global ocean circulatory system. In one study, researchers found that polar mesoscale cyclones intensify ocean convection and extend it to greater depths [Condron and Renfrew, 2013]. However, this study used only a coupled ocean–sea ice model, relying on a parameterization to represent the effects (e.g., winds) of polar mesoscale cyclones in the ocean-ice model rather than explicitly resolving the cyclones. Therefore, the interactions between the ocean and the atmosphere, which are relevant for deepwater formation, were not represented. This tantalizing, but hardly definitive, result highlights the need for further study of polar lows’ interactions with the ocean and climate.

Polar Lows in a Warmer Climate

The continuing decreases in Arctic sea ice extent and snow cover on land projected to occur with global warming, as well as increases in sea surface temperatures, will undoubtedly affect the climatology of polar lows. In the North Atlantic, polar lows have been projected to decrease in frequency, and the regions where they develop are expected to shift northward as sea ice retreats [Romero and Emanuel, 2017]. This shift means that newly opened Arctic shipping routes will not be spared from these storms.

We do not know yet what will happen in other regions because research investigating climate change impacts on the frequency, lifetime, intensity, and genesis areas of polar lows is still at an incipient stage. The few studies conducted so far have used dynamical or statistical downscaling methods to produce high-resolution information about the relatively small, localized phenomenon of polar lows from low-resolution data (e.g., from global climate models)—approaches that require far fewer computational resources than performing global, high-resolution climate simulations.

Unfortunately, current coarse-grained global climate models cannot resolve small-scale phenomena like polar lows. The typical resolution of the models included in the Coupled Model Intercomparison Project Phase 5 (CMIP5), endorsed by the World Climate Research Programme in 2008, was 150 kilometers for the atmosphere and 1° (i.e., 111 kilometers at the equator) for the ocean. As part of CMIP6, a High Resolution Model Intercomparison Project has been developed [Haarsma et al., 2016], including models with grid meshes ranging from 25 to 50 kilometers for the atmosphere and 10 to 25 kilometers for the ocean. These resolutions are fine enough to enable study of some mesoscale eddies in the atmosphere and the ocean [Hewitt et al., 2020], and important weather phenomena, such as tropical cyclones, can also be simulated [e.g., Roberts et al., 2020].

Nevertheless, atmospheric models at this resolution are still too coarse to resolve most polar lows. Moreover, the resolution of these ocean models is not high enough to resolve mesoscale eddies that develop poleward of about 50° latitude [Hewitt et al., 2020], so some mesoscale air-sea interactions cannot be adequately represented. Mesoscale air-sea interactions also affect sea ice formation, which influences where polar lows form. The recent Intergovernmental Panel on Climate Change report indicates that there is low confidence in projections of future regional evolution of sea ice from CMIP6 models.

Interdisciplinary Research Needed

Considering the interactions among the atmosphere, ocean, and sea ice involved in polar low development, the importance of interdisciplinary collaboration in polar low research cannot be overstated. Close cooperation among atmospheric scientists, oceanographers, and sea ice scientists is needed to enable a complete understanding of polar lows and their role in the climate system.

Improving forecasts and longer-term projections of polar lows requires coupling of high-resolution atmosphere, ocean, and sea ice models. High-resolution coupled model forecasts of polar lows are already practicable. With continuing increases in computational capabilities, it may become feasible to use coupled high-resolution regional climate models and variable-resolution global climate models to better study how polar low activity may change in a warming climate and the impact of polar lows on ocean circulation. Such interdisciplinary research will also help us better anticipate and avoid damaging effects of these small, but mighty, polar storms on people and productivity.

Acknowledgments

The author thanks René Laprise and Philippe Gachon, both affiliated with the Centre for the Study and Simulation of Regional-Scale Climate, University of Quebec in Montreal, for their constructive comments, which helped improve this article.

Telling the Stories Behind the Science

Thu, 09/02/2021 - 12:30

AGU journals and books have captured research in Earth and space science for over a century, providing a documented record of scientific discovery. There is another history, however, which has not been as well documented: the stories of how that scientific research was accomplished. These are the stories that might be told in a department coffee room or recounted after-hours at a scientific meeting, often passed down informally from one generation of scientists to the next.

AGU launched a new journal, Perspectives of Earth and Space Scientists, to capture these stories. Perspectives is a collection of memoirs, essays, and insights by AGU Fellows and other invited authors reflecting on important scientific discoveries, advances, and events in Earth and space science, focusing on the process of scientific discovery. All articles are open access and are intended to be read and understood by the wider geosciences community and the science-interested public, both as documentation of our fields’ history and as inspiration for future scientists.

Scientific journals tend to record the what of the research and often skip over the why or how, but these are often the most important aspects for young scientists. These stories address how challenges were met, how obstacles were overcome, and how funding was obtained. Research papers tend to avoid an author’s personality, except obliquely, but Perspectives stories revel in the personalities, revealing how deals were made, alliances forged, and, sometimes, conflicts resolved. This is often what young scientists want and need to know to succeed in their research.

Although Perspectives articles do not focus solely on the scientific research, they are still very clearly scientific articles, and as such they cite past research, contain reference lists, are peer-reviewed, and are themselves citable publications.

Because Perspectives articles contain a balance of both scientific and personal history, blended in an engaging story, authors often find these articles more difficult to write than a more traditional paper. As we have all encountered, good storytelling is a challenging art form, often mastered only after years of practice. A paper may therefore go through a few rounds of revision before its story becomes sufficiently impactful.

I don’t use the word “story” casually. Humans evolved to learn effectively through the telling of stories; cultural memory was passed down within societies through storytelling for thousands of years before the written word was invented. In the most recent pedagogical advances in science education, as exemplified by the phenomenon-based learning of the K-12 Next Generation Science Standards, student understanding is best attained when the science is structured around engaging storylines that address relevant and observable phenomena.

In the context of Perspectives, a good story is not a biography or Wikipedia entry or prosaic retelling of one’s CV. A story needs a thesis, a plotline, and usually some degree of character development (i.e., of the author). This story should be truthful and fair and have a takeaway message that can be of use to future scientists. Beyond that, however, there are many different approaches that an author can take. How were you drawn to your chosen field? Were there critical events or turning points in your career? What obstacles did you overcome? How did a particular research field or scientific program evolve? What are some highlights and reflections on the current status of your field, and where do you think it is going? Some degree of humor is often welcome but not required. Humility is always a necessity. The articles published thus far span a wide range of formats.

The first round of Perspectives articles was solicited from AGU Fellows, but we now seek further submissions from a broad diversity of author identities, backgrounds, and career pathways to capture the full diversity of current AGU membership and to inspire researchers from all backgrounds.

Articles are published by invitation only, but we welcome proposals. If you feel that you have a personal scientific story to document and disseminate, please send an article proposal for consideration by the Editorial Board. There are no charges for publishing articles in Perspectives, and all articles are published with an open access license.

—Michael Wysession (mwysession@wustl.edu; 0000-0003-4711-3443), Editor in Chief, Perspectives of Earth and Space Scientists, and Department of Earth and Planetary Sciences, Washington University in St. Louis

How the “Best Accidental Climate Treaty” Stopped Runaway Climate Change

Thu, 09/02/2021 - 12:29

The international treaty that phased out the production of ozone-depleting chemicals has prevented between 0.65°C and 1°C of global warming, according to research.

The study also showed that carbon stored in vegetation through photosynthesis would have dropped by 30% without the treaty, which came into force in 1989.

Researchers from the United Kingdom, New Zealand, and the United States wrote in Nature that the Montreal Protocol was essential in protecting carbon stored in plants. Studies in the polar regions have shown that high-energy ultraviolet-B (UVB) radiation reduces plant biomass and damages DNA. Forests and soil currently absorb 30% of human carbon dioxide emissions.

“At the ends of our simulations, which we finished around 2100, the amount of carbon which is being taken up by plants is 15% [of] the value of our control world where the Montreal Protocol is enacted,” said lead author and atmospheric scientist Paul Young of Lancaster University.

In the simulation, the UVB radiation is so intense that plants in the midlatitudes stop taking up a net increase in carbon.

Plants in the tropics fare better, but humid forests would have 60% less ozone overhead than before, a state much worse than was ever observed in the Antarctic ozone hole.

A “World Avoided”

The study used a chemistry climate model, a weather-generating tool, a land surface model, and a carbon cycling model. It is the first to link ozone loss with declines in the plant carbon sink.

Chlorofluorocarbons (CFCs), ozone-depleting chemicals phased out by the Montreal Protocol, are potent greenhouse gases. The study estimated that CFCs would warm the planet an additional 1.7°C by 2100. Taken together, the damage from UVB radiation and the greenhouse effect of CFCs would add an additional 2.5°C warming by the century’s end. Today, the world has warmed, on average, 1.1°C at the surface, leading to more frequent droughts, heat waves, and extreme precipitation.

Carbon dioxide levels in the atmosphere would also reach 827 parts per million by the end of the century, roughly double today’s level (~412 parts per million).

The work analyzed three different scenarios: The first assumes that ozone-depleting substances stayed at 1960 levels, before massive production kicked in. The second assumes that ozone-depleting chemicals peaked in the late 1980s before tapering off. The last assumes that ozone-depleting chemicals increase in the atmosphere every year by 3% through 2100.

The last scenario, called the “World Avoided,” assumes not only that the Montreal Protocol never happened but also that humans had no idea CFCs were harming ozone, even when the effects would become clear in the 2040s. The models also assume one kind of UVB damage to all vegetation, when in reality, plants react differently.

“Change Is Possible”

The ozone layer over Antarctica has stabilized and is expected to recover this century. Credit: Amy Moran/NASA Goddard Space Flight Center

“The Montreal Protocol is regarded as one of the most successful global environmental treaties,” said University of Leeds atmospheric scientist Martyn Chipperfield, who was not involved in the research. “CFCs and other ozone-depleting substances are potent greenhouse gases, and the Montreal Protocol is known for having real benefits in addressing climate change by removing previous levels of high CFCs from the atmosphere.”

The Kigali Amendment to the Montreal Protocol in 2016 brought climate change to the forefront. Countries agreed to gradually phase out hydrofluorocarbons (HFCs), which are used in applications such as air conditioning and fire extinguishing systems. HFCs originally replaced hydrochlorofluorocarbons (HCFCs) and CFCs because they do not harm ozone. Yet HFCs are potent greenhouse gases.

The Montreal Protocol was the “best accidental climate treaty,” said Young. “It is an example of where science discovered there was a problem, and the world acted on that problem.”

Injecting sulfate aerosols into the stratosphere has been proposed as one geoengineering solution to slow global warming. “People are seriously talking about this because it’s one of the most plausible geoengineering mechanisms, yet that does destroy ozone,” Young said. Calculating the harm to the carbon cycle is “the obvious follow-up experiment for us.”

The research highlights the importance of the U.N. Climate Change Conference of the Parties (COP26) this fall, which will determine the success of worldwide climate targets.

Immediate and rapid reductions in greenhouse gases are necessary to stop the most damaging consequences of climate change, according to the Intergovernmental Panel on Climate Change.

—Jenessa Duncombe (@jrdscience), Staff Writer

Heat Pumps Can Lower Home Emissions, but Not Everywhere

Thu, 09/02/2021 - 12:25

In 1855, engineer Peter von Rittinger was concerned with salt production. He was building a device that could evaporate water from brine more efficiently than available methods. Later iterations of this device, the heat pump, would become tools to slow climate change. Today heat pumps aim to replace a home’s in situ oil or gas consumption with cleaner electricity use.

Researchers recently found that wider installation of residential heat pumps for space heating could lower greenhouse gas emissions. The results, published in Environmental Research Letters, showed that heat pumps would reduce emissions for two thirds of households and financially benefit a third of U.S. homeowners.

But only around 10% of homes use heat pumps, which pump heat out of the house in summer and into the house during winter. “The majority of heating in buildings, as well as hot water and cooking, relies on fossil fuels burned on site,” said Michael Waite, associate research scientist at Columbia University who was not involved in the new study. To reduce emissions, homeowners need to replace such heating systems. “The only direct way of doing that is through electrification of those uses,” said Waite.

Pros and Cons

But wide-scale heat pump adoption may have unintended, undesirable consequences. Thomas Deetjen, a research associate at the University of Texas at Austin, and his coauthors wanted to see which circumstances make heat pumps a wise choice for homeowners and society.

Using tools from the National Renewable Energy Laboratory (NREL), they simulated outcomes of widespread heat pump adoption. They modeled 400 locally representative single-family homes in each of 55 cities. To model the electric grid, the researchers assumed moderate decarbonization of the grid (a 45% decline in emissions over the 15-year lifetime of a heat pump).

Researchers evaluated effects on homeowners, comparing costs of heat pump installation to energy cost savings. They also analyzed changes in carbon dioxide emissions and air pollutants, putting a dollar amount to climate and health damages. Climate damages included costs associated with climate change–driven natural hazards such as flooding and wildfire. Health damages included premature deaths due to air pollution.

“The key finding is that for around a third of the single-family homes in the U.S., if you installed the heat pump, you would reduce environmental and health damages,” said Parth Vaishnav, an assistant professor at the School for Environment and Sustainability at the University of Michigan and a coauthor of the paper. Installing heat pumps would avoid $600 million in health damages and $1.7 billion in climate damages each year. It would also directly save homeowners money on energy costs. They also found that for all homes, assuming moderate electric grid decarbonization, heat pump use cut greenhouse gas emissions.

But heat pump installation did have other consequences. “Heat pumps are not necessarily a silver bullet for every house,” said Deetjen.

Although homeowners may trade a furnace for a heat pump, for example, the electricity for that pump could still come from a plant burning fossil fuels. The cost of generating electricity may be more than the cost of in situ fossil fuel use. “There are some houses that if they get a heat pump, it’s actually worse for the public,” said Deetjen. “They end up creating more pollution.”
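
The trade-off Deetjen describes comes down to simple arithmetic. Below is a minimal sketch with placeholder numbers (none taken from the study): an on-site gas furnace versus a heat pump supplied by a partly fossil-fueled grid.

# Illustrative comparison (all values are assumptions, not from the study):
# annual CO2 from an on-site gas furnace vs. a heat pump on the grid.
HEAT_DEMAND_KWH = 10_000      # thermal kWh of space heating per year

FURNACE_EFFICIENCY = 0.95     # delivered heat per unit of fuel energy
GAS_INTENSITY = 0.20          # kg CO2 per kWh of natural gas burned

HEAT_PUMP_COP = 3.0           # thermal kWh delivered per electric kWh
GRID_INTENSITY = 0.40         # kg CO2 per kWh of electricity (varies widely)

furnace_kg = HEAT_DEMAND_KWH / FURNACE_EFFICIENCY * GAS_INTENSITY
heat_pump_kg = HEAT_DEMAND_KWH / HEAT_PUMP_COP * GRID_INTENSITY
print(f"furnace: {furnace_kg:,.0f} kg CO2, heat pump: {heat_pump_kg:,.0f} kg CO2")
# ~2,105 vs. ~1,333 kg here, but a dirtier grid (higher GRID_INTENSITY)
# combined with a lower cold-climate COP can flip the comparison.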

Heat pump benefits also depend on climate. Heat pumps operate less efficiently in the cold, running up electricity costs. In 24 of the studied cities, mostly in colder climates, peak residential electricity demand increased by over 100% if all houses adopted heat pumps, which would require grid upgrades.

“It could be challenging to meet that increase of winter peaking, because our system is not built that way,” said Ella Zhou, a senior modeling engineer at NREL not involved with this study. “We need to think about both the planning and operation of the grid system in a more integrated fashion with future use.”

Consequences of Widespread Electrification

The new research supports converting 32% of single-family homes to heat pumps. More widespread adoption came at much higher financial and health costs. If all U.S. houses adopted heat pumps, the study said, it would yield $6.4 billion in climate benefits. However, it would also cost homeowners $26.7 billion, and pollutants from increased electricity generation would result in $4.9 billion in health damages from illnesses and premature deaths.

There is some uncertainty surrounding these findings. The study didn’t consider the cost of potential grid upgrades or what complete decarbonization would mean for heat pump adoption. Waite pointed out that as the grid evolves, future research should also determine whether renewable energy could even meet the demands of high electricity loads.

—Jackie Rocheleau (@JackieRocheleau), Science Writer

When Deep Learning Meets Geophysics

Wed, 09/01/2021 - 14:06

As artificial intelligence (AI) continues to develop, geoscientists are interested in how new AI developments could contribute to geophysical discoveries. A new article in Reviews of Geophysics examines one popular AI technique, deep learning (DL). We asked the authors some questions about the connection between deep learning and the geosciences.

How would you describe “deep learning” in simple terms to a non-expert?

Deep learning (DL) optimizes the parameters in a system, a so-called “neural network,” by feeding it a large amount of training data. “Deep” means the system consists of a structure with multiple layers.

DL can be understood from different angles. In terms of biology, DL is a bionic approach imitating the neurons in the human brain; a computer can learn knowledge as well as draw inferences like a human. In terms of mathematics, DL is a high-dimensional nonlinear optimization problem; DL constructs a mapping from the input samples to the output labels. In terms of information science, DL extracts useful information from a large set of redundant data.

How can deep learning be used by the geophysical community?

Deep learning–based geophysical applications. Credit: Yu and Ma [2021], Figure 4a

DL has the potential to be applied to most areas of geophysics. By providing a large database, you can train a DL architecture to perform geophysical inference. Take earthquake science as an example. The historical records of seismic stations contain useful information such as the waveforms of an earthquake and corresponding locations. Therefore, the waveforms and locations serve as the input and output of a neural network. The parameters in the neural network are optimized to minimize the mismatch between the output of the neural network and the true locations. Then the trained neural network can predict the locations of new seismic events. DL can be used in other fields in a similar manner.
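
A minimal sketch of that training loop, written in PyTorch, might look like the following. The synthetic data, network shape, and hyperparameters are illustrative assumptions, not details from the review; a real pipeline needs real seismograms, preprocessing, and a far larger network.

# Sketch of the waveform-to-location training described above.
import torch
import torch.nn as nn

waveforms = torch.randn(256, 3000)   # 256 synthetic records, 3000 samples each
locations = torch.randn(256, 3)      # normalized lat, lon, depth labels

model = nn.Sequential(
    nn.Linear(3000, 128),
    nn.ReLU(),
    nn.Linear(128, 3),               # map waveform features to a location
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()               # mismatch between output and true location

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), locations)
    loss.backward()                  # gradients of the mismatch
    optimizer.step()                 # adjust parameters to reduce it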

What advantages does deep learning have over traditional methods in geophysical data processing and analysis?

Traditional methods suffer from inaccurate modeling and computational bottlenecks with large-scale and complex geophysical systems; DL could help solve this. First, DL can handle big data naturally, where traditional methods face a computational burden. Second, DL can utilize historical data and experience that are usually not considered in traditional methods. Third, an accurate description of the physical model is not required, which is useful when the physical model is only partially known. Fourth, DL can provide high computational efficiency after training is complete, thus enabling the characterization of Earth with high resolution. Fifth, DL can be used for discovering physical concepts, such as the fact that the solar system is heliocentric, and may even provide discoveries that are not yet known.

In your opinion, what are some of the most exciting opportunities for deep learning applications in geophysics?

DL has already provided some surprising results in geophysics. For instance, on the Stanford earthquake data set, the earthquake detection accuracy improved to 100 percent compared to 91 percent accuracy with the traditional method.

In our review article, we suggest a roadmap for applying DL to different geophysical tasks, divided into three levels:

1. Traditional methods are time-consuming and require intensive human labor and expert knowledge, such as in first-arrival selection and velocity selection in exploration geophysics.
2. Traditional methods have difficulties and bottlenecks. For example, geophysical inversion requires good initial values and high-accuracy modeling and suffers from local minima.
3. Traditional methods cannot handle some cases, such as multimodal data fusion and inversion.

What are some difficulties in applying deep learning in the geophysical community?

Despite the success of DL in some geophysical applications, such as earthquake detectors or pickers, its use as a tool for most practical geophysics is still in its infancy.

The main difficulties include a shortage of training samples, low signal-to-noise ratios, and strong nonlinearity. The lack of training samples in geophysical applications compared to those in other industries is the most critical of these challenges. Though the volume of geophysical data is large, available labels are scarce. Also, in certain geophysical fields, such as exploration geophysics, the data are not shared among companies. Further, geophysical tasks are usually much more difficult than those in computer vision.

What are potential future directions for research involving deep learning in geophysics?

Future trends for applying deep learning in geophysics. Credit: Yu and Ma [2021], Figure 4b

In terms of DL approaches, several advanced DL methods may overcome the difficulties of applying DL in geophysics, such as semi-supervised and unsupervised learning, transfer learning, multimodal DL, federated learning, and active learning. For example, in practical geophysical applications, obtaining labels for a large data set is time-consuming and can even be infeasible. Therefore, semi-supervised or unsupervised learning is required to relieve the dependence on labels.

We would like to see DL research in geophysics focus on cases that traditional methods cannot handle, such as simulating the atmosphere or imaging Earth’s interior at large spatial and temporal scales with high resolution.

—Jianwei Ma (jwm@pku.edu.cn,  0000-0002-9803-0763), Peking University, China; and Siwei Yu, Harbin Institute of Technology, China

Forecast: 8 Million Energy Jobs Created by Meeting Paris Agreement

Wed, 09/01/2021 - 14:05

Opponents of climate policy say curbing fossil fuel emissions will kill jobs, but a new study showed that switching to renewables would actually create more jobs than a fossil fuel–heavy future will. The tricky part will be ensuring that laid-off workers have access to alternative employment.

Globally, jobs in the energy sector are projected to increase from 18 million today to 26 million in 2050 if the world cuts carbon to meet the well-below 2°C target set by the Paris Agreement, according to a model created by researchers in Canada and Europe. Renewables will make up 84% of energy jobs in 2050, primarily in wind and solar manufacturing. The new study was published earlier this summer in One Earth.

In contrast, if we don’t limit global warming to below 2°C, 5 million fewer energy jobs will be created.

The Future Is Looking Bright for Solar and Wind

The Intergovernmental Panel on Climate Change’s latest physical science assessment predicted that the climate will be 1.5°C warmer than preindustrial levels by the 2030s unless there are strong, rapid cuts to greenhouse gases in the coming decades. Such cuts will necessitate a greater reliance on sustainable energy sources.

In 2020, renewables and nuclear energy supplied less than a quarter of global energy, according to BP’s 2021 report.

This number is expected to rise, however, in part because solar photovoltaics and wind are now cheaper than fossil fuels per megawatt-hour and because many countries have set aggressive emissions-cutting goals.

According to the new study, many regions will gain energy jobs in the transition, including countries throughout Asia (except for China), North Africa, and the Middle East, as well as the United States and Brazil. Although fossil fuel extraction jobs will largely disappear, “massive deployment of renewables leads to an overall rise in jobs,” wrote the authors.

But not all countries will be so lucky: Fossil fuel–rich China, Australia, Canada, Mexico, South Africa, and sub-Saharan African countries will likely lose jobs overall.

Only jobs directly associated with energy industries, such as construction or maintenance, were included in the study. Other reports have also counted adjacent or induced jobs, such as fuel transport, government oversight, and service industry work.

Previous studies estimated a larger increase in energy jobs, using numbers compiled from the Organisation for Economic Co-operation and Development.

The new study instead compiled data from primary sources by mining fossil fuel company reports, trade union documents, government reports, national databases, and other sources that cover 50 countries representing all major players in fossil fuels and renewables. Lead study author Sandeep Pai ran the numbers through an integrated assessment model housed at the European Institute on Economics and the Environment. The model calculates job growth projections under different climate policies and social and economic factors. Pai is a lead researcher at the Just Transition Initiative supported by the nonprofit policy research organization the Center for Strategic and International Studies and the Climate Investment Funds.

Calls for Just Transitions

Crucially, the study found that nearly 8 million of the 26 million jobs (31%) in 2050 are “up for grabs,” said study author Johannes Emmerling, a scientist at the European Institute on Economics and the Environment.

These jobs in renewable manufacturing aren’t tied to a particular location, unlike coal mining.

Pai concurred. “Any country with the right policies and incentives has the opportunity to attract between 3 [million and] 8 million manufacturing jobs in the future.”

Recently, countries have begun putting billions of dollars toward “just transition,” a loose framework describing initiatives that, among other things, seek to minimize harm to workers in the fossil fuel industry. Concerns include salary loss, local revenue, and labor exploitation.

What could be done? Just transition projects may include employing fossil fuel workers to rehabilitate old coal mines or orphan oil wells, funding community colleges to train workers with new skills, supporting social services like substance abuse centers, and incentivizing local manufacturing.

“The just transition aspect is quite critical,” Pai said. “If [countries] don’t do that, this energy transition will be delayed.”

LUT University energy scientist Manish Thulasi Ram, who was not involved in the study, thinks the latest research underestimates the job potential of the energy transition. Using a different approach, Ram forecasts that close to 10 million jobs could be created from battery storage alone by 2050—a sector not considered in the latest analysis.

—Jenessa Duncombe (@jrdscience), Staff Writer

Does the Priming Effect Happen Underwater? It’s Complicated

Wed, 09/01/2021 - 14:03

In microbiology, the priming effect is the observation that the decomposition rate of organic material is often altered by the introduction of fresh organic matter. Depending on the context, the effect can manifest as an increase or a reduction in microbial consumption, with a corresponding change in emitted carbon dioxide.

Although the mechanism isn’t fully understood, several contributing processes have been proposed. They include the shift of some specialist microbes to the consumption of only fresh or only older material, as well as increased decomposition of stable (older) matter in search of specific nutrients needed to sustain growth enabled by the addition of fresh material.

The priming effect has been well established in terrestrial soils, but experimental evidence has appeared more mixed in aquatic environments. Both the magnitude and the direction (i.e., increase versus decrease) of the effect have been contradictory in a variety of studies conducted in the laboratory and the field.

Sanches et al. performed a meta-analysis of the literature in an attempt to resolve these difficulties. The authors identified 36 prior studies that published a total of 877 results matching their experimental criteria. Of the subset that directly estimated priming, about two thirds concluded that there was no priming effect, with the majority of the remainder indicating an acceleration in decomposition. However, these past studies used a wide variety of metrics and thresholds to define the priming effect. Many others did not directly calculate the magnitude of the effect.

To overcome the range of methodologies, the researchers defined a consistent priming effect metric that can be calculated from the reported data. With this metric, they found support for the existence of a positive priming effect. Namely, the addition of new organic material increases decomposition on average by 54%, with a 95% confidence interval of 23%–92%. They attribute this divergence from the aggregated conclusions described above to a significantly larger data set (because they could calculate their metric even when the original authors did not), which enabled increased statistical significance.
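
The study’s exact formulation isn’t reproduced here, but a common way to express priming is the percent change in decomposition of preexisting organic matter between an amended treatment and a control. The short Python sketch below illustrates that calculation; the function and the incubation numbers are hypothetical, chosen only to show how a 54% positive effect would arise.

```python
# Minimal sketch of a priming effect metric, assuming the common
# formulation: percent change in decomposition of the *preexisting*
# organic matter when fresh material is added, relative to a control.
# All numbers here are hypothetical, not data from the study.

def priming_effect(decomp_amended, decomp_control):
    """Percent change in decomposition of preexisting organic matter.

    decomp_amended: decomposition of old material in the treatment
        with fresh organic matter added (e.g., mg CO2-C respired)
    decomp_control: decomposition of old material in the control

    Positive values indicate accelerated decomposition (positive
    priming); negative values indicate suppression (negative priming).
    """
    return 100.0 * (decomp_amended - decomp_control) / decomp_control

print(priming_effect(15.4, 10.0))  # -> 54.0, a positive priming effect
print(priming_effect(8.0, 10.0))   # -> -20.0, a negative priming effect
```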

The meta-analysis also indicated which experimental factors were most correlated with an observed priming effect. Key factors included the proxy chosen for microbial activity and the addition of other nutrients, such as nitrogen or phosphorus. Finally, the authors noted that other recent meta-analyses using differing methodologies have reported no priming effect; they concluded that the umbrella term “priming effect” may be better split into several terms describing related, but distinct, processes. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2020JG006201, 2021)

—Morgan Rehnberg, Science Writer

Simulating 195 Million Years of Mesozoic Global Climate

Tue, 08/31/2021 - 13:44

This is an authorized translation of an Eos article.

The Mesozoic, spanning roughly 252 million to 66 million years ago, was a pivotal period in Earth’s history. In addition to being the age of dinosaurs, it was when the supercontinent Pangea began breaking apart into the fragmented continents familiar today. Together with rising carbon dioxide levels and solar irradiation, these tectonic changes shaped the global climate, producing warm, humid greenhouse conditions. A detailed understanding of what drove Mesozoic climate trends offers insight into Earth’s history and also helps scientists study the consequences of human-caused warming.

One way to study past climates is with numerical models. In a new study, Landwehrs et al. ran ensemble climate simulations of the period from 255 million to 60 million years ago at 5-million-year intervals. They adjusted specific parameters in different runs to dissect the sensitivity of past climates to paleogeography, atmospheric carbon dioxide levels, sea level, vegetation patterns, solar energy output, and variations in Earth’s orbit.

The authors found that Mesozoic global mean temperatures were generally higher than preindustrial values. They also observed a warming trend driven by increasing solar luminosity and rising sea levels. Ocean areas generally reflect less solar radiation than land; accordingly, the researchers found that rising sea levels and the flooding of continental areas coincided with higher global mean temperatures. Superimposed on this overall trend, fluctuations in atmospheric carbon dioxide produced warm and cold anomalies in the global mean temperature. The authors note that this finding does not mean human-caused global warming can be dismissed; modern climate change is happening much faster than the changes in Earth’s past.

The ensemble of climate simulations also offers insight into other aspects of long-term Mesozoic climate change. Overall, the authors identified a transition from a strongly seasonal, arid Pangean climate to a more balanced, humid climate. To aid further analysis of Mesozoic climate trends, the authors have shared their model data online.

—Jack Lee, Science Writer

This translation was made by Wiley.


Lake Erie Sediments: All Dredged Up with Nowhere to Grow

Tue, 08/31/2021 - 13:09

In 2014, a Lake Erie algal bloom sent a cloud of toxins into Toledo, Ohio’s water supply, forcing the city to shut down water service to 400,000 residents. Like many lakes in agricultural areas, Lake Erie produces thick, smelly algae mats when the water gets warm. Temperatures above 60°F can trigger algal blooms, and Lake Erie—shallowest and warmest of the Great Lakes—hit nearly 80°F in 2020. In addition, the lake is the final destination for fertilizers washing off of area farms—that’s a recipe for excess photosynthesis.

“You’re cooking a perfect soup for having a very productive lake,” said Angélica Vázquez-Ortega, an assistant professor of geochemistry at Bowling Green State University.

Whereas fertilizers are a source of Lake Erie’s annual algae issue, research from Vázquez-Ortega’s lab suggests agriculture could be a partial solution, too. Instead of applying more fertilizers upstream, farmers could remove nutrients from the lake by mixing Lake Erie sediment into their soils. The research is especially timely as a new law leaves millions of tons of sediment piling up at Ohio’s ports.

Bright green algae lights up western Lake Erie near Toledo, Ohio. Credit: Joshua Stevens, NASA Earth Observatory

Hydrologic History

The research is rooted in northeastern Ohio, a region that formerly boasted a 4,000-square-kilometer marsh dubbed the Great Black Swamp. The swamp “was the kidneys of the area, filtering the nutrients and making sure the water Lake Erie is receiving was clean,” said Vázquez-Ortega.

Colonizers sick of the knee-deep mud and clouds of mosquitoes gradually drained the area in the mid-1800s, easing navigation but increasing the export of sediments from the land. The watershed is now over 72% agricultural.

Nutrients like nitrogen and phosphorus naturally enter lakes through sediment export, but farm practices—like draining and fertilizing fields—accelerate the process. Once nutrients enter Lake Erie, they tend to stay there, eventually accumulating in sediments on the lake floor.

Those sediments require annual dredging to keep ports viable. Ohio dredges 1.5 million tons of sediment from its eight ports each year, and the Port of Toledo accounts for more than half that figure. Until recently, Toledo would dump dredged sediment into open water, a common practice that reintroduces phosphorus and nitrogen into the water column and buries benthic communities on the lake floor. Ohio banned open-water dumping of dredged sediment, effective in 2020, forcing ports to find a way to store their sediment. For now, Toledo is building artificial islands on the lakefront.

“This is a completely new challenge for Ohio,” said Vázquez-Ortega.

More Sediment, More Soybeans

Soybeans grow in buckets in a greenhouse at Bowling Green State University. Credit: Angélica Vázquez-Ortega

Agriculture could be a possible destination for dredged sediment, according to results from Vázquez-Ortega’s lab published in the Journal of Environmental Quality. In a greenhouse experiment, sediment from the Port of Toledo increased crop growth with no significant loss of nutrients in percolated water.

The study created four soil combinations, blending material from a local farm with dredged material from the Port of Toledo at sediment ratios of 0%, 10%, 20%, and 100%. Dredged sediment introduced more organic content, giving the test soils a lower bulk density and allowing roots and water to penetrate into the less compact soil. Samples with more Lake Erie sediment grew heftier root systems and generated higher soybean yields. The study demonstrated that Lake Erie sediments can improve crop yield without the use of additional fertilizers.

Farming Out the Research

Despite promising results, there’s plenty left to research. What crops grow best and at what sediment percentages? What if industrial contaminants are in the soil? Importantly, will this work outside the greenhouse on an actual farm?

“All that information is really necessary for convincing a farmer this is an option,” said Vázquez-Ortega.

Economics and logistics are other key concerns. With 1.5 million tons of material, Ohio can give nutrient-rich sediment away for free. But would anyone want it?

In the study, the greatest soybean yield came from the 100% dredged sediment sample. That’s not a feasible ratio for farms, though. Sediment is heavy, and transporting it is expensive. Even at 10% application, a farmer would need 100 tons of dried sediment per acre, estimated Keith Reid, a soil scientist with Canada’s Department of Agriculture and Agri-Food. In addition, he said, spreading tons of sediment would require heavy machinery, which would compact the soils and remove any benefits of lower bulk density.
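
Reid’s figure is easy to check against the standard agronomic rule of thumb that the plowed top layer of one acre (an “acre-furrow-slice” roughly 6–7 inches deep) weighs about 2 million pounds. A minimal sketch of that arithmetic, with the rule of thumb stated as an assumption:

```python
# Back-of-the-envelope check of the 10% application estimate, assuming
# the common agronomic rule of thumb that an "acre-furrow-slice" (the
# top ~6-7 inches of soil on one acre) weighs about 2,000,000 pounds.
ACRE_FURROW_SLICE_LB = 2_000_000   # assumed plow-layer mass, lb/acre
LB_PER_TON = 2_000                 # pounds per short ton

application_rate = 0.10            # 10% dredged sediment by mass
sediment_lb = application_rate * ACRE_FURROW_SLICE_LB
print(sediment_lb / LB_PER_TON)    # -> 100.0 tons of sediment per acre
```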

Soybeans with more Lake Erie sediment grew heftier root systems and generated higher soybean yields in a study at Bowling Green State University. Credit: Angélica Vázquez-Ortega

“It’s a good start at looking at the potential for uses of soil amendment,” Reid said of the study. “It’s fair to safely say there was no negative impact. It’s hard to say if there was a real large positive impact.”

Any new method for farming must demonstrate effectiveness and affordability, and Vázquez-Ortega recognizes the work left to do. “It’s a very preliminary step,” she said of the study. She’s now collaborating with the Ohio EPA and the Ohio Lake Erie Commission, among other parties, on a 2-year farm test.

The study is a step toward finding a beneficial use for sediment that preserves the ports and protects the lake. But until the process makes economic and agronomic sense, sediments will remain all dredged up with nowhere to grow.

—J. Besl (@J_Besl), Science Writer

Anticipating Climate Impacts of Major Volcanic Eruptions

Tue, 08/31/2021 - 13:09

This year marks the 30th anniversary of the most recent volcanic eruption that had a measurable effect on global climate. In addition to devastating much of the surrounding landscape and driving thousands of people to flee the area, the June 1991 eruption at Mount Pinatubo in the Philippines sent towering plumes of gas, ash, and particulates high into the atmosphere—materials that ultimately reduced average global surface temperatures by up to about 0.5°C in 1991–1993. It has also been more than 40 years since the last major explosive eruption in the conterminous United States, at Mount St. Helens in Washington in May 1980. As the institutional memory of these infrequent, but high-impact, events fades in this country and new generations of scientists assume responsibility for volcanic eruption responses, the geophysical community must remain prepared for coming eruptions, regardless of these events’ locations.

Rapid responses to major volcanic eruptions enable scientists to make timely, initial estimates of potential climate impacts (i.e., long-term effects) to assist responders in implementing mitigation efforts, including preparing for weather and climate effects in the few years following an eruption. These events also present critical opportunities to advance volcano science [National Academies of Sciences, Engineering, and Medicine (NASEM), 2017], and observations of large events with the potential to affect climate and life globally are particularly valuable.

Recognizing this value, NASA recently developed a volcanic eruption response plan to maximize the quantity and quality of observations it makes following eruptions [NASA, 2018], and it is facilitating continuing research into the drivers and behaviors of volcanic eruptions to further improve scientific eruption response efforts.

How Volcanic Eruptions Affect Climate

Major volcanic eruptions inject large amounts of gases, aerosols, and particulates into the atmosphere. Timely quantification of these emissions shortly after an eruption, and as they disperse, is needed to assess their potential climate effects. Scientists have a reasonable understanding of the fundamentals of how explosive volcanic eruptions influence climate and stratospheric ozone. This understanding is based on a few well-studied events in the satellite remote sensing era (e.g., Pinatubo) and on proxy records of older eruptions such as the 1815 eruption of Tambora in Indonesia [Robock, 2000]. However, the specific effects of eruptions depend on their magnitude, location, and the particular mix of materials ejected.

To affect global climate, an eruption must inject large quantities of sulfur dioxide (SO2) or other sulfur species (e.g., hydrogen sulfide, H2S) into the stratosphere, where they are converted to sulfuric acid (or sulfate) aerosols over weeks to months (Figure 1). The sulfate aerosols linger in the stratosphere for a few years, reflecting some incoming solar radiation and thus reducing global average surface temperatures by as much as about 0.5°C for 1–3 years, after which temperatures recover to preeruption levels.

Fig. 1. In the top plot, the black curve represents monthly global mean stratospheric aerosol optical depth (AOD; background is 0.004 or below) for green light (525 nanometers) from 1979 to 2018 from the Global Space-based Stratospheric Aerosol Climatology (GloSSAC) [Kovilakam et al., 2020; Thomason et al., 2018]. AOD is a measure of aerosol abundance in the atmosphere. Red dots represent annual sulfur dioxide (SO2) emissions in teragrams (Tg) from explosive volcanic eruptions as determined from satellite measurements [Carn, 2021]. The dashed horizontal line indicates the 5-Tg SO2 emission threshold for a NASA eruption response. Vertical gray bars indicate notable volcanic eruptions and their SO2 emissions. From left to right, He = 1980 Mount St. Helens (United States), Ul = 1980 Ulawun (Papua New Guinea (PNG)), Pa = 1981 Pagan (Commonwealth of the Northern Mariana Islands), El = 1982 El Chichón (Mexico), Co = 1983 Colo (Indonesia), Ne = 1985 Nevado del Ruiz (Colombia), Ba = 1988 Banda Api (Indonesia), Ke = 1990 Kelut (Indonesia), Pi = 1991 Mount Pinatubo (Philippines), Ce = 1991 Cerro Hudson (Chile), Ra = 1994 Rabaul (PNG), Ru = 2001 Ulawun, 2002 Ruang (Indonesia), Re = 2002 Reventador (Ecuador), Ma = 2005 Manam (PNG), So = 2006 Soufriere Hills (Montserrat), Ra = 2006 Rabaul (PNG), Ka = 2008 Kasatochi (USA), Sa = 2009 Sarychev Peak (Russia), Me = 2010 Merapi (Indonesia), Na = 2011 Nabro (Eritrea), Ke = 2014 Kelut (Indonesia), Ca = 2015 Calbuco (Chile), Am = 2018 Ambae (Vanuatu). In the bottom plot, circles indicate satellite-measured SO2 emissions (symbol size denotes SO2 mass) and estimated plume altitudes (symbol color denotes altitude) for volcanic eruptions since October 1978 [Carn, 2021].

Although this direct radiative effect cools the surface, the aerosol particles also promote warming in the stratosphere by absorbing outgoing longwave radiation emitted from Earth’s surface as well as some solar radiation, which affects atmospheric temperature gradients and thus circulation (an indirect advective effect). This absorption of longwave radiation also promotes chemical reactions on the aerosol particles that drive stratospheric ozone depletion [Kremser et al., 2016], which reduces absorption of ultraviolet (UV) radiation and further influences atmospheric circulation. The interplay of aerosol radiative and advective effects, which both influence surface temperatures, leads to regional and seasonal variations in surface cooling and warming. For example, because advective effects tend to dominate in winter in the northern midlatitudes, winter warming of Northern Hemisphere continents—lasting about 2 years—is expected after major tropical eruptions [Shindell et al., 2004].

Eruptions from tropical volcanoes like Pinatubo typically generate more extensive stratospheric aerosol veils because material injected into the tropical stratosphere can spread into both hemispheres. However, major high-latitude eruptions can also have significant climate impacts depending on their season and the altitude that their eruption plumes reach [Toohey et al., 2019].

The effects of volcanic ash particles are usually neglected in climate models because the particles have shorter atmospheric lifetimes than sulfate aerosols, although recent work has suggested that persistent fine ash may influence stratospheric sulfur chemistry [Zhu et al., 2020]. This finding provides further motivation for timely sampling of volcanic eruption clouds.

The threshold amount of volcanic SO2 emissions required to produce measurable climate impacts is not known exactly. On the basis of prior eruptions, NASA considers that an injection of roughly 5 teragrams (5 million metric tons) of SO2 or more into the stratosphere has sufficient potential for climate forcing of –1 Watt per square meter (that is, 1 Watt per square meter less energy is put into Earth’s climate system as a result of the stratospheric aerosols produced from the SO2) and warrants application of substantial observational assets.
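
As a rough illustration of how such a threshold might be applied to the satellite record, the sketch below screens eruptions against the 5-teragram trigger, assuming the simple linear scaling implied above (5 Tg of stratospheric SO2 corresponding to about –1 watt per square meter of forcing). The real relationship is nonlinear and depends on aerosol size and lifetime, and the SO2 masses used are approximate values from the published satellite literature, not figures stated in this article.

```python
# Sketch of a response-threshold screen, assuming the linear scaling
# implied by the article (5 Tg stratospheric SO2 ~ -1 W/m^2). The true
# forcing depends nonlinearly on aerosol size and lifetime, so this is
# a first-order screening estimate only.

RESPONSE_THRESHOLD_TG = 5.0    # NASA eruption-response trigger (Tg SO2)
FORCING_PER_TG = -1.0 / 5.0    # W/m^2 per Tg SO2 (assumed linear)

def screen_eruption(name, so2_tg):
    forcing = so2_tg * FORCING_PER_TG
    triggered = so2_tg >= RESPONSE_THRESHOLD_TG
    print(f"{name}: {so2_tg:.1f} Tg SO2, ~{forcing:.1f} W/m^2, "
          f"response {'triggered' if triggered else 'not triggered'}")

# Approximate SO2 masses from the satellite record (illustrative):
screen_eruption("El Chichon 1982", 7.0)
screen_eruption("Pinatubo 1991", 18.0)
screen_eruption("Kasatochi 2008", 1.7)
```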

Pinatubo volcano erupts on 12 June 1991. Credit: K. Jackson, U.S. Air Force; accessed at NOAA National Centers for Environmental Information

Since the dawn of the satellite era for eruption observations in 1978, this threshold has been surpassed by only two eruptions: at El Chichón (Mexico) in 1982 and Pinatubo in 1991 (Figure 1), which reached 5 and 6, respectively, on the volcanic explosivity index (VEI; a logarithmic scale of eruption size from 0 to 8). Since Pinatubo, the observational tools that NASA employs have greatly improved.

In the event of future eruptions on par with or larger than those at El Chichón and Pinatubo, rapid mobilization of NASA’s observational and research assets, including satellites, balloons, ground-based instruments, aircraft, and modeling capabilities, will permit scientists to make early initial estimates of potential impacts. Capturing the transient effects of volcanic aerosols on climate would also provide critical data to inform proposed solar geoengineering strategies that involve introducing aerosols into the atmosphere to mitigate global warming [NASEM, 2021].

NASA’s Eruption Response Plan

In the United States, NASA has traditionally led investigations of eruptions involving stratospheric injection because of the agency’s global satellite-based observation capabilities for measuring atmospheric composition and chemistry and its unique suborbital assets for measuring the evolution of volcanic clouds in the stratosphere.

Under its current plan, NASA’s eruption response procedures will be triggered in the event an eruption emits at least 5 teragrams of SO2 into the stratosphere, as estimated using NASA’s or other satellite assets [e.g., Carn et al., 2016]. The first phase of the response plan involves a review of near-real-time satellite data by a combined panel of NASA Headquarters (HQ) science program managers and NASA research scientists in parallel with initial modeling of the eruption plume’s potential atmospheric evolution and impacts.

The HQ review identifies relevant measurement and modeling capabilities at the various NASA centers and among existing NASA-funded activities. HQ personnel would establish and task science leads and teams comprising relevant experts from inside and outside NASA to take responsibility for observations from the ground, from balloons, and from aircraft. The efforts of these three groups would be supplemented by satellite observations and modeling to develop key questions, priority observations, and sampling and deployment plans.

Implementing the plan developed in this phase would likely result in major diversions and re-tasking of assets, such as NASA aircraft involved in meteorological monitoring, from ongoing NASA research activities and field deployments. Ensuring that these diversions are warranted necessitates that this review process is thorough and tasking assignments are carefully considered.

The second phase of NASA’s volcanic response plan—starting between 1 week and 1 month after the eruption—involves the application of its satellite platforms, ground observations from operational networks, and eruption cloud modeling. Satellites would track volcanic clouds to observe levels of SO2 and other aerosols and materials. Gathering early information on volcanic aerosol properties like density, particle composition, and particle size distribution would provide key information for assessing in greater detail the potential evolution and effects of the volcanic aerosols. Such assessments could provide valuable information on the amount of expected surface cooling attributable to these aerosols, as well as the lifetime of stratospheric aerosol particles—two factors that depend strongly on the aerosols’ size distribution and temporal evolution.

Meanwhile, NASA’s Aerosol Robotic Network (AERONET), Micro-Pulse Lidar Network (MPLNET), and Southern Hemisphere Additional Ozonesondes (SHADOZ) would provide real-time observations from the ground. Eruption cloud modeling would be used to calculate cloud trajectories and dispersion to optimize selection of ground stations for balloon launches and re-tasking of airborne assets.

The third phase of the response plan—starting 1–3 months after an eruption—would see the deployment of rapid response balloons and aircraft (e.g., from NASA’s Airborne Science Program). The NASA P-3 Orion, Gulfstream V, and DC-8 aircraft have ranges of more than 7,000 kilometers and can carry heavy instrumentation payloads of more than 2,500 kilograms to sample the middle to upper troposphere. A mix of in situ and remote sensing instruments would be employed to collect detailed observations of eruption plume structure, evolution, and optical properties.

NASA’s high-altitude aircraft (ER-2 and WB-57F) provide coverage into the stratosphere (above about 18 kilometers) with payloads of more than 2,200 kilograms. These high-altitude planes would carry payloads for measuring the evolving aerosol distributions along with trace gas measurements in situ to further understand the response of stratospheric ozone and climate forcing to the eruption. In particular, the high-altitude observations would include data on the particle composition and size distribution of aerosols, as well as on ozone, SO2, nitrous oxide and other stratospheric tracers, water vapor, and free radical species. Instrumented balloons capable of reaching the stratosphere could also be rapidly deployed to remote locations to supplement these data in areas not reached by the aircraft.

The third phase would be staged as several 2- to 6-week deployments over a 1- to 2-year period that would document the seasonal evolution, latitudinal dispersion, and multiyear dissipation of the plume from the stratosphere. These longer-term observations would help to constrain model simulations of the eruption’s impacts on the global atmosphere and climate.

Enhancing Eruption Response

An effective eruption response is contingent on timely recognition of the hallmarks of a major volcanic eruption, namely, stratospheric injection and substantial emissions of SO2 (and H2S) amounting to more than 5 teragrams, using satellite data. However, it may take several hours to a day after an event for satellites to confirm that emissions have reached this level. By then, time has been lost to position instruments and personnel to effectively sample the earliest stages of an eruption, and it is already too late to observe the onset of the eruption.

Hence, a key element in efforts to strengthen eruption responses is improving our recognition of distinctive geophysical or geochemical eruption precursors that may herald a high-magnitude event. Observations of large, sulfur-rich eruptions such as Pinatubo have led to scientific consensus that such eruptions emit “excess” volatiles—gas emissions (especially sulfur species, but also other gases such as water vapor and carbon dioxide) exceeding those that could be derived from the erupted magma alone. Excess volatiles, in the form of gas bubbles derived from within or below a magma reservoir that then accumulate near the top of the reservoir, may exacerbate climate impacts of eruptions and influence magmatic processes like magma differentiation, eruption triggering and magnitude, and hydrothermal ore deposition [e.g., Edmonds and Woods, 2018]. They may also produce detectable eruption precursors and influence eruption and plume dynamics, although how remains largely unknown.

With support from NASA’s Interdisciplinary Research in Earth Science program, we (the authors) have begun an integrated investigation of eruption dynamics focused on understanding the fate of excess volatiles from their origins in a magma reservoir, through underground conduits and into a volcanic plume, and, subsequently, as they are dispersed in the atmosphere. The satellite observations we use are the same or similar to those required for rapid assessment and response to future high-magnitude events (with a VEI of 6 or greater).

Our investigation is using data from previous moderate-scale eruptions (VEI of 3–5) with excellent satellite observational records that captured instances in which gases and aerosols displayed disparate atmospheric dispersion patterns. Among the main questions we are examining is whether excess volatile accumulation in magma reservoirs can drive large eruptions and produce enhanced aerosol-related climate impacts resulting from these eruptions. Using numerical model simulations of eruptions involving variable quantities of excess volatiles, we will endeavor to reproduce the specific atmospheric distributions of gases and aerosols observed by satellites after these events and thus elucidate how volatile accumulation might influence plume dispersion and climate impacts.

We are currently developing a framework to simulate a future eruption with a VEI of 6+. Over the coming year, we hope to produce benchmark simulations that track the fate of volcanic gases as they travel from a subsurface magmatic system into the atmosphere to be distributed globally. This simulation framework will comprise a coupled suite of subsystem-scale numerical models, including models of magma withdrawal from the magma reservoir, magma ascent within the volcanic conduit, stratospheric injection within the volcanic plume, and atmospheric dispersion and effects on climate.

With these tools, NASA will have gained important capabilities in simulating volcanic eruptions and understanding their potential precursors. These capabilities will complement NASA’s satellite and suborbital observations of volcanic eruptions as they unfold—an important advance for volcano science and a powerful means to assess the climate impacts of future large explosive eruptions.

Acknowledgments

Although not listed as coauthors, we acknowledge contributions to this work from the organizers of the NASA Major Volcanic Eruption Response Plan workshop in 2016, including Hal Maring, Ken Jucks, and Jack Kaye (NASA HQ), as well as the workshop participants from NASA, NOAA, the U.S. Geological Survey, and the academic community.

Making the Most of Volcanic Eruption Responses

Tue, 08/31/2021 - 13:09

Mount St. Helens, hidden away in a remote forest midway between Seattle, Wash., and Portland, Ore., had been putting out warning signals for 2 months. Still, the size and destruction of the 18 May 1980 eruption took the United States by surprise. The blast spewed ash into the air for more than 9 hours, and pyroclastic density currents and mudflows wiped out surrounding forests and downstream bridges and buildings. Fifty-seven people died as a result of the volcanic disaster, the worst known in the continental United States.

In addition to its immediate and devastating effects, the 1980 eruption spurred efforts to study volcanic processes and their impacts on surrounding landscapes more thoroughly and to advance monitoring and forecasting capabilities. It also prompted further cooperation among agencies and communities to better prepare for and respond to future volcanic eruptions.

Mount St. Helens erupts in 1980. Credit: USGS

According to a 2018 U.S. Geological Survey (USGS) report, there are 161 potentially active volcanoes in the United States and its territories, including 55 classified as high or very high threat [Ewert et al., 2018]. Over the past century, especially since 1980, integrated studies of active volcanic systems have shed light on magmatic and volcanic processes that control the initiation, duration, magnitude, and style of volcanic eruptions. However, because there have been few continuously monitored volcanic eruptions with observations that span the entire sequence before, during, and after eruption, our understanding of these processes and the hazards they pose is still limited.

This limited understanding, in turn, hampers efforts to forecast future eruptions and to help nearby communities prepare evacuation plans and to marshal and allocate resources during and after an event. Thus, a recent consensus study about volcanic eruptions by the National Academies of Sciences, Engineering, and Medicine [2017] highlighted the need to coordinate eruption responses among the broad volcanological and natural hazard scientific community as one of three grand challenges.

The Community Network for Volcanic Eruption Response (CONVERSE) initiative, which began in 2018 as a 3-year Research Coordination Network supported by the National Science Foundation (NSF), is attempting to meet this challenge. The charge of CONVERSE is to maximize the scientific return from eruption responses at U.S. volcanoes by making the most efficient use possible of the relatively limited access and time to collect the most beneficial data and samples. This goal requires looking for ways to better organize the national volcano science community.

A critical component of this organization is to facilitate cooperation between scientists at academic institutions and the U.S. Geological Survey, which is responsible for volcano monitoring and hazard assessment at domestic volcanoes. Since 2019, CONVERSE has conducted several workshops to allow groups representing the various disciplines in volcanology to formulate specific science questions that can be addressed with data collected during an eruption response and assess their capacities for such a response. Most recently, in November 2020, we conducted a virtual response scenario exercise based on a hypothetical eruption of Mount Hood in the Oregon Cascades. A month later, Hawaii’s Kīlauea volcano erupted, allowing us to put what we learned from the simulation to use in a coordinated response.

A Virtual Eruption at Mount Hood

To work through a simulated response to an eruption scenario at Mount Hood, our CONVERSE team had planned an in-person meeting for March 2020 involving a 2-day tabletop exercise. Travel and meeting restrictions enacted in response to the COVID-19 pandemic required us to postpone the exercise until 16–17 November, when we conducted it virtually, with 80 scientists participating for one or more days. The goal of the exercise was to test the effectiveness of forming a science advisory committee (SAC) as a model for facilitating communications between responding USGS volcano observatories and the U.S. academic community.

Mount Hood, located near Portland, Ore., is relatively accessible through a network of roads and would attract a lot of scientific interest during an eruption. Thus, we based our eruption scenario loosely on one developed for Mount Hood in 2010 for a Volcanic Crisis Awareness training course.

Mount Hood, seen here in 2018, is part of the Cascade Volcanic Arc and is less than 100 kilometers from Portland, Ore. Credit: Seth Moran, USGS

Because a real-life eruption can happen at any time at any active volcano, participants in the November 2020 workshop were not informed of the selected volcano until 1 week prior to the workshop. Then we sent a simulated “exercise-only” USGS information statement to all registrants noting that an earthquake swarm had started several kilometers south of Mount Hood’s summit. In the days leading up to the workshop, we sent several additional information statements containing status updates and observations of the volcano’s behavior like those that might precede an actual eruption.

During the workshop, participants communicated via videoconference for large group discussions and smaller breakout meetings. We used a business communications platform to share graphics and information resources and for rapid-fire chat-based discussions.

The workshop started with an overview of Mount Hood’s eruptive history and monitoring status, after which the scenario continued with the volcano exhibiting escalating unrest and with concomitant changes in USGS alert level. Participants were asked to meet in groups representing different disciplines, including deformation, seismicity, gas, eruption dynamics, and geochemistry, to discuss science response priorities, particularly those that required access to the volcano.

As the simulated crisis escalated at the end of the first day of the workshop, non-USGS attendees were told they could no longer communicate with USGS participants (and vice versa). This break in communication was done to mimic the difficulty that external scientists often encounter communicating with observatory staff during full-blown eruption responses, when observatory staff are fully consumed by various aspects of responding to the eruption. Instead, scientific proposals had to be submitted to a rapidly formed Hood SAC (H-SAC) consisting of a USGS liaison and several non-USGS scientists with expertise on Mount Hood.

The H-SAC’s role was to quickly evaluate proposals submitted by discipline-specific groups on the basis of scientific merit or their benefit for hazard mitigation. For example, the geodesy group was approved to install five instruments at sites outside the near-field volcanic hazard zone to capture a deep deflation signal more clearly, an activity that did not require special access to restricted areas. On the other hand, a proposal by the gas group to climb up to the summit for direct gas sampling was declined because it was deemed too hazardous. Proposals by the tephra sampling group to collect ash at specific locations were also approved, but only if the group coordinated with a petrology group that had also submitted a proposal to collect samples for characterizing the pressure-temperature and storage conditions of the magma.

The H-SAC then provided recommendations to the Cascade Volcano Observatory (CVO) scientist-in-charge, with that discussion happening in front of all participants so they could understand the considerations that went into the decisionmaking. After the meeting, participants provided feedback that the SAC concept seemed to work well. The proposal evaluation process that included scientific merit, benefit for hazard mitigation, and feasibility was seen as a positive outcome of the exercise that would translate well into a real-world scenario. Participants emphasized, however, that it was critical that SAC members be perceived as neutral with respect to any disciplinary or institutional preferences and that the SAC have broad scientific representation.

Responding to Kīlauea’s Real Eruption

Just 1 month after the workshop, on 20 December 2020, Kīlauea volcano began erupting in real life, providing an immediate opportunity for CONVERSE to test the SAC model. The goals of CONVERSE with respect to the Kīlauea eruption were to facilitate communication and coordination of planned and ongoing scientific efforts by USGS scientists at the Hawaiian Volcano Observatory (HVO) and external scientists and to broaden participation by the academic community in the response.

Kīlauea’s volcanic lava lake is seen here at the start of the December 2020 eruption. Credit: Matthew Patrick, USGS

These goals were addressed through two types of activities. First, a Kīlauea Scientific Advisory Committee (K-SAC), consisting of four academic and three USGS scientists, was convened within a week of the start of the eruption. This committee acted as the formal point of contact between HVO and the external scientific community for the Kīlauea eruption, and it solicited and managed proposals for work requiring coordination between these groups.

The K-SAC evaluated proposals on the basis of the potential for scientific gain and contributions to mitigating hazards. For example, one proposal dealt with assessing whether new magma had entered the chamber or whether the eruption released primarily older magma already under the volcano. The K-SAC also identified likely benefits and areas of collaboration between proposing groups, and it flagged potential safety and logistical (including permitting from the National Park Service) concerns in proposals as well as resources required from HVO.

Proposals recommended by the K-SAC were then passed to HVO staff, who consulted with USGS experts about feasibility, potential collaborations, and HVO resources required before making decisions on whether to move forward with them. One proposal supported by the K-SAC involved the use of hyperspectral imaging to quantify in real time the proportion of crystalline material and melt in the active lava lake to help determine the lava’s viscosity, a critical parameter for hazard assessment.

The second major activity of CONVERSE as the Kīlauea eruption progressed was to provide a forum for communication of science information via a business communications platform open to all volcano scientists. In addition, we posted information about planned and current activities by HVO and external scientists online and updated it using “living documents” as well as through virtual information sessions. As part of this effort, the K-SAC developed a simple spreadsheet that listed the types of measurements that were being made, the groups making these measurements, and where the obtained data could be accessed. For example, rock samples collected from the eruption were documented, and a corresponding protocol on how to request such samples for analytical work was developed. We held virtual town hall meetings, open to all, to discuss these topics, as well as updates from HVO K-SAC members on the status of the eruption and HVO efforts.

The Future of CONVERSE

The recent virtual exercise and the experience with the Kīlauea eruption provided valuable knowledge in support of CONVERSE’s mandate to develop protocols for coordinating scientific responses to volcanic eruptions. These two events brought home to us the importance of conducting regular, perhaps yearly or even more frequent, tabletop exercises. Such exercises could be held in person or virtually to further calibrate expectations and develop protocols for scientific coordination during real eruptions and to create community among scientists from different institutions and fields. Currently, workshops to conduct two scenario exercises are being planned for late this year and early next year. One will focus on testing deformation models with a virtual magma injection event; the other will focus on a response to an eruption occurring in a distributed volcanic field in the southwestern United States.

Future exercises should build on lessons learned from the Hood scenario workshop and the Kīlauea eruption response. For example, although the SAC concept worked well in principle, the process required significant investments of time that delayed some decisions, possibly limiting windows of opportunity for critical data collection at the onset of the eruption. Although CONVERSE is focused on coordination for U.S. eruptions, its best practices and protocols could guide future international eruption responses coordinated among volcano monitoring agencies of multiple countries.

A critical next step will be the development of a permanent organizational framework and infrastructure for CONVERSE, which at a minimum should include the following:

A mechanism for interested scientists to self-identify and join CONVERSE so they can participate in eruption response planning and activities, including media and communications training.

A national-level advisory committee with equitable decisionmaking representation across scientific disciplines and career stages. The committee would be responsible for coordinating regular meetings, planning and conducting activities, liaising with efforts like the SZ4D and Modeling Collaboratory for Subduction initiatives, and convening eruption-specific SACs.

Dedicated eruption SACs that facilitate open application processes for fieldwork efforts, including sample collection, distribution, and archiving. The SACs would establish and provide clear and consistent protocols for handling data and samples and would act as two-way liaisons between the USGS observatories and external scientists.

A dedicated pool of rapid response instruments, including, for example, multispectral cameras, infrasound sensors, Global Navigation Satellite System receivers, uncrewed aerial vehicles, and gas measuring equipment. This pool could consist of permanent instruments belonging to CONVERSE and housed at an existing facility as well as scientist-owned distributed instruments available on demand as needed.

The SAC structure holds great promise for facilitating collaboration between U.S. observatories and external science communities during eruptions and for managing the many requests for information from scientists interested in working on an eruption. It also broadens participation in eruption responses beyond those who have preexisting points of contact with USGS observatory scientists by providing a point of contact and process to become engaged.

We are confident that when the next eruption occurs in the United States—whether it resembles the 1980 Mount St. Helens blast, the recent effusive lava flows from Kīlauea, or some other style—this structure will maximize the science that can be done during the unrest. Such efforts will ultimately help us to better understand what is happening at the volcano and to better assist communities to prepare for and respond to eruptions.

Acknowledgments

The CONVERSE RCN is funded by NSF grant 1830873. We thank all the participants of the Mount Hood Virtual Scenario Exercise and, specifically, the USGS CVO staff and the CONVERSE disciplinary leaders. We also thank USGS HVO staff for their insights and efforts during the ongoing Kīlauea eruption in making the K-SAC (and future SACs) a better vehicle for communication and collaboration. We thank Hawaii state volcanologist Bruce Houghton for developing the initial training course that served as a basis for the Mount Hood scenario workshop in collaboration with CVO scientists. Finally, we thank Tina Neal and Wes Thelen for their careful reviews of this paper. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

Megaripples on Mars—How to Name Wind-Shaped Features on the Red Planet

Mon, 08/30/2021 - 13:16

Spacecraft on Mars have captured images of barren, desertlike landscapes complete with dunes of sand. But the windswept features are not identical to their terrestrial counterparts. The surface of the Red Planet is dotted by midsized sand masses not found on Earth. These features go by a variety of names, with megaripples, sand ripples, sand ridges, and the less melodic transverse aeolian ridges (TARs) chief among them. But the nomenclature is inconsistent, causing confusion that hampers scientific advancement. Now, new research has proposed an official naming scheme for wind-formed features.

“Because we’re seeing new things on Mars, people have adapted what they are calling things,” said Mackenzie Day, a researcher at the University of California, Los Angeles. Day and James Zimbelman of the Smithsonian Institution coauthored the new paper, published in the journal Icarus. “People have adapted in slightly different ways.”

Broadly based, the new system classifies aeolian, or wind-created, features by size and geomorphology.

“As we’re getting new information, having a standard nomenclature makes sure everybody is on the same page,” Day said. “If we’re all talking about the same thing in the same way, it makes it easier as a scientific community to move forward in understanding what’s going on.”

Blowing in the Wind

Aeolian bed forms are piles of moving sand brushed across the planet’s surface by the wind. On Earth, the largest of these features are sand dunes, which can stretch for tens to hundreds of meters in length. Small ripples only a few tens of centimeters long can be carved on top of these dunes.

“Bed forms are really amazing interactions between the atmosphere and the surface,” said Serina Diniega, a research scientist at NASA’s Jet Propulsion Laboratory who is not associated with the new paper. “If you see one, you immediately have a whole bunch of information about the environment.”

In addition to dunes and ripples, Mars has a third type of bed form: transverse aeolian ridges. TARs appear to have been created by the wind but move on much slower timescales than their fellow bed forms and seem to be coated with a layer of fine-grained dust.

Day and Zimbelman proposed a broad frame of terminology for ripples, TARs, and dunes that relies first on the size and geomorphology of the features. As surface observations (anticipated soon from Curiosity and Perseverance) allow scientists to classify grain size and dust cover, the terminology can be further constrained.

Small ripples, for instance, are measured on centimeter scales in height and are classified as straight crested. Megaripples are measured at less than a meter in height and may be straight crested or sinuous. Unlike small ripples, megaripples may include coarse grains. TARs are classified as larger than a meter in height and straight crested. Dunes, the largest aeolian bed form on Mars, are classified as taller than 3 meters and have wildly varying geomorphologies: from straight crested or sinuous to radially symmetrical stars.
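
One way to see the scheme’s logic is as a simple decision tree over height and crest shape. The sketch below encodes the tiers as summarized above; the function, the 0.1-meter boundary between small ripples and megaripples, and the fallback for features matching no tier are illustrative choices, not part of the published classification.

```python
# Illustrative encoding of the proposed bed form tiers (heights in
# meters). Thresholds follow the article's summary; boundary handling
# is an assumption made for this sketch, not taken from the paper.

def classify_bedform(height_m, crest="straight"):
    """Classify a Martian aeolian bed form by height and crest shape."""
    if height_m > 3.0:
        return "dune"            # straight, sinuous, or star shaped
    if height_m > 1.0 and crest == "straight":
        return "transverse aeolian ridge (TAR)"
    if 0.1 <= height_m <= 1.0 and crest in ("straight", "sinuous"):
        return "megaripple"      # may include coarse grains
    if height_m < 0.1 and crest == "straight":
        return "small ripple"    # centimeter-scale, straight crested
    return "unclassified"

print(classify_bedform(0.03))             # small ripple
print(classify_bedform(0.5, "sinuous"))   # megaripple
print(classify_bedform(2.0))              # transverse aeolian ridge (TAR)
print(classify_bedform(8.0, "star"))      # dune
```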

Straight-crested transverse aeolian ridges in the lower part of the image give way to more complex star-shaped sand dunes in this terrain southwest of Schiaparelli Crater on Mars. Credit: NASA/JPL-Caltech/University of Arizona

According to Ryan Ewing, a geologist at Texas A&M University not involved in the new study, the biggest challenge of a settled nomenclature will be agreeing on the processes that created TARs. “I think as we uncover more about how sediments move on Mars by wind, that will help the community refine their definitions of these [features],” he said.

“I really like this paper because it’s attempting to apply some sort of structure around these terms,” said Diniega. “Using a classification based on looking at both Earth and Mars is better than a classification system based only on Earth.”

Sand Through the Solar System

Bed forms aren’t limited to Earth and Mars. They’ve been spotted on Venus and on Saturn’s moon Titan, and there have been signs of them on Pluto and Comet 67P.

“Every place that has an atmosphere—and even places that don’t have an atmosphere—we see an example of these bed forms,” Diniega said.

The new classification system should work on these bodies as well as on Earth and Mars, researchers said.

“As we start exploring the solar system more, like sending Dragonfly to Titan, it would be nice to have a nomenclature that could be applied independent of what planet you’re on,” Day said.

—Nola Taylor Tillman (@NolaTRedd), Science Writer

Geomojis Translate Geoscience into Any Language

Mon, 08/30/2021 - 13:16

This story is part of Covering Climate Now’s week of coverage focused on “Living Through the Climate Emergency.” Covering Climate Now is a global journalism collaboration committed to strengthening coverage of the climate story.

This is an authorized translation of an Eos article.

Emojis are pictograms used to convey particular messages. They have the same basic meaning in any language: a smile means a smile.

New Inversion Method Improves Earthquake Source Imaging

Mon, 08/30/2021 - 11:30

The increasing density and accuracy of geodetic measurements of earthquake-related movements of the Earth’s surface can improve our understanding of the physics of earthquakes, a critical requirement to better assess seismic hazard in tectonically active regions.

Modeling of such surface observations makes it possible to recover key parameters of the earthquake source, such as the geometry and spatial extent of the fault that broke during the earthquake, as well as the amount of slip on this fault during rupture (the “coseismic slip”), all related to the energy released during the seismic event.

Although there has long been evidence of the geometric complexity of faults, most earthquake source models ignore this complexity, for the sake of simplicity and due to the lack of precise imaging of faults at depth. Planar fault geometries are generally assumed, which leads to biases in coseismic slip estimates.

Dutta et al. [2021] propose a method to simultaneously recover the fault geometry and the coseismic slip, allowing for non-planar faults and slip variability along the fault (described by a limited set of parameters to be estimated). The method ultimately provides not a unique fault and slip model but an ensemble of plausible models, with uncertainties on all the estimated parameters, which is also essential for a proper interpretation of the results.
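
The paper’s Bayesian machinery is much richer than can be shown here, but the core idea of returning an ensemble of plausible models rather than a single best fit can be illustrated with a toy example. In the sketch below, a random-walk Metropolis sampler explores two invented fault parameters (a curvature term standing in for non-planar geometry and a uniform slip) against synthetic surface displacements; the forward model, parameter names, and noise level are all assumptions made for illustration, not the elastic dislocation modeling used in the actual study.

```python
import numpy as np

# Toy illustration of the ensemble idea: a random-walk Metropolis
# sampler over two made-up fault parameters (a curvature term and a
# uniform slip) fit to synthetic surface displacements.

rng = np.random.default_rng(42)
x = np.linspace(-20.0, 20.0, 40)        # station positions (km)
SIGMA = 0.05                            # assumed observation noise

def forward(curv, slip):
    # Hypothetical smooth response of surface displacement to a curved
    # (non-planar) fault with uniform slip; a stand-in forward model.
    return slip * np.exp(-(x / (10.0 + curv)) ** 2)

obs = forward(curv=3.0, slip=1.5) + rng.normal(0.0, SIGMA, x.size)

def log_posterior(theta):
    curv, slip = theta
    if not (-5.0 < curv < 10.0 and 0.0 < slip < 5.0):  # flat priors
        return -np.inf
    resid = obs - forward(curv, slip)
    return -0.5 * np.sum((resid / SIGMA) ** 2)         # Gaussian likelihood

theta = np.array([0.0, 1.0])
logp = log_posterior(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, [0.3, 0.05])        # random-walk step
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:       # accept/reject
        theta, logp = prop, logp_prop
    samples.append(theta)

# The retained samples are an ensemble of plausible models, each with
# its own geometry and slip, from which uncertainties can be read off.
ens = np.array(samples[5_000:])                        # drop burn-in
print("curvature: %.2f +/- %.2f" % (ens[:, 0].mean(), ens[:, 0].std()))
print("slip:      %.2f +/- %.2f" % (ens[:, 1].mean(), ens[:, 1].std()))
```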

The approach is validated and its contribution discussed using synthetic cases of earthquakes, mimicking the main characteristics of real earthquakes in various tectonic contexts, to underline the importance of taking into account more realistic fault geometries in earthquake source modeling.

Citation: Dutta, R., Jónsson, S., & Vasyura-Bathke, H. [2021]. Simultaneous Bayesian estimation of non-planar fault geometry and spatially-variable slip. Journal of Geophysical Research: Solid Earth, 126, e2020JB020441. https://doi.org/10.1029/2020JB020441

 —Cécile Lasserre, Associate Editor, JGR: Solid Earth

Amazon Deforestation and Fires are a Hazard to Public Health

Fri, 08/27/2021 - 12:51

Wildfires are increasingly common, and their smoky emissions can wreak havoc on human health. In South America, fires may cause nearly 17,000 otherwise avoidable deaths each year. Fire frequency in the Amazon basin has been linked to climate—drier conditions result in more fires—but direct human action, such as deforestation, drives up fire frequency as well.

Deforestation can spark wildfires directly when fires set by humans to clear vegetation spread out of control. Smoke from these fires also interacts with clouds and sunlight to suppress rainfall, creating dry, fire-prone conditions. Perhaps most subtly, deforestation breaks up the massive rain forest ecosystem, disrupting the forest’s effect on climate and creating a drier environment with greater fire risk.

The number of fires—and the amount of fire-generated air pollution—in the Brazilian Legal Amazon has closely shadowed the deforestation rate over the past 2 decades. In the early 2000s, high deforestation rates led to frequent fires and accompanying air pollution. Over time, the Brazilian government enacted policies to protect large sections of the rain forest, and the deforestation rate dropped. In the past decade or so, however, the rate of deforestation has been slowly climbing again, bringing with it increased fire and health risks.

In a new study, Butt et al. model the year 2019 under different deforestation scenarios to understand the link between these events in the rain forest and public health.

The researchers found that if 2019 had matched the year in the last 2 decades with the least deforestation, regional air pollution would have been substantially lower that year, resulting in 3,400 fewer premature deaths across South America. If, on the other hand, deforestation rates in 2019 had matched those of the early 2000s, before government regulations brought the rates down, the number of fires would have increased by 130%, and the number of deaths would have more than doubled to 7,900.

These models demonstrate the link between direct human action such as deforestation and environmental hazards and, consequently, public health. They also show how government environmental protections can have a substantial impact on human health. (GeoHealth, https://doi.org/10.1029/2021GH000429, 2021)

—Elizabeth Thompson, Science Writer

How Can Wristbands Monitor Pollution, PAHs, and Prenatal Care?

Fri, 08/27/2021 - 12:51

Wildfires, vehicle emissions, petroleum by-products, and even cooking can conjure images of climate change. Each category also produces polycyclic aromatic hydrocarbons, or PAHs, which are products of incomplete combustion. This group of hundreds of chemical species is toxic to human health, and as the world warms, more extreme weather will further exacerbate their presence in the atmosphere, said Natalie Johnson, an environmental toxicologist at Texas A&M University. Monitoring human exposure to these air pollutants, she said, is a public health issue.

In a new study published in the Journal of Exposure Science and Environmental Epidemiology, Johnson and her colleagues used silicone wristbands—like the ones worn by people supporting various causes—to track pregnant women’s exposure to PAHs. Their study took place in McAllen, Texas, which has high rates of premature births and childhood asthma—adverse health outcomes associated with poor air quality.

Highway to Poor Health

Studies show that mothers exposed to high levels of air pollutants have infants with an increased risk of developing respiratory infections, said Johnson. Moreover, if mothers live closer to sources of vehicle-related air pollution—like freeways—their children are more likely to develop asthma.

Three pathways transport PAHs into the human body, said Johnson. We can absorb them through our skin or ingest them by consuming charred foods. The third pathway is inhalation. This is a key pathway because our bloodstream can deliver PAHs throughout the body, she said. In pregnant women, the sanguineous superhighway can carry PAHs to the placenta. In this way, said Johnson, “[PAHs] can have some direct effects on the developing fetus.”

One problem PAHs can pose for people is cancer. By themselves, PAHs are typically not carcinogenic, but the pathways through which they can morph into cancer-causing molecules are known, said Pierre Herckes, an atmospheric scientist at Arizona State University who was not involved in the Texas study. Less well understood are the exact mechanisms through which PAHs might cause premature births and other adverse health outcomes in infants and children, he said.

Our bodies’ metabolisms can manage PAHs by converting them to free radicals, which are unstable, oxygen-bearing molecules that desperately want to react with anything that can give them electrons, said Johnson. Our bodies’ antioxidant systems can limit the impact of free radicals, she said. But when the scale tips toward more free radicals—more oxidants versus antioxidants—the antioxidant systems can become overwhelmed. The accumulation of free radicals can adversely affect growth and development in utero, she said, because “oxidative stress is tightly linked with inflammation.”

Too little inflammation leaves the body prone to viruses and bacteria, whereas too much results in the body overreacting to seemingly benign invaders, like dust. “Early in infancy, the prenatal exposures to these pollutants may cause immune suppression, and you may get inability to respond to important viruses like RSV,” said Johnson. The respiratory syncytial virus (RSV) can be deadly for premature and young infants. Later in life, the same children exposed to these pollutants at an early age tend to have too much inflammation, triggering asthma attacks or allergic reactions, she said. Exposures at the earliest developmental stages—in the womb or during infancy—may increase the possibility of lung disease.

Silicone Sampling

One of Johnson’s graduate students, Jairus Pulczinski, mentioned to Johnson that other researchers have demonstrated the ability of silicone wristbands to passively sample pollutants. He suggested using them in an ongoing study of pregnant women in McAllen, which has poor air quality resulting from phenomena like Saharan sands blowing through the region and PAH-laden air wafting by from seasonal burning in Mexico.

Wristbands can qualitatively assess air pollution, said Johnson. “They’ve been really useful so far to say, ‘Yes or no, is there exposure?’”

However, Johnson and her colleagues wanted a fuller picture of ambient air quality in this case. “In our study, we actually placed [wristbands] on small backpacks because we were also sending out active air monitors.” Within the backpacks, two tubes actively sampled the air. One tube sampled heavier, particulate PAHs, whereas the second sampled lighter, volatile PAHs. Seventeen expectant mothers carried the wristband-tagged backpacks for 24 hours, sampling the ambient atmosphere. The wristband results compared well with those from the volatile PAH sampling tube.
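
As a hypothetical illustration of how “compared well” might be checked, the sketch below correlates paired wristband and active-tube measurements; the values are invented, and the study’s actual statistics are not reproduced here.

```python
import numpy as np

# Hypothetical paired measurements: PAH mass accumulated on each wristband
# versus the volatile-PAH concentration from the backpack's active sampler.
wristband_ng = np.array([12.1, 8.4, 15.3, 6.7, 10.2, 9.8])    # ng per band
active_tube_ng_m3 = np.array([3.2, 2.1, 4.0, 1.8, 2.7, 2.5])  # ng per m3

# Pearson correlation between the paired measurements.
r = np.corrcoef(wristband_ng, active_tube_ng_m3)[0, 1]
print(f"Pearson r between wristband and active sampler: {r:.2f}")
```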

In this graphical abstract of the Texas study, the center photo shows an actual active air sampler backpack, with a wristband affixed to the outside. The wristband collects data related to volatile and semivolatile PAHs: phenanthrene, biphenyl, 1-methylnaphthalene, and 2-methylnaphthalene. The backpack active air sampler also gathers data related to 2,6-dimethylnaphthalene, a particulate PAH. Credit: Natalie M. Johnson

Health care providers could use information provided by the backpack sampler to help identify whether the person, for example, lives with a smoker or has an open fireplace—both PAH sources, said Herckes.

The data might also provide important clues about the geography of health care. “More studies show that exposure is different by socioeconomic situation,” Herckes said. The “bad” part of town might be closer to the highway or near industries that produce more air pollutants, and quantifying these deleterious effects could play a role in environmental justice, he explained.

Johnson and her colleagues gave the expectant mothers in McAllen who were part of the study guidance for limiting PAH exposure. Good air filtration in the home is paramount, and monitoring the air quality index also helps.

Wristband research is now focusing on the quantitative side: How much air pollution has someone been exposed to? If the amount of exposure is known, said Johnson, scientists can start untangling just how much exposure is detrimental to mother or child. In the future, she said, they plan to explore whether air quality regulations are stringent enough to ensure safe pregnancies. This, she said, could inform future policy. “Doing anything we can to mitigate these environmental exposures could have a potentially big impact on public health outcomes.”

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Tracking Sustainability Goals with Creative Data Sources

Fri, 08/27/2021 - 12:51

The United Nations has created 17 interlinked Sustainable Development Goals (SDGs) that “recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth—all while tackling climate change and working to preserve our oceans and forests.” The SDGs were unveiled in 2015 and are intended to be reached by 2030 in a process nicknamed Agenda 2030. Achieving the SDGs will be a challenge of scientific know-how, technical creativity, and political will.

But there’s one challenge that often slips under the radar: How do we actually track how well we’re doing? It turns out there are insufficient data for 68% of the environmental indicators needed to assess progress on the SDGs. Several areas with limited data are biodiversity, ecosystem health, and the concentration of pollution and waste in the environment.

“If we are going to be able to measure the environment in a way that allows us to make better interventions and investment, then we need better data,” said Jillian Campbell, head of monitoring, review, and reporting at the United Nations (U.N.) Convention on Biological Diversity, at a recent U.N. World Data Forum webinar.

“When you are missing data, it creates sort of a vicious cycle where you are making decisions on data that you don’t have, and you are also deprioritizing investment in the collection of that data,” she said.

Traditionally, data from academia, official statistical agencies, central banks, the private sector, and nonprofit organizations are gathered through surveys and censuses. To plug data gaps in these sources, experts are turning to geospatial technologies, crowdsourced science initiatives, and greater partnerships with Indigenous Knowledge holders.

Earth Observations from Ocean to Desert

Earth observations, which include space-based, remotely sensed, ground-based, and in situ data, provide spectral information that can be processed into higher-level products used to compute indicators and inform relevant SDG targets and goals, said Argyro Kavvada, program manager of SDGs at NASA.

For example, the GEO Blue Planet initiative works to advance the use of Earth observations to monitor coastal eutrophication and marine litter. (The Group on Earth Observations (GEO) is a global network of governments, academic and research institutions, data providers, businesses, engineers, and scientists.)

Kavvada said GEO Blue Planet has worked with the U.N. Environment Programme and Esri to develop a methodology that combines satellite information on factors such as chlorophyll concentrations with in situ and ground-based observations such as imagery and videos from uncrewed aerial vehicles and ship-based cameras. Such robust data can help scientists infer changes in water quality.
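
As a rough sketch of this kind of satellite-based screening, the example below flags grid cells where chlorophyll-a rises well above a long-term baseline. The threshold and values are assumptions for illustration, not the GEO Blue Planet methodology.

```python
import numpy as np

# Long-term mean chlorophyll-a (mg/m3) per coastal grid cell, and this
# month's satellite-derived values. All numbers are invented.
baseline = np.array([1.1, 0.9, 1.3, 1.0])
current = np.array([1.2, 2.4, 1.4, 3.1])

# Flag cells whose anomaly exceeds an assumed 50% threshold for follow-up
# with in situ observations (e.g., ship-based cameras or water samples).
anomaly_pct = 100.0 * (current - baseline) / baseline
flagged = np.nonzero(anomaly_pct > 50.0)[0]
print("Cells flagged for possible eutrophication:", flagged)
```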

Similarly, GEO’s Land Degradation Neutrality initiative is working with the U.N. Convention to Combat Desertification to develop data quality standards, analytical tools, and remote sensing data to help support land degradation monitoring and reporting. The group is looking at how globally available Earth observation data sets can complement national data for three main SDG concerns: land cover, land productivity, and soil data.

“They are looking for key requirements for the global data sets to contribute, and for the suitability of those data sets in supporting country efforts, timeliness of the data, and spatial coverage,” Kavvada said.

Integrating Geospatial Information

The Food and Agriculture Organization (FAO) of the United Nations is the custodian agency for 21 out of the 231 SDG indicators. Its roles include supporting countries to develop the capacity to generate, disseminate, and use national data, as well as to realign their national monitoring frameworks to SDG indicators.

At the FAO, guiding progress on the SDGs increasingly relies on integrating geospatial information provided by Earth observations. “Geospatial information and satellite Earth observations offer unprecedented opportunities to support national and global statistical systems,” said Lorenzo De Simone, a geospatial specialist in the office of the chief statistician at the FAO.

Broadening the scope of data may make monitoring environmental progress more cost-effective and efficient, experts say. Geospatial data, for instance, can be scaled and integrated with traditional sources of socioeconomic and environmental data such as surveys.

For instance, the FAO developed a new SDG indicator directly monitored with Earth observation data. SDG indicator 15.4.2, the Mountain Green Cover Index (MGCI), uses remotely sensed images to measure changes in mountain vegetation such as forests, shrubs, and individual trees.
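
Conceptually, the MGCI is the share of mountain area whose land cover is classified as green vegetation. Here is a minimal sketch of that idea, assuming a hypothetical classified land cover grid; the class codes and data are invented, and this is not the FAO’s operational pipeline.

```python
import numpy as np

# Assumed class codes for green cover types (e.g., forest, shrubs, grassland).
GREEN_CLASSES = [1, 2, 3, 4]

def mountain_green_cover_index(land_cover: np.ndarray,
                               mountain_mask: np.ndarray) -> float:
    """Percentage of mountain pixels classified as green cover."""
    mountain_pixels = land_cover[mountain_mask]
    if mountain_pixels.size == 0:
        return float("nan")
    return 100.0 * np.isin(mountain_pixels, GREEN_CLASSES).mean()

# Toy example: a 4x4 classified scene; the left half is mountain terrain.
land_cover = np.array([[1, 1, 5, 5],
                       [2, 3, 5, 6],
                       [4, 6, 6, 6],
                       [1, 2, 5, 5]])
mountain_mask = np.zeros((4, 4), dtype=bool)
mountain_mask[:, :2] = True
print(f"MGCI: {mountain_green_cover_index(land_cover, mountain_mask):.1f}%")
```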

De Simone said the FAO is committed to helping member states develop Earth observation technology. EOSTAT, for example, is aimed at building capacity with Earth observations (EO) to produce national agricultural statistics and support practices that increase efficiency in the use of fertilizer and chemicals to boost production output. De Simone said four EOSTAT pilots have been implemented, in Afghanistan, Lesotho, Senegal, and Uganda.

Mapping Crowdsourced Science

There is untapped potential for crowdsourced science (described as “voluntary public participation in scientific research and knowledge production”) to plug some of the data gaps for SDG indicators, according to a study done by Dilek Fraisl at the International Institute for Applied Systems Analysis. “We should start thinking how we harness the potential,” she said.

When data are lacking for the SDGs, relevant agencies within countries can search for crowdsourced projects that may help fill some of these data gaps and reach out to them, said Fraisl.

“In cases where citizen science projects do not exist but data are lacking, relevant agencies within countries might consider working with local communities on the ground on issues that are important to them but might also help to fill data gaps,” Fraisl said.

For example, Fraisl said crowdsourced science was crucial to monitoring marine debris in Ghana, a project of the Ghana Statistical Service. As individuals and groups engaged in beach cleanups along Ghana’s 550-kilometer-long coastline, they cataloged the numbers and types of marine debris they found.

In addition to communities and individuals, the initiative involved federal agencies (such as the Ghana Statistical Service and the Ghana Environmental Protection Agency), nongovernmental organizations (such as the Ocean Conservancy), and intergovernmental organizations (such as the U.N. Convention on Biological Diversity).

“One of the most valuable lessons from this initiative is that working with existing initiatives…utilizes existing tools [and is] more resource efficient than starting an initiative from scratch,” Fraisl said.

Indigenous Knowledges

Indigenous Knowledges are not a traditional source of data for monitoring environmental progress on the SDGs. But such knowledge could provide valuable information on natural resources, public services, and population demographics.

For example, Indigenous rangers in Arnhem Land, Australia, are using science-based water monitoring techniques to test salinity, toxicity, and microbiological contaminants in freshwater streams on their ancestral homelands, according to one recent study. Such techniques “complement local Indigenous knowledge concerning the health of waterways, such as the taste, smell, and color of water in specific places, combined with knowledge of the presence or absence of key attributes that can serve as proxies for the status and condition of freshwater ecosystems.”

A more comprehensive use of Indigenous Knowledges and other nontraditional methodologies can thus help bridge data gaps in monitoring the SDGs, researchers said, as well as contribute to better stewardship of local ecosystems.

—Munyaradzi Makoni (@MunyaWaMakoni), Science Writer

Meet Jane, the Zircon Grain—Geochronology’s New Mascot

Fri, 08/27/2021 - 12:51

There is no “once upon a time” in the children’s book Jane’s Geological Adventure, but if there were, that time was 400 million years ago, in a world replete with creepy-crawly creatures threading their way through a lush verdure of unfamiliar plants. As a volcano’s magma chamber seethed, a zircon grain named Jane was born, growing until she erupted onto Earth for a full life of metamorphism, multiple mountain-building adventures, sundry erosion styles, and her most recent phase: display at a museum.

As part of his outreach efforts, author and geochronologist Matthew Fox, a lecturer at University College London, created Jane, the zircon grain, modeling her life on rocks like those of the Jura Mountains in Switzerland. “That we can actually understand this much information from a single grain of sand is really incredible,” said Fox, “and I wanted to try to describe how we can do that.”

As Jane metamorphoses, other minerals marking the rock’s transformation grow, including Mitesh the mica (center) and Gary the garnet. Credit: Martin Fox

“You can think about this as a children’s book,” Fox said, or “you can think about it in terms of how you could actually extract that information from a crystal, which might require different analytical methods.” As nature’s time capsules, zircons like Jane can retain evidence of multiple high-temperature events, like the timing of crystallization or metamorphism. “That’s why we use them for geochronology,” he said.

As Jane metamorphoses, she is joined by Gary the garnet and Mitesh the mica. Although the characters are anthropomorphized, the metamorphic mineral assemblage is real. “You can look at trace element concentration within different zones to see what other minerals might have been growing at these different time intervals,” explained Fox.

In this excerpt from the book Jane’s Geological Adventure, Jane the zircon bumps along the riverbed as animals appropriate to the Cretaceous period swim and play. Credit: Matthew Fox and Martin Fox

In this excerpt from the book Jane’s Geological Adventure, a geologist collects Jane from an outcrop. Credit: Matthew Fox and Martin Fox

The shape of the crystal itself provides additional clues, said Fox. For example, Jane’s distinct points eroded away as she bumped along a river bottom.

After this tumultuous travel, the sediments in which she landed eventually lithified and rose skyward as mountains. From this vantage, Jane watched glaciers carve the land before being plucked from an outcrop by a geochronologist who wrings history from Jane’s lattice.

By describing the many geological processes that Jane (and, by extension, mountains like the Swiss Jura) experienced, Fox said, “you can get a sense of how much can fit into such a long period of time.”

A Family Project

Although Jane’s geological tale spans 400 million years, the book itself has a much younger provenance. After years of scribbling short geology-themed poems during field trips, Fox began to toy with writing a longer poem for children. In 2018, shortly after Fox joined University College London as a Natural Environment Research Council Independent Research Fellow, he began to compose Jane’s story on his phone, during his commute.

As Fox refined the rhyme, he reached out to several friends and colleagues, many of whom worked on zircon-related quandaries (including the author of this article). With the support of his community, Fox became convinced that a children’s book was worth pursuing. However, without funds to pay for an illustrator, he was stuck.

At this point, the project became a true family affair. Fox’s mother contributed indirectly to the story because Jane is her namesake. Fox proposed a collaboration to his father, Martin Fox, an architect and occasional painter, who agreed to help. Fox the elder created a playfully anthropomorphic, but scientifically precise, depiction of Jane’s journey, while Fox the younger ensured the details were correct—for example, that only dinosaurs from the same era feature in Jane’s story.

Connecting with Kids and Parents

As the Fox family worked to illustrate Jane’s exploits, Matthew Fox began looking forward to fatherhood himself. Fox’s daughter was born soon after he finished the book and just as the COVID-19 pandemic began in spring 2020.

Jane’s Geological Adventure was written by Matthew Fox and illustrated by Martin Fox. Credit: Alka Tripathy-Lang

The pandemic thwarted Fox’s plan to sell the book at conferences and thereby avoid postage costs. He opted to sell the book via his website instead, publishing the first 200 copies during parental leave. “[Fatherhood] made me appreciate how important children’s books are and how important that time is where you actually interact with children,” said Fox. “My partner says it’s one of [my daughter’s] favorite books.”

Structural engineer Jan Moore of Salt Lake City, Utah, said that Jane’s appeal is not limited to children. “[My kids] really got the idea [that] the dinosaurs existed at one time and not another, which I thought was an advanced idea that kids don’t always grasp,” she said. “I don’t think I really grasped what the age [of a rock] really meant until there was a children’s book to explain it.”

Creative Public Outreach

When it comes to public outreach, said Fox, “everyone’s got different skills.” For example, although he’s spoken at schools around London, he acknowledges that public speaking sometimes makes him nervous.

He had a different approach to the book. “This was something that I quite enjoyed doing…and I thought I could contribute to outreach in a way that might be potentially more far-reaching,” Fox said. He plans to donate any profits from sales of Jane’s Geological Adventure to GeoBus, an outreach program funded by the U.K. Natural Environment Research Council wherein a van brimming with activities designed to engage children in geology travels to different schools.

To other researchers trying to expand their outreach, Fox offered some tried-and-true advice: “Try and do outreach activities that you enjoy doing.” If the outreach you’re doing is something you’re excited about, he said, people will respond to that.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer
