Eos: Science News by AGU

Great Lakes Cities’ Sewer Designs Mean Waste in the Waters

Mon, 05/04/2020 - 12:36

The video shot in the summer of 2016 is grisly.

In it, Mark Mattson, co-founder of the Lake Ontario Waterkeeper, displays materials skimmed from the Toronto harbor, which lies along the northwestern end of Lake Ontario in Ontario, Canada.

As his small boat bobs in the water, he lays them on a wooden oar: slimy condoms, tampons, and wet naps.

And he points to a likely culprit: the millions of gallons of raw sewage dumped into the harbor and Lake Ontario after big rainstorms.

The problem results from the way infrastructure in Toronto—and hundreds of other cities along the Great Lakes—was built decades ago. In older cities, a single system of pipes may transport sewage as well as stormwater runoff.

That system is sufficient on most days. Treatment plants can remove contaminants before discharging the water into the harbor or its tributaries.

But a big storm can overwhelm a treatment plant’s capacity. When that happens, water rushing through those pipes bypasses the plant and ends up, untreated, in the lake.

It’s unsightly—and unhealthy. The sewage can contaminate beaches with Escherichia coli (E. coli) bacteria. Unless the beaches close, swimmers can get infections and other health problems.

Not Just Toronto

That’s been an issue for years, and now climate change has brought a new urgency for cities like Toronto to take action.

Climate change is expected to bring more intense storms to the Great Lakes region. That means more raw sewage flowing into the lakes, and a greater likelihood that beaches will be closed due to bacterial contamination.

“Extreme weather events are increasing in Toronto. That’s not a controversial point,” said Krystyn Tully, co-founder and vice president of the Lake Ontario Waterkeeper environmental group.

She sees a vicious cycle triggered by climate change: As the number of hot days increases, Toronto residents—especially low-income families—will want to go into the lake to cool off. At the same time, the lake’s water is getting warmer, and more frequent storms are leading to increased pollution.

“It’s a perfect storm, no pun intended,” she said. “We want the water to be clean … so people aren’t worried about the health impact.”

How Bad Is the Problem Across the Great Lakes Region?

In just one year, 20 cities released 92 billion gallons of untreated sewage, mostly via sewage overflows, according to a report from the International Joint Commission (IJC), a U.S.-Canada group that helps to regulate the lakes.

That’s enough untreated sewage to fill nearly 150,000 Olympic-sized swimming pools.
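As a rough check of that comparison (assuming a standard Olympic pool volume of about 2,500 cubic meters, or roughly 660,000 U.S. gallons, a figure not given in the article):

$$\frac{92\times10^{9}\ \text{gallons}}{6.6\times10^{5}\ \text{gallons per pool}}\approx 1.4\times10^{5}\ \text{pools},$$

which is indeed on the order of 150,000 pools.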

It’s clearly a widespread problem—and one of the biggest sources of pollution in the Great Lakes region. Nearly 300 cities in the region have these antiquated sewer systems, the report notes. They include Milwaukee, Wis.; Cleveland, Ohio; Chicago; Buffalo, N.Y.; and Gary, Ind.

“In public meetings hosted by the IJC, the residents of Great Lakes communities voiced concern that extreme rainfall events are becoming more frequent, resulting in major discharges of untreated sewage,” says IJC spokesman Frank Bevacqua.

That’s a special challenge for Toronto, whose metro region population has increased from about 3 million in 1980 to more than 6.3 million today.

“The growth is crazy,” Tully says. “We don’t have infrastructure in place to deal with it.”

That’s changing. Toronto and other cities are addressing the sewage overflow problem in several ways. For example, they’re decoupling sewer lines and pipes that carry stormwater; in Toronto the combined systems are mainly in the oldest neighborhoods.

They’re also creating sections of “green infrastructure”—planted areas that will help slow and absorb stormwater runoff.

A 2017 Toronto report lays out technical guidelines for options such as planters, large swales, and rain gardens. These methods can reduce runoff and take pressure off the city’s stormwater drainage system, the report says.

But it adds a note of caution. The initiatives “will not contribute significantly to mitigating the impacts of extreme precipitation events. … [This is] one of many strategies that must be employed in order for the City to adapt to climate change and to build a more resilient Toronto for the future.”

The most dramatic changes are happening underground.

In December, the city marked the start of a $3 billion project that will include a series of tunnels, storage tanks, and other improvements to hold overflow stormwater and sewage until treatment plants can handle it. The city is also building a new high-capacity treatment plant.

At the kickoff event, Mayor John Tory said that previous administrations had planned to virtually eliminate sewage dumped into Lake Ontario by 2038. Tory added that his goal is to do it within 10 years, according to a Canadian Broadcasting Corp. report.

“It is a core responsibility that we have for our generation … but also for the next generation to stop dumping raw sewage into Lake Ontario and into the waterways that flow into it,” he said.

And the mayor drew a direct connection to the intense storms triggered by climate change. “We now know that in a post-climate change world, these storms unfortunately are a part of life that is going to happen for some time to come, and we just can’t allow the status quo to prevail.”

Developing the Vision

Meanwhile, the Lake Ontario Waterkeeper is continuing to help.

Using a federal grant, it is taking samples in a number of waterways to determine whether E. coli bacteria—a public health hazard—are present.

Toronto’s harbor along Lake Ontario will remain the largest monitoring hub, with five locations active from May to September. A core group of about 40 volunteers leads those sampling efforts.

The organization has purchased its own lab equipment to speed up analysis. Now it’s looking for ways to get results quicker than the standard 18-24 hours—and to develop models that can predict where bacteria will appear.

Tully says all of these potential solutions will help Toronto residents answer a crucial question: “How do we develop a vision you would expect from a city of this size?”

—Dave Rosenthal (@DaveRosenthal1), Great Lakes Now

Lauren Davis, Vivek Rao and Michael Skiles of Indiana University’s Arnolt Center for Investigative Reporting contributed research to this report.

This article is part of the Pulitzer Center’s nationwide Connected Coastlines reporting initiative. For more information, go to pulitzercenter.org/connected-coastlines-initiative.

This story originally appeared in Great Lakes Now. It is republished here as part of Eos’s partnership with Covering Climate Now, a global journalism collaboration committed to strengthening coverage of the climate story.

Methane’s Rising: What Can We Do to Bring It Down?

Mon, 05/04/2020 - 12:28

Methane emissions have increased dramatically over the past decade and a half, significantly contributing to climate warming. A recent article in Reviews of Geophysics examines how to measure methane emissions accurately from different sources, and explores various mitigation and emission reduction strategies. Here, one of the authors explains the causes of increased emissions, the imperative to address this problem, and what we might be able to do about it.

What are the main sources and sinks of atmospheric methane?

Methane comes from many sources. Roughly two-fifths of emissions come from natural sources, such as wetlands, and three-fifths from human activities, such as leaks from fossil fuel industries, ruminant farm animals, landfills, rice growing, and biomass burning.

Landfill site in Kuwait. Credit: D. Lowry, from Nisbet et al. [2020], Figure 3

The main sink for methane is destruction by hydroxyl (OH) in sunlit air, especially in the moist tropical air a few kilometers above the surface. Other, smaller sinks include chlorine in the air and destruction by bacteria in the soil.

Why has there been a sharp rise in atmospheric methane over the past few decades?

Methane emissions rose quickly in the 1980s as the natural gas industry was rapidly expanding, especially in the former Soviet Union. Then the growth rate slowed and the methane budget (the balance between emissions and their destruction) seemed to have reached equilibrium in the early years of this century. However, in 2007, unexpectedly, the amount of methane in the air started growing again, with very strong growth since 2014, much of it in tropical regions [Nisbet et al., 2019].

Simultaneously, there was a marked change in the isotopic composition of atmospheric methane. For two centuries, the proportion of carbon-13 (C-13) in the methane in the air had been growing, reflecting the input from fossil fuels and fires, which is relatively rich in C-13, but since 2007, the proportion of C-12 methane has risen [Nisbet et al., 2016].

There is no clear agreement on why this rise in methane began again in 2007, nor why it accelerated from 2014, nor why the carbon isotopes are shifting. One hypothesis is that biological sources of methane have increased; for example, population growth has increased farming in the tropics, and climate warming has made tropical wetlands both warmer and wetter. Another hypothesis is that the main sink has declined; if true, this would be profoundly worrying, as OH is the ‘policeman of the air’, cleaning up so many polluting chemical species. A third hypothesis is more complex, speculating that fires (which give off methane rich in C-13) have declined while other sources have risen. Of course, these hypotheses are not mutually exclusive, and all these processes could be happening at the same time.

Why is a focus on reducing methane emissions critical for addressing climate warming?

Methane is an extremely important greenhouse gas. In its own right, it is the second-most important human-caused climate warmer after carbon dioxide (CO2), but it also has many spin-off effects in the atmosphere that cause additional warming.

In the Fifth Assessment Report from the Intergovernmental Panel on Climate Change (IPCC) in 2013, warming from methane was assessed at about 0.5 watt per square meter (W m-2, a measure of radiative forcing) compared to the year 1750. That’s large, and when all its spin-off impacts are added, the warming impact of methane was around 1 W m-2 (IPCC 2013 report, Figure 8.17), which is significant when compared with the roughly 1.7 W m-2 of warming from CO2. Sadly, of course, both numbers have since increased considerably.

Methane’s atmospheric lifetime (the amount in the air divided by the annual destruction) is less than a decade. So, if methane emissions are quickly reduced, we will see a resulting reduction in climate warming from methane within the next few years. Over the longer term, CO2 is the key warming gas, but reducing that will take much longer, so cutting methane is an obvious first step while we try to redesign the world’s economy to cut CO2. It’s rather like a dentist giving a quick-acting pain reliever while making plans for a root canal procedure.
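To make that lifetime definition concrete with illustrative round numbers (the burden and sink values below are assumptions, not figures from the article): with an atmospheric methane burden on the order of 5,000 teragrams (Tg) and annual destruction on the order of 550 Tg per year,

$$\tau \approx \frac{\text{burden}}{\text{annual destruction}} \approx \frac{5000\ \text{Tg}}{550\ \text{Tg yr}^{-1}} \approx 9\ \text{years},$$

consistent with the “less than a decade” figure quoted above.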

What might be some of the easiest or most cost-effective ways to cut methane emissions from different sources?

Simple box model to show the potential impact of mitigation. The purple line approximates emission levels that would be compliant with the Paris Agreement. The blue line represents no change in emissions after 2020. The other lines show a 10% (orange line), 20% (green line), and 30% (red line) cut in emissions spread linearly over the period 2020–2055, followed by stable emissions. Credit: Paul Griffiths, in Nisbet et al. [2020], Figure 22, left panel

We need to identify the major human-caused sources that we can realistically change quickly.
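The “simple box model” described in the figure caption can be thought of as a single budget equation, dB/dt = E(t) − B/τ, where B is the atmospheric burden, E the emission rate, and τ the lifetime. The sketch below is a minimal illustration of that idea, not the authors’ code; the initial burden, emission rate, and 9-year lifetime are assumed round numbers.

```python
# Minimal one-box methane model: dB/dt = E(t) - B/tau
# Illustrative values only; not the model used in Nisbet et al. [2020].

TAU = 9.0     # assumed atmospheric lifetime, years
B0 = 5000.0   # assumed initial burden, Tg CH4
E0 = 560.0    # assumed initial emissions, Tg CH4 per year
DT = 0.1      # time step, years

def burden_trajectory(cut_fraction, t_start=2020, t_cut_end=2055, t_end=2100):
    """Integrate the box model for emissions cut linearly by `cut_fraction`
    between t_start and t_cut_end, then held constant afterward."""
    b, t, out = B0, t_start, []
    while t <= t_end:
        frac = min((t - t_start) / (t_cut_end - t_start), 1.0)
        emissions = E0 * (1.0 - cut_fraction * frac)
        b += DT * (emissions - b / TAU)   # forward Euler step
        out.append((round(t, 2), round(b, 1)))
        t += DT
    return out

if __name__ == "__main__":
    for cut in (0.0, 0.1, 0.2, 0.3):      # scenarios like those in the figure
        final_year, final_burden = burden_trajectory(cut)[-1]
        print(f"{int(cut * 100):2d}% cut -> burden in {int(final_year)}: {final_burden} Tg")
```

Running it shows each scenario’s burden relaxing toward a new equilibrium of roughly E × τ, the qualitative behavior behind the diverging curves in the figure.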

Some relating to the fossil fuel industry are easily identified and already subject to regulatory control in most producing nations, so it should not be difficult to monitor and achieve better behavior. For example, gas industry leaks represent lost profit, while deliberate methane venting in the oil industry is simply lazy design. Meanwhile, the coal industry is rapidly becoming uncompetitive with renewable electricity.

Tropical fires are a particular problem and cause terrible pollution. Many fires are either unnecessary (such as crop waste fires and stubble burning) or very damaging (such as human-lit savanna grassfires and forest fires) so there is a very strong argument for using both financial incentives and legislation to halt fires across the tropics, although in some places there are strong vested interests.

Landfills are another significant source. Although these are highly regulated in Europe and parts of the Americas, in megacities in the tropics there are many immense landfills, often unregulated and often on fire. Just putting a half-meter of soil on top would greatly cut emissions.

And what are some of the most challenging types of methane sources to address?

Changing food habits is perhaps the biggest challenge. Much methane is breathed out from ruminant animals such as cows, water buffalo, sheep, and goats. Across much of tropical Africa and India, cows tend to live in the open and their manure is rapidly oxidized, so it is not an especially large methane source. But in Europe, China, and the United States, cattle are often housed in barns with large anaerobic manure facilities that do produce methane. These manure lagoon emissions should be tackled.

We could, of course, all give up food from ruminants, and methane emissions would drop, but the gain would be countered by an increasing demand for crops. More intensive arable farming, especially in the tropics, would be needed and would likely be achieved by plowing up forests and savannas, which would increase CO2 emissions and also require increased use of nitrogen fertilizers.

Reducing meat and dairy consumption to only ‘organic’ grass-reared animals seems like a sensible first step for people in wealthier nations. But this needs to be seen in the context of broader issues in less developed nations. Population growth needs to be slowed if agricultural emissions are to be reduced: better schools, especially for girls, improved healthcare, and better pensions would reduce population growth and thus the burden on human food production. A focus on societal issues would ultimately address climate problems too.

Can we be optimistic that efforts to reduce methane emissions will help to meet the targets of the UNFCCC Paris Agreement?

If I’d been asked this question three months ago, I would have said “no”. Methane is rising much faster than anticipated in the scenarios that underlay the Paris Agreement. As I write, we are several months into the global COVID-19 epidemic, and it is almost as if nature itself has so tragically hit the pause button. I am one of many scientists trying to measure the impact of the lockdown on CO2 and methane emissions. As we try to rebuild and find our way through the post-epidemic recovery, there will be great changes, and perhaps in many countries a pause for thought, and a chance to choose a new way forward.

—Euan Nisbet (E.Nisbet@rhul.ac.uk), Department of Earth Sciences, Royal Holloway, University of London, UK

Studying Earth’s Double Electrical Heartbeat

Mon, 05/04/2020 - 12:24


Even on the fairest of days, without a single cloud in sight, an electric current flows from the sky to the ground. Driven by the difference in electrical potential between Earth’s surface and the ionosphere, it is a crucial component of the global electrical circuit (GEC), which connects many electrical processes in the atmosphere.

Lightning pumps charge into the atmosphere, as do galactic cosmic rays. Electrified clouds that don’t produce lightning shoulder a share of the burden equal to that of lightning. Dust, pollutants, and other particles in the lower troposphere also play a role in the global electrical circuit, as does the changing of the seasons.

“You’re looking at the total integrated effects of all the electrified weather across the globe,” said Michael Peterson, a staff scientist at Los Alamos National Laboratory in New Mexico who has studied the circuit with satellite lightning detectors. “People have described it as the electrical heartbeat of the planet.”

Researchers are paying more attention to that heartbeat these days. They are measuring the GEC in more detail, determining the roles of everything from layer clouds to the Sun’s magnetic cycle, and looking at incorporating the electrical circuit into global climate models. “Research on some questions was getting a bit stalled, but now we can use new technology, new methods, and new instruments to push it forward,” said R. Giles Harrison, a professor of atmospheric physics at the University of Reading in the United Kingdom.

An artist’s rendering of the complexity of the global electrical circuit. Credit: Jeffrey Forbes, University of Colorado Boulder

Direct Currents, Alternating Currents

Like the Time Lords of Doctor Who, Earth actually has two (electrical) heartbeats. A direct current (DC) circuit operates continuously across the entire planet, driven by everything from lightning to fair-weather currents. An alternating current (AC) circuit, on the other hand, is driven exclusively by lightning, which creates electromagnetic waves that circle the planet. Scientists are studying the relationship between the two circuits.

The GEC (DC version) was first proposed in 1920 by Scottish physicist C. T. R. Wilson, who later won the Nobel Prize for his invention of the cloud chamber. He suggested that Earth’s surface and the base of the ionosphere, a zone of ionized air at an altitude of 50–80 kilometers, formed the conductive shells of a spherical capacitor. The air served as a “leaky” insulator, allowing electric current to flow between the nested shells. Thunderstorms, Wilson wrote, served as the primary generator for this system. Electrified shower clouds, which maintain an electric charge but produce no lightning, also contributed to the circuit.

Wilson’s basic model of the DC circuit has been verified by observations over the past century, which have filled in some of the details of how it works.

“The conceptual framework is that you’re allowing the charge generated by disturbed-weather regions to flow around the planet and find its way back to the ground through fair-weather regions,” said Harrison. “The amount of charge is about the same in the fair-weather regions as in thunderstorms. A large part of the 20th century was spent working out that balance sheet. We have a saying that what comes down must have gone up. In other words, if we see current flowing down in fair-weather regions, there must have been a charge going up.”

Through lightning, sprites, jets, and other transient phenomena, thunderstorms cause electric currents to flow up and over clouds to the bottom of the ionosphere. Electrified shower clouds contribute an equal charge to that layer, which captures and distributes the charge around the globe, keeping the “battery” juiced up. (Thanks to those clouds, if thunderstorms suddenly disappeared, the strength of the DC circuit would be cut roughly in half but wouldn’t disappear completely.)

Under fair-weather conditions, the positive ionospheric charge filters back toward the negatively charged ground. The difference in the electrical potential between the ionosphere and the surface averages about 250 kilovolts, producing a downward flowing fair-weather electric field of about 100–300 volts per meter.

Thunderstorms transport negative charge from the cloud base to the ground through lightning strokes, charged rain, and other means, completing the circuit.

The total current flowing in the global circuit, and therefore the total reaching the surface, is about 1,800 amperes. The potential of the upper atmosphere is about 300 kilovolts compared with the surface. The total power in the global circuit is roughly 1 gigawatt—“the equivalent of a modest[-sized] biomass-burning power station at best,” said Harrison.
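As an order-of-magnitude check using only the numbers quoted above,

$$P = VI \approx (3\times10^{5}\ \text{V})\times(1.8\times10^{3}\ \text{A}) \approx 5\times10^{8}\ \text{W},$$

about half a gigawatt, the same order of magnitude as the “roughly 1 gigawatt” figure.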

“Measuring the Global Circuit Is a History of Failure”

Although the atmosphere is a relatively efficient insulator, it leaks because it contains clusters of ions. Some of the ions are created when molecules are zapped by galactic cosmic rays, particles accelerated to high speed in such energetic environments as supernova remnants or accretion disks around black holes. Near the surface, air is mostly ionized by radon created by the decay of radioactive elements in the crust. Other sources involve dust particles, atmospheric pollutants, or other aerosols that carry their own electric charge.

Those contaminants make it hard to measure the GEC, especially over land. “The history of measuring the global circuit is a history of failure,” said Earle Williams, a research scientist at the Massachusetts Institute of Technology. “You have to be in clean air. It can’t be contaminated by pollution or changes in air mass. If only we could get a Radio Shack meter and put one probe in the upper atmosphere and one on Earth’s surface and monitor it continuously. Unfortunately, that’s too complicated.”

A map compiled by detectors on orbiting satellites shows the three global lightning chimneys: over the Americas, Africa, and the Maritime Continent. Credit: NASA

The best measurements are made from the oceans, where the air is relatively clean. In fact, much of the early evidence for the global circuit was compiled by the R/V Carnegie, operated by the Carnegie Institution of Washington, which measured the global electric field during a series of cruises from 1915 to 1929. (The vessel was destroyed in a fire in 1929.)

Its observations revealed that the global DC circuit does not exhibit a single constant value. Instead, it waxes and wanes over a 24-hour cycle. Known as the Carnegie curve, the cycle, when averaged over a period of years, peaks at around 19:00 coordinated universal time (UTC) and bottoms out at around 03:00 UTC, regardless of where on Earth it’s measured.

That cycle reflects the peak of global thunderstorm activity, which feeds three electrical “chimneys”: over the Americas, Africa, and the Maritime Continent (an expanse of islands and seas at the intersection of the Indian and Pacific Oceans, from Southeast Asia to Australia). Most thunderstorms take place over land, where solar heating creates convection and rising air currents that drive cloud formation. The Americas have the most active thunderstorm seasons, so they dominate the Carnegie curve.

Africa, however, appears to dominate the AC circuit, which is driven only by lightning. The lightning discharges produce low-frequency radio waves that race around the planet, guided by the cavity formed by the charged ionosphere and the surface.

An artist’s rendering shows a simplified diagram of Schumann resonances. Credit: NASA

The waves combine and amplify each other, producing an electromagnetic effect known as a Schumann resonance. The primary resonance is at a frequency of about 8 hertz—eight trips around the planet per second. “It’s like sitting inside a ringing bell,” said Harrison. The resonance is maintained by the combined effect of all the lightning flashes on Earth—between 40 and 50 per second.
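The “eight trips around the planet per second” figure follows directly from the travel time of light around Earth (circumference about 4 × 10^7 meters):

$$f \approx \frac{c}{2\pi R_E} \approx \frac{3\times10^{8}\ \text{m s}^{-1}}{4\times10^{7}\ \text{m}} \approx 7.5\ \text{Hz},$$

close to the observed fundamental near 8 hertz.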

“There’s a paradox involving the DC and AC circuits,” said Williams. “America wins the DC circuit, which always peaks [at] around 19:00 UT. But if you look at global lightning, Africa wins, with a peak [at] around 14:00 UT. We’re trying to get a handle on the reason for that paradox.”

Shifting Baselines

That requires more extensive observations of global lightning and the Carnegie curve, one of the main challenges facing those who study the GEC. “Monitoring long-term trends in lightning is difficult,” said Keri Nicoll, an associate professor at the University of Reading and the University of Bath. “Lightning monitoring networks are constantly being updated to provide more and better measurements. This means that the baseline is constantly shifting.”

To help researchers assess lightning’s role in the global circuit, Nicoll and her colleagues established a database of observations from 19 lightning networks, primarily in Europe but stretching from India to Argentina to Antarctica. Although project funding for the Global Coordination of Atmospheric Electricity Measurements has ended, the database is still available to researchers and continues to accumulate observations from several networks. Williams and his colleagues plan to conduct their own observations to address the problem of the global circuit paradox.

An instrumented aircraft will fly off the East Coast of the United States (near New England for part of the year and off Florida during the winter) 1 day per month, making two trips per day—one in the morning, to measure the atmospheric electrical potential during the peak in lightning activity in Africa, and one in the afternoon, during the lightning peak in the Americas. (Even though they’re made in a single geographic location, the measurements represent electrical activity over the whole planet.)

The aircraft will make four sets of measurements during each flight, with each set starting at the aircraft’s peak altitude and then dropping toward the ocean surface. Scientists will compare the observations to lightning data compiled by surface networks and by sensors on board orbiting satellites.

The data should help scientists choose between two possible explanations for the differences in the American and African chimneys. “One is that there are more electrified shower clouds in the Americas than in Africa, and they’re boosting the DC circuit without affecting the AC circuit,” Williams said. “The other is that Africa might get a boost from more aerosols, which move condensates from the lower atmosphere up to the lightning-producing region,” enhancing thunderstorm formation.

Researchers are using observations old and new, often made with balloons or drones, to address several questions related to the global circuit.

“One area of research is looking at different types of storm structures,” said Los Alamos’s Peterson. “When you think of the global circuit, most people think of convection, which is the major driver of lightning. But the clouds outside the thunderstorm core are important because they become electrically active and they have a different charge structure. For example, stratiform clouds, which form behind massive lines of thunderstorms, often have inverted polarity structures (compared with those of thunderclouds, where positive charge regularly resides above negative), which can either charge or discharge the global circuit. Where do these clouds occur, how often do they occur, [and] what kinds of charges do they actually produce? There are still a lot of questions about that.”

There are also questions about how climate change will affect the global circuit and whether changes in the circuit might, in turn, alter climate in any way.

“The role of the global circuit in climate change is a standard essay question,” said Harrison with a bit of a chuckle. “And there are a wide range of answers. But if there’s an increase in thunderstorms because of higher temperatures, then we’d certainly expect an increase in both the AC and DC circuits.”

Higher air temperatures increase evaporation, providing more water vapor to fuel thunderstorms. More and bigger thunderstorms would produce more lightning, which could alter the intensity of the global circuit, modify the timing of the Carnegie curve, or cause other changes. Increased monitoring of the GEC would allow scientists to note those changes and use the circuit parameters as indicators of a changing climate.

Williams and his colleagues are using “thunder day” observations—the number of days that thunder was recorded at meteorological stations around the world—since the late 1800s as proxies for lightning observations or measurements of the global circuit.

Little work has been dedicated to modeling climate feedbacks from the GEC. Researchers are now looking at some aspects of that feedback, such as whether the vertical current flow in the circuit might change cloud formation or structure.

“The other difficulty is how best to model the GEC and to incorporate this with climate models,” said Nicoll. “For this we need to know how to predict GEC parameters such as the global charging current, ionospheric potential, and fair-weather conduction current from a climate model. … This is an ongoing area of research and one which shows great promise for the future.”

Such models could tell us much more about our changing climate and about our planet’s double electrical heartbeat.

—Damond Benningfield, Science Writer

This Week: Antique Climate Science and Brand-New Broken Comets

Fri, 05/01/2020 - 18:27

Overlooked No More: Eunice Foote, Climate Scientist Lost to History. In 1856, Eunice Foote, an amateur scientist with an interest in the atmosphere and a variety of other topics, published a paper in which she noted (on the basis of an experiment she’d conducted) that an atmosphere containing more carbon dioxide warms more than an atmosphere with less of the gas in it. It was the first known instance of someone capturing the essence of the greenhouse effect in print. Long overlooked in the history of science, Foote and her contribution to climate science have been gaining well-deserved attention in recent years, including in a short film and in this obituary.

—Timothy Oleson, Science Editor

 

Comet ATLAS Broke Apart.

Like exploding aerial fireworks shells, comet ATLAS is breaking apart into more than 30 pieces, each roughly the size of a house. Hubble captured detailed images of the breakup last week: https://t.co/PYcgDD64hA pic.twitter.com/hV2n2OrVnY

— Hubble (@NASAHubble) April 28, 2020

COVID-19 may have closed down most professional ground-based telescopes, but amateur astronomy is still going strong. Many backyard astronomers were eagerly awaiting the arrival of comet ATLAS (C/2019 Y4), inbound toward the Sun from the outer solar system. ATLAS was growing brighter as it neared Earth and was predicted to become visible to the naked eye in May. But the comet became dimmer instead. Scientists looking with the Hubble Space Telescope discovered that the comet had broken apart into more than 30 house-sized pieces. Sad times! The pieces won’t be visible after all, but the science is still interesting.

—Kimberly Cartier, Staff Writer

 

Lightning flashes during a tornadic storm in Oklahoma. Credit: Media Drum World/Alamy Stock Photo

Lightning Research Flashes Forward. You know that feeling when you’re sort of horrified but also fascinated? That’s how I felt when I read about lightning that can spawn from the ground upward, called an upward streamer, like some sci-fi movie special effect from the 1980s. I really enjoyed this tour of lightning science featured in our May issue of Eos (even though it gave me some chills reading it!).

—Jenessa Duncombe, Staff Writer

 

Seattle’s Leaders Let Scientists Take the Lead. New York’s Did Not. While recognizing that Seattle and New York face different challenges and have different populations, this is still an utterly compelling look at public health, science communication, and the leadership of elected officials. The typically lengthy New Yorker feature gives each of those topics room to breathe and interact and is well worth the read.

—Caryl-Sue, Managing Editor

Tracing the Past Through Layers of Sediment

Fri, 05/01/2020 - 12:26

The stratigraphic record—layers of sediment, some of which are exposed at Earth’s surface—traces the planet’s history, preserving clues that tell of past climates, ocean conditions, mountain building, and more. As Rachel Carson once wrote, “The sediments are a sort of epic poem of the earth.”

Yet interpreting how these sedimentary layers document Earth’s past is complex and challenging. In a recently published study, Straub et al. identify three obstacles standing in the way of accurate stratigraphic interpretations and outline the grand challenges facing geologists trying to read the clues.

First, Earth’s surface responds dynamically to the forces shaping it (e.g., climate, tectonics, and land cover change). Yet the environmental signals, or markers, of such change are often buffered and dampened by the movement of sediment, which diminishes the signals’ detectability in sedimentary deposits. Second, surface conditions are recorded only when and where sediment accumulates; environmental conditions that do not coincide with this deposition will be absent in the recorded history of Earth. Last, environmental clues may be missing in rock layers because of the storage and later release of sediments in landforms like river bars and floodplains. This process, called signal shredding, destroys some sediment signals left by external events like storms and earthquakes.

In the review, the authors explore these impediments in depth, examining numerical, experimental, and field findings behind each. For example, when evaluating how signals are buffered as they move through landscapes, the authors dig into the diffusion equation. The equation describes how a property is conserved in one dimension and flows down a gradient, for instance, how heat disperses through a medium. In a sedimentary context, the equation helps model the formation of alluvial fans and other topographic features.
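In its simplest one-dimensional form (a standard textbook formulation, not an equation reproduced from the paper), the diffusion equation for a conserved quantity such as topographic elevation h(x, t) is

$$\frac{\partial h}{\partial t} = \kappa \frac{\partial^{2} h}{\partial x^{2}},$$

where κ is a transport coefficient (diffusivity) and the flux down the gradient is q = −κ ∂h/∂x. In a sedimentary context, this form describes how topography relaxes toward smooth profiles, such as those of alluvial fans.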

As discussed in the study, four grand challenges confront geologists today as they try to improve interpretations of the stratigraphic record. These include the following:

1. Defining the causes of landscape stochasticity across environments
2. Increasing collaboration between research communities studying surface processes and stratigraphy
3. Embracing hypothesis testing and quantifying uncertainty in stratigraphic interpretations
4. Teaching both quantitative theory and field applications to the next generation of stratigraphers

Improving stratigraphic interpretation, the authors argue, is key to unlocking quantitative information about the past that will improve forecasts of the future. Their exhaustive review charts a path forward for using the stratigraphic record to answer basic and applied science questions. (Journal of Geophysical Research: Earth Surface, https://doi.org/10.1029/2019JF005079, 2020)

—Aaron Sidder, Freelance Writer

Geoscientists Help Map the Pandemic

Fri, 05/01/2020 - 12:17

In the shadow of coronavirus disease 2019 (COVID-19), it can be difficult to manage research programs. Field seasons have been postponed, lab meetings have taken to online video calls, and some geoscientists have seen their productivity grind down to plate tectonics speeds while the pandemic rages on.

But not all geoscientists.

Babak Fard, an environmental data scientist at the University of Nebraska Medical Center (UNMC) College of Public Health in Omaha, has leveraged his interdisciplinary background to track and predict COVID-19 infection risks to Nebraskans.

He and his colleagues created a dashboard tool that can help responders visualize where outbreaks are trending or where they may spike in the future. The tool is helping healthcare providers and public policy wonks get supplies and resources to the areas of Nebraska that need them most.

Using Geoscience for Human Health Issues

While a doctoral student at Northeastern University in Boston, Fard mapped the risk of heat waves to residents of Brookline, Mass., using a framework tool. The project was part of AGU’s Thriving Earth Exchange, in which scientists work on a problem that advances community solutions.

“We wanted to look at how these extreme temperatures affect public health,” said Fard, adding that the issue has become a global concern. The team identified the hazard (heat waves) and vulnerabilities that can lead to adverse reactions to the hazard. Using these data, they created a regional map of communities with the highest risks of detrimental outcomes associated with heat waves.

Vulnerabilities are a set of social factors that play important roles in how people react to hazards, said Fard. “For example, age is a very important factor in [heat waves],” he noted, adding that different studies show that nonwhite and minority groups are more vulnerable as well.

The team used data on vulnerabilities to identify populations at the highest risk using something called a risk framework. The more vulnerabilities a person has—age, minority status, reliance on public transportation—the higher the risk is. “One purpose of the risk framework is to enable the decision-makers to prioritize their resources to different areas that need attention during a crisis,” said Fard, adding that with limited budgets and supplies, this information is crucial for prioritizing responses.

In his new position at UNMC, Fard used the bones of the risk framework his team built for heat waves for a new purpose: predicting coronavirus risks. This time, the hazard was not a heat wave, but COVID-19.

“The Centers for Disease Control and Prevention (CDC) identifies 15 sociodemographic variables to calculate social vulnerabilities,” said Fard, noting that the data are from the U.S. census. He explained that these factors can be grouped into four categories: socioeconomics, household composition and disability, minority status, and housing and transportation. Each category gets a value, and the values are averaged to represent the risk of COVID-19 infection to the population within a geopolitical boundary—in this case, a county.
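Fard’s description boils down to averaging four category scores per county. A minimal sketch of that step (hypothetical category names and made-up values; not the UNMC dashboard’s code or the CDC’s published index) might look like this:

```python
# Toy illustration of averaging vulnerability-category scores for one county.
# Category names and example values are hypothetical, for illustration only.

from statistics import mean

CATEGORIES = (
    "socioeconomic",
    "household_composition_disability",
    "minority_status",
    "housing_transportation",
)

def county_risk_score(category_scores):
    """Average the four category scores (assumed scaled 0-1)
    into a single relative risk value for a county."""
    return mean(category_scores[name] for name in CATEGORIES)

# Example usage with made-up numbers for one county:
example = {
    "socioeconomic": 0.62,
    "household_composition_disability": 0.45,
    "minority_status": 0.30,
    "housing_transportation": 0.51,
}
print(round(county_risk_score(example), 2))  # -> 0.47
```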

Mapping a Pandemic

And the information is all easy to read on a map. The dashboard has been highly successful with users inside the state and in neighboring states alike. Fard noted that over the past month, the tool has averaged more than 2,200 views per day.

The map can reveal insights into disease spread, showing patterns and predicting virus hot spots. These data allow health professionals and government agencies to plan ahead—something Fard called adaptive capacity. “It’s any measure that can help in reducing the vulnerability,” he said. Adaptive capacity can include anything from increasing the number of beds in intensive care units to addressing transportation issues.

These maps might be a crucial tool for pandemic responders, said Kacey Ernst, an epidemiologist and program director of epidemiology at the University of Arizona who was not involved with the research. “We might want to enhance our level of testing to catch more cases [in a certain area] or put up a testing center if there’s an area where people would have to take the bus or public transport when they’re ill to get tested,” she said.

“I was impressed that [Fard] was looking at a multitude of underlying factors that might influence what the numbers would say,” said Ernst. She added that she was particularly impressed with the hospital data they included. “I appreciated the fact that he didn’t just put up the case numbers—that he was trying to delve into [it] a little more deeply.”

Ernst said it’s important to look beyond the number of cases and into why the cases are there. “It’s absolutely critical to really understand the underlying population and how that might influence what you see, in terms of both differences in how diseases are reported and in how testing is being conducted.”

The Power of Interdisciplinary Research

The project is a perfect example of how geoscientists can think and apply their skills outside the traditional bounds of their research. “As geoscientists, we know how to work with maps and do geospatial analyses,” said Fard, adding that medical geologists can go one step further and study the effect of geological factors on health. He noted geospatial skills can add a lot of value for crisis responders who need a visual picture of where to focus.

Ernst agreed and said it is imperative, especially during a pandemic, for scientists to look critically at every data source and try to understand its limitations and caveats. “Many geoscientists do sort of broader scales, spatial scales,” she said, adding that often, geoscientists “get that blessing and curse of spotty data and you have to learn how to figure out what it actually means and what you can do with it.”

In the increasingly connected world, interdisciplinary research like Fard’s may become the norm, not the exception. For Ernst, this is already the case. “I am a strong proponent of interdisciplinary research teams—that’s pretty much how I do all my work,” she said. “It makes the research really strong when you have teams that are diverse and able to look at data from different angles.”

Fard said that the framework tool is part of the larger Nebraska Emergency Preparedness and Response effort. And although it is currently being used for COVID-19, “this framework is going to continue to be beneficial in other situations that might come up in the future,” such as natural hazards like floods.

The framework provides mayors, hospitals, and relief workers with information for planning and disaster response. Fard said seeing the success of the coronavirus framework will hopefully “inspire other organizations to use it for their purposes.”

—Sarah Derouin (@Sarah_Derouin), Science Writer

Ultrahigh Speed Movies Catch Growing Earthquake Ruptures

Fri, 05/01/2020 - 11:30

Once initiated, an earthquake rupture usually propagates along the fault plane at a speed somewhat less than the seismic shear wave velocity VS of the surrounding earth. Less frequently, ruptures propagating at speeds exceeding VS (but still less than the compressional velocity VP) have been observed in large strike-slip earthquakes, such as the highly destructive 1906 San Francisco earthquake.

As such, it is important to understand the factors influencing these sub-shear and super-shear propagation modes, with the ultimate goal of better assessing the conditions that lead to the most destructive strike-slip earthquakes. Because this process remains poorly understood, laboratory analog tests can provide insight into how such fractures grow during an event.

Rubino et al. [2020] employ modern ultrahigh-speed videography operating at up to 2 million frames per second to capture the propagation of ruptures whose fronts move as fast as 2.28 kilometers per second. The ruptures propagate along a preexisting “fault” with a controlled surface roughness between two pieces of transparent acrylic glass. The assembly is loaded uniaxially to resolve normal and shear stresses onto the fault plane, the magnitudes of which depend on the orientation of the fault with respect to the loading. Ruptures are initiated by a small pulse produced by the electrical resistive failure of a wire embedded in the fault, which also provides a time reference for controlling the video capture.
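Putting those two numbers together shows why such extreme frame rates are needed: a rupture front traveling 2.28 kilometers per second advances only about

$$\frac{2280\ \text{m s}^{-1}}{2\times10^{6}\ \text{frames s}^{-1}} \approx 1.1\ \text{mm}$$

between successive frames, small enough to track the front’s position from image to image.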

Digital image correlation applied to successive frames within a given video allowed the researchers to map particle velocity fields around the rupture both spatially and temporally. The results demonstrate differences in the near- and far-field wave propagation between the different rupture modes that agree well with theoretical predictions.

Citation: Rubino, V., Rosakis, A. J., & Lapusta, N. [2020]. Spatiotemporal properties of sub‐Rayleigh and supershear ruptures inferred from full‐field dynamic imaging of laboratory experiments. Journal of Geophysical Research: Solid Earth, 125, e2019JB018922. https://doi.org/10.1029/2019JB018922

—Douglas R. Schmitt, Editor, JGR: Solid Earth

A Nuclear War Could Generate a “Nuclear Niño”

Thu, 04/30/2020 - 11:44

This is an authorized translation of an Eos article.

A nuclear war could bring much more than global famine, radioactive fallout, and an unending winter: It could also unleash the longest and most intense El Niño the world has ever seen. New research presented at the Ocean Sciences Meeting in San Diego in March suggested that the global cooling produced by a nuclear conflict could disrupt normal atmospheric circulation, leading to a persistent, severe El Niño–like response in the Pacific Ocean.

“Hopefully it never happens,” said coauthor Samantha Stevenson, an assistant professor at the University of California, Santa Barbara. “The idea behind the study is just to show how catastrophic it would be.”

“A Pretty Big Hammer”

The researchers tested how the atmosphere and ocean would react to a nuclear war using a climate model from the National Center for Atmospheric Research. Although the climate model has no nuclear war setting, the researchers added varying amounts of black carbon over the course of a week to simulate different nuclear war scenarios. (Past research has suggested that massive explosions and burning cities could inject fine ash and soot particles into the upper atmosphere, where they could remain for years and cool the Earth.)

Every nuclear war scenario they tested triggered cascading changes in the world’s largest ocean basin: the Pacific. The trade winds, which have long helped sailors navigate the open seas, could reverse direction. The height of the sea surface on the two sides of the Pacific Ocean would be reconfigured, bringing somewhat more water to the coasts of South America than to Australia, a complete reversal of the ocean’s current state. Without the normal shape of the sea surface, upwelling along the equator would stop, cutting off the supply of nutrient-rich water that marine life depends on. In the most severe case, downwelling would begin along the equator, a total reversal of the ocean circulation.

“This is like hitting the climate system with a pretty big hammer, so we anticipated some reaction,” Stevenson said. “But seeing it was quite shocking […] it went far beyond what I thought would be possible.”

The changes in the ocean and atmosphere look a lot like El Niño, Stevenson said. The index commonly used to evaluate the El Niño–Southern Oscillation cycle, the Southern Oscillation Index, swings five standard deviations from the mean in the most severe cases. “We’re calling it a ‘nuclear Niño’ because it looks a lot like that phenomenon, but it lasts 7 to 8 years,” Stevenson said.

The severity of the response depends only on how much humanity decides to detonate. If the United States and Russia, for example, unleashed their massive arsenals, the resulting El Niño could last 7 to 8 years. In less destructive cases, such as a conflict between India and Pakistan (where the arsenals are smaller in number and the payloads are lower), the resulting El Niño could last roughly 1 to 5 years, depending on how many weapons are used.

Author and Rutgers University graduate student Joshua Coupe said that future work will examine the physical mechanisms that influence the “nuclear Niño.” The qualitative answer so far points to air circulation over Southeast Asia as the culprit. Normally, large rising currents of water near Malaysia, Papua New Guinea, and the Philippines drive an atmospheric loop over the Pacific called the Walker Circulation, but a nuclear winter would shut it down. “It’s the main mechanism we see at play,” said Coupe. “In the future we would like to run sensitivity tests.”

The Nuclear Niño

“I’ve never seen research like this before,” said oceanographer Christopher Wolfe of Stony Brook University in Stony Brook, New York, who was not involved in the project. “I think this will inspire more work on the climatological consequences of large-scale conflict.”

Shutting down upwelling in the equatorial Pacific “is kind of stunning, because that’s the basic state of the ocean,” he said. “We should be worried about what the changes in the ocean will do to people.”

The latest study also tests the impact of a nuclear war on the base of the marine food web, the photosynthetic phytoplankton of the tropical Pacific. “Based on the results, it seems that if this enormous nuclear war happened, it would be very hard to produce food in the ocean,” said Coupe. Less light during a nuclear winter means that less phytoplankton can grow, reducing its carbon uptake by as much as a third in the most severe cases. “The numbers we’re seeing are quite serious.”

The latest research was funded by a grant to Rutgers University from the Open Philanthropy Project, a foundation started by Facebook cofounder Dustin Moskovitz, to study the ecological and social costs of nuclear war. “I hope that by providing more detail about what could happen to the climate, it helps to deter the use of these weapons,” said Coupe.

—Jenessa Duncombe (@jrdscience), Staff Writer

This translation was made possible by a partnership with Planeteando. Translation by Rebeca Lomelí and editing by Alejandra Ramírez de los Santos.

Messages in the Bubbles

Thu, 04/30/2020 - 11:44

Volcanic gases are the engine of volcanic eruptions. They determine whether volcanoes erupt effusively or explosively, and they can also affect volcanic plumbing systems during periods of quiescence [Girona et al., 2015]. During such quiet periods, passive (noneruptive) volcanic degassing can release substantial amounts of carbon dioxide (CO2) into the air [Carn et al., 2016], and these emissions can influence atmospheric and oceanic CO2 levels over a much wider area [Aiuppa et al., 2019].

Directly observing CO2 emissions from subaerial volcanoes (those that are not submerged in water) is technically challenging because of the high CO2 levels already present in the atmosphere, but detecting CO2 emitted by volcanoes submerged in water using acoustic instruments is relatively straightforward.

Lake bed CO2 seeps were the topic of an unusually focused experiment involving a large group of scientists from various disciplines that took place last August at Laacher See, a scenic lake in the Eifel region of Germany. The 2-kilometer-wide lake, surrounded by trees and hosting a peaceful 900-year-old Benedictine abbey on its shore, sits in the caldera of a volcano thought to have last erupted at the end of the Pleistocene. Our multinational team of 13 volcanologists, sedimentologists, oceanographers, and hydrogeophysicists set out to look for evidence of connections between these gas seeps, the surrounding sedimentary structures, and volcanic activity. We sailed for 3 days, with as many as four boats at a time, on a lake surface that gave no indication of the large magma reservoir lying below the caldera. During our fieldwork, we were disturbed only by fishermen and tourists on pedal boats.

From Erupting to Bubbling

Gas seeps near submerged volcanoes in the ocean or in lakes can provide information that is not available for subaerial volcanoes. The lake bed of Laacher See contains numerous such gas seeps, making it an attractive candidate for our study.

Laacher See volcano produced a Plinian eruption about 12,900 years ago, spewing ash as far away as Greece. It erupted 20 cubic kilometers of the rocky fragments known as tephra, a dense-rock equivalent of 6.3 cubic kilometers of magma [Schmincke et al., 1999], making it similar in magnitude to the Pinatubo eruption of 1991. After this large-scale eruption, a lake formed in the caldera, and the volcano has since been quiet. In recent years, however, significant plumes of gas bubbles have been observed in the lake [Goepel et al., 2015], and high concentrations of dissolved CO2 have been measured.

Although we did not study bubble compositions in the present work, according to recent geochemical analyses, the CO2 contained within the lake appears to be of magmatic origin, and the CO2 content of the soil gas ranges between atmospheric values (0.03%) and 100% [Gal et al., 2011; Giggenbach et al., 1991]. A recent study documented deep and low-frequency earthquakes, inferred to have been caused by magma movements beneath the volcano [Hensch et al., 2019], although there are presently no signs of impending volcanic activity.

Sounds, Sediments, and Stratification

We divided our group of 13 scientists into several teams to study various aspects of the lake and its gas seeps. Two teams focused on the bottom of Laacher See and the degassing occurring in the water column, using iXblue Seapix 3-D and Norbit multibeam echo sounders to characterize bubble flares. Although echo sounders are typically used to map the contours of the underwater terrain [Morgan et al., 2003], they can also map gas bubbles in water. Acoustic waves emitted by a sounder reflect off gas bubbles and are recorded in water column data as high acoustic backscatter values. This backscatter provides a means to track bubbles during their ascent [Greinert et al., 2010] and to quantify the gas flux involved [Ostrovsky et al., 2008].

Team member Zakaria Ghazoui checks a hydrophone unit suspended beneath a raft in Laacher See. Credit: Corentin Caudron

Four scientific teams also investigated whether links exist between spots where degassing occurs and specific sedimentological structures in the lake bed, such as faults, small depressions, and pockmarks. Along with echo sounder profiles, we acquired seismic reflection profiles using the Innomar parametric echo sounder and the iXblue Echoes 10000 Chirp subbottom profiler to image the upper 35 meters of sedimentary infill in the caldera lake with a vertical resolution of about 8 centimeters. Additionally, teams deployed a remotely operated vehicle to provide visual observations from the lake bottom. Preliminary results from these efforts show clear relations between locations of degassing and lake bottom morphologies, such as possible pockmarks and faults, providing information on how these morphologies influence volcanic degassing in this area.

One team investigated new tools that, like seismic and infrasound instruments deployed on Earth’s surface, could become part of volcano monitoring efforts in aqueous environments. For example, we deployed a hydrophone—essentially a microphone immersed in the water—for several hours to record noise created by the bubbles released at the lake floor. Similar to observations of bubbles in other volcanic lakes [Vandemeulebrouck et al., 2000], the Laacher See gas bubbles radiated energy below 5 kilohertz; this consistency suggests the technique’s potential use for monitoring in a variety of submerged volcano environments.
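That sub-5-kilohertz band is consistent with the classic Minnaert resonance of small gas bubbles (a standard acoustics relation, offered here as context rather than as an analysis of the Laacher See data): a bubble of radius a near the surface rings at approximately

$$f_{0} = \frac{1}{2\pi a}\sqrt{\frac{3\gamma P_{0}}{\rho}} \approx \frac{3\ \text{m s}^{-1}}{a},$$

so a millimeter-scale bubble radiates near 3 kilohertz, and larger bubbles ring at lower frequencies.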

Another team looked for new ways to detect impending limnic eruptions—a hazard typically associated with volcanic lakes. These sudden gas eruptions, which are not necessarily volcanic in origin, can occur as a consequence of lake water stratification, in which a sequence of stable layers of water that grow progressively cooler from the surface to the lake bed remain unmixed. CO2 seeping into cool, pressurized water near a lake bed is quickly dissolved, like carbonation in a chilled bottle of seltzer water, and can build up to high levels. If a disturbance such as a landslide stirs the water and disturbs the stratification (like shaking a sealed bottle of seltzer water), the CO2 can bubble up to the surface and erupt to form an oxygen-poor atmospheric layer that can suffocate people and animals. Limnic eruptions have caused substantial casualties in the past—a 1986 eruption of CO2 at Lake Nyos in Cameroon claimed some 1,700 victims [Kling et al., 1987].

An instrument raft floats on the placid surface of Laacher See in August 2019. Credit: Corentin Caudron

This team tested the potential of electrical resistivity tomography and of transient electromagnetic methods to detect thermal lake stratification from the lake surface. Preliminary results show that these nonintrusive techniques unambiguously detected lake stratification in Laacher See. Thus, these methods hold great promise for future continuous monitoring efforts in lakes where limnic eruptions could endanger surrounding communities.

Understanding the intrinsic and spatial dynamics of volcanic gas bubbles is a cornerstone of efforts to develop a new generation of early-warning systems needed for volcanic risk assessment. The 2019 Laacher See fieldwork provided a snapshot of underwater volcanic degassing, its relation to sedimentary structures, and its potential for use in monitoring degassing and thermal stratification. Analysis of the data collected and future experiments will inform decisions about the role of multiproxy hydroacoustic data in ongoing global scientific efforts to increase the predictability of volcanic and limnic eruptions.

Acknowledgments

The scientists on this expedition represented the Institut des Sciences de la Terre (France); the iXblue company, Flanders Marine Institute, and Ghent University (Belgium); the Technical University of Vienna (Austria); and GFZ Helmholtz Centre Potsdam and the State Office for Mining, Energy and Geology (LBEG; Germany). We warmly acknowledge Thomas Vandorpe, Robin Houthoofdt, Koen De Rycker, Anouk Verwimp, Philipp Högenauer, and Johannes Hoppenbrock, who helped in the field and during data processing.

Imaging Seismic Sources

Thu, 04/30/2020 - 11:43

Our planet is continually being affected by natural and human-induced seismic events causing shaking, rupturing, and cracking at Earth’s surface. These seismic waves are detected and recorded by different kinds of instruments, which reveal information about their spatial and temporal origin. A recent article in Reviews of Geophysics explores the potential of waveform‐based methods to improve the characterizing and understanding of seismic sources. Here, two of the authors outline developments in methodologies and remaining challenges in this field.

What is seismic source location and why is this information important?

The seismic source location specifies the spatial and temporal origin of seismic perturbations observed at Earth’s surface; it also refers to the procedure of locating a seismic event.

Exact knowledge of the location of a seismic source not only permits the assessment of risk, but also provides crucial information on the dynamic processes at work and the structures related to them.

Also, it forms the foundation for subsequent seismological analysis and characterization, such as the estimation of magnitudes or the imaging of Earth’s interior structure.

How have seismic location methods developed over recent decades?

Active research in seismic location methods began in the nineteenth century. At first, this research considered drastically simplified graphical and geometrical representations of wave propagation like triangulation.

With the increasing use of computers since the 1960s, linearized inversion schemes like the Geiger method made use of onset traveltimes to estimate the location of an earthquake.

With the installation of the first world-spanning seismograph networks, the following decades saw sophisticated extensions of these purely traveltime-based methods. These advances increased precision by linking different measurements and provided joint estimates of the subsurface heterogeneity to which the seismic waves were exposed.

Fueled by rapid improvements in instrumentation and computation, location schemes have, from about 2000 to the present day, made increasing use of the full waveform information contained in recorded seismograms, which has led to an exciting technological exchange with the fields of optical imaging and data science.

Historical evolution of seismic source location methods, with the most significant conceptual cornerstones highlighted. Red colors indicate the direct involvement of waveform information in the location process. Credit: Li et al. [2020], Figure 2

What are the advantages of modern waveform‐based seismic source location methods?

With the availability of dense seismograph networks, modern waveform-based techniques utilize the similarity of waveforms recorded at different locations to automatically extract directional information and enhance weak signals. Credit: Andrew Adams (CC BY-SA 2.0)

By accounting for amplitude and phase information, as in optics, recorded seismic wavefields can be back-propagated and refocused in the subsurface, thereby revealing the earthquake’s location as well as its radiation characteristics and overall strength.

Much as in a laser, the systematic analysis of the consistency of waveforms across a dense seismic network enables the data-driven estimation of directional information and the amplification of weak signals that would otherwise drown in background noise.
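
To make the stacking idea concrete, here is a minimal delay-and-stack backprojection sketch on synthetic data. It is a generic illustration of the principle, not the specific methods reviewed by Li et al. [2020]: traces are shifted by the traveltimes predicted for each candidate source location and summed, so coherent energy interferes constructively only near the true source.

```python
import numpy as np

# Delay-and-stack backprojection on a synthetic line of stations (illustrative).
rng = np.random.default_rng(0)
v, dt, nt = 3.0, 0.01, 2000                 # wave speed (km/s), sampling (s), samples
stations = np.column_stack([np.linspace(0.0, 50.0, 11), np.zeros(11)])  # surface, km
true_src, t0 = np.array([22.0, 15.0]), 2.0  # source position (km) and origin time (s)

times = np.arange(nt) * dt
traces = np.zeros((len(stations), nt))
for i, sta in enumerate(stations):
    arrival = t0 + np.linalg.norm(sta - true_src) / v
    traces[i] = np.exp(-((times - arrival) / 0.05) ** 2)     # Gaussian "wavelet"
    traces[i] += 0.2 * rng.standard_normal(nt)               # background noise

# Grid search: shift each trace by its predicted traveltime and stack envelopes.
xs, zs = np.linspace(0.0, 50.0, 101), np.linspace(1.0, 30.0, 59)
env = np.abs(traces)                         # crude envelope (a Hilbert envelope is typical)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        tt = np.linalg.norm(stations - np.array([x, z]), axis=1) / v
        shifts = np.rint((tt - tt.min()) / dt).astype(int)
        stack = np.zeros(nt)
        for k, s in enumerate(shifts):
            stack[: nt - s] += env[k, s:]
        image[iz, ix] = stack.max()

iz, ix = np.unravel_index(np.argmax(image), image.shape)
print(f"true source:   x={true_src[0]:.1f} km, z={true_src[1]:.1f} km")
print(f"image maximum: x={xs[ix]:.1f} km, z={zs[iz]:.1f} km")
```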

The results are accurate images rather than idealized abstractions; they are less methodologically biased and significantly richer in information content. Rather than chaining different types of approximations one after another, including waveforms from the outset leads to a more integrated view of source processes and of what lies beneath our feet.

What new insights have the waveform‐based methods offered into understanding seismic sources?

By utilizing the full information content of seismic time series, waveform-based location methods are highly automated and can minimize user interaction.

Rather than being driven by simplifying abstractions, the underlying algorithms are driven directly by the waveform data themselves.

Waveform-based techniques can help to lower the detection threshold compared to conventional approaches due to constructive interference of multi-channel waveforms.

In addition to automation and the characterization of weak events, modern waveform-based methods construct images rather than infer discrete locations, making seismic source location conceptually similar to the visual perception of a light source by the human eye. Beyond mere spatial and temporal coordinates, such images contain information on location uncertainty as well as the source’s orientation and strength, both of which relate directly to Earth’s stress field and to important structural features like faults and fractures.

What are some of the challenges of applying these methods at different scales?

Many of the current source imaging methods still partly rely on the limiting abstraction of traveltimes and are of hybrid character. Full-waveform-based methods are extremely demanding, especially when large station networks, many events, and long observation periods are concerned. These big-data challenges currently remain serious obstacles for large-scale applications but promise to be alleviated by the transformational impact of recent developments in machine learning.

Especially for strong earthquakes and in the near field, seismic sources are known to be complex with spatially extended inner structure. While some fully waveform-driven source imaging attempts exist, many modern location methods have hybrid character and still, to some extent, rely on the point-source assumption known to be of limited use in such scenarios. Credit: IBBoard (CC BY-NC-SA 2.0)

Another challenge is the complexity and the vast frequency and amplitude range of full seismic wavefields.

Because all location methods require reasonably accurate knowledge of the traversed medium, joint imaging and inversion schemes are likely the most promising candidates for fully data-driven applications in the future.

Despite the tremendous success of the point-source approximation, seismic source excitations are spatially and temporally extended.

It is only with the help of waveforms that this extended character and the inner structure of seismic energy release can accurately be captured.

—Lei Li (leileely@126.com;  0000-0002-3391-5608), Central South University, Changsha, China; and Benjamin Schwarz ( 0000-0001-9913-7311), GFZ German Research Centre for Geosciences, Potsdam, Germany

Tear, Don’t Cut, to Reduce Microplastics

Wed, 04/29/2020 - 12:18

Craving potato chips? Opening that bag will release various amounts of microplastics depending on whether it’s torn, cut open with scissors, or pierced with a knife, new research reveals. Other everyday activities—such as cutting tape and twisting open plastic bottlecaps—also shed microscopic plastic debris, the results show. These findings highlight the ubiquity of microplastics in everyday life and serve as a powerful motivator for studies of the potentially detrimental health effects of these tiny bits of plastic, the scientists suggest.

“Almost Everywhere”

Microplastics—pieces of plastic ranging in size from 1 micrometer to 5 millimeters—have been found on glaciers, deep in the ocean, and even in human stool samples. They’re so pervasive because they’re produced both directly (e.g., for some cosmetic products) and indirectly from the fragmentation of larger plastic debris. “Plastic is almost everywhere,” said Cheng Fang, a chemist at the University of Newcastle in Australia.

“We picked up several items from markets [we use in] our daily lives.”Fang and his colleagues conducted laboratory experiments to better understand how microplastics are generated. They focused on everyday items made of plastic like grocery bags, gloves, packing peanuts, and bottles. “We picked up several items from markets [we use in] our daily lives,” said Fang.

Destruction for Science

The researchers carefully measured the amount of microplastics shed by tearing, scissoring, and cutting (with a knife) different plastic items for 200 seconds. They weighed the microplastics that fell on a quartz crystal microbalance, an extremely sensitive scale. The team found that tearing, scissoring, and cutting generated between roughly 10 billionths and 30 billionths of a gram of microplastics. Given that the average mass of a microplastic particle is roughly 1 billionth of a gram, that translates into a few tens of pieces of microplastics. However, that quantity is likely a gross underestimate of the true number of microplastics generated, the team suggests.

That’s because the quartz crystal microbalance the researchers used had a very small collecting area—about the size of a pencil eraser—and many of the microplastic particles produced were probably small enough to become airborne. On the basis of the geometry of their experimental setup and assuming that microplastics were generated isotropically, the team estimated they were capturing only about one out of every 2,000 particles. That is, each experiment was, in actuality, generating tens of thousands of microplastics.
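
A quick back-of-the-envelope check of that scaling, using only the figures quoted above (10–30 billionths of a gram collected, roughly 1 billionth of a gram per particle, and a capture fraction of about 1 in 2,000), reproduces the "tens of thousands" estimate:

```python
# Scaling the microbalance measurement up by the estimated capture fraction.
collected_mass_ng = (10, 30)        # nanograms landing on the sensor (from the article)
mass_per_particle_ng = 1.0          # ~1 nanogram per microplastic particle
capture_fraction = 1.0 / 2000.0     # ~1 in 2,000 particles reach the sensor

for m in collected_mass_ng:
    on_sensor = m / mass_per_particle_ng
    total = on_sensor / capture_fraction
    print(f"{m} ng collected -> ~{on_sensor:.0f} particles on the sensor, "
          f"~{total:,.0f} generated in total")
```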

Watch Out for Cutting

When the researchers compared the amounts of microplastics shed by tearing, scissoring, and cutting polyethylene grocery bags, they found that cutting produced the most debris. That’s perhaps because the forward movement of a knife is less efficient than that of scissors, the team suggests. A dull knife could also pull off larger fragments of plastic, said Fang. “If your knife is not sharp, what you’re cutting is slices of plastic rather than fibers or particles.” Cutting generated about 50% more microplastics than tearing, which was in turn marginally better than scissoring, the team found.

“We know from our use of everyday plastic items that these can easily break or degrade.”The researchers caution against concluding that cutting always produces the most microplastics, however. That’s because such factors as a plastic’s stiffness, thickness, and density all play roles in how readily it sloughs off bits and pieces, they noted.

Fang and his colleagues also used scanning electron microscopy to image the microplastics produced by scissoring a plastic bottle, tearing a plastic cup, and cutting a plastic bag. They recorded a wide range of particle shapes and sizes, including fibers and fragments.

These results were published in March in Scientific Reports.

“These results are not a surprise,” said Alice Horton, a microplastics researcher at the National Oceanography Centre in Southampton, U.K., not involved in the research. “We know from our use of everyday plastic items that these can easily break or degrade. While we may not necessarily see this degradation with the naked eye, this is likely happening much more regularly than previously thought.”

Tearing polyethylene plastic film generates fibers and debris like these. Credit: Cheng Fang

This work demonstrates how microplastics are produced, but there’s still much more to investigate, said team member Ravi Naidu, an environmental scientist at the University of Newcastle. The health effects of microplastics are really unknown, he said. “Potentially, these can pose risks to people and animals, but there’s no evidence that exposure to microplastics had led to death.” There’s also the open question of exposure pathways, he said. “Is it inhalation, dermal absorption, or is it through the food chain? There’s a whole lot more research that still needs to be done.”

—Katherine Kornei (@KatherineKornei), Science Writer

The Coronavirus Hurts Some of Science’s Most Vulnerable

Wed, 04/29/2020 - 12:15

Daniel Gilford has studied climate science for nearly a decade, and after 2 years as a postdoctoral researcher at Rutgers University, he felt ready to take the next big career move: a faculty position.

Last fall, Gilford spent between 200 and 400 hours filling out applications, he said. The hard work paid off because he landed three in-person interviews in 2020, traveling to schools to spend 16-hour days teaching classes, giving lectures, and meeting with departments.

“In-person interviews are sort of the final stage of the academic job track,” Gilford said. “When you get to that stage, you’re usually a finalist in the top three.”

Science “stands to lose out on several years’ worth of the best upcoming new minds.”After the last interview, Gilford flew home in early March, just as the coronavirus was spreading in many communities. In the weeks that followed, the world changed drastically: Dozens of states imposed stay-at-home orders, and the economy screeched to a halt. “I heard that all three of the places I had had in-person interviews had frozen their hiring,” Gilford said. Soon after, two of the positions were canceled.

“To invest so much, and then basically be told not only are they not hiring me, they’re not hiring anyone,” Gilford said. The news was “pretty disappointing.”

The hiring cold snap is just one symptom of universities adjusting to a new economic reality during the pandemic.

Eos spoke with students and postdoctoral scholars in the United States about their experiences working during the coronavirus pandemic. Many expressed concerns about finding a job, accessing fieldwork or the lab, and the emotional toll of weathering the disaster.

Science “stands to lose out on several years’ worth of the best upcoming new minds,” said Bob Kopp, a professor and director of the Rutgers Institute of Earth, Ocean, and Atmospheric Sciences in New Brunswick, N.J.

As Gilford’s supervisor, Kopp sees the challenges faced by early-career scientists. “We need a stimulus act that preserves the STEM workforce of this country,” Kopp said.

Hiring on Hold

Universities around the country are taking massive, unprecedented financial hits: They’re losing tens of millions of dollars in room and board expenses, watching their public support from states and other funders dwindle, and eyeing shrinking endowments. To survive, schools are looking for any way to tighten their budgets.

According to a crowdsourced list, more than 350 universities around the world have halted hires.As a result, many universities have imposed hiring freezes for the next 6 to 18 months. According to a crowdsourced list of hiring freezes by former tenured professor and career consultant Karen Kelsky, more than 350 universities around the world have halted hires. Some, like the University of Oregon, also froze offers for graduate assistantships.

Rutgers postdoc Gilford is thankful to have a job, an apartment, and good health for him and his family. “In that regard, I feel very lucky and blessed,” he said.

But Gilford said, in terms of his career, “it looks like for the next year I will possibly be in a holding pattern.”

“I thought this was the time that I would finally get to shift into sort of a more permanent position,” Gilford said. “I’ve been traveling with my family for almost a decade now.”

“It’s not the step that I wanted to be at this point in my career,” Gilford said. “Given the unknowns, it’s also possible that something new could come up.”

Locked Out of the Lab

Students looking to continue their research worry about completing their work and connecting with others.

Ph.D. student Emily Slesinger planned to spend the next year in her lab at Rutgers. Now she doesn’t know when she’ll be back.

As a fourth-year student, Slesinger works with eight undergraduates to extract lipids and analyze proteins from black sea bass. Her research investigates climate change’s impacts on fish, and she’d collected tissue samples from fieldwork in the first years of her Ph.D.

But now her work is on pause. “If I’m back in the lab over summer, I might be okay,” she said. “But if I can’t get back in the lab till the fall and end of fall, then I probably will definitely push back [her defense].”

It’s not clear when Rutgers, and many other schools, will allow researchers to work again.

Boston University recently announced that the school may not reopen until January 2021, given the logistical, health, and safety concerns of thousands of students flocking back to school. The university’s president said that those working in labs and research enterprises will be the first allowed back on campus.

Saying bye to and wishing for good behavior from my dearest love, the lab. I’ll miss you dearly and can’t wait to be back in maybe 2 months, maybe 4 months, maybe 6 months…weird to be kicked out of something that makes up so much of my life. Till then! #COVID19 #WomenInScience pic.twitter.com/nPg9QMpzzO

— Emily Slesinger (@Fishiologist_Em) April 8, 2020

The day Slesinger cleaned and locked her lab for good, she posted a photo to Twitter, saying goodbye to her “dearest love,” the lab. Amid the uncertainty, Slesinger said, “the lab was sort of this safe haven.”

“I feel like sometimes when the rest of the things in my life are chaotic, lab work can kind of make me feel like I have a little bit of control,” Slesinger said. Shutting the doors “was difficult.”

A significant portion of science takes place outside of the lab as well. Master’s student Emily Iskin at Colorado State University said that canceling conferences could be particularly impactful on early-career researchers. Meetings put green scientists in the same room with “the people who’ve written the seminal papers,” said Iskin. In the past, she’s been “starstruck” presenting to “this person I’ve been reading all of their papers for the last year.”

The experience of directly interacting with established researchers is inspiring for early-career scientists, she said, and a ripe time for informal job hunting.

Lifeboat Through Troubled Waters

For many early-career researchers, their degrees and appointments are their lifelines to regular income and health care. That puts them at particular risk from institutional changes. As an article in Inside Higher Ed noted, universities rushed to give extensions to tenure clocks for nontenured faculty. Aid to graduate students and early-career researchers has been more dispersed and slower to come to light.

Although Ph.D. student Slesinger said she feels “really fortunate that I have a job and I’m still getting paid,” she wonders if graduate students will continue to receive annual incremental salary increases, a recently enacted policy at Rutgers.

Jacob Partida, an incoming Ph.D. student at the Massachusetts Institute of Technology–Woods Hole Oceanographic Institution Joint Program, wondered if he’d have access to health insurance over the summer if classes were canceled, tweeting he “might still be able to work on my advisors’ project as an employee, but might not get the student health care—at a time I might need it most.”


Groundwater Is the “Hidden Connection” Between Land and Sea

Tue, 04/28/2020 - 11:48

If you are looking for a waterway between land and sea, you can start by looking beneath your feet.

“People think of rivers, which is a natural thing to come to mind,” said Nils Moosdorf, a professor of hydrogeology at Kiel University in Germany. “But groundwater has an invisible connection that is usually not considered.”

Moosdorf himself was not aware of this connection until 2012, when, as a graduate student studying river transport, he attended a scientific talk about how 10 times more groundwater than river water leaves the Big Island of Hawaii. “That was an eye-opener because I never heard about groundwater going directly to the ocean before,” Moosdorf said. “And I thought, ‘Oh wait, that must be interesting,’ and then I started to get into that.”

Understanding the contribution of groundwater to oceans is important because fresh groundwater is rich in nutrients and solutes like carbon, iron, silica, and nitrogen that impact coastal ecosystems.

In a paper recently published in Nature Communications, researchers developed the first global computer model of fresh groundwater flow into oceans. The flow was “supervariable,” said Elco Luijendijk, a geologist at the Georg August University in Göttingen, Germany, and lead author of the study. Of the total groundwater flux worldwide, “about 10% of the coastline takes up 90% of the total amount of water.”

This percentage means that at a global scale, coastal groundwater discharge accounts for around 1% of fresh water flowing into the ocean. But in some locations, including sensitive coastal ecosystems like estuaries, salt marshes, and coral reefs, groundwater fluxes are larger and more important and could pose a risk of pollution and eutrophication in these systems.

“We have kind of slowly been moving towards integrating submarine groundwater discharge studies into more traditional physical hydrology,” said Alanna Lecher, a biogeochemist at Lynn University in Florida who was not involved in this study. “I think this paper was the first one to really do that well on a large scale.”

Building a Global Map of Groundwater Discharge

Traditionally, researchers investigate groundwater discharge in local field studies using isotope tracers like radon and radium. Using these tracers captures the total amount of groundwater discharge, which includes salt water and fresh water, Lecher said.

These field measurements are local and probably overestimate the amount of groundwater being discharged into the ocean. “Often they find spots where there is a high flux and sort of extrapolate that over areas or even make statements about the globe,” Luijendijk said.

A global model of groundwater discharge had not been built before because “in this day and age, it’s still a challenge to model these fairly simple processes,” Luijendijk said. The researchers ran two-dimensional models of cross sections of the coastline and varied hydrogeological parameters such as the topographic gradient, the permeability of the aquifer, and the groundwater accumulation rate.

One of the most valuable parts of the study is that it reveals the limiting factors for groundwater discharge to the ocean, Lecher said. “We have to take into account the physical characteristics of the coastal aquifer. We can’t just rely on tracer studies.”

The permeability of the aquifer was the dominant factor for groundwater flow, said Moosdorf, who was the senior author of the study. “If you imagine a sponge sitting on your kitchen table and you put water on top, it flows out below.”
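
As a rough illustration of how those parameters combine, the sketch below applies Darcy’s law to a single idealized cross section. The parameter values are hypothetical and are not taken from the study; they only show why permeability (via hydraulic conductivity) dominates the discharge estimate.

```python
# Minimal Darcy's-law estimate of fresh groundwater discharge per meter of
# coastline (hypothetical values; the study's 2-D models are far more detailed).
hydraulic_conductivity = 1e-4   # m/s, roughly a sandy aquifer (assumed)
topographic_gradient = 0.005    # dimensionless hydraulic gradient (assumed)
aquifer_thickness = 20.0        # m (assumed)

darcy_flux = hydraulic_conductivity * topographic_gradient    # m/s
per_meter_coast = darcy_flux * aquifer_thickness              # m^2/s
per_year = per_meter_coast * 3600 * 24 * 365                  # m^3 per year per m of coast
print(f"~{per_year:.0f} cubic meters per year per meter of coastline")
```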

The researchers combined their computer models with analyses of existing global data sets to produce a high-resolution map of fresh groundwater discharge from coastlines around the world, which revealed coastal watershed “hot spots” where fresh groundwater flux is high and likely important. “There have never been, at that resolution, global maps of groundwater discharge,” Moosdorf said. “So now we can see where it’s probably relevant.”

This world map displays the magnitude of fresh groundwater discharge at the world’s coastlines. The map highlights coastal discharge hot spots where groundwater discharge is high enough to pose a risk of pollution of coastal ecosystems. Credit: University of Göttingen/Elco Luijendijk

A “Time Bomb” Beneath the Surface

Fresh groundwater is an important natural resource for human society and is used for drinking, washing, agriculture, and fishing, Moosdorf said.

For the people who rely on fresh groundwater, its power is captured in the language they use. “They have words for it. And I always think it’s when you have a word in a language for something that means it’s relevant to you because otherwise, you wouldn’t have created that word.” In Fiji, for example, the word “tuvu” refers to a freshwater submarine spring found on a beach. In Australia, fishermen closely guard their favorite fishing spots around submarine springs they call “wonky holes.”

However, groundwater can also be a hidden source of pollution and potential eutrophication for coastal ecosystems. “If we start polluting the groundwater too much, the coastal ecosystem might get into trouble,” Moosdorf said. “Many people do not connect land and ocean if there’s no river.”

Groundwater moves very slowly, from tens of meters per year down to a mere centimeter per year. “It’s not really monitored anywhere,” Luijendijk said. “It’s potentially a little bit of a time bomb,” he added.

However, Lecher cautioned, “using the term eutrophication is almost alarmist because that is going to have major environmental impacts, die-offs of organisms, whereas in this case, it’s just enhanced impact on the environment caused by submarine groundwater discharge.”

There are mitigating factors that might reduce the risk even if coastal ecosystems receive nutrients from fresh groundwater. If the body of water is larger, for example, the nutrients could be diluted.

That said, “this study shows the pathway that pollutants or nutrients can make to the ocean through fresh groundwater discharge,” Lecher added.

Connecting Fields to Study the Groundwater Connection

The global model can still be improved, Luijendijk said. “We definitely need more studies and we want to zoom into the local scale. With these global models, you’re always wrong in someone’s backyard.”

“This is a big step forward in submarine groundwater discharge science. The science should be cohesive, it should all work together.”To improve understanding in this field, making better connections between different communities is essential. “There is a disconnect between fieldwork and global modeling,” Moosdorf said. There are also disconnects between hydrogeologists concerned mainly for what happens on land and marine biologists who might not know that groundwater water enters the sea. “So to connect those two, I think, is an important step,” he said. “It’s not rocket science.”

But this one study is already making waves. “This is a big step forward in submarine groundwater discharge science. The science should be cohesive, it should all work together,” said Lecher.

—Richard J. Sima (@richardsima), Science Writer

La Contaminación del Aire Puede Empeorar la Tasa de Mortalidad por COVID-19

Tue, 04/28/2020 - 11:47

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

As the United States struggles to contain the coronavirus epidemic, scientists have found that air pollution is making matters worse. In a study submitted for publication, researchers at Harvard University found that even a small increase in long-term exposure to PM2.5 (particles with a diameter of 2.5 microns or less) can bring a large increase in the death rate from COVID-19, the disease caused by the new coronavirus.

Air Quality in Times of Crisis

With more than 460,000 cases in the United States, coronavirus-related deaths are approaching 20,000 and could reach 60,400 by early August, according to predictions from the Seattle-based Institute for Health Metrics and Evaluation. Although the mechanisms of COVID-19 are still being investigated, the World Health Organization (WHO) has reported that one in seven patients develops difficulty breathing and other severe complications.

PM2.5 particles, for their part, have been linked to health problems such as premature death, heart attacks, asthma, and irritation of the airways. In March, however, the Environmental Protection Agency (EPA) said it would relax enforcement of air pollution measures, allowing power plants and factories, among other entities, to skip tests of polluting emissions.

Researchers have determined that an increase of just 1 microgram per cubic meter of PM2.5 can be associated with a 15% increase in the COVID-19 death rate.Scientists have long known about the effects of air pollution on public health. A severe smog event in London, for example, is believed to have caused around 12,000 deaths in 1952. Four years later, the Clean Air Act came into effect in the United Kingdom, banning the burning of polluting fuels in designated areas and paving the way for similar legislation in other countries.

Researchers at Harvard University’s T.H. Chan School of Public Health noted that many conditions known to contribute to worse COVID-19 outcomes are caused by long-term exposure to PM2.5 particles. To look for possible connections, they used an environmental health data platform they had already developed that contains socioeconomic, demographic, and PM2.5 data. They then added data on COVID-19 outcomes to the mix.

They analyzed data from 3,080 counties in the United States, adjusting for population size, number of tests administered, weather, obesity, smoking rates, and average PM2.5 exposure between 2000 and 2006, and looked at COVID-19-related deaths. These data account for 90% of the confirmed COVID-19 deaths in the United States as of 4 April 2020. In the study, which has been submitted to the New England Journal of Medicine, the researchers determined that an increase of just 1 microgram per cubic meter of PM2.5 can be associated with a 15% increase in the COVID-19 death rate.

“We found that people living in U.S. counties that have had high levels of air pollution over the past 15 to 20 years have a substantially higher death rate,” said study coauthor Rachel C. Nethery, a professor of biostatistics at Harvard. “Based on our findings, we expect a county with PM2.5 levels of 15 micrograms per cubic meter (high pollution) to have a COVID-19 death rate 4.5 times higher than a county with PM2.5 levels of 5 micrograms per cubic meter (low pollution), assuming the counties are similar except for their pollution levels.”

Relaxing Measures Is the Wrong Choice

The study’s results (the first nationwide study of its kind in the United States) are not surprising in light of epidemiological findings on air pollution for diseases such as severe acute respiratory syndrome (SARS), but the effect of PM2.5 particles on the death rate could be dramatic, said Zhanghua Chen, an environmental epidemiologist at the University of Southern California who was not involved in the study.

“Although the findings are based on the current course of the epidemic and we cannot rule out confounding factors that have not been taken into account, the conclusions of this paper clearly show that we must do everything possible to improve air quality to reduce the number of deaths from disasters such as COVID-19,” Chen said. “The actions taken by the EPA to relax the rules on pollutant emissions from power plants and factories, among others, are an obviously wrong decision and could result in more COVID-19 cases and deaths.”

Nethery said that many people have asked how they can limit the harmful impacts of pollution during the epidemic. Her team plans to examine the effects of short-term pollution exposure on the course of COVID-19, as well as the disease’s relationships with factors such as race and poverty.

—Tim Hornyak (@robotopia), Science Writer

This translation was made possible by a partnership with Planeteando. Esta traducción fue posible gracias a una asociación con Planeteando. Traducción de Argel Ramírez Reyes y edición de Alejandra Ramírez de los Santos.

The Climate and Health Impacts of Gasoline and Diesel Emissions

Tue, 04/28/2020 - 11:46

In the United States alone, it’s estimated that the transportation sector produces 1.9 billion tons of carbon dioxide (CO2) annually. It’s no secret that CO2 contributes substantially to warming the planet, but it’s not the only climate-active material in the atmosphere: Emissions can have both warming and cooling effects depending on their chemistry and the timescale over which they are observed.

In a new study, Huang et al. model the total global climate impact of gasoline and diesel vehicle emissions as well as their impact on human health. Using the National Center for Atmospheric Research’s Community Earth System Model, a global chemistry-climate model, along with emissions data from 2015, they calculate the net radiative effect of the gasoline and diesel sectors to be about +91 and +66 milliwatts per square meter, respectively, on a 20-year timescale. A laser pointer produces about 5 milliwatts, so emissions from the two sectors combined are heating the planet by roughly the same amount that shining 32 laser pointers on every square meter of Earth would. Earth’s surface area is about 510 trillion square meters, so that’s roughly 16 quadrillion laser pointers.
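
A back-of-the-envelope check of that comparison, using the forcing values above, a 5-milliwatt pointer, and Earth’s surface area of roughly 5.1 × 10^14 square meters (the only figure not taken directly from the study):

```python
# Sanity check of the laser pointer analogy.
gasoline_forcing_mw_per_m2 = 91.0
diesel_forcing_mw_per_m2 = 66.0
pointer_mw = 5.0
earth_area_m2 = 5.1e14          # ~510 trillion square meters

per_m2 = (gasoline_forcing_mw_per_m2 + diesel_forcing_mw_per_m2) / pointer_mw
total = per_m2 * earth_area_m2
print(f"~{per_m2:.0f} laser pointers per square meter")     # ~31, rounded up to 32 above
print(f"~{total:.1e} laser pointers over Earth's surface")  # ~1.6e16, i.e., 16 quadrillion
```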

The researchers broke down the overall heating into individual effects from different component compounds in vehicle emissions, focusing on two broad categories of emissions: short‐lived climate forcers (SLCFs), which include things like aerosols and ozone precursors, and long-lived greenhouse gases, with CO2 being the most prominent. SLCFs from gasoline and diesel vehicle fleets accounted for about 14 and 9 milliwatts per square meter, respectively, confirming that most radiative forcing comes from longer-lived emissions.

In terms of public health, the researchers calculate that the gasoline sector causes 115,000 premature deaths annually, whereas the diesel sector causes 122,100. The researchers attribute the deaths largely to exposure to smog (ozone) and soot (particles smaller than 2.5 micrometers). The scientists also analyzed how premature death rates varied regionally and proportionally with respect to the total distance driven in a region using each fuel type. These results showed large variability by region: In some places, there were relatively few premature deaths for the large distances driven on a given fuel type, whereas in others—most notably for diesel used in India—there were disproportionately high numbers of premature deaths. (GeoHealth, https://doi.org/10.1029/2019GH000240, 2020)

—David Shultz, Freelance Writer

Why Sunlight Matters for Marine Oil Spills

Tue, 04/28/2020 - 11:46


Ten years ago this month, the blowout and explosion aboard the Deepwater Horizon (DWH) oil rig killed 11 people and caused hundreds of millions of gallons of oil and natural gas to begin pouring into the Gulf of Mexico, a spill that eventually became the largest marine spill in U.S. history. In 2016, a federal district judge approved a $21 billion settlement with the companies involved—the largest settlement in U.S. history for damage done to natural resources—that included nearly $9 billion for the restoration of natural resources and the services they provide.

One result of this disaster was a substantial increase in the amount of research focused on oil spill science. Approximately 1 month after the spill began, for example, BP committed to providing an unprecedented $500 million over 10 years to fund independent research on the impacts of the spill. These funds established the Gulf of Mexico Research Initiative (GoMRI), which has supported a diverse array of research projects since 2010.

With this level of sustained and directed funding, the number of peer-reviewed papers published on oil spill science skyrocketed. A notable breakthrough that arose from this new body of research is that we now have a better grasp on how oil behaves, physically and chemically, once it enters the environment. In particular, the role of sunlight in photooxidizing floating surface oil, long discounted or overlooked, has taken on new prominence, and researchers now agree this role must be better accounted for in oil spill assessments and models.

Oil Weathers in Many Ways

When crude oil is spilled into the ocean, it undergoes a series of weathering processes, including dissolution, evaporation, emulsification, biodegradation, and photooxidation. Some of these processes relocate oil, whereas some transform it. For example, dissolution and evaporation transfer low–molecular weight hydrocarbons from oil into the water column and air, respectively, but they do not alter the chemical composition of these compounds. Emulsification is a process in which water is entrained by the oil, which changes the oil’s physical properties (in particular, it increases viscosity) but not its chemical properties. In contrast, biodegradation and photooxidation are transformative: They add oxygen to components in the oil, creating new compounds with different properties than those initially in the spilled oil.

Oil weathering processes have wide-ranging implications for ecosystem and human health, as well as for spill response operations.These oil weathering processes have wide-ranging implications for ecosystem and human health, as well as for spill response operations. Dissolution of oil components into the water can facilitate microbial biodegradation while simultaneously increasing exposure of aquatic animals to harmful compounds. Evaporation of oil from the sea surface can expose governmental and industry oil spill first responders to toxic compounds. Moreover, evaporated oil can be photooxidized into secondary organic aerosols and ozone; this process negatively affected air quality along the Gulf Coast after the DWH spill [Middlebrook et al., 2012]. Photooxidation of floating oil makes it more difficult to clean up, contributing to oil residues that wash up on valuable and sensitive coastlines. All of these weathering processes are considered by first responders as they decide when and where to allocate precious resources to mitigate damages from the spilled oil.

As part of the ongoing GoMRI Synthesis and Legacy efforts, which aim to document and exploit scientific achievements and advances made over the past 10 years, we hosted a workshop with a group of experts on oil weathering at sea that included members of the federal government, academia, and industry. Attendees discussed all weathering processes, although for reasons presented below, the workshop focused mainly on photooxidation. The conclusions from this workshop were recently distilled into three take-home messages [Ward and Overton, 2020].

A Nonnegligible Process

Fig. 1. The relative importance of floating surface oil weathering processes as understood before and after the 2010 Deepwater Horizon spill. Credit: Modified from Ward and Overton [2020]

First, the rate and extent of photooxidation of oil floating on the sea surface after the DWH blowout far surpassed estimates based on early conceptual models of oil weathering. Before the spill, the consensus perspective across many subdisciplines of oil spill science was that evaporation, emulsification, and biodegradation were the most important weathering processes influencing the fate of oil spilled at sea (Figure 1). Photooxidation was widely considered to affect only the small fraction of light-absorbing aromatic compounds in oil [e.g., Garrett et al., 1998]. And photooxidation was not incorporated into mass balance assessments of spilled oil (e.g., the National Oceanic and Atmospheric Administration’s oil budget calculator), oil spill fate and transport models, or spill response guidance documents.

But more than a dozen studies published since 2010, making use of analytical tools from simple elemental analysis to state-of-the-art mass spectrometers, have documented the rapid and extensive photooxidation of oil floating on the sea surface during the DWH spill [Ward and Overton, 2020]. Ward et al. [2018a] estimated that within a week of surfacing, half of the floating oil was transformed by sunlight into new compounds with different physical and chemical properties. Photooxidation certainly was not a negligible weathering process as previously thought, and conceptual models have now been revised to reflect the importance of photochemical weathering (Figure 1) [National Research Council, 2019].
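
As a rough sense of that timescale, treating photooxidation as a simple first-order process with a half-life of about 7 days (a deliberate simplification, not the approach of Ward et al.) gives:

```python
import math

# First-order sketch of the reported ~1-week photooxidation half-life.
half_life_days = 7.0
k = math.log(2) / half_life_days            # ~0.099 per day

for days in (1, 3, 7, 14):
    transformed = 1 - math.exp(-k * days)
    print(f"after {days:2d} days: ~{100 * transformed:.0f}% of floating oil photooxidized")
```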

How Sunlight Oxidized All This Oil

The second take-home message focuses on how sunlight oxidized so much oil floating on the sea surface. There are two ways that oil photooxidation can occur: directly and indirectly (Figure 2). Direct photooxidation occurs when compounds in crude oil that absorb natural sunlight, such as polycyclic aromatic hydrocarbons, are oxidized. If this were the dominant pathway, oxidation would be limited to the tiny fraction of oil components that absorbs sunlight.

 

Fig. 2. Direct photooxidation of oil (left) occurs when a light-absorbing molecule (black aromatic ring) is partially oxidized into a new molecule (orange aromatic ring). Indirect photooxidation of oil (middle) occurs when the absorption of light leads to the production of reactive oxygen species. These reactive species can oxidize a wide range of compounds (right), not just those that directly absorb light. Credit: Modified from Ward and Overton [2020]

Indirect photooxidation is a little more complicated. When compounds in crude oil absorb sunlight, a wide range of reactive oxygen species are produced, including singlet oxygen, peroxy radicals, and hydroxyl radicals. These species can oxidize other compounds in oil, not just those that absorb light directly. Thus, if indirect photooxidation were the main oxidation pathway, a much larger fraction of spilled oil would be vulnerable to oxidation.

Prior to DWH, there was no consensus about whether direct or indirect photooxidation dominates. But with samples and resources in hand and access to newly developed analytical technologies, numerous field and laboratory studies have now documented the governing role that indirect photooxidation pathways play [e.g., Hall et al., 2013; Ruddy et al., 2014]. Again, half the floating oil from the DWH spill was photooxidized within a week [e.g., Aeppli et al., 2012; Ward et al., 2018a], much greater than the small percentage that absorbed light directly. By settling this long-standing debate about the main pathway of photooxidation, the oil spill research community now recognizes that a much larger fraction of spilled oil is vulnerable to oxidation by sunlight.

Adding Photochemistry to Models and Response Plans

The third take-home message relates to incorporating photochemistry into oil spill models that predict the many fates of spilled oil on the sea surface over space and time, information that is critical to effective response contingency planning. Historically, such models neglected to consider the Sun’s effects on oil properties. Moreover, the performance of tools used in response to oil spills, such as chemical dispersants, was rarely evaluated with respect to photooxidized oil. Instead, dispersants were traditionally evaluated with respect to evaporation and emulsification because these processes alter the viscosity of the spilled oil.

The oil spill response community is now starting to acknowledge the prudence of incorporating sunlight-driven processes into oil spill models and response contingency planning because sunlight alters both the physical and chemical properties of crude oil at the sea surface. There is synergy between photochemical weathering and emulsification. Sunlight produces surface-active compounds that reside at the oil-water interface and that promote the formation of highly viscous and stable emulsions, which are very challenging to disperse.

This synergy was hypothesized roughly 40 years ago [Thingstad and Pengerud, 1982; Overton et al., 1980], but it could not be tested because of analytical constraints at the time. In a win for the iterative and long-arching nature of science, once the research community overcame these constraints, the hypothesis was confirmed [Zito et al., 2020]. Still, this synergy is not fully captured in oil spill models for surface spills because of the lack of data to quantify this process, giving researchers pause about the accuracy of model predictions.

Changes to the chemical composition of floating oil once released into the environment also affect the performance of chemical dispersants. In principle, these dispersants work by breaking up floating surface oil into droplets that disperse into the water column and thus reduce the amount of oil that reaches sensitive coastal ecosystems (Figure 3). But only a few days of sunlight exposure, which changes how floating oil interacts with chemical dispersants, may reduce dispersant effectiveness by 30% [Ward et al., 2018b]. Modeling efforts comparing the time that oil floated at sea prior to treatment with dispersants versus oil photooxidation rates further indicate that a considerable fraction of dispersant applications during the DWH spill may have targeted photooxidized oil that was not easily dispersed [Ward et al., 2018b]. These impacts of sunlight exposure on chemical dispersant effectiveness at the sea surface were likely not reported by earlier studies simply because photochemical oxidation was not perceived to affect a significant fraction of the floating spilled oil.

Fig. 3. Chemical dispersants used to break up floating oil are mixtures of solvents and surfactants. When dispersants are applied aerially to unweathered crude oil (left), the solvent promotes interactions between the oil and the surfactant, leading to the formation of small oil droplets that disperse into the water column. When they are applied to photochemically weathered oil (right), the oil is only partially solubilized in the solvent, which hinders interactions between the oil and the surfactant and decreases the amount of oil dispersed into the water.

 

Where Are We Now?

The 10-year anniversary of the DWH spill provides an opportunity to reflect not only on our improved understanding of photochemical weathering of oil at sea but also on why it took a devastating environmental disaster to make such progress.

There are four key reasons why the DWH spill sparked the advancement in knowledge described above:

1. A Unique Sample Set. Oil floated on the sea surface for 102 days, allowing researchers ample time to coordinate sampling campaigns that yielded an extremely rare and valuable set of oil samples. These samples provided opportunities to validate laboratory-based predictions about the rates, relative importance, and controls of oil weathering processes under natural field conditions.

2. Sustained Funding. The sustained funding provided by GoMRI and other sources allowed researchers time to follow up on early findings. Early studies about the extent of oxidation [e.g., Aeppli et al., 2012; Lewan et al., 2014; Ruddy et al., 2014] laid foundations for later studies of oxidation rates and pathways [Ward et al., 2018a, 2019; Niles et al., 2019] and their potential impacts on fate and transport models and response operations [Ward et al., 2018b; Zito et al., 2020].

3. Technological Breakthroughs. Advances in technology proved critical. Satellite-based remote sensing technologies provided estimates of oil film surface area and thickness throughout the 102-day period of surface oiling [MacDonald et al., 2015], a key parameter for estimating rates of photooxidation [Ward et al., 2018a]. Comprehensive two-dimensional gas chromatography coupled with flame ionization detection helped determine the precursors of photooxidation, which proved to be mainly compounds that do not absorb light directly [Hall et al., 2013]. On the other side of the reaction scheme, Fourier transform ion cyclotron resonance mass spectrometry helped determine the products of photooxidation [e.g., Ruddy et al., 2014; Niles et al., 2019], confirming that indirect processes governed oxidation. Last, novel separation technologies [Clingenpeel et al., 2017] allowed researchers to isolate and identify oil components that are produced by sunlight, partition to the oil-water interface, and promote emulsification [Zito et al., 2020], corroborating Thingstad and Pengerud’s [1982] decades-old hypothesis.

4. Diversified Expertise. The DWH spill sparked unprecedented interdisciplinary collaborations and insights into the photochemical weathering of oil at the sea surface. The expertise represented in these collaborations was wide-ranging, including petroleum and environmental chemists, modelers and oil spill response scientists, and biogeochemists and isotope geochemists. This interdisciplinary approach, in which foundational information learned from basic science was applied to fill long-standing knowledge gaps, was undoubtedly a formula for success. Moreover, interdisciplinary approaches taken to study the DWH spill, such as tracing oil photooxidation using stable oxygen isotopes [Ward et al., 2019], will likely lead to a more complete understanding of the cycling of other reduced forms of carbon, like organic pollutants and natural organic matter.

 

Where We Go from Here

Our vastly improved understanding of photochemical weathering of floating oil at sea is a clear example of the accomplishments made in oil spill science in the past 10 years, and we have learned so much more about spill dynamics, biodegradation, ecosystem responses, and other issues. Perhaps now more than ever, we have a prime opportunity to continue advancing oil spill science with sustained and directed research. The workforces and laboratories are primed and ready, the cross-disciplinary connections are established, and findings from the past 10 years provide a road map for future research priorities. These priorities include (1) establishing the applicability of findings from the DWH spill to other scenarios, such as spills at different water depths (i.e., surface versus deepwater) or involving different oil types (i.e., light to heavy, sweet to sour) or in different locations (e.g., temperate versus high-latitude waters); (2) assessing the impact of photooxidation on the effectiveness of chemical agents used in oil spill response operations (e.g., herders and surface-washing agents) other than dispersants; and (3) developing empirical data sets to incorporate photochemical processes into oil spill fate, transport, and response operation models.

Despite widespread calls to curb global petroleum use and notwithstanding the current reduced consumption and drop in demand tied to the COVID-19 pandemic, global demand is expected to climb steadily [International Energy Agency, 2019]. Even in a scenario in which policies are adopted to curb demand, demand is predicted not to plateau until the 2030s. It follows that oil spills at sea will continue to happen and perhaps even accelerate with a shift in offshore oil production from shallow-water resources (<125 meters depth) to more technically challenging deepwater (125–1,500 meters) and ultradeepwater (>1,500 meters) resources. Let’s not slow down or deemphasize oil spill research just because a decade has passed since 2010. The more knowledge gaps we fill now about the fate, transport, and impacts of oil spills, the better prepared we will be to respond to the next big spill.

New Special Collection: Fire in the Earth System

Mon, 04/27/2020 - 12:37

Over the past few years, major fire seasons in fire-prone regions such as Australia and the western United States and unprecedented fire activity in fire-sensitive areas such as the Amazon basin and the Arctic have captured much attention from the public and the media—as well as the Earth and atmospheric science community. AGU is launching a new special collection across 10 journals to bring together the most up-to-date research under the theme ‘Fire in the Earth System’.

Fire has always been an important component of many ecosystems, and of human–landscape interaction, but anthropogenic global warming is already changing fire regimes over much of Earth’s land surface [Abatzoglou et al., 2019]. Between 1979 and 2013 the length of fire-weather season increased globally by 19% [Jolly et al., 2015], and higher wildfire activity attributable to climate change is evident already in regions of Africa, Europe, and the Americas [IPCC, 2014].

In addition to climate, human activities are also key controllers of fire via ignition, suppression, and changes in fuel availability [Knorr et al., 2016]. In some regions, most wildfires are ignited by humans, some through arson and many through various unintentional means; however, climate and weather control fire behavior and spread, determining whether ignitions grow into catastrophic fires.

Understanding the interactions of fire, humans, and climate in the Earth system is fundamental to preparing for the future, but it is also extremely challenging because these interactions are many and complex. For example, elevated fire risk under a warmer climate has long been recognized as a result of warmer temperatures drying out fuels, setting the stage for higher fire spread probability, larger burned area, and greater risk of extreme fire behavior; however, increases in aridity that reduce vegetation cover can instead lead to reductions in fire activity [Rogers et al., 2020].

In the coming years, predicted changes in fire regimes may result in more widespread impacts on landscapes and ecosystems and greater hazards to human health, life, and property. For example, the changing role of fire in forest landscapes is already threatening drinking-water supplies across the world [Bladon, 2018], and the number of people at risk from wildfire smoke is rapidly increasing in the United States [Cascio, 2018].

In addition, feedbacks between fires and other natural hazards, such as extreme storms, can be exacerbated under a changing climate. A recent example occurred in Montecito, California, in January 2018, when debris flows formed during intense rainfall on steep, recently burned slopes, causing 23 deaths and over $200 million in property damage [Lai et al., 2018].

Fire is considered a means by which terrestrial ecosystems and the carbon cycle could cross irreversible ‘tipping points’ under future climate [Adams, 2013]. Fire also interacts with other landscape processes affected by climate change—through feedbacks such as reduced albedo of snow and ice where soot deposits, and through enhanced wind erosion of burned dryland soils.

A newly opened special collection across 10 AGU journals, entitled Fire in the Earth System, will bring together new research on physical and biogeochemical processes associated with landscape fires, implications for human and ecosystem health, effects on water resources and critical infrastructure, fires in the wildland-urban interface, the use of prescribed fire and other mitigation strategies, and modeling efforts to characterize potential future fire regimes in a changing world.

We solicit manuscripts on research representing new advances in understanding these and other aspects of fire, and we especially encourage cross-disciplinary consideration of fire-related processes. The manuscript submission window will remain open until May 2021 to allow for the inclusion of findings from the 2020 (Northern Hemisphere summer) and 2020–2021 (Southern Hemisphere summer) fire seasons.

—Amy E. East (earthsurface@agu.org; ORCID: 0000-0002-9567-9460), Editor in Chief, JGR: Earth Surface; and Cristina Santin (ORCID: 0000-0001-9901-2658), Associate Editor, JGR: Biogeosciences

The Massive Ice Avalanches of Mars

Mon, 04/27/2020 - 12:36

Catastrophic ice avalanches may have raced for kilometers down the slopes of polar ice craters on Mars at speeds of up to 80 meters per second on at least two occasions.

According to new research, these massive ice avalanches, also called fast-running glacier surges, might solve a mystery about strange features on the Red Planet.

“This is the result of catastrophic flow of downslope material,” said Sergey Krasilnikov, a postdoctoral researcher at the Vernadsky Institute of Geochemistry and Analytical Chemistry at the Russian Academy of Sciences and the lead author of a study published recently in Planetary and Space Science.

Mars is full of strange features like fast-shifting sand dunes and surface carbonates. Researchers had long noticed curious linear features running down the sides of craters in the north polar region. Given that the lines appear to be moraines, researchers “thought they might be from carbon dioxide glaciers, which is super cool and exotic sounding,” said Mike Sori, a planetary scientist at the University of Arizona who was not involved in Krasilnikov’s study.

Data, Models, Geometry, and Strange Features

Krasilnikov and his coauthors wanted to test an alternative theory: that water ice avalanches may be responsible for creating the moraine-like ridges in the two craters. The fast-moving, destructive flows may have pushed debris out to their margins as they tumbled down the crater slopes.

The researchers used openly available multispectral and radar data from NASA. They ran this information through 3-D modeling software called Rapid Mass Movement Simulation and also applied a separate approach based on mathematics and geometry.

The simulations and calculations revealed that ice avalanches indeed could have created these moraine-like ridges that travel down the craters.

A massive amount of ice tumbled down the craters, Krasilnikov said: about 2.42 cubic kilometers in the first case they looked at and about 1.1 cubic kilometers in the second.

“It’s a very big mass,” he said.

Krasilnikov said that Mars accumulates ice similarly to the way frost forms on Earth, layer by layer. This ice can build up enormous weight and pressure in areas such as the tops of craters. He and his coauthors calculated that the first crater they studied had an ice massif buildup of about 150 meters, whereas the second’s was about 100 meters.

Once the pressure gave out, these avalanches moved fast, with the flow of ice traveling at about 80 meters per second—similar to the speed of snow avalanches on Earth, Krasilnikov said. But the lower gravity of Mars means that they traveled much farther than Earth avalanches do. The ice avalanches on the first crater traveled about 15 kilometers, whereas those on the second flowed for about 12 kilometers. The larger one was about 5 kilometers across at its widest point and covered 104 square kilometers in total.
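Those figures give a rough sense of the timescale involved. As a back-of-the-envelope illustration (not a calculation from the study), an avalanche sustaining that peak speed would cover the 15-kilometer runout in only a few minutes:

\[ t \approx \frac{d}{v} = \frac{15{,}000\ \text{m}}{80\ \text{m/s}} \approx 190\ \text{s} \approx 3\ \text{minutes}. \]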

Ice Avalanches or Moraines?

Sori said that he thinks Krasilnikov’s paper is a nice advance in the scientific debate over these features, but he isn’t completely convinced that it settles the question, and he hopes it inspires more tests or observations.

“It’s a nice alternative explanation,” he said, adding that we know avalanches occur on other parts of Mars today. He also said that these researchers have shown that an avalanche can cover the right distance to produce these moraine-like features.

For Sori, the main drawback of the new theory is that the ridges look so much like moraines, though he concedes that different processes could produce similar-looking features.

The theory on carbon dioxide (CO2) glaciers is that they can form when Mars has a low axial tilt, a configuration that recurs every few million years. At these times, conditions are cold enough that CO2 freezes, accumulates, and flows like the glaciers we know on Earth.

But Krasilnikov said that although the craters in question formed more than 10 million years ago, the moraine-like ridges within them were likely created in the past few million years. The axial tilt necessary for CO2 glaciers to form hasn’t occurred in the past 10 million or so years, he said.

As to why these ice avalanches seem to have happened only in these two places, Krasilnikov is unsure. “I think it’s [because of] very special physical conditions,” he said.

—Joshua Rapp Learn (@JoshuaLearn1), Science Writer

Bringing Earthquake Education to Schools in Nepal

Mon, 04/27/2020 - 12:36

Deep beneath Nepal, two tectonic plates converge. The ground shakes with many small tremors and occasional devastating earthquakes as the Indian subcontinent slides below Eurasia in the slow-motion collision forming the Himalayas.

Nepal is draped across 800 kilometers (500 miles) of this “seismic hazard zone,” but the country does not teach seismology or earthquake preparedness in its schools. Shiba Subedi, a Nepali doctoral candidate studying earthquake seismology at the University of Lausanne in Switzerland, is currently working to change that through the Seismology at School in Nepal program.

Subedi and his Ph.D. adviser began planning the program in the fall of 2017. Its twin goals are local community education and the generation of open-access earthquake data: the Nepal School Seismology Network. In service of these goals, they have created and distributed educational materials on earthquakes for use in classrooms and hosted a workshop to help educators integrate the materials into their curricula. They have also installed 22 seismometers in schools across central and western Nepal and made the data they collect freely available online. They published their progress to date in Frontiers in Earth Science this month.

A Community-First Approach to Earthquake Education

On 25 April 2015, a magnitude 7.8 earthquake shook Nepal. The Gorkha earthquake radiated out from its epicenter 80 kilometers (50 miles) northwest of the capital of Kathmandu, killing roughly 9,000 people and leveling hundreds of thousands of buildings across the country.

Barpak, Nepal, was the village at the epicenter of the 2015 Gorkha earthquake. Seventy-two people in Barpak died during the event. Credit: Seismology at School in Nepal

Subedi was in Kathmandu at the time, having just finished his master’s degree in physics. “During the earthquake, I came to witness the injuries of my friends and the damage to infrastructure around me,” Subedi said. He realized that many people lost their lives “just because of the lack of understanding of what an earthquake actually is and what precautions are needed.”

In 2017, Subedi decided to pursue a Ph.D. with György Hetényi, an Earth science professor at the University of Lausanne who has worked extensively in the Himalayas. Hetényi came up with the idea of equipping secondary schools in remote villages around the area of the 2015 Gorkha earthquake with low-cost seismometers. The seismometers would serve a dual purpose by giving students a hands-on learning tool as well as creating a local seismology network collecting real-time data. Hetényi and Subedi also decided to develop teaching materials for the schools, in the hope that educating schoolchildren about earthquakes would be the most effective way to educate an entire community.

The Right Tools for the Right Schools

The first task was to select the best seismometer for the project. They needed a model that was low-cost, low-maintenance, and easy to use while also providing useful earthquake data for the network. Subedi and Hetényi purchased four sample models (with names like “the Lego” and “the Slinky”) to test in the lab, ultimately settling on the Raspberry Shake 1D (RS1D).

Shiba Subedi installs a seismometer in a school by affixing the device to the ground. Credit: Seismology at School in Nepal

Emily Wolin, a geophysicist with the U.S. Geological Survey, called the RS1D a “great choice.” RS1Ds are ideal for teaching: They are small, sturdy, and transparent and allow students to see their inner workings.

The next step was choosing the right schools. Over 100 schools in the selected area of western Nepal submitted an application to participate, and Subedi and Hetényi needed to balance adequate coverage of their study area with the logistical feasibility of installing and maintaining the seismometers. Some schools were highly motivated to participate but lacked a reliable electricity supply or an Internet connection, for instance. Subedi’s local knowledge proved crucial to locating the 22 best schools and gaining their support for the project.

“Shiba has such a broad network of contacts, both geographically and thematically, that no foreigner will ever have,” Hetényi said. “This was absolutely key to implement the project locally, and also very much for the local acceptance by teachers, students, school principals, [and] other people who helped.”

The final seismometer was installed in the spring of 2019. That April, over 80 teachers gathered in the city of Pokhara for an educational seismology workshop. (The workshop was supported by an AGU Celebrate 100 grant.) One of the workshop attendees—secondary school teacher Kalpana Pandey—said that she came away with more ideas and resources to teach students about earthquakes. She particularly appreciated how Hetényi demonstrated P and S waves with a Slinky (each teacher was also given a Slinky to use in their own classrooms) and the care he took in explaining the science, showing sensitivity to local religious practices.

György Hetényi shows recordings of a seismometer during the International Workshop on Educational Seismology in Pokhara, Nepal, in April 2019. Credit: Seismology at School in Nepal

Religious sensitivity was front-of-mind for the team. Hetényi consulted with an expert in Hinduism before the workshop, which Subedi called “an excellent concept.”

In rural Nepali communities, many people understand earthquakes in religious or traditional terms. By meeting with an expert in Hinduism, Hetényi learned that “Hindus don’t have exclusivist perspectives and can accept other explicatory systems,” he said. “I needed to make sure that I treated science and religion on an equal level, without any judgment.”

“We were conscious of the fact that we should not oppose religious beliefs while explaining the earthquake science. To explain everything in English, it could be a little bit complex,” Subedi said. “In the end, the event was a great success.”

Required Reading

To date, the team estimates that over 18,000 students have benefited from Seismology at School in Nepal, and the RS1D network has detected nearly 200 local earthquakes—data that are freely available online.
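Because the recordings are open, they can be pulled with standard seismological tools. The short Python sketch below shows one way Raspberry Shake waveforms are commonly requested with the ObsPy library; the server address, network and station codes, and channel shown here are illustrative assumptions rather than details given in the article or the paper, and whether these particular school stations stream to the public community server is likewise an assumption.

# A minimal sketch (not from the project) of requesting a waveform window
# from the public Raspberry Shake data service using ObsPy.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Assumed public FDSN endpoint for Raspberry Shake community data.
client = Client(base_url="https://data.raspberryshake.org")

start = UTCDateTime("2019-11-24T00:00:00")  # arbitrary example window
stream = client.get_waveforms(
    network="AM",      # Raspberry Shake community network code (assumed)
    station="R0000",   # placeholder station code, not a real project station
    location="00",
    channel="EHZ",     # vertical channel of an RS1D sensor (assumed)
    starttime=start,
    endtime=start + 600,  # 10 minutes of data
)
stream.plot()  # quick look at the recorded ground motion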

“If the data [are] kept in a box, the local population will see it almost as a secret project, with very little and usually delayed information for them,” Hetényi said. “But if the waveforms of the earthquake they just felt show up in an app on their smartphone, or in the classroom’s computer screen, they will immediately see the benefit.”

Wolin applauded the project’s research benefits as well. “Before this project, there [were] no publicly available data from seismometers in western Nepal,” she said. A network of 22 stations with open-access data “will improve scientists’ ability to detect and locate earthquakes.”

Subedi and Hetényi hope to continue building on the project, ultimately expanding their network across the whole country. They also hope to install stronger seismometers to complement the RS1Ds, which can record only smaller local events or larger distant ones.

“I was floored by the thoughtfulness and consideration demonstrated throughout the whole project,” Wolin said. “I think this is required reading for any scientist who wants to develop a school outreach program.”

—Rachel Fritts (@rachel_fritts), Science Writer

Health Concerns from Combined Heat and Pollution in South Asia

Fri, 04/24/2020 - 15:35

Rising global temperatures and, correspondingly, increasing incidences of extreme heat events are occurring across most of the world. This is even more concerning in South Asia, where geographical factors can lead to prolonged periods of hazardous weather. The examination of potential health exposure by Xu et al. [2020] is unique. The researchers studied projections for heat events in combination with one aspect of air quality—particulate matter—for the current and projected climate of India and a few other countries in South Asia.

The authors show that such joint events would increase in frequency by 175% by midcentury. The fraction of land exposed to prolonged high particulate pollution increases by more than a factor of ten by 2050. The alarming increases in health exposures over just a few decades in South Asia pose great challenges to adaptation. Action addressing the combined impacts of climate change is needed across the world.
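To put the first number in perspective, a 175% increase means the midcentury frequency is roughly 2.75 times the present-day frequency:

\[ f_{\mathrm{midcentury}} \approx (1 + 1.75)\, f_{\mathrm{present}} = 2.75\, f_{\mathrm{present}}. \]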

Citation: Xu, Y., Wu, X., Kumar, R., Barth, M., Diao, C., Gao, M., et al. [2020]. Substantial increase in the joint occurrence and human exposure of heatwave and high‐PM hazards over South Asia in the mid‐21st century. AGU Advances, 1, e2019AV000103. https://doi.org/10.1029/2019AV000103

—Donald Wuebbles, Editor, AGU Advances
