Eos

Science News by AGU

Observations from Space and Ground Reveal Clues About Lightning

Fri, 06/11/2021 - 12:47

Capturing the fleeting nature and order of lightning and energy pulses has been a goal of many studies over the past 3 decades. Although bolts of white lightning and colorful elves (short for emissions of light and very low frequency perturbations due to electromagnetic pulse sources) can be seen with the naked eye, the sheer speed and sequence of events can make differentiating flashes difficult.

In particular, researchers want to understand the timing of intracloud lightning, elves, terrestrial gamma ray flashes (TGFs), and energetic in-cloud pulses. Do all of these energy pulses occur at the same time, or are there leaders, or triggers, for lightning events?

This video is related to new research that uncovers the timing and triggering of high-energy lightning events in the sky, known as terrestrial gamma ray flashes and elves. Credit: Birkeland Centre for Space Science and MountVisual

In a new study, Østgaard et al. observed lightning east of Puerto Rico. They used optical monitoring along with gamma ray and radio frequency monitoring from the ground to determine the sequence of an elve produced by electromagnetic waves from an energetic in-cloud pulse, the optical pulse from the hot leader, and a terrestrial gamma ray flash.

The Atmosphere–Space Interactions Monitor (ASIM), mounted on the International Space Station, includes a gamma ray sensor along with a multispectral imaging array. Optical measurements captured the lightning, and ultraviolet measurements captured the elves. The gamma ray instruments measured the TGFs. In Puerto Rico, the researchers collected low-frequency radio measurements of the lightning.

The team found that by using this combined monitoring technique, they could observe high-resolution details about the timing and optical signatures of TGFs, lightning, and elves. They found that the TGF and the first elve were produced by a positive intracloud lightning flash and an energetic in-cloud pulse, respectively. Just 456 milliseconds later, a second elve was produced by a negative cloud-to-ground lightning flash about 300 kilometers south of the first elve.

This combination of observations is unique and unprecedented. The detailed observations suggest that coordinated monitoring is the future method for lightning and thunderstorm research efforts. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD033921, 2021)

—Sarah Derouin, Science Writer

Below Aging U.S. Dams, a Potential Toxic Calamity

Fri, 06/11/2021 - 12:45

This article was originally published on Undark. Read the original article.

1 June 2021 by James Dinneen and Alexander Kennedy.

A multimedia version of this story, with rich maps and data, is available on Undark’s website.

On May 19, 2020, a group of engineers and emergency officials gathered at a fire station in Edenville, Michigan, to decide what to do about the Edenville Dam, a 97-year-old hydroelectric structure about a mile upstream on the Tittabawassee River. Over the preceding two days, heavy rains had swelled the river, filling the reservoir to its brim and overwhelming the dam’s spillway. The group was just about to discuss next steps when the radios went off, recalled Roger Dufresne, Edenville Township’s longtime fire chief. “That’s when the dam broke.”

Up at the dam, Edenville residents watched as a portion of the eastern embankment liquefied. Muddy water gushed through the breach. Over the next few minutes, that water became a torrent, snapping trees and telephone poles as it rushed past town, nearly submerging entire houses farther downstream.

About 10 miles and two hours later, the flood wave bowled into a second aging dam, damaging its spillway, overtopping, and then breaching the embankment.

Al Taylor, then chief of the hazardous waste section within the state’s Department of Environment, Great Lakes, and Energy, was following the situation closely as the surge swept 10 miles farther downstream into the larger city of Midland. There it forced a Dow Chemical Company plant flanking the river to shut down and threatened to mix with the plant’s containment ponds. Taylor, who retired at the end of January, worried that contamination from the ponds would spill into the river. But that was just the first of his concerns.

In prior decades, Dow had dumped dioxin-laden waste from the plant directly into the river, contaminating more than 50 miles of sediment downstream — through the Tittabawassee, the Saginaw River, and the Saginaw Bay — with carcinogenic material. The contamination was so severe that the U.S. Environmental Protection Agency stepped in and, since 2012, has worked with Dow to map and cap the contaminated sediments. In designing the cleanup, engineers accounted for the river’s frequent flooding, Taylor knew, but nobody had planned for the specific impacts of flooding caused by a dam failure.

While the dramatic breach of the Edenville Dam captured national headlines, an Undark investigation has identified 81 other dams in 24 states that, if they were to fail, could flood a major toxic waste site and potentially spread contaminated material into surrounding communities.

In interviews with dam safety, environmental, and emergency officials, Undark also found that, as in Michigan, the risks these dams pose to toxic waste sites are largely unrecognized by any agency, leaving communities across the country vulnerable to the same kind of low-probability, but high-consequence disaster that played out in Midland.

After the flooding subsided, Dow and state officials inspected the chemical plant’s containment ponds and found that, though one of the brine ponds containing contaminated sediment had been breached, there was no evidence of significant toxic release. Preliminary sediment samples taken downstream did not find any new contamination. The plant’s and the cleanup’s engineering, it seemed, had done its job.

“Dow has well-developed, comprehensive emergency preparedness plans in place at our sites around the world,” Kyle Bandlow, Dow’s corporate media relations director, wrote in an email to Undark. “The breadth and depth of these plans — and our ability to quickly mobilize them — enabled the safety of our colleagues and our community during this historic flood event.”

But things could have gone differently — if not in Midland, then somewhere else with a toxic waste site downstream of an aging dam less prepared for a flood. “As a lesson learned from this,” Taylor said, “we need to be aware of that possibility.”

In the United States, there are more than 90,000 dams providing flood control, power generation, water supplies, and other critical services, according to the National Inventory of Dams database maintained by the U.S. Army Corps of Engineers, which includes both behemoths like the Hoover Dam and small dams holding back irrigation ponds. Structural and safety oversight of these dams falls under a loose and, critics say, inadequate patchwork of state and federal remit.

A 2019 report from the Congressional Research Service (CRS), the nonpartisan research arm of the U.S. Congress, found roughly 3 percent of the nation’s dams are federally owned, including some of the country’s largest, with the rest owned and operated by public utilities, state and local governments, and private owners. The report estimated that half of all dams were over 50 years old, including many that were built to now obsolete safety standards. About 15 percent of dams in the Army Corps database lacked data on when they were built.

In addition to information on age and design, the Army Corps database includes a “hazard potential” rating that describes the possible impact of a dam failure on life and property. In 2019, roughly 17 percent, or 15,629 dams, had a high hazard potential rating, indicating that a loss of human life was likely in the event of a dam failure. The number of high-hazard dams has increased in recent years due to new downstream development.

According to the CRS report, more than 2,300 dams in the database were both high-hazard and in “poor” or “unsatisfactory” condition during their most recent inspection. Due to security concerns that arose after the September 11 terrorist attacks, the report did not name these dams, though an investigation by The Associated Press in 2019 identified nearly 1,700 of them.

For all that is known about America’s aging dam infrastructure, however, little information exists about the particular hazards dams pose to toxic waste sites downstream. This is why regulators knew about problems with the Edenville Dam and knew about the Dow cleanup, but had not connected the dots.

To identify dams that might pose the most serious risk to toxic waste sites, Undark searched the national database for dams that are both high-hazard and older than 50 years, the age after which many dams require renovations. To narrow our search, we selected dams that sit within 6 miles of an EPA-listed toxic waste site and that appear in satellite images to be upstream of it. Experts say that floods from many dam failures would reach much farther than 6 miles.
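To make that screening step concrete, here is a minimal sketch of such a filter in Python. It is illustrative only — the file names, column names, and distance helper are hypothetical, not Undark’s actual pipeline.

```python
# Illustrative sketch only: hypothetical file and column names, not Undark's
# actual data processing. Assumes both tables carry latitude/longitude columns.
import pandas as pd
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))

dams = pd.read_csv("dam_inventory.csv")   # hypothetical export of the dam database
sites = pd.read_csv("epa_sites.csv")      # hypothetical list of Superfund/RCRA sites

# High-hazard dams older than 50 years (relative to 2021, i.e., built by 1971).
old_high_hazard = dams[(dams["hazard_potential"] == "High") & (dams["year_completed"] <= 1971)]

# Pair each candidate dam with any toxic waste site within 6 miles.
pairs = []
for _, dam in old_high_hazard.iterrows():
    for _, site in sites.iterrows():
        dist = haversine_miles(dam["lat"], dam["lon"], site["lat"], site["lon"])
        if dist <= 6.0:
            pairs.append({"dam": dam["dam_name"], "site": site["site_name"], "miles": round(dist, 1)})

print(pd.DataFrame(pairs).head())
```

As described above, the upstream-versus-downstream judgment would still rely on satellite imagery rather than on code.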

We then filed requests under state and federal freedom of information laws with various agencies, including the Federal Energy Regulatory Commission, seeking dam inspection reports and the Emergency Action Plans (EAPs) that dam owners are typically required to prepare and maintain. Among other things, these plans usually include inundation maps, which model the area that would likely be flooded in a dam failure scenario.

The inputs for these models vary by state, and while some inundation maps were highly sophisticated, involving contingencies for weather and other variables, others were less so. In one Emergency Action Plan for a dam in Tennessee, the inundation zone was simply hand-drawn on a map with a highlighter. But whatever their quality, the maps represent dam officials’ best estimate of where large volumes of water will flow if a dam fails.

Undark successfully obtained inundation modeling information for 153 of the 259 dams identified in our search. For 63 dams, state and federal officials declined to provide or otherwise redacted pertinent inundation information, citing security concerns. For 31 dams, agencies said they did not have inundation maps prepared, or provided maps that were illegible or did not extend to the site. Despite improvement in recent years, about 19 percent of high-hazard dams still lacked plans as of 2018, according to the American Society of Civil Engineers.

With those maps, we then looked to see if any EPA-listed toxic waste sites fell within the delineated inundation areas. Because the precise boundaries of each toxic waste site are not consistently available, we followed the methodology of a 2019 Government Accountability Office analysis of flood risks to contaminated sites on the EPA’s National Priorities List — more commonly known as Superfund sites — which used a 0.2-mile radius around the coordinates listed by the EPA for each location.
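The overlay step can be sketched in a similar spirit, assuming the inundation zones have been digitized as polygons and the sites as points. Again, this is illustrative rather than a description of Undark’s workflow; the file names, column names, and coordinate reference system below are assumptions.

```python
# Illustrative sketch only: hypothetical files in which EPA site coordinates
# are points and dam inundation zones are polygons (requires geopandas >= 0.10).
import geopandas as gpd

MILES_TO_METERS = 1609.34

sites = gpd.read_file("epa_sites.geojson")
zones = gpd.read_file("inundation_zones.geojson")

# Reproject to an equal-area CRS in meters so the 0.2-mile buffer is meaningful.
sites_m = sites.to_crs(epsg=5070)
zones_m = zones.to_crs(epsg=5070)

# Apply the GAO-style 0.2-mile radius around each site's listed coordinates.
sites_m["geometry"] = sites_m.geometry.buffer(0.2 * MILES_TO_METERS)

# Keep site-dam pairs whose buffered site footprint intersects an inundation zone.
hits = gpd.sjoin(sites_m, zones_m, how="inner", predicate="intersects")
print(hits[["site_name", "dam_name"]].drop_duplicates())
```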

For a number of dams for which we were not able to obtain inundation maps to review ourselves, dam regulators or owners confirmed that coordinates we provided for the toxic waste site fell within 0.2 miles of the dam’s inundation zone.

We focused our search on the nation’s highest priority cleanup sites, as indicated by a designation of Superfund (for non-operating sites) or RCRA (Resource Conservation and Recovery Act of 1976, for operating sites). We considered 5,695 of these sites, including both current and former sites. Types and levels of contamination vary widely across sites, as would the impact of any flooding.

Using this methodology, we identified at least 81 aging high-hazard dams that could flood portions of at least one toxic waste site if they failed, potentially spreading contaminated material into surrounding communities and exposing hundreds or thousands of people — in the case of very large dams, many more — to health hazards atop significant environmental impacts. At least six of the dams identified were in “poor” or “unsafe” condition during their most recent inspection.

In many instances, state and local agencies responsible for dam safety and toxic waste have not accounted for this kind of cascading disaster, and so remain largely unprepared.

Undark shared this analysis with engineering and dam safety experts, who verified the methodology. Several suggested that the true number of dams that could flood toxic waste sites if they were to fail is almost certainly far greater, but because no agency tracks this particular hazard, the actual number remains unknown.

“There are many situations across the country then with these dams where they don’t meet the current safety standards …” said Mark Ogden, a civil engineer who reviewed the American Society of Civil Engineers’ 2021 Infrastructure Report Card section on dams, which gave U.S. dams a “D” grade. “And the fact that there could be these hazardous sites as part of that, just increases that concern and what the consequences of a failure might be.”

Though impacts would vary widely, environmental scientists and toxicologists interviewed by Undark suggested that a sudden, intense flood caused by a dam failure could spread contaminants from hazardous waste sites into surrounding communities. Even sites designed to withstand flooding might be impacted if the debris carried in floodwater managed to scour and erode protective caps, potentially releasing toxic material into the water, explained Rick Rediske, a toxicologist at Grand Valley State University in Michigan. In Houston in 2017, flooding from Hurricane Harvey eroded a temporary protective cap at the San Jacinto River Waste Pits Superfund site, exposing dioxins and other toxic substances.

Water could then move contaminants around the site and redeposit them anywhere in the floodplain, exposing people and ecosystems to health hazards, said Jacob Carter, a research scientist at the Union of Concerned Scientists, who formerly studied flooding hazards to contaminated sites for the EPA. Carter also pointed out that the communities living nearest to toxic waste sites, and so most vulnerable to these events, tend to be low-income communities and communities of color.

It’s possible that any toxic material would be diluted by the flood and new clean sediment, said Allen Burton, director of the Institute for Global Change Biology at the University of Michigan. But this, he emphasized, would be a best-case scenario.

Generally, when there’s a massive flood like the one in Michigan, “it just moves the sediment everywhere downstream,” said Burton. “You have no way of predicting, really, how much of the bad stuff moved, how far it moved, how far it got out into the floodplain, what the concentrations are.” And regulated waste sites are just one source of potential contamination in a dam breach scenario, said Burton. Sediment behind dams is itself often contaminated after years of collecting whatever went into the river upstream.

Contamination can also come from more mundane sources in the floodplain, like wastewater treatment plants or the oil canisters in people’s basements that get swept into floodwaters, said Burton. “The fish downstream,” he quipped, “don’t care if contaminants came from your garage or Dow Chemical.”

Undark’s investigation found that state and local governments often have not prepared for the flooding that could occur at toxic waste sites in the event of a dam failure.

Emporia Foundry Incorporated, a federally regulated hazardous waste site in Greensville County, Virginia, provides a representative example. It falls within the inundation zone of the 113-year-old Emporia Dam, a hydroelectric dam partially owned by the city and located on the Meherrin River, just over one mile west of the foundry site.

The foundry, which once manufactured manhole covers and drain grates, includes a landfill packed with byproducts containing lead, arsenic, and cadmium. The landfill was capped in 1984, and in 2014, a second cap was added nearer to the river as a buffer against flooding. As in Midland, cleanup engineers accounted for flooding within the 100-year floodplain, but according to a spokesperson from the Virginia Department of Environmental Quality, they did not account for flooding from a dam failure.

The Emporia Dam inundation map shows that if the dam were to fail during a severe storm, the entire foundry site could be flooded, potentially disintegrating the cap and spreading contaminants across the floodplain. However, the site would not be flooded in the event of a “sunny day” failure.

More than 3,000 people live within a mile of the Emporia Foundry site, around 75 percent of whom are Black, according to EPA and 2010 census data.

Wendy C. Howard Cooper, director of Virginia’s dam safety program, explained that her program’s mandate is to define a dam’s inundation zone and inform local emergency managers of any immediate risks to human life and property — not to identify toxic waste sites and analyze what might happen to them during a flood. “That would be a rabbit hole that no one could regulate,” Howard Cooper said. She added that local governments should be familiar with both the dams and the contaminated sites inside their borders and should have proper emergency procedures in place.

This turned out not to be true in Greensville County, where the program coordinator for emergency services, J. Reggie Owens, told Undark he was unaware of the potential for the foundry site to flood if the Emporia Dam were to fail. The site is “not even in the floodplain,” he said. “It’s never been put on my radar by DEQ or anyone else.”

A similar pattern emerged in other states. In Rhode Island, for instance, our search identified eight dams. One of these, the 138-year-old Forestdale Pond Dam, was considered “unsafe” during its most recent inspection.

Located in the town of North Smithfield, the dam is immediately adjacent to the Stamina Mills Superfund site, which once housed a textile mill that spilled the toxic solvent trichloroethylene into the soil. Another area on the site was used as a landfill for polycyclic aromatic hydrocarbons, sulfuric acid, soda ash, wool oil, plasticizers, and pesticides.

A few years after trichloroethylene was detected in groundwater in 1979, the site received a Superfund designation from the EPA. According to the federal agency, construction for the site cleanup — which involved removing the contaminated soil from the landfill and installing a groundwater treatment system — was completed in 2000 and accounted for a 100-year flood, but it did not account for flooding due to a dam failure.

According to EPA and census data, more than 2,500 people lived within a mile of Stamina Mills as of 2010, and Forestdale Pond is not the only dam that could pose a threat.

In fact, the site sits within the inundation zones of two other high-hazard dams identified by Undark. A failure of either of these dams on the Slatersville Reservoir could cause a domino effect of dam failures downstream, according to Rhode Island dam safety reports, all leading to flooding at Stamina Mills.

When asked to comment on possible flood risks to the Superfund site, the EPA responded that the only remaining remedy at Stamina Mills, the groundwater treatment system, would not be affected if Forestdale Pond Dam were to fail. EPA made no reference to the larger Slatersville Reservoir dams less than two miles upstream.

Spokespersons at the Rhode Island dam safety office and the state office responsible for hazardous waste had not considered that a dam failure could flood any of the sites identified by Undark, including Stamina Mills.

By building engineered structures or taking other resiliency measures, the most hazardous waste sites can be designed to withstand flooding, explained Carter, who recently co-authored a report on climate change and coastal flooding hazards to Superfund sites. But in order to prepare for floods, Carter said, flooding hazards have to be recognized first, whether they come from rising seas, increasing storm surge, or, as in these cases, dams.

“They could have looked at that dam and said, ‘Oh, it gets a D minus for infrastructure. This thing could break,’” said Burton, referring to the Edenville Dam. “So in the future, it would be smart of EPA to require the principal party who’s responsible for the cleanup to look at the situation to see if it actually could happen.”

One step that could make that process much easier is for dam inundation zones to be regularly included in FEMA’s publicly available flood risk maps, which show the 100-year floodplain and other flood risks to communities, said Ogden. A lack of available data on dam inundations — sometimes the result of security concerns — presents a major obstacle, said a FEMA spokesperson, but plotting inundation zones on commonly used flood risk maps would ensure communities and agencies are aware of and can respond to dam hazards.

Some states, including Rhode Island, have already made inundation zones, Emergency Action Plans, and inspection reports for the dams they regulate publicly available online. In South Carolina, inundation zones for the most hazardous state-regulated dams were made publicly available after heavy rains caused 50 dams to fail in 2015. Although no state agency tracks hazardous waste sites within dam inundation zones, Undark was able to use this resource to identify three dams in South Carolina that could flood a hazardous waste site in the state.

In California, inundation zones for the state’s most hazardous dams were made available following a 2017 dam failure scare at the Oroville Dam, the tallest dam in the country, which led to the evacuation of more than 180,000 people.

Using this resource, Undark identified four dams which would flood at least one hazardous waste site in California. These included the Oroville Dam, which could flood at least one current and one former Superfund site if it were to fail.

According to the EPA, neither of those sites downstream of the Oroville Dam had considered the possibility of flooding due to dam failure prior to the failure scare. Even so, commented EPA, due to the “extraordinary volume of water” that would flood the sites if the Oroville Dam were to fail, “it is not feasible to alter the existing landfills and groundwater remedy infrastructure to protect against the potential failure of the Oroville Dam.”

In order to fix the nation’s dams, the first step is to spread awareness about the importance of dams and the hazards they pose to people and property, said Farshid Vahedifard, a civil engineer at Mississippi State University who co-authored a recent letter in Science on the need to proactively address problematic dams. “The second thing is definitely we need to invest more.”

According to the Association of State Dam Safety Officials, the fixes necessary to rehabilitate all the nation’s dams would cost more than $64 billion; rehabilitating only the high hazard dams would cost around $22 billion. Meanwhile, the $10 million appropriated by Congress in 2020 for FEMA’s high hazard dam rehabilitation program is “kind of a drop in the bucket for what’s really needed,” said Ogden.

Indeed, state dam safety programs report a chronic lack of funds for dam safety projects, both from public sources and from private dam owners unable or unwilling to pay for expensive repairs. In Michigan, both dams that failed were operated by a company called Boyce Hydro, which had received years of warnings from dam safety regulators about deficiencies.

Lee Mueller, Boyce Hydro’s co-manager, told Undark that the company made numerous improvements to the dams over the years. After losing revenue when the Federal Energy Regulatory Commission (FERC) revoked the company’s hydroelectric permit, however, it was unable to fund repairs that might have prevented the dam failures.

“Regarding the Edenville Dam breach, the subject of the State of Michigan’s governance and political policy failures and the insouciance of the environmental regulatory agencies are the subject of on-going litigation and will be more thoroughly detailed in the course of those legal proceedings,” Mueller wrote in an email.

“The state of Michigan knew about this,” said Dufresne, the Edenville fire chief. State regulators, he says, should have insisted that the company pay for the badly needed repairs. “They needed to push him,” said Dufresne, referring to Mueller. More than half of all dams in the U.S. are privately owned.

Without the funding to match the problem, members of the state dam safety community have looked to nontypical sources of funding, says Bill McCormick, chief of the Colorado dam safety program. In eastern Oregon, for example, the 90-year-old Wallowa Lake Dam — which Undark found would flood the former Joseph Forest Products Superfund site if it were to fail — was slated last year for a $16 million renovation to repair its deteriorating spillway and add facilities for fish to pass through. But the plans have stalled because the Covid-19 pandemic reduced Oregon’s lottery revenues, which were funding most of the project.

The challenges facing U.S. dams are also exacerbated by climate change, say dam safety experts, with more frequent extreme weather events and more intense flooding expected in parts of the country adding new stresses to old designs. “If we start getting much bigger storms, then that itself will lead to a higher probability of overtopping and dam failure,” said Upmanu Lall, director of the Columbia Water Center at Columbia University and co-author of a recent report on potential economic impacts of climate-induced dam failure, which considered how the presence of hazardous waste sites might further amplify damages. The report also outlines how in addition to more extreme weather, factors like changes in land use, sediment buildup, and changing frequencies of wet-dry and freeze-thaw cycles all can contribute to a higher probability of dam failure.

Several state dam safety programs contacted by Undark said they are planning for climate change-related impacts to dam infrastructure, though according to McCormick, the Colorado dam safety chief, his state is the only one with dam safety rules that explicitly account for climate change. New rules that took effect in January require dam designs “to account for expected increases in temperature and associated increases in atmospheric moisture.”

“We were the first state to take that step, but I wouldn’t be surprised if others follow that lead,” McCormick said.

No deaths were reported in the Michigan flooding, but more than 10,000 residents had to be evacuated from their homes and the disaster likely caused more than $200 million in damage to surrounding property, according to a report from the office of Michigan Gov. Gretchen Whitmer. Restoring the empty reservoirs, as well as rebuilding the two dams, could cost upwards of $300 million, according to the Four Lakes Task Force, an organization that had been poised to buy the dams just before they failed.

In contrast, the Four Lakes Task Force, which now owns the dams, planned to spend about $35 million to acquire and repair those dams and an additional two dams prior to the breach. Boyce Hydro declared bankruptcy in July and now faces numerous lawsuits related to the flooding. FERC is coordinating with officials in Michigan on investigations into the dam failures, and has fined Boyce Hydro $15 million for failing to act on federal orders following the incident.

Dufresne, the Edenville fire chief, watched for years as political and financial challenges prevented the dams on the Tittabawassee from getting fixed. His advice for any other community dealing with a problematic dam: Call your state representatives, tell them, “Hey you need to investigate this.”

By August, life in Midland County was slowly getting back to normal. “Some of the people started putting their houses back together. The businesses are trying to figure out what to do next,” said Jerry Cole, the fire chief of Jerome Township, located south of Edenville.

At the Edenville Dam, neat houses looked out over a wide basin of sand-streaked mud where the impounded lake used to be. Near the bottom, where the river was still flowing through the gap in the fractured dam, a group of teenagers lounged on inner tubes, splashing around.

“It just amazes me that this actually happened here,” said Dufresne.

James Dinneen is a science and environmental journalist from Colorado, based in New York.

Alexander Kennedy is a software engineer specializing in data visualization.


Particles at the Ocean Surface and Seafloor Aren’t So Different

Thu, 06/10/2021 - 14:48

Although scientists often assume that random variations in scientific data fit symmetrical, bell-shaped normal distributions, nature isn’t always so tidy. In some cases, a skewed distribution, like the log-normal probability distribution, provides a better fit. Researchers previously found that primary production by ocean phytoplankton and carbon export via particles sinking from the surface are consistent with log-normal distributions.

In a new study, Cael et al. discovered that fluxes at the seafloor also fit log-normal distributions. The team analyzed data from deep-sea sediment traps at six different sites, representing diverse nutrient and oxygen statuses. They found that the log-normal distribution didn’t just fit organic carbon flux; it provided a simple scaling relationship for calcium carbonate and opal fluxes as well.

Uncovering the log-normal distribution enabled the researchers to tackle a longstanding question: Do nutrients reach the benthos—life at the seafloor—via irregular pulses or a constant rain of particles? The team examined the shape of the distribution and found that 29% of the highest measurements accounted for 71% of the organic carbon flux at the seafloor, which is less imbalanced than the 80:20 benchmark specified by the Pareto principle. Thus, although high-flux pulses do likely provide nutrients to the benthos, they aren’t the dominant source.
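As a back-of-the-envelope illustration of that kind of calculation (synthetic numbers, not the study’s sediment trap data; the spread parameter below is an arbitrary assumption), one can check how much of the total flux the largest measurements carry under a log-normal distribution:

```python
# Synthetic illustration only; the study fits real sediment trap records.
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0                                # arbitrary spread parameter
flux = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

flux_sorted = np.sort(flux)[::-1]          # largest measurements first
cum_share = np.cumsum(flux_sorted) / flux_sorted.sum()

top_fraction = 0.29                        # top 29% of measurements
n_top = int(top_fraction * flux.size)
print(f"Top {top_fraction:.0%} of measurements carry {cum_share[n_top - 1]:.0%} of total flux")
# For sigma = 1.0 this comes out near two thirds: concentrated, but well short
# of the Pareto 80:20 benchmark, qualitatively matching the 71% reported above.
```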

The findings will provide a simple way for researchers to explore additional links between net primary production at the ocean surface and deep-sea flux. (Geophysical Research Letters, https://doi.org/10.1029/2021GL092895, 2021)

—Jack Lee, Science Writer

“Earth Cousins” Are New Targets for Planetary Materials Research

Thu, 06/10/2021 - 14:46

Are the processes that generate planetary habitability in our solar system common or rare elsewhere? Answering this fundamental question poses an enormous challenge.

For example, observing Earth-analogue exoplanets—that is, Earth-sized planets orbiting within the habitable zone of their host stars—is difficult today and will remain so even with the next-generation James Webb Space Telescope (JWST) and large-aperture ground-based telescopes. In coming years, it will be much easier to gather data on—and to test hypotheses about the processes that generate and sustain habitability using—“Earth cousins.” These small-radius exoplanets lack solar system analogues but are more accessible to observation because they are slightly bigger or slightly hotter than Earth.

Here we discuss four classes of exoplanets and the investigations of planetary materials that are needed to understand them (Figure 1). Such efforts will help us better understand planets in general and Earth-like worlds in particular.

Fig. 1. Shown here are four common exoplanet classes that are relatively easy to characterize using observations from existing telescopes (or telescopes that will be deployed soon) and that have no solar system analogue. Hypothetical cross sections for each planet type show interfaces that can be investigated using new laboratory and numerical experiments. CO2 = carbon dioxide, Fe = iron, H2O = water, Na = sodium.

What’s in the Air?

On exoplanets, the observable is the atmosphere. Atmospheres are now routinely characterized for Jupiter-sized exoplanets. And scientists are acquiring constraints for various atmospheric properties of smaller worlds (those with a radius R less than 3.5 Earth radii R⨁), which are very abundant [e.g., Benneke et al., 2019; Kreidberg et al., 2019]. Soon, observatories applying existing methods and new techniques such as high-resolution cross-correlation spectroscopy will reveal even more information.

For these smaller worlds, as for Earth, a key to understanding atmospheric composition is understanding exchanges between the planet’s atmosphere and interior during planet formation and evolution. This exchange often occurs at interfaces (i.e., surfaces) between volatile atmospheres and condensed (liquid or solid) silicate materials. For many small exoplanets, these interfaces exhibit pressure-temperature-composition (PTX) regimes very different from Earth’s and that have been little explored in laboratory and numerical experiments. To use exoplanet data to interpret the origin and evolution of these strange new worlds, we need new experiments exploring the relevant planetary materials and conditions.

Studying Earth cousin exoplanets can help us probe the delivery and distribution of life-essential volatile species—chemical elements and compounds like water vapor and carbon-containing molecules, for example, that form atmospheres and oceans, regulate climate, and (on Earth) make up the biosphere. Measuring abundances of these volatiles on cousin worlds that orbit closer to their star than the habitable zone is relatively easy to do. These measurements are fundamental to understanding habitability because volatile species abundances on Earth cousin exoplanets will help us understand volatile delivery and loss processes operating within habitable zones.

For example, rocky planets now within habitable zones around red dwarf stars must have spent more than 100 million years earlier in their existence under conditions exceeding the runaway greenhouse limit, suggesting surface temperatures hot enough to melt silicate rock into a magma ocean. So whether these worlds are habitable today depends on the amount of life-essential volatile elements supplied from sources farther from the star [e.g., Tian and Ida, 2015], as well as on how well these elements are retained during and after the magma ocean phase.

Volatiles constitute a small fraction of a rocky planet’s mass, and quantifying their abundance is inherently hard. However, different types of Earth cousin exoplanets offer natural solutions that can ease volatile detection. For example, on planets known as sub-Neptunes, the spectroscopic fingerprint of volatiles could be easier to detect because of their mixing with lower–molecular weight atmospheric species like hydrogen and helium. These lightweight species contribute to more puffed-up (expanded) and thus more detectable atmospheres. Hot, rocky exoplanets could “bake out” volatiles from their interiors while also heating and puffing up the atmosphere, which would make spectral features more visible. Disintegrating rocky planets may disperse their volatiles into large, and therefore more observable, comet-like tails.

Let’s look at each of these examples further.

Unexpected Sub-Neptunes

About 1,000 sub-Neptune exoplanets (radius of 1.6–3.5 R⨁) have been confirmed. These planets, which are statistically about as common as stars, blur the boundary between terrestrial planets and gas giants.

A warm, Neptune-sized exoplanet orbits the red dwarf star GJ 3470. Intense radiation from the star heats the planet’s atmosphere, causing large amounts of hydrogen gas to stream off into space. Credit: NASA/ESA/D. Player (STScI)

Strong, albeit indirect, evidence indicates that the known sub-Neptunes are mostly magma by mass and mostly atmosphere by volume (for a review, see Bean et al. [2021]). This evidence implies that an interface occurs, at pressures typically between 10 and 300 kilobars, between the magma and the molecular hydrogen (H2)-dominated atmosphere on these planets. Interactions at and exchanges across this interface dictate the chemistry and puffiness of the atmosphere. For example, water can form and become a significant fraction of the atmosphere, leading to more chemically complex atmospheres.

Improved molecular dynamics calculations are needed to quantify the solubilities of gases and gas mixtures in realistic magma ocean compositions (and in iron alloys composing planetary cores, which can also serve as reservoirs for volatiles) over a wider range of pressures and temperatures than we have studied until now. These calculations should be backed up by laboratory investigations of such materials using high-pressure instrumentation like diamond anvil cells. These calculations and experiments will provide data to help determine the equation of state (the relationship among pressure, volume, and temperature), transport properties, and chemical kinetics of H2-magma mixtures as they might exist on these exoplanets.

Fig. 2. Ranges of plausible conditions at the interfaces between silicate surface rocks and volatile atmospheres on different types of worlds are indicated in this pressure–temperature (P-T) diagram. Conditions on Earth, as well as other relevant conditions (critical points are the highest P-T points where materials coexist in gaseous and liquid states, and triple points are where three phases coexist), are also indicated. Mg2SiO4 = forsterite, an igneous mineral that is abundant in Earth’s mantle.

Because sub-Neptunes are so numerous, we cannot claim to understand the exoplanet mass-radius relationship in general (in effect, the equation of state of planets in the galaxy) without understanding interactions between H2 and magma on sub-Neptunes. To understand the extent of mixing between H2, silicates, and iron alloy during sub-Neptune assembly and evolution, we need more simulations of giant impacts during planet formation [e.g., Davies et al., 2020], as well as improved knowledge of convective processes on these planets. Within the P-T-X regimes of sub-Neptunes, full miscibility between silicates and H2 becomes important (Figure 2).

Beyond shedding light on the chemistry and magma-atmosphere interactions on these exoplanets, new experiments may also help reveal the potential for and drivers of magnetic fields on sub-Neptunes. Such fields might be generated within both the atmosphere and the magma.

Hot and Rocky

From statistical studies, we know that most stars are orbited by at least one roughly Earth-sized planet (radius of 0.75–1.6 R⨁) that is irradiated more strongly than our Sun’s innermost planet, Mercury. These hot, rocky exoplanets, of which about a thousand have been confirmed, experience high fluxes of atmosphere-stripping ultraviolet photons and stellar wind. Whether they retain life-essential elements like nitrogen, carbon, and sulfur is unknown.

On these hot, rocky exoplanets—and potentially on Venus as well—atmosphere-rock or atmosphere-magma interactions at temperatures too high for liquid water will be important in determining atmospheric composition and survival. But these interactions have been only sparingly investigated [Zolotov, 2018].

Many metamorphic and melting reactions between water and silicates under kilopascal to tens-of-gigapascal pressures are already known from experiments or are tractable using thermodynamic models. However, less well understood processes may occur in planets where silicate compositions and proportions are different than they are on Earth, meaning that exotic rock phases may be important. Innovative experiments and modeling that consider plausible exotic conditions will help us better understand these planets. Moreover, we need to conduct vaporization experiments to probe whether moderately volatile elements are lost fast enough from hot, rocky planets to form a refractory lag and reset surface spectra.

Exotic Water Worlds?

Water makes up about 0.01% of Earth’s mass. In contrast, the mass fraction of water on Europa, Ceres, and the parent bodies of carbonaceous chondrite meteorites is some 50–3,000 times greater than on Earth. Theory predicts that such water-rich worlds will be common not only in habitable zones around other stars but even in closer orbits as well. The JWST will be able to confirm or refute this theory [Greene et al., 2016].

If we could descend through the volatile-rich outer envelope of a water world, we might find habitable temperatures at shallow depths [Kite and Ford, 2018]. Some habitable layers may be cloaked beneath H2. Farther down, as the atmospheric pressure reaches 10 or more kilobars, we might encounter silicate-volatile interfaces featuring supercritical fluids [e.g., Nisr et al., 2020] and conditions under which water can be fully miscible with silicates [Ni et al., 2017].

We still need answers to several key questions about these worlds. What are the equilibria and rates of gas production and uptake for rock-volatile interfaces at water world “seafloors”? Can they sustain a habitable climate? With no land, and thus no continental weathering, can seafloor reactions supply life-essential nutrients? Do high pressures and stratification suppress the tectonics and volcanism that accelerate interior-atmosphere exchange [Kite and Ford, 2018]?

As for the deep interiors of Titan and Ganymede in our own solar system, important open questions include the role of clathrates (compounds like methane hydrates in which one chemical component is enclosed within a molecular “cage”) and the solubility and transport of salts through high-pressure ice layers.

Experiments are needed to understand processes at water world seafloors. Metamorphic petrologists are already experienced with the likely pressure-temperature conditions in these environments, and exoplanetary studies could benefit from their expertise. Relative to rock compositions on Earth, we should expect exotic petrologies on water worlds—for example, worlds that are as sodium rich as chondritic meteorites. Knowledge gained through this work would not only shed light on exoplanetary habitability but also open new paths of research into studying exotic thermochemical environments in our solar system.

Magma Seas and Planet Disintegration

Some 100 confirmed rocky exoplanets are so close to their stars that they have surface seas of very low viscosity magma. The chemical evolution of these long-lived magma seas is affected by fractional vaporization, in which more volatile materials rise into the atmosphere and can be relocated to the planet’s dark side or lost to space [e.g., Léger et al., 2011; Norris and Wood, 2017], and perhaps by exchange with underlying solid rock.

Magma planets usually have low albedos, reflecting relatively little light from their surfaces. However, some of these planets appear to be highly reflective, perhaps because their surfaces are distilled into a kind of ceramic rich in calcium and aluminum. One magma planet’s thermal signature has been observed to vary from month to month by a factor of 2 [Demory et al., 2016], implying that it undergoes a global energy balance change more than 10,000 times greater than that from anthropogenic climate change on Earth. Such large swings suggest that fast magma ocean–atmosphere feedbacks operate on the planet.
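As a rough order-of-magnitude check of that comparison, consider the following sketch. The dayside temperature, the blackbody assumption, and the value for anthropogenic forcing on Earth are illustrative assumptions, not numbers from the cited study.

```python
# Rough order-of-magnitude check; all input values are illustrative assumptions.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4

T_dayside = 2500.0             # assumed dayside temperature of a magma planet, K
flux_hot = SIGMA * T_dayside**4        # blackbody emission at that temperature
swing = flux_hot / 2                   # a factor-of-2 change in observed emission

anthropogenic_forcing = 2.3    # approximate present-day forcing on Earth, W m^-2

print(f"Emission swing on the magma planet: {swing:.2e} W/m^2")
print(f"Ratio to Earth's anthropogenic forcing: {swing / anthropogenic_forcing:,.0f}x")
# Comes out to a few hundred thousand, comfortably above the 'more than 10,000
# times greater' comparison made in the text.
```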

To learn more about the chemical evolution and physical properties of exoplanet magma seas, we need experiments like those used to study early-stage planet formation, which can reveal information about silicate vaporization and kinetics under the temperatures (1,500–3,000 K) and pressures (10−5 to 100 bars) of magma planet surfaces.

Exoplanets and exoplanetesimals that stray too close to their stars are destroyed—about five such cases have been confirmed. These disintegrating planets give geoscientists direct views of exoplanetary silicates because the debris tails can be millions of kilometers long [van Lieshout and Rappaport, 2018]. For disintegrating planets that orbit white dwarf stars, the debris can form a gas disk whose composition can be reconstructed [e.g., Doyle et al., 2019].

To better read the signals of time-variable disintegration, we need more understanding of how silicate vapor in planetary outflows condenses and nucleates, as well as of fractionation processes at and above disintegrating planets’ surfaces that may cause observed compositions in debris to diverge from the bulk planet compositions.

Getting to Know the Cousins

In the near future, new observatories like JWST and the European Space Agency’s Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL, planned for launch in 2029) will provide new data. When they do, and even now before they come online, investigating Earth cousins will illuminate the processes underpinning habitability in our galaxy and reveal much that is relevant for understanding Earth twins.

From sub-Neptunes, for example, we can learn about volatile delivery processes. From hot, rocky planets, we can learn about atmosphere-interior exchange and atmospheric loss processes. From water worlds, we can learn about nutrient supplies in exoplanetary oceans and the potential habitability of these exotic environments. From disintegrating planets, we can learn about the interior composition of rocky bodies.

Laboratory studies of processes occurring on these worlds require only repurposing and enhancing existing experimental facilities, rather than investing in entire new facilities. From a practical standpoint, the scientific rewards of studying Earth cousins are low-hanging fruit.

Acknowledgments

We thank the organizers of the AGU-American Astronomical Society/Kavli workshop on exoplanet science in 2019.

Modeling Urban-weather Effects Can Inform Aerial Vehicle Flights

Wed, 06/09/2021 - 13:06

New modes of aerial operations are emerging in the urban environment, collectively known as Advanced Air Mobility (AAM). These include electrically propelled vertical takeoff and landing aerial vehicles for infrastructure surveillance, goods delivery, and passenger transportation. However, their safe and efficient deployment requires ultra-fine-scale weather and turbulence guidance products. Initial testing and demonstration exercises are planned for the very near future, which makes this work both timely and relevant.

To enable successful operation of these new aerial operations in the urban environment, the meteorological community must provide relevant guidance to inform and support these activities. Muñoz-Esparza et al. [2021] demonstrate how seasonal, diurnal, day-to-day, and rapidly evolving sub-hourly meteorological phenomena create unique wind and turbulence distributions within the urban canopy. They showcase the potential for efficient ultra-fine resolution atmospheric models to understand and predict urban weather impacts that are critical to these AAM operations.

Citation: Muñoz-Esparza, D., Shin, H., Sauer, J. et al. [2021]. Efficient GPU Modeling of Street-Scale Weather Effects in Support of Aerial Operations in the Urban Environment. AGU Advances, 2, e2021AV000432. https://doi.org/10.1029/2021AV000432

—Donald Wuebbles, Editor, AGU Advances

Raising Central American Orography Improves Climate Simulation

Wed, 06/09/2021 - 13:05

Global climate models (GCMs) suffer from a long-standing tropical rainfall bias: they produce rainfall peaks on both sides of the equator rather than a single peak just north of it, a problem known as the double Intertropical Convergence Zone (ITCZ) bias. This bias in the tropical mean state limits the fidelity of GCMs in projecting future climate. Much effort has gone into reducing the double ITCZ bias, but it has persisted since the early days of model development.

Baldwin et al. [2021] suggest that a significant portion of the double ITCZ bias originates from Central American orography that is biased too low in models. Orographic peaks are often smoothed out when observed orography is averaged onto a model grid. The authors demonstrate that raising Central American orography reduces the double ITCZ bias, because the northeastern tropical Pacific becomes warmer as easterlies are blocked. The study offers a simple, computationally inexpensive, yet physically based method for reducing the pervasive double ITCZ bias.

Citation: Baldwin, J., Atwood, A., Vecchi, G. and Battisti, D. [2021]. Outsize Influence of Central American Orography on Global Climate. AGU Advances, 2, e2020AV000343. https://doi.org/10.1029/2020AV000343

The Earth in Living Color: Monitoring Our Planet from Above

Wed, 06/09/2021 - 12:18

For more than five decades, satellites orbiting Earth have recorded and measured different characteristics of the land, oceans, cryosphere, and atmosphere, and how they are changing. Observations of planet Earth from space are a critical resource for science and society. With the planet under pressure from ever-expanding and increasingly intensive human activities combined with climate change, observations from space are increasingly relied upon to monitor and to inform adaptation and mitigation activities to maintain food security, biodiversity, water quality, and responsiveness to disasters.

A new cross-journal special collection, The Earth in Living Color, aims to provide a state-of-the-art and timely assessment of how advances in remote sensing are revealing new insights and understanding for monitoring our home planet. We encourage papers that cover the use of imaging spectroscopy and thermal infrared remote sensing to observe and understand the Earth’s vegetation, coastal aquatic ecosystems, surface mineralogy, snow dynamics, and volcanic activity. These may range from architecture studies that determine spaceborne measurement objectives, to papers on algorithm development, calibration and validation, and modeling to support traceability. Papers can be submitted either to Journal of Geophysical Research: Biogeosciences or Earth and Space Science.

The special collection is associated with the NASA Surface Biology and Geology Designated Observable (SBG), and will document:

how SBG will meet science and applications measurement objectives; how international partnerships (with the European Space Agency’s Copernicus Hyperspectral Imaging Mission (CHIME) and Land Surface Temperature Monitoring mission (LSTM), and with the Centre National d’Études Spatiales (CNES) and Indian Space Research Organization’s (ISRO) Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment mission (TRISHNA)) will improve revisit times; new developments in atmospheric correction, surface reflectance retrievals, and algorithms; and synergies with other NASA Decadal Survey missions.

SBG leverages a rich heritage of airborne imaging spectroscopy that includes the AVIRIS and PRISM instruments, and thermal imagers such as HYTES and MASTER, as well as space-based observations from pathfinder missions such as HYPERION and current missions, including ECOSTRESS, PRISMA, DESIS, and HISUI.

Satellite measurements represent very large investments, and the United States and space agencies around the globe organize their efforts to maximize the return on that investment. For instance, the U.S. National Research Council conducts a decadal survey of NASA Earth science and applications to prioritize observations of the atmosphere, ocean, land, and cryosphere. The most recent decadal survey, published in 2017, prioritized observations of surface biology and geology using a visible to shortwave infrared (VSWIR) imaging spectrometer and a multispectral thermal infrared (TIR) imager to meet a range of needs. As announced by NASA in May 2021, SBG will become integrated within a larger NASA Earth System Observatory (ESO) that will include observations of aerosols, clouds, convection, and precipitation, mass change, and surface deformation and change.

The SBG science, applications and technology build on over a decade of experience and planning for such a mission based on the previous Hyperspectral Infrared Imager (HyspIRI) mission study. During the course of a three-year study (2018-2021), the SBG team analyzed needed instrument characteristics (spatial, temporal and spectral resolution, measurement uncertainty) and assessed the cost, mass, power, volume, and risk of different architectures. The SBG Research and Applications team examined available algorithms, calibration and validation, and societal applications, and used end-to-end modeling to assess uncertainty.  The team also identified valuable opportunities for international collaboration to increase the frequency of revisit through data sharing, adding value for all partners. Analysis of the science, applications, architecture, and partnerships led to a clear measurement strategy and a well-defined observing system architecture.

SBG addresses global vegetation, aquatic, and geologic processes that quantify critical aspects of the land surface and that interact with Earth’s climate system, responding to NASA’s Decadal Survey priorities. The SBG observing system has a defined set of critical observables that equally inform science and environmental management and policy for a host of societal benefit areas. Credit: NASA JPL

First, and perhaps foremost, SBG will be a premier integrated observatory for observing the emerging impacts of climate change. It will characterize the diversity of plant life by resolving chemical and physiological signatures. It will address wildfire, observing pre-fire risk, fire behavior, and post-fire recovery. It will provide information for the coastal zone on phytoplankton abundance, water quality, and aquatic ecosystem classification. It will inform responses to natural and anthropogenic hazards and disasters, guiding responses to a wide range of events, including oil spills, toxic minerals, harmful algal blooms, landslides, and other geological hazards such as volcanic activity.

The NASA Earth System Observatory initiates a new era of scientific monitoring, with SBG providing an unprecedented perspective of the Earth surface through new spatial, temporal, and spectral information with high signal-to-noise. The Earth in Living Color special collection will showcase the latest advances in remote sensing that are providing vital insights into changes in planet Earth.

—David Schimel (david.schimel@jpl.nasa.gov; ORCID: 0000-0003-3473-8065), NASA Jet Propulsion Laboratory, USA; and Benjamin Poulter (ORCID: 0000-0002-9493-8600), NASA Goddard Space Flight Center, USA

Siltation Threatens Historic North Indian Dam

Wed, 06/09/2021 - 12:15

When it opened in 1963, Bhakra Dam was called a “new temple of resurgent India” by Jawaharlal Nehru, India’s first prime minister. Today the dam is threatened as its reservoir rapidly fills with silt.

Much to the worry of hydrologists monitoring the situation, the reservoir—Gobind Sagar Lake—has a rapidly growing sediment delta that, once it reaches the dam, will adversely affect power generation and water deliveries.

Bhakra Dam stands 226 meters tall and stretches 518 meters long, making it one of the largest dams in India. Electricity generated by the dam supports the states of Himachal Pradesh (where the dam is located), Punjab, Haryana, and Rajasthan, and the union territories of Chandigarh and Delhi. The reservoir supplies these areas with water for drinking, hygiene, industry, and irrigation. Loss of reservoir capacity as a result of sedimentation could thus have severe consequences for the region’s water management system and power grid.

A Leopard’s Leap to a Green Revolution

In 1908, British civil services officer Sir Louis Dane claimed to have witnessed a leopard leaping from one end of a gorge on the Sutlej River to the other. “Here’s a site made by God for storage,” he wrote. Little happened, however, until 40 years later, when Nehru took up the proposal as one of the first large infrastructure projects in India after independence.

Bhakra Dam’s waters quickly catalyzed the nation’s green revolution of increased agricultural production. In the early 1960s, for instance, 220,000 hectares of rice were under paddy cultivation in Punjab. Within 10 years, that number had increased to 1.18 million hectares, a figure that doubled by 1990. Today Punjab contributes up to 50% of India’s rice supply.

Parminder Singh Dhanju, a rural resident of Rajasthan whose village is about 565 kilometers from Bhakra Dam, has a farm fed by canals originating from the reservoir. “The water availability has changed the lives of us villagers,” he said. “Before the canal brought water to our area, we were poor [and] used to live [lives] of nomads, in the sand dunes. Now we grow a variety of crops such as wheat, rice, cotton, and citrus fruits (oranges and kinnows), and we are referred [to] as affluent farmers.”

The Saga of Silt

According to investigations led by D. K. Sharma, former chairman of the Bhakra Beas Management Board (BBMB, the agency responsible for the dam), nearly a quarter of Gobind Sagar Lake has filled with silt. The sediment washes in from the lake’s catchment areas, which are spread over 36,000 square kilometers in the Himalayas.

“The storage of the reservoir is 9.27 billion cubic meters, out of which 2.13 billion cubic meters are filled with silt, which is an alarming situation,” explained Sharma. He said the studies related to silt pileup are carried out every 2 years.
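As a rough check on those figures, the reported volumes alone imply that silt now occupies about 23% of the reservoir’s storage, consistent with the “nearly a quarter” estimate. A minimal sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check on the reported siltation figures.
total_storage_bcm = 9.27   # reservoir storage, billion cubic meters (as reported)
silted_bcm = 2.13          # volume filled with silt, billion cubic meters (as reported)

silt_fraction = silted_bcm / total_storage_bcm
print(f"Fraction of storage filled with silt: {silt_fraction:.1%}")  # ~23%, i.e., "nearly a quarter"
```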

Sharma and other BBMB engineers submitted a report last year on siltation at Bhakra Dam. In it, Sharma said the dam was projected to be an effective reservoir for at least 100 years. However, he explained, the silt buildup will likely shorten that time frame. “It depends on the amount of silt in the reservoir,” he said. “The increase in siltation will hasten the process of turning the dam into a dead project, making the canal system downstream vulnerable to deposition of silt and floods.”

The Way Out

To combat siltation, Sharma suggested extensive reforestation in the reservoir’s catchment area. “The partner states of BBMB—Punjab, Haryana, Rajasthan, and Himachal Pradesh—need to plan forestation to bind the loose soil,” he said.

“If we can reduce silt inflows by 10%, the dam’s life can be extended by 15–20 years,” he added.

BBMB joint secretary Anurag Goyal heads the reforestation project around the dam. He said that in 2019, 600,000 saplings were planted over the reservoir’s catchment area. “We have resumed plantation that was temporarily halted in 2020 due to COVID-19 pandemic.”

Other suggestions to prevent or mitigate siltation include dredging the reservoir, although Goyal dismisses that idea as cost prohibitive. Goyal agreed with Sharma that reforestation or other mitigation projects must include local governments. “Reforestation over [such a] vast area needs a road map and the involvement of the north Indian states…. We need to act fast and engage local population and NGOs to carry out plantation, before it’s too late.”

—Gurpreet Singh (@JournoGurpreet), Science Writer

Gulf Stream Intrusions Feed Diatom Hot Spots

Wed, 06/09/2021 - 12:09

The Gulf Stream, which has reliably channeled warm water from the tropics northward along the East Coast of North America for thousands of years, is changing. Recent research shows that it may be slowing down, and more and more often, the current is meandering into the Mid-Atlantic Bight—a region on the continental shelf stretching from North Carolina to Massachusetts and one of the most productive marine ecosystems in the world.

Previous studies have suggested that this intrusion of Gulf Stream water, which is comparatively low in nutrients at the surface, could hamper productivity. But in a new study, Oliver et al. found that intrusions of deeper, nutrient-rich Gulf Stream water can also feed hot spots of primary productivity.

By analyzing data collected by R/V Thomas G. Thompson in July of 2019, the team spotted a series of hot spots about 50 meters below the surface, just east of a large eddy known as a warm-core ring. This ring had formed off the side of the Gulf Stream current and was pushing westward toward the continental shelf, drawing cool water into the slope region off the edge of the shelf.

The hot spots had chlorophyll levels higher than those typically seen in the slope region and were packed with a diverse load of diatoms, a class of single-celled algae. Studying images of the hot spots, the team found that the colony-forming diatom Thalassiosira diporocyclus was an abundant component of these communities.

The researchers used a model that combined upper ocean and biogeochemical dynamics to support the idea that the upwelling of Gulf Stream water moving northward into the Mid-Atlantic Bight could cause the hot spots to form. The study demonstrates how Gulf Stream nutrients could influence subsurface summer productivity in the region and that such hot spots should be taken into account when researchers investigate how climate change will reshape circulation patterns in the North Atlantic. (Geophysical Research Letters, https://doi.org/10.1029/2020GL091943, 2021)

—Kate Wheeling, Science Writer

Improving the Global Budget for Atmospheric Methanol

Tue, 06/08/2021 - 11:54

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

Despite being the second most common organic gas after methane, scientists are still working to understand the role and movement of methanol in regions of the atmosphere over the remote ocean. The gas interacts with a number of other important atmospheric molecules, such as ozone and the hydroxyl radical (OH), and serves as a precursor of carbon monoxide and formaldehyde. Globally, most atmospheric methanol comes from land plants, but it is also produced by humans, the oceans, and biomass burning. Over the past decade, scientists have begun to understand that the atmosphere’s natural chemistry is another important source of methanol, especially above remote ocean regions.

In a new study, Bates et al. work to update models of atmospheric methanol using measurements taken during NASA’s Atmospheric Tomography Mission (ATom), which ran from summer 2016 to spring 2018. The mission consisted of a set of aircraft flights around the Pacific and Atlantic Oceans at various altitudes. Sensors aboard the aircraft measured methanol and other trace gases. The team then used these data to constrain GEOS-Chem, a global atmospheric chemistry model, to create a more complete picture of atmospheric chemistry over the ocean.

Overall, the scientists found that the atmospheric lifetime of methanol was 5.3 days, with roughly half of the gas coming from the terrestrial biosphere. Of the remaining half, most (~60%) of the methanol in remote regions comes from gas-phase chemistry, specifically the reaction of methylperoxy radicals (CH3O2) with OH, with themselves, and with other peroxy radicals. The new numbers show that the ocean is a net sink of the gas because much of the methanol leaving the water is often quickly redeposited at the ocean surface. Given methanol’s importance in atmospheric chemistry, the researchers hope the model updates will help provide a more complete picture of how methanol influences the concentrations of several important radical species in the air over remote oceans. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD033439, 2021)

—David Shultz, Science Writer

This translation by Daniela Navarro-Pérez (@DanJoNavarro) was made possible by a partnership with Planeteando. Esta traducción fue posible gracias a una asociación con Planeteando.

Seafloor Seismometers Look for Clues to North Atlantic Volcanism

Tue, 06/08/2021 - 11:53

The famous Giant’s Causeway on the coast of Northern Ireland comprises tens of thousands of spectacular basalt columns created as ancient lava flows slowly cooled and fractured into characteristic polygonal forms. Impressive in size though it is, the causeway is only a small part of the Antrim Lava Group, which comprises formations resulting from flood basalt eruptions that covered large areas with lava in the Paleocene, about 60 million years ago [Ganerød et al., 2010].

At that time—just prior to continental separation and the onset of seafloor spreading in the northeastern Atlantic Ocean—these large eruptions were scattered across the entire region, forming the vast North Atlantic Igneous Province (NAIP). Remnants of these flood basalts are found on both sides of the Atlantic, from western Greenland and Baffin Island in the west to the northwestern European margin—including at Giant’s Causeway, Fingal’s Cave in Scotland, and elsewhere around Ireland and Great Britain—in the east [Peace et al., 2020].

Today, volcanic activity in the northeastern Atlantic continues through numerous volcanoes in Iceland. This volcanism is unusual in that it occurs on a plate tectonic boundary—that separating the Eurasian and North American plates—but is not caused just by tectonic activity. Instead, the volcanism and the abnormally shallow bathymetry of the northeastern Atlantic basin are thought to be caused by the Iceland plume, an upwelling of hot mantle rock originating from the core-mantle boundary that causes the tectonic plate in this region to flex upward [Morgan, 1971].

The formation of the NAIP has also been attributed to this plume [White and McKenzie, 1989]. In contrast to recent eruptions, however, those occurring 60 million years ago were not focused in the same area but, rather, were scattered across thousands of kilometers. Explaining this broadly distributed volcanism, as well as many other outstanding questions about the dynamics and evolution of the Iceland plume and North Atlantic lithosphere, has long motivated scientific inquiry, but clear answers have been hard to come by because of gaps in seismic data coverage.

Recently, researchers working on a new project, called Structure, Evolution and Seismicity of the Irish Offshore (SEA-SEIS), have collected data that will provide new insights into these old questions and that may finally tell us whether and how the Iceland plume could have caused havoc across the North Atlantic 60 million years ago [White and Lovell, 1997].

Hot Mantle, Heated Debates

Hot plumes of rock that rise slowly from Earth’s core-mantle boundary to the surface are conventionally thought to be the cause of large igneous provinces, such as the NAIP, and of continental rifting and breakup [Morgan, 1971]. The breakup leading to the opening of the North Atlantic Ocean, however, was a complex, multistage process accompanied by compositionally variable and geographically scattered magmatic events—events that are far from fully understood [Peace et al., 2020].

Whereas some researchers have countered the plume model with alternatives that attempt to explain NAIP volcanism solely through lithospheric processes, others debate mechanisms by which magma from a relatively narrow mantle plume can be distributed over a much broader area of the surface. In particular, to what extent might topography at the base of the lithosphere have influenced the NAIP?

The thickness of Earth’s tectonic plates varies laterally and is typically between 60 and 300 kilometers. Just as the topsides of these plates can have differing topography—shaped by tectonic forces, by volcanism, and by weathering and sedimentation processes at the planet’s surface—the undersides can as well, as a result of interactions between the lithosphere of the plates and the hotter, more ductile asthenosphere. This topography at the lithosphere-asthenosphere boundary (LAB) can include channels that guide lateral flows of buoyant material supplied by mantle plumes for many hundreds of kilometers. Such thin-lithosphere channels have been observed in seismic tomography models at the base of the currently active East Africa–Arabia volcanic region [Celli et al., 2020]. They have also been detected beneath Greenland [Lebedev et al., 2018], offering a potential explanation for Paleocene volcanism that occurred simultaneously along both its western and eastern coasts [Steinberger et al., 2019].

Could the topography of the LAB beneath the stretched continental lithosphere to the west and northwest of present-day Ireland—the westernmost portion of the Eurasian plate—have channeled hot material from the Iceland plume all the way to Ireland and Great Britain? If so, some of that topography might still be present today, as it is beneath Greenland, despite the continuing thermal evolution and reshaping of the LAB. Land-based seismic instruments do not offer the needed resolution to detect and map such features. Rather, we need data from seismic stations deployed on the North Atlantic seafloor—data that were not available until the SEA-SEIS project.

Seismometers Dive Deep

In September and October 2018, seismologists from the Dublin Institute for Advanced Studies (DIAS) in Ireland deployed 18 ocean bottom seismometers (OBSs) from the R/V Celtic Explorer, operated by Ireland’s Marine Institute, to the bottom of the northeastern Atlantic at depths of 1–4 kilometers (Figure 1). The network covered offshore waters south, west, and northwest of Ireland, as well as south of Iceland. The OBS units were NAMMUs, manufactured by Umwelt- und Meerestechnik Kiel (K.U.M.) in Germany, equipped with Trillium Compact 120-second broadband seismometers. They recorded three ground motion components and a broadband hydrophone channel, all at 250 samples per second.

Fig. 1. Locations of the 18 broadband, ocean bottom seismic stations (red circles) deployed by the SEA-SEIS project are shown on this map of the northeastern Atlantic region (left). Blue triangles show locations of broadband stations on land and wideband stations offshore from other permanent and temporary networks in the region. The SEA-SEIS stations were named by school students from across Ireland and in Italy (right).

The station retrieval cruise took place from April to May 2020, at the height of the first wave of the COVID-19 pandemic, requiring organization and planning efforts by DIAS and the Marine Institute above and beyond those of typical research cruises. Each of the scientists involved self-isolated for 14 days prior to the survey before a private bus delivered them from their homes in Dublin to the ship in Galway. On board, each person had their own cabin and observed social distancing and other measures, like staggered mealtimes, for the duration of the cruise, which was completed without illness or other incident.

Of the 14 instruments retrieved, one recorded data for 17 months, and the other 13 recorded for the full 19 months of the deployment, thanks to the batteries powering the instruments well past their 14-month nominal life span. Four of the instruments (Allód, Harry, Nemo, and Sebastian; see Figure 1) responded to communications from the ship but failed to detach from their anchors and, at the time of writing, remain on the seafloor. Recovery with a remotely operated underwater vehicle at a later date is now the most realistic—albeit challenging—option.

The deployment (top left, top right, and bottom left) and retrieval (bottom right) of ocean bottom seismometers are shown in this sequence. During deployment, the instrument is craned overboard and released into the water, where it descends to the seafloor. During retrieval, the instrument receives an acoustic command from the ship, detaches from its anchor, and slowly ascends (at roughly 1 meter per second) to the surface. The orange flag makes the seismometer easy to spot from the ship, and it is hooked and lifted onto the deck. Credit: Raffaele Bonadio, Janneke de Laat, and the SEA-SEIS team/DIAS

Earthquakes and Whale Songs

SEA-SEIS has now delivered broadband seismic data from across a vast stretch of the North Atlantic seafloor. The new data will shed light on patterns and locations of offshore seismicity west of Ireland, which remains poorly understood but includes earthquakes larger than those occurring onshore Ireland. From preliminary analyses, the data indeed show clear recordings of both regional and teleseismic (distant) earthquakes; an example of one of these recordings is presented in the video clip below.

These recordings will facilitate tomographic imaging and other seismic studies, which will yield new information about the lithospheric structure and evolution of the North Atlantic region. This information will reveal the structure and thickness of the lithosphere, which has been observed to be generally colder and thicker in the eastern compared with the western half of the northeastern Atlantic basin [Celli et al., 2021]. We will use the new imaging to search for thin-lithosphere channels connecting Iceland with volcanic areas in and around Ireland and Great Britain.

SEA-SEIS data should also inform understanding of the Iceland plume itself. Many whole-mantle tomography models show a major low-seismic velocity anomaly, interpreted as the Iceland plume, in the shallow lower mantle (below 660-kilometer depth) to the southeast of Iceland [Bijwaard and Spakman, 1999]. Recent waveform tomography, by contrast, showed a tilted low-velocity anomaly rising toward Iceland from the northwest, beneath eastern Greenland [Celli et al., 2021]. The new data will fill the large existing sampling gap and help us to reconcile these and other puzzling and seemingly contradictory observations.

Beyond revealing new information about regional seismic activity, lithospheric structure, and volcanism past and present, SEA-SEIS data will aid other investigations as well. Recordings of the ambient noise between earthquakes, created as the ocean interacts with the seafloor and the continental shelf, will be analyzed to study noise generation and propagation in the North Atlantic [Le Pape et al., 2021]. The SEA-SEIS data set also presents 19 months of continuous recordings of baleen whale vocalizations (a sample of which is given in the video clip below) collected across an area spanning more than half the width of the northeastern Atlantic Ocean.

The frequency band of the seismic and hydrophone data—determined at the high-frequency limit by the 250-per-second sampling rate—is sufficient to capture the ranges of blue and fin whale vocalizations, as well as the low-frequency parts of the ranges of humpbacks and North Atlantic right whales. The SEA-SEIS data set will be used to map the acoustic environment of these great whales, to detect and track them, and to study their migration patterns.
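For context, the recorded bandwidth is set by the Nyquist frequency, half the sampling rate. The short sketch below compares that limit with approximate vocalization bands; the band values are rough illustrative assumptions, not figures from the SEA-SEIS team:

```python
# Nyquist frequency sets the upper limit of the recorded band.
sampling_rate_hz = 250.0
nyquist_hz = sampling_rate_hz / 2.0   # 125 Hz

# Approximate vocalization bands in hertz (illustrative values only, not project data).
call_bands_hz = {
    "blue whale": (10, 40),
    "fin whale": (15, 30),
    "humpback (low-frequency part)": (30, 125),
}

for species, (lo, hi) in call_bands_hz.items():
    covered = hi <= nyquist_hz
    print(f"{species}: {lo}-{hi} Hz, fully below Nyquist ({nyquist_hz:.0f} Hz): {covered}")
```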

When the R/V Celtic Explorer retrieved OBS Charles, an octopus that had attached itself to the buoyant orange foam shell hitched a ride to the surface from the seafloor 1,127 meters below. The pressure tube holding the seismometer and data logger is visible at left. Credit: Sergei Lebedev and the SEA-SEIS team/DIAS

We were reminded often of the close presence of whales and other sea life while at sea. Fin whales that passed by were easily spotted by their tall columnar blows. Large groups of pilot whales surrounded the ship when we stopped and communicated with OBSs via an overboard transducer. They seemed curious about these acoustic communications, which are low amplitude by design, and the transducer picked up their constant squeaky chatter. When we lifted the seismometers onto the deck, we found that the ones from greater depths had been populated by hydroids, mollusks, and sponges. The two seismometers retrieved from depths of less than 1,200 meters each carried three octopuses guarding eggs laid on the devices.

The mix of seismic, biological, and human signals in the SEA-SEIS data is awe-inspiring. When the signals are transformed into frequencies audible by the human ear, they give the listener the powerful experience of perceiving Earth and the ocean directly. Listen, for example, to the sample in the video below produced by the Sounds of the Earth project using recordings collected from the “Brian” OBS.



The Lighter Side of Seismometry

Education and outreach were major components of the SEA-SEIS project, which worked with schools to encourage students’ interest in science and science careers. In one competition in 2018, prior to the deployment cruise, the SEA-SEIS seismic stations were all named by students from Ireland and as far away as Italy, who drew inspiration from various sources for their creativity (Figure 1).

Most of the participating schools also took part in live video linkups with scientists on the research ship during the deployment and retrieval cruises, allowing students to glimpse research in action. And further competitions had primary and secondary school students create Earth science and SEA-SEIS themed drawings and songs, as you can see—and hear!—in the following video:



SEA-SEIS researchers themselves found the engagement with teachers and students exciting and rewarding [Lebedev et al., 2019]. And the outreach gave them opportunities to tap their creative sides by, for example, creating engaging videos (see below) that were used to publicize the project online through blogs and social media. SEA-SEIS was also covered extensively in national and regional news media in Ireland, exposing broader audiences to the work.

A Clearer View Below the Seafloor

Heated debates about the origins of North Atlantic volcanoes continue to stimulate multidisciplinary research and lead to a deeper understanding of mantle dynamics and its relationship to magmatism. Uncertainty has persisted, however, because of a vast gap in data sampling related to the lack of seismometers on the North Atlantic seafloor. The SEA-SEIS seismic stations fill a big part of this gap, allowing us to see deep below the North Atlantic seafloor and address long-standing and fundamental questions.

These data should help reveal the deep structure of the Iceland plume, past and current dynamics of the North Atlantic lithosphere, and interactions between the two—all with greatly improved clarity. As for Giant’s Causeway, we plan to find out whether its distinctive columns really do share the same deep-mantle origins as the volcanoes now active in Iceland.

Acknowledgments

The OBSs and logistical support were provided by the Insitu Marine Laboratory for Geosystems Research (iMARL). We are grateful to Capt. Denis Rowan and the crew of the R/V Celtic Explorer and to R/V manager Aodhán Fitzgerald, Rosemarie Butler, and others at the Marine Institute for their expert assistance in collecting these unique data. The OBSs were NAMMU models, manufactured by K.U.M. in Germany. The data, now undergoing extensive preprocessing, quality control, and preparation for use in research, will be made openly available after the end of the project, scheduled for 2024. SEA-SEIS is cofunded by Science Foundation Ireland, the Geological Survey of Ireland, and the Marine Institute (grant 16/IA/4598). SEA-SEIS would not be possible without the dedication and hard work of everybody on the SEA-SEIS team.

Climate Clues from One of the Rainiest Places on Earth

Mon, 06/07/2021 - 12:55

A jet stream known as the Chocó low-level jet (the ChocoJet) connects the Pacific Ocean with western Colombia. It helps dump more than 9,000 millimeters of rain each year, making the area offshore of the Colombian town of Nuquí one of the rainiest places on the planet.

“The ChocoJet—this low-level flow—is a physical bridge between the sea surface temperatures and sources of moisture in the Pacific, and the climate patterns of western South America,” said John Mejia, an associate research professor of climatology at Nevada’s Desert Research Institute and lead author of a new paper on the phenomenon.

In addition to its regional impact, the ChocoJet plays a role in the El Niño–Southern Oscillation (ENSO), a climate pattern whose variations can signal droughts and floods for Colombian farmers. ENSO also has significant impacts on Europe, Africa, Asia, and North America.

“In the atmosphere, we are all connected,” Mejia said, and the ChocoJet “is part of the engine that redistributes the heat from the tropics to higher latitudes.”

Rainy Puzzle

In 2019, after 6 months of preparations, Mejia and his team were able to get enough helium tanks and sonde balloons to this remote region (accessible only by sea or air). They launched the balloons up to four times a day over 51 days, resulting in new data on temperature, winds, and other atmospheric conditions.

They detailed their findings in a recent paper published in the Journal of Geophysical Research: Atmospheres. The new data contribute to a better understanding of the dynamics and thermodynamics of the ChocoJet’s processes, which have implications for regional wildlife and agriculture, as well as for natural hazards. Mejia said the main contribution of the field campaign in Nuquí and the resulting data was to find out why and how these precipitation mechanisms produce one of the rainiest places on Earth, with the added benefit of building on the very scant climate data gathered previously. “This is a field experiment that can help test climate models.…Figuring this out can make global models more accurate,” Mejia said.

Alejandro Jaramillo, a hydrologist at the Center for Atmospheric Sciences at the National Autonomous University of Mexico, said that more observations will allow for a better model, which will lead to better prediction of rainfall and major weather events, like hurricanes. Jaramillo was not involved in the new research.

“If you better understand the processes that are causing this high rainfall, you can find better ways for climate models to fill in the gaps where there [aren’t] hard data,” Jaramillo said.

Impacts Beyond Climate

According to Germán Poveda, a coauthor on the recent study and a professor at the National University of Colombia, the project not only aimed to understand the dynamics and thermodynamics that explain the rainiest region on Earth but also was an opportunity to train Colombia’s next generation of climate scientists.

Juliana Valencia Betancur, for instance, was an undergraduate environmental engineering student at Colegio Mayor de Antioquia in Medellín during the Nuquí field campaign. She and a half dozen other undergraduate students were in Nuquí to help prepare and launch balloons as part of their undergraduate research experience.

Students from Institución Educativa Ecoturistica Litoral Pacifico in Nuquí prepare to launch a sonde balloon during the fieldwork campaign. Credit: Organization of Tropical East Pacific Convection (OTREC) participants/John Mejia

“I hadn’t had much interest in atmospheric science, but after Nuquí, with all the marvelous things I learned, my outlook changed completely, and my professional career changed course,” she said, adding that she is now looking for graduate opportunities in atmospheric science.

Johanna Yepes, a coauthor and researcher based at Colegio Mayor de Antioquia, said Nuquí’s local schoolchildren also benefited from the project’s outreach activities. During the field campaign, the researchers visited Nuquí’s only school and, with enthusiastic support of the principal, presented their project to students in the fourth to seventh grades. The students were also invited to visit the launch site, and two of them got a chance to launch a sonde balloon themselves.

“For me, it was the most beautiful part, putting what we were doing in very simple words and seeing how the children understood the daytime cycle of rain, sometimes even better than we did ourselves,” Yepes said.

—Andrew J. Wight (@ligaze), Science Writer

Planning and Planting Future Forests with Climate Change in Mind

Mon, 06/07/2021 - 12:54

Lumber prices are up more than 500% in some areas, and people are even poaching trees from public forests for home projects, but this late-pandemic surge in lumber demand is a short-term threat to trees compared to the ongoing climate emergency. In some regions, temperatures are warming too fast for trees to catch up.

Because trees take decades to grow, today’s well-adapted seedlings may be unprepared for the future climate. That can have consequences far beyond the forest. Trees that are not adapted to their climate are more likely to succumb to pests and disease, and dead trees intensify forest fires. Fires lead to increased flood risks, and floods drag dust and dirt to downstream water systems, which can clog pipes and strain treatment systems.

“It’s just a vicious vortex that arises when trees are maladapted because of climate,” said Greg O’Neill, a climate change adaptation scientist for the British Columbia Ministry of Forests, Lands, Natural Resource Operations and Rural Development.

O’Neill and forestry researcher Erika Gómez-Pineda are coauthors of a recent article published in the Canadian Journal of Forest Research that makes the case for forestry practices to incorporate a process known as assisted population migration, in which seeds are moved to colder climates within the species’ natural range. This can help foresters capitalize on existing adaptations and bolster their forests for future climate change.

Searching for Seeds

The study addressed two seed stock questions: Where will British Columbia have problems, and where are solutions already growing in the United States? Using Representative Concentration Pathway 4.5 projections, the authors identified key parameters, including average precipitation, average temperature, and growing degree days for British Columbia’s ecosystems in the year 2055. The study design assumes a planting date in 2040 and accounts for the first years of growth, when trees are most susceptible to stress. Out of 207 seed zones, 44 (about 21%) were at high or moderate risk of losing adapted domestic seed supply by 2040, suggesting that these zones may soon lack domestic seeds worth planting.

Next, the researchers mapped areas of the Pacific Northwest and British Columbia where historical growing conditions from 1945 (the earliest era of province-wide weather records) mirrored the expected climate in 2055. O’Neill and Gómez-Pineda found that a matching climate area greater than 20,000 square kilometers existed in the United States for 42 of British Columbia’s 44 at-risk seed zones—that’s an area about twice the size of Puerto Rico. Even considering lakes, slopes, and neighborhoods that would reduce the feasible seed access area, that’s still a large region where Canadian foresters could secure heartier seeds.
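Conceptually, this kind of climate matching amounts to finding the candidate source region whose historical climate lies closest to a seed zone’s projected future climate. The sketch below illustrates that idea with made-up numbers and a simple standardized distance; it is a schematic, not the authors’ actual workflow or data:

```python
import numpy as np

# Hypothetical climate summaries: (mean annual temperature degC, annual precipitation mm, growing degree-days)
target_2055 = np.array([8.5, 900.0, 1600.0])  # projected 2055 climate of a BC seed zone (made up)

candidate_sources = {
    "source A (US Pacific Northwest)": np.array([8.2, 950.0, 1650.0]),   # historical climates (made up)
    "source B (US Pacific Northwest)": np.array([6.0, 1400.0, 1200.0]),
    "source C (local BC)": np.array([4.5, 800.0, 1000.0]),
}

# Standardize each variable so distances are not dominated by precipitation's larger units.
all_values = np.vstack([target_2055] + list(candidate_sources.values()))
mean, std = all_values.mean(axis=0), all_values.std(axis=0)

def climate_distance(a, b):
    """Euclidean distance between two climates after standardization."""
    return np.linalg.norm((a - mean) / std - (b - mean) / std)

best = min(candidate_sources, key=lambda k: climate_distance(candidate_sources[k], target_2055))
print("Closest climate analogue:", best)
```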

However, when assessing the possibility of putting assisted migration practices into place, it is important to keep phenology in mind, said Csaba Mátyás, a professor emeritus of forestry at the University of Sopron in Hungary who was not involved in the study.

Trees, Mátyás said, develop a “physiological clock” that assesses climate patterns to guide growth. Generations of trees can gradually adjust that timing, but “these horrible changes we are expecting will happen in 50–100 years, which is one generation,” Mátyás said. “You cannot rely on natural processes because [they’re] too slow.”

Mátyás noted that planning is important and seeds adapted for the future may struggle at the time of planting. Local seeds that enter artificial plantations will naturally outcompete transferred seeds. “You should be quite careful in the first 10–20 years,” he cautioned. Planting for the future works only if seedlings survive the present.

A clear-cut line separates Canada from the United States. Cross-border collaboration may be needed to ensure forest health in Canada under climate change. Credit: Carolyn Cuskey, CC BY 2.0

Adaptive Capacity and Culture Change

Assisted population migration is already in practice in some places. British Columbia, which plants around 300 million trees each year and supplies a significant share of the lumber sold in the United States, switched from local seed selection to climate-based selection in 2018. Ontario followed in 2020. What’s more, the U.S. Forest Service launched the Seedlot Selection Tool in 2009 to guide planting decisions.

However, borders complicate the seed-sharing process, and barriers persist. States like California have botanical border patrols, which keep out unwanted pests but complicate seed sharing. In most areas, local seeds are more convenient to access.

“It’s a complicated set of fragmented laws that determines when a species can be transported or planted,” said Alejandro Camacho, an environmental law professor at the University of California, Irvine.

But forests are traditionally managed to increase yield, which lessens resistance to population migration. “National forests generally have more built-in legal adaptive capacity than other public lands,” Camacho said. But those rules, he noted, don’t ensure the promotion of ecological health.

O’Neill believes that forestry needs a culture shift to face climate change. “We’re not in the habit of doing this,” he said of migrating seeds from other jurisdictions. “Climate change and these environmental problems don’t respect borders. If we don’t pay attention to climate in selecting seed sources for reforestation, then we have serious problems ahead of us.”

—J. Besl (@J_Besl), Science Writer

A New Approach to Calculate Earthquake Slip Distributions

Fri, 06/04/2021 - 12:28

During an earthquake, Earth’s crust moves, or slips, along fractures in rock called faults. These movements can be detected and recorded by geophysical instruments at various locations on Earth’s surface. Each instrument has a different position and orientation relative to the earthquake’s epicenter and therefore records a different aspect of the fault slip. An important problem in seismology is reconciling these different measurements to determine the true orientation and distribution of an earthquake’s fault slip, as well as the large-scale stresses that create it.

The process of determining the distribution of fault slips that creates a given set of geophysical observations is called slip inversion. In the computer era, it has traditionally been accomplished by a variety of least squares fitting routines that attempt to match possible slip distributions to the observed data. However, this technique faces a number of challenges, including ensuring a physically plausible solution, properly handling complex observational uncertainties, and determining a slip distribution that varies spatially.
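To make the least squares idea concrete, the sketch below sets up a toy linear forward model in which Green’s functions map slip on fault patches to surface displacements, then recovers the slip with damped (regularized) least squares. All values are synthetic illustrations, not data or code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_obs = 10, 30

# Synthetic Green's functions: each column is the surface response to unit slip on one patch.
G = rng.normal(size=(n_obs, n_patches))

# "True" slip distribution, used only to manufacture synthetic observations.
true_slip = np.maximum(0.0, np.sin(np.linspace(0, np.pi, n_patches)))

# Observed displacements = forward model + noise.
d = G @ true_slip + 0.05 * rng.normal(size=n_obs)

# Damped (Tikhonov-regularized) least squares: minimize ||G m - d||^2 + lam * ||m||^2.
lam = 0.1
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_patches), G.T @ d)

print("true slip :", np.round(true_slip, 2))
print("estimated :", np.round(m_est, 2))
```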

To address these issues, modern slip inversion techniques have begun to use a probabilistic approach using Markov chain Monte Carlo (MCMC) methods. A traditional MCMC approach overcomes many of the issues encountered by an optimization technique like least squares but can face difficulty when encountering the severely nonuniform distribution of seismic observations. To address this, Tomita et al. developed a transdimensional MCMC technique. In a transdimensional approach, the number of model parameters is not predetermined but, rather, emerges naturally from the complexity of the input data.
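For comparison with the optimization approach above, the following sketch samples the same kind of toy slip problem with a plain fixed-dimension Metropolis algorithm. It illustrates the probabilistic, posterior-sampling idea but omits the birth/death moves that make rj-MCMC transdimensional; everything here is a simplified, hypothetical illustration rather than the authors’ method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: synthetic Green's functions and noisy observations, as in the least squares sketch.
n_patches, n_obs, sigma = 6, 20, 0.05
G = rng.normal(size=(n_obs, n_patches))
true_slip = np.maximum(0.0, np.sin(np.linspace(0, np.pi, n_patches)))
d = G @ true_slip + sigma * rng.normal(size=n_obs)

def log_posterior(m):
    # Gaussian likelihood plus a flat prior restricted to nonnegative slip.
    if np.any(m < 0):
        return -np.inf
    residual = d - G @ m
    return -0.5 * np.sum(residual**2) / sigma**2

# Fixed-dimension Metropolis sampler (rj-MCMC would additionally propose adding/removing patches).
m = np.full(n_patches, 0.5)
current_lp = log_posterior(m)
samples = []
for _ in range(20000):
    proposal = m + 0.02 * rng.normal(size=n_patches)
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:   # Metropolis acceptance test in log space
        m, current_lp = proposal, lp
    samples.append(m.copy())

posterior_mean = np.mean(samples[5000:], axis=0)  # discard burn-in
print("true slip      :", np.round(true_slip, 2))
print("posterior mean :", np.round(posterior_mean, 2))
```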

The authors created their approach from the reversible-jump MCMC (rj-MCMC) technique, an existing framework for carrying out transdimensional MCMC calculations. To evaluate their approach, they simulated the effects of an earthquake located in an undersea trench within several hundred kilometers of various geodetic observation sites. They considered three scenarios: two with a mix of onshore and offshore observation sites and one with only onshore locations.

In the mixed scenarios, the rj-MCMC technique and the least squares approach both reproduced the slip distribution reasonably. However, only the rj-MCMC calculation could deal with the more asymmetric scenario of only onshore observations.

Finally, they applied the rj-MCMC method to observational data of the 2011 Tōhoku-oki earthquake off Japan in the Pacific Ocean. Their result is broadly similar to past work on this event but provides better expression of the most substantial slips. Overall, the transdimensional, probabilistic approach appears to be a promising tool for future earthquake studies. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB020991, 2021)

—Morgan Rehnberg, Science Writer

Establishing a Link Between Air Pollution and Dementia

Fri, 06/04/2021 - 12:27

More people around the world are falling ill and dying from dementia than they used to. Between 2000 and 2019, the rate of dementia increased by 86%, while deaths from the cognitive disorder more than doubled. Longer life spans and aging populations in much of the world are partly to blame. However, evidence suggests that lifestyle and environmental causes may also play a role, namely, air pollution, excessive alcohol consumption, and traumatic brain injury.

In new research, Ru et al. explored the role of air pollution in rising dementia cases. The authors reviewed existing literature to find links between fine particulate matter (PM2.5)—defined as particulates having a diameter less than or equal to 2.5 micrometers—and dementia. PM2.5 arises from both anthropogenic and natural sources, such as fuel combustion in vehicles and wildfires. In addition, cigarettes produce fine particulate matter, which is inhaled by the smoker and through secondhand smoke. When these pollutants enter the body, they can affect the central nervous system and lead to cognitive disorders.

The study’s findings indicate that in 2015, air pollution caused approximately 2 million cases of dementia worldwide and around 600,000 deaths. The countries most affected were China, Japan, India, and the United States. What is more, Asia, the Middle East, and Africa face increasing burdens from the disease as living standards and pollution climb. The analysis concludes that air pollution causes roughly 15% of premature deaths and 7% of disability-adjusted life years (a measure that accounts for both mortality and morbidity) associated with dementia, with estimated economic costs of around $26 billion.

The study establishes air pollution as a potentially significant risk factor for dementia. It suggests that reducing air pollution may help prevent dementia in older populations. However, the researchers note high uncertainty in the relationship. Future work that focuses on high-exposure regions will be necessary to clarify the link better. (GeoHealth, https://doi.org/10.1029/2020GH000356, 2021)

—Aaron Sidder, Science Writer

A Tectonic Shift in Analytics and Computing Is Coming

Fri, 06/04/2021 - 12:26

More than 50 years ago, a fundamental scientific revolution occurred, sparked by the concurrent emergence of a huge amount of new data on seafloor bathymetry and profound intellectual insights from researchers rethinking conventional wisdom. Data and insight combined to produce the paradigm of plate tectonics. Similarly, in the coming decade, a new revolution in data analytics may rapidly overhaul how we derive knowledge from data in the geosciences. Two interrelated elements will be central in this process: artificial intelligence (AI, including machine learning methods as a subset) and high-performance computing (HPC).

Already today, geoscientists must understand modern tools of data analytics and the hardware on which they work. Now AI and HPC, along with cloud computing and interactive programming languages, are becoming essential tools for geoscientists. Here we discuss the current state of AI and HPC in Earth science and anticipate future trends that will shape applications of these developing technologies in the field. We also propose that it is time to rethink graduate and professional education to account for and capitalize on these quickly emerging tools.

Work in Progress

Great strides in AI capabilities, including speech and facial recognition, have been made over the past decade, but the origins of these capabilities date back much further. In 1971, the U.S. Defense Advanced Research Projects Agency substantially funded a project called Speech Understanding Research, and it was generally believed at the time that artificial speech recognition was just around the corner. We know now that this was not the case, as today’s speech and writing recognition capabilities emerged only as a result of both vastly increased computing power and conceptual breakthroughs such as the use of multilayered neural networks, which mimic the biological structure of the brain.

Recently, AI has gained the ability to create images of artificial faces that humans cannot distinguish from real ones by using generative adversarial networks (GANs). These networks combine two neural networks, one that produces a model and a second one that tries to discriminate the generated model from the real one. Scientists have now started to use GANs to generate artificial geoscientific data sets.
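As a schematic of the generator-discriminator pairing described above (and not of any specific geoscientific GAN), a minimal sketch in PyTorch might look like the following, with noisy sinusoids standing in for short waveform snippets:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
signal_len, latent_dim = 64, 16

# Generator maps random noise to a synthetic "signal"; discriminator scores real vs. generated.
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, signal_len))
discriminator = nn.Sequential(nn.Linear(signal_len, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Hypothetical "real" data: noisy sinusoids standing in for waveform snippets.
    t = torch.linspace(0, 1, signal_len)
    phase = torch.rand(n, 1) * 6.28
    return torch.sin(6.28 * 5 * t + phase) + 0.1 * torch.randn(n, signal_len)

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # Discriminator update: label real data as 1, generated data as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label generated data as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("final losses:", float(d_loss), float(g_loss))
```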

These and other advances are striking, yet AI and many other artificial computing tools are still in their infancy. We cannot predict what AI will be able to do 20–30 years from now, but a survey of existing AI applications recently showed that computing power is the key when targeting practical applications today. The fact that AI is still in its early stages has important implications for HPC in the geosciences. Currently, geoscientific HPC studies have been dominated by large-scale time-dependent numerical simulations that use physical observations to generate models [Morra et al., 2021a]. In the future, however, we may work in the other direction—Earth, ocean, and atmospheric simulations may feed large AI systems that in turn produce artificial data sets that allow geoscientific investigations, such as Destination Earth, for which collected data are insufficient.

Data-Centric Geosciences

Development of AI capabilities is well underway in certain geoscience disciplines. For a decade now [Ma et al., 2019], remote sensing operations have been using convolutional neural networks (CNNs), a kind of neural network that adaptively learns which features to look at in a data set. In seismology (Figure 1), pattern recognition is the most common application of machine learning (ML), and recently, CNNs have been trained to find patterns in seismic data [Kong et al., 2019], leading to discoveries such as previously unrecognized seismic events [Bergen et al., 2019].

Fig. 1. Example of a workflow used to produce an interactive “visulation” system, in which graphic visualization and computer simulation occur simultaneously, for analysis of seismic data. Credit: Ben Kadlec
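As an illustration of the CNN-based pattern recognition described above, the sketch below trains a tiny 1D convolutional classifier to separate hypothetical “event” windows from noise. The architecture and synthetic data are assumptions chosen for brevity, not a published seismological model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny 1D CNN: convolutional layers learn which waveform features to look at,
# and a final linear layer outputs scores for two classes ("noise", "event").
model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=8, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

# Hypothetical labeled training set: 256 windows of 400 samples each.
n, length = 256, 400
waveforms = torch.randn(n, 1, length)
labels = torch.randint(0, 2, (n,))
impulse = torch.zeros(1, 1, length)
impulse[..., 180:220] = 3.0                       # "events" carry an impulsive arrival
waveforms = waveforms + impulse * labels.view(-1, 1, 1).float()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                            # supervised training against the labels
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), labels)
    loss.backward()
    optimizer.step()

accuracy = (model(waveforms).argmax(dim=1) == labels).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```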

New AI applications and technologies are also emerging; these involve, for example, the self-ordering of seismic waveforms to detect structural anomalies in the deep mantle [Kim et al., 2020]. Recently, deep generative models, which are based on neural networks, have shown impressive capabilities in modeling complex natural signals, with the most promising applications in autoencoders and GANs (e.g., for generating images from data).

CNNs are a form of supervised machine learning (SML), meaning that before they are applied for their intended use, they are first trained to find prespecified patterns in labeled data sets and to check their accuracy against an answer key. Training a neural network using SML requires large, well-labeled data sets as well as massive computing power. Massive computing power, in turn, requires massive amounts of electricity, such that the energy demand of modern AI models is doubling every 3.4 months and causing a large and growing carbon footprint.

In the future, the trend in geoscientific applications of AI might shift from using bigger CNNs to using more scalable algorithms that can improve performance with less training data and fewer computing resources. Alternative strategies will likely involve less energy-intensive neural networks, such as spiking neural networks, which reduce data inputs by analyzing discrete events rather than continuous data streams.

Unsupervised ML (UML), in which an algorithm identifies patterns on its own rather than searching for a user-specified pattern, is another alternative to data-hungry SML. One type of UML identifies unique features in a data set to allow users to discover anomalies of interest (e.g., evidence of hidden geothermal resources in seismic data) and to distinguish trends of interest (e.g., rapidly versus slowly declining production from oil and gas wells based on production rate transients) [Vesselinov et al., 2019].
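A minimal sketch of this unsupervised, anomaly-flagging style of analysis, here using scikit-learn’s IsolationForest on hypothetical feature vectors (not the cited workflows or their data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical feature vectors summarizing, e.g., spectral properties of seismic windows.
background = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
anomalies = rng.normal(loc=4.0, scale=0.5, size=(10, 4))   # a handful of unusual windows
features = np.vstack([background, anomalies])

# No labels are provided; the model learns what "typical" looks like and flags outliers.
detector = IsolationForest(contamination=0.02, random_state=0).fit(features)
flags = detector.predict(features)            # +1 = typical, -1 = anomalous

print("number flagged as anomalous:", int((flags == -1).sum()))
```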

AI is also starting to improve the efficiency of geophysical sensors. Data storage limitations require instruments such as seismic stations, acoustic sensors, infrared cameras, and remote sensors to record and save data sets that are much smaller than the total amount of data they measure. Some sensors use AI to detect when “interesting” data are recorded, and these data are selectively stored. Sensor-based AI algorithms also help minimize the energy consumption and prolong the life of sensors located in remote regions, which are difficult to service and often powered by a single solar panel. These techniques include quantized CNNs (using 8-bit variables) running on minimal hardware, such as a Raspberry Pi [Wilkes et al., 2017].
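As a stand-in for such on-sensor triggering, the classical STA/LTA (short-term average/long-term average) detector below illustrates the selective-storage idea: only windows where the trigger fires would be kept. It is a simple amplitude-based sketch, not the AI detectors or quantized CNNs the authors describe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-minute record at 100 samples per second containing one transient "event".
fs, n = 100, 6000
trace = rng.normal(scale=1.0, size=n)
trace[3000:3200] += 6.0 * np.exp(-np.linspace(0, 5, 200)) * np.sin(np.linspace(0, 60, 200))

def sta_lta(x, sta_win, lta_win):
    """Ratio of short-term to long-term average of signal energy."""
    energy = x**2
    sta = np.convolve(energy, np.ones(sta_win) / sta_win, mode="same")
    lta = np.convolve(energy, np.ones(lta_win) / lta_win, mode="same")
    return sta / (lta + 1e-12)

ratio = sta_lta(trace, sta_win=int(0.5 * fs), lta_win=int(10 * fs))
triggered = ratio > 4.0   # store only the windows where the trigger fires

print(f"samples kept: {triggered.sum()} of {n} ({triggered.mean():.1%})")
```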

Advances in Computing Architectures

Powerful, efficient algorithms and software represent only one part of the data revolution; the hardware and networks that we use to process and store data have evolved significantly as well.

Since about 2004, when the increase in frequencies at which processors operate stalled at about 3 gigahertz (the end of Moore’s law), computing power has been augmented by increasing the number of cores per CPU and by the parallel work of cores in multiple CPUs, as in computing clusters.

Accelerators such as graphics processing units (GPUs), once used mostly for video games, are now routinely used for AI applications and are at the heart of all major ML facilities (as well as the U.S. Exascale Strategy, a part of the National Strategic Computing Initiative). For example, Summit and Sierra, the two fastest supercomputers in the United States, are based on a hierarchical CPU-GPU architecture. Meanwhile, emerging tensor processing units, which were developed specifically for matrix-based operations, excel at the most demanding tasks of most neural network algorithms. In the future, computers will likely become increasingly heterogeneous, with a single system combining several types of processors, including specialized ML coprocessors (e.g., Cerebras) and quantum computing processors.

Computational systems that are physically distributed across remote locations and used on demand, usually called cloud computing, are also becoming more common, although these systems impose limitations on the code that can be run on them. For example, cloud infrastructures, in contrast to centralized HPC clusters and supercomputers, are not designed for performing large-scale parallel simulations. Cloud infrastructures face limitations on high-throughput interconnectivity, and the synchronization needed to help multiple computing nodes coordinate tasks is substantially more difficult to achieve for physically remote clusters. Although several cloud-based computing providers are now investing in high-throughput interconnectivity, the problem of synchronization will likely remain for the foreseeable future.

Boosting 3D Simulations

Artificial intelligence has proven invaluable in discovering and analyzing patterns in large, real-world data sets. It could also become a source of realistic artificial data sets, generated through models and simulations. Artificial data sets enable geophysicists to examine problems that are unwieldy or intractable using real-world data—because these data may be too costly or technically demanding to obtain—and to explore what-if scenarios or interconnected physical phenomena in isolation. For example, simulations could generate artificial data to help study seismic wave propagation; large-scale geodynamics; or flows of water, oil, and carbon dioxide through rock formations to assist in energy extraction and storage.

HPC and cloud computing will help produce and run 3D models, not only assisting in improved visualization of natural processes but also allowing for investigation of processes that can’t be adequately studied with 2D modeling. In geodynamics, for example, using 2D modeling makes it difficult to calculate 3D phenomena like toroidal flow and vorticity because flow patterns are radically different in 3D. Meanwhile, phenomena like crustal porosity waves (waves of high porosity in rocks; Figure 2) and corridors of fast-moving ice in glaciers require extremely high spatial and temporal resolutions in 3D to capture [Räss et al., 2020].

Fig. 2. A 3D modeling run with 16 billion degrees of freedom simulates flow focusing in porous media and identifies a pulsed behavior phenomenon called porosity waves. Credit: Räss et al. [2018], CC BY 4.0

Adding an additional dimension to a model can require a significant increase in the amount of data processed. For example, in exploration seismology, going from a 2D to a 3D simulation involves a transition from requiring three-dimensional data (i.e., source, receiver, time) to five-dimensional data (source x, source y, receiver x, receiver y, and time [e.g., Witte et al., 2020]). AI can help with this transition. At the global scale, for example, the assimilation of 3D simulations in iterative full-waveform inversions for seismic imaging was performed recently with limited real-world data sets, employing AI techniques to maximize the amount of information extracted from seismic traces while maintaining the high quality of the data [Lei et al., 2020].
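To make that growth concrete, a quick count of samples for hypothetical survey dimensions (the sizes are invented purely to illustrate the scaling):

```python
# Hypothetical survey dimensions, chosen only to illustrate how data volume scales.
n_src, n_rec, n_t = 200, 400, 2000          # sources, receivers, time samples

# 2D-style acquisition: data indexed by (source, receiver, time).
size_2d = n_src * n_rec * n_t

# 3D-style acquisition: data indexed by (source_x, source_y, receiver_x, receiver_y, time).
n_src_x = n_src_y = 200
n_rec_x = n_rec_y = 400
size_3d = n_src_x * n_src_y * n_rec_x * n_rec_y * n_t

print(f"2D simulation data volume: {size_2d:,} samples")
print(f"3D simulation data volume: {size_3d:,} samples ({size_3d / size_2d:,.0f}x larger)")
```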

Emerging Methods and Enhancing Education

As far as we’ve come in developing AI for uses in geoscientific research, there is plenty of room for growth in the algorithms and computing infrastructure already mentioned, as well as in other developing technologies. For example, interactive programming, in which the programmer develops new code while a program is active, and language-agnostic programming environments that can run code in a variety of languages are young techniques that will facilitate introducing computing to geoscientists.

Programming languages such as Python and Julia, which are now being taught to Earth science students, will accompany the transition to these new methods and will be used in interactive environments such as the Jupyter Notebook. Julia has recently been shown to perform well as compiled code for machine learning algorithms, particularly in implementations that use differentiable programming, which reduces computational resource and energy requirements.

Quantum computing, which uses the quantum states of atoms rather than streams of electrons to transmit data, is another promising development that is still in its infancy but that may lead to the next major scientific revolution. It is forecast that by the end of this decade, quantum computers will be applied in solving many scientific problems, including those related to wave propagation, crustal stresses, atmospheric simulations, and other topics in the geosciences. With competition from China in developing quantum technologies and AI, quantum computing and quantum information applications may become darlings of major funding opportunities, offering the means for ambitious geophysicists to pursue fundamental research.

Taking advantage of these new capabilities will, of course, require geoscientists who know how to use them. Today, many geoscientists face enormous pressure to requalify themselves for a rapidly changing job market and to keep pace with the growing complexity of computational technologies. Academia, meanwhile, faces the demanding task of designing innovative training to help students and others adapt to market conditions, although finding professionals who can teach these courses is challenging because they are in high demand in the private sector. However, such teaching opportunities could provide a point of entry for young scientists specializing in computer science or part-time positions for professionals retired from industry or national labs [Morra et al., 2021b].

The coming decade will see a rapid revolution in data analytics that will significantly affect the processing and flow of information in the geosciences. Artificial intelligence and high-performance computing are the two central elements shaping this new landscape. Students and professionals in the geosciences will need new forms of education enabling them to rapidly learn the modern tools of data analytics and predictive modeling. If done well, the concurrence of these new tools and a workforce primed to capitalize on them could lead to new paradigm-shifting insights that, much as the plate tectonic revolution did, help us address major geoscientific questions in the future.

Acknowledgments

The listed authors thank Peter Gerstoft, Scripps Institution of Oceanography, University of California, San Diego; Henry M. Tufo, University of Colorado Boulder; and David A. Yuen, Columbia University and Ocean University of China, Qingdao, who contributed equally to the writing of this article.

The Martian Anthropocene

Thu, 06/03/2021 - 12:05

This is an authorized translation of an Eos article.

The impact of human activities on Earth serves as the basis for defining a new geologic time interval on our planet: the Anthropocene. If the pace of current efforts to send humans to Mars is any indication, the impacts of human activities could soon be as quantifiable on Mars as they are on Earth, and the Anthropocene could soon make its debut as the first multiplanetary geologic period.

The Anthropocene Epoch, proposed as a new post-Holocene geologic time interval beginning sometime in the mid-20th century, is not yet a formally defined geologic unit within Earth’s geologic timescale. Nevertheless, the term has seen extensive use in the scientific and popular literature, as well as in the media, since it was popularized in 2000. It is characterized by the ways in which human activities have profoundly altered a variety of geologically significant conditions and processes, leaving distinctive evidence in Earth’s stratigraphic record (Table 1).

 

Table 1. Signals of Human Impact on Earth and Potential Impacts on Mars

Uniqueness. Earth (observed)a: Traces of human impact are sufficiently distinct from natural Holocene features to constitute a new geologic time unit. Mars (predicted): Humans will leave stratigraphic traces (buildings, evidence of atmospheric change, biomass) in sediments and ice never before seen on Mars. Except for ice ages, human settlement will be the first global change on Mars since the loss of its atmosphere billions of years ago.

Global extent. Earth (observed): The anthropogenic record shows excellent global or semiglobal correlation across a wide variety of marine and terrestrial sedimentary bodies. Mars (predicted): In situ resource use and microbial dispersal will be global because raw materials (potentially mined) and ice (potentially biocontaminated) are distributed globally.

Preservation potential. Earth (observed): Archaeological records capture human history, not that of the Earth system. Mars (predicted): Preservation potential will be greater than on Earth because the thinner atmosphere and the scarcity of active microbiota slow alteration of the anthropogenic record.

Synchronous base. Earth (observed): All of the above effects are globally synchronous stratigraphic markers for the Anthropocene, beginning in the mid-20th century. Mars (predicted): All stratigraphic markers will develop at the same time, once humans begin establishing settlements on Mars.

 a Earth parameters are taken from Waters et al. (2016).

 

In the coming decades, and for the first time in history, the impacts of human activities and technologies could be analyzed and quantified not only on Earth but also on other planetary bodies. It is probably still too early to propose a new epoch defining the geology of other planets on the basis of human impacts, but we can begin to consider the case of Mars.

So far, exploration of Mars has been carried out by robotic explorers, which have likely left little lasting impact and none at a global scale. But a fundamental change is already under way: NASA has been officially tasked with sending humans to Mars. Other national and private space programs have also started their own efforts, so it is entirely possible that one of these other organizations could beat NASA to completing crewed missions to Mars.

Thus, it is possible that human activities could soon inaugurate an Anthropocene on Mars. Like the Anthropocene on Earth, this new epoch would be distinguishable by markers in the planet’s stratigraphic record.

Planning Our Arrival

The coming era of space entrepreneurship will determine the timeline of human activity on Mars, especially since NASA adopted a decentralized, market-based approach in 2005, awarding contracts to private players. Since 2005, three quarters of the growth in the global space economy has come from commercial ventures. SpaceX has stated that it could land humans on Mars within the next 10 to 12 years and has partnered with NASA on the landing site selection process through a Space Act Agreement.

There are plans to begin revising the Outer Space Treaty, an agreement introduced 50 years ago and signed by all aspiring spacefaring nation-states and many others, which provides a basic framework for international space law. An updated treaty could help lay the groundwork for human settlement of Mars and expanded commerce in the era of space entrepreneurs. A human presence on Mars will likely be a reality very soon, even with the long list of knowledge gaps we need to address to begin understanding anthropogenic impacts on geologically significant Martian conditions and processes.

Hitchhiking Microbes

What we do know is that the moment astronauts set foot on Mars, microbial contamination will be inescapable and irreversible. Astronauts staying there for an extended period will require some form of transportation, water and food storage, a continuous air supply, and containment and management of human waste and secretions, among other things.

These activities create an unavoidable risk of microbial leaks from spacecraft, space suits, and waste storage systems. Microbial leakage and species invasion could spread widely enough to have a global impact on Mars, eventually creating identifiable sediments.

Habitat modules and rovers will continuously release microbes into the environment. In addition, astronaut bases established underground to protect their human occupants from radiation and extreme temperature swings would also shield their microbial occupants from the naturally sterilizing conditions of surface radiation and the oxidizing environment.

Altering the Martian Landscape

The search for resources on Mars, and the use of those resources in situ, will add to humans’ effects on the planet. Extracting and processing Martian raw materials to obtain life-supporting consumables and propellants will transform the Martian surface and subsurface and leave a permanent mark. Human topographic footprints will begin to accumulate, starting with small-scale effects such as erosion, avalanches, and terrain collapse and eventually extending over larger areas: flattening mountains, piling up hills, or excavating large open-pit mines.

Some of these human activities could potentially create new zones where terrestrial organisms could replicate and where any existing Martian life could flourish. Life as we know it requires water, so these zones could appear, for example, after drilling to explore subsurface aquifers.

A third possible aspect of the Martian Anthropocene, beyond the release of microbes and the changes to the surface from in situ resource use, is the creation and wide distribution of new materials, including contaminants. A Martian field station housing a crew of four, for example, will require significantly more electrical power than current robotic missions. Using nuclear power to meet those needs could produce long-lived radioactive waste. Furthermore, if a reactor were to explode during operation or be destroyed during a failed atmospheric entry, it could disperse a radioisotopic signature over a wide area.

Making human life on Mars possible will require significant and unprecedented modifications to the Martian landscape (Table 1). Predicting and understanding how these changes might unfold on Mars, as well as gaining insight into the dynamics and sensitivity of landscapes and their response to human forcing at a global scale, will be central to interpreting and mitigating our impact on Mars.

Building Settlements Increases the Impact

A variety of human activities define the Anthropocene on Earth. These activities include changes in erosion and sediment transport and alterations in the chemical composition of soils and the atmosphere associated with settlement, agriculture, and urbanization. Human activities alter Earth’s carbon cycle and the cycling of various metals through the environment. Humans also introduce nonnative and invasive species into new habitats.

As humans begin to settle Mars, similar changes can be expected to occur at a rapid pace (Table 2). These changes will likely produce a stratigraphic signature in sediments and ice that is distinct from that of the Late Amazonian, the period Mars is in today.

 

Table 2. Evidence for an Anthropocene on Earth Compared with Projected Changes That Humans Would Cause on Mars

Atmospheric changes, composition. Earth (observed): black carbon, inorganic ash spheres, and spherical carbonaceous particles from fossil fuel combustion; elevated concentrations of carbon dioxide and methane. Mars (predicted): propulsion fuel emissions, including aluminum oxide particles and chloride gas species; residues from life support systems, human bases, and greenhouses.

Atmospheric changes, temperature. Earth (observed): an increase in global mean temperature of 0.6°C to 0.9°C from 1900 to the present. Mars (predicted): local hot spots, eventually translating into global effects if terraforming begins.

Geologic changes, in situ resource use. Earth (observed): mining, industrial activity. Mars (predicted): mining.

Geologic changes, distribution of new materials. Earth (observed): “technofossils”: elemental aluminum, concrete, plastics. Mars (predicted): “technofossils”: elemental aluminum, concrete, plastics.

Geologic changes, contaminants. Earth (observed): polycyclic aromatic hydrocarbons, polychlorinated biphenyls, pesticides, leaded gasoline; artificial radionuclides from thermonuclear weapons testing. Mars (predicted): waste from human bases and vehicles, long-lived nuclear waste; widely dispersed radioisotope signatures from reactor failures.

Biological changes, species extinction. Earth (observed): extinction rates far above background rates since 1500; deforestation. Mars (predicted): risk of extinction of any existing Martian microbiota.

Biological changes, species invasion. Earth (observed): transglobal species invasions initiated by humans. Mars (predicted): microbial leakage from astronauts and human bases (if Mars has always been lifeless, these would be the planet’s first living things).

Biological changes, agricultural activities. Earth (observed): changes associated with agriculture and fishing. Mars (predicted): the first greenhouses on Mars.

 

We have already witnessed a similar process in Antarctica, where analogue studies take advantage of the continent’s climate, terrain, and isolation to simulate the conditions and processes humans will likely face on missions to Mars. Although humans in Antarctica are mainly engaged in scientific research and must follow established environmental conservation policies, the effects of the “age of humans” are already visible on the continent, raising a pressing concern.

Fig. 1. Could the geologic timescales of Mars and Earth converge in a common future Anthropocene?

The Effects Begin on Day One

Because humans have not yet set foot on Mars, it would be easy to think that we have plenty of time to consider how we will manage our impact on the planet. However, all of the anticipated impacts from human exploration will occur long before we begin altering Mars at a planetary scale.

At our current rate of progress, such planetary-scale efforts remain only a step removed from science fiction. Today’s reality is that our children or grandchildren will see astronauts’ footprints in the red sands of Mars. And when that happens, the Martian Anthropocene will begin.

Acknowledgments

The research leading to these results is a contribution of the icyMARS project, funded by the European Research Council, Starting Grant 307496.

References

Waters, C. N., et al. (2016), The Anthropocene is functionally and stratigraphically distinct from the Holocene, Science, 351(6269), aad2622, https://doi.org/10.1126/science.aad2622.

—Alberto G. Fairén (agfairen@cab.inta-csic.es), Centro de Astrobiología, Instituto Nacional de Técnica Aeroespacial–Consejo Superior de Investigaciones Científicas, Madrid, Spain; also at the Department of Astronomy, Cornell University, Ithaca, N.Y.

This translation by Edith Emilia Carriquiry Chequer (@eecarry) was made possible by a partnership with Planeteando.

How Much Carbon Will Peatlands Lose as Permafrost Thaws?

Thu, 06/03/2021 - 12:04

Just as your freezer keeps food from going bad, Arctic permafrost protects frozen organic material from decay. As the climate warms, however, previously frozen landscapes such as peatlands are beginning to thaw. But how much fresh carbon will be released into the atmosphere when peat leaves the deep freeze of permafrost?

In a new study, Treat et al. use a process-based model to explore how different factors may affect the carbon balance in peatlands by the end of this century. The scientists simulated more than 8,000 years of peatland history to ensure accuracy, and they examined six peatland sites in Canada to cover a gradient from spottier southern permafrost zones to continuous permafrost sites north of the Arctic tree line.

Their results reveal great variation, depending on each site’s history. According to the simulations, some areas will release carbon as permafrost thaws or disappears altogether. Others will accumulate and store carbon at greater rates as vegetation responds to warmer temperatures and longer growing seasons. Overall, little carbon—less than 5%—will escape by 2100 compared with how much will remain stored.

Before peat is preserved stably in permafrost, it spends time in an “active layer,” which freezes and thaws seasonally. Unfrozen peat continues to decay, so by the time it is permanently frozen, peat might be highly degraded. When such frozen peat ultimately thaws, limited further decomposition is possible, so the carbon loss is much slower than might be expected. Therefore, most of the carbon that peat will release escapes before it ever enters the permafrost. Accordingly, in simulations of future years, the upper active layer, not deeper or newly thawed peat, continued to release the most carbon.
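That sequence can be illustrated with a toy, single-cohort calculation in Python (a deliberately simplified sketch, not the process-based model the authors used; every rate and duration below is an invented placeholder):

import numpy as np

# Invented, order-of-magnitude parameters for illustration only
K_ACTIVE = 0.002        # fractional carbon loss per year while peat sits in the active layer
K_AFTER_THAW = 0.0005   # fractional loss per year for old, degraded peat after permafrost thaw
YEARS_IN_ACTIVE_LAYER = 500
YEARS_AFTER_THAW = 80   # e.g., from a mid-century thaw to 2100

c_initial = 1.0  # carbon in one peat cohort, arbitrary units

# Decay while the cohort cycles through the seasonally thawed active layer
c_when_frozen = c_initial * np.exp(-K_ACTIVE * YEARS_IN_ACTIVE_LAYER)

# No decay while locked in permafrost, then slow decay of the degraded material after thaw
c_final = c_when_frozen * np.exp(-K_AFTER_THAW * YEARS_AFTER_THAW)

print(f"lost before incorporation into permafrost: {c_initial - c_when_frozen:.2f}")
print(f"lost after thaw: {c_when_frozen - c_final:.4f}")
print(f"share of the thawed stock lost by the end of the run: {(c_when_frozen - c_final) / c_when_frozen:.1%}")

Even with these made-up numbers, the structure of the calculation shows why most of the loss happens in the active layer, before the carbon ever enters the permafrost.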

Previous field studies showed a range of carbon balance outcomes as permafrost thaws, from the release of large amounts of carbon to the storage of additional carbon. This simulation helps explain that variation, linking carbon balance results to specific variables such as site history and active layer depth. Future models could continue to refine the picture by incorporating new variables, such as ice melt and vegetation productivity. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2020JG005872, 2021)

—Elizabeth Thompson, Science Writer

Calculating Human Health Risks with General Weather Data

Thu, 06/03/2021 - 12:02

Weather stations provide detailed records of temperature, precipitation, and storm events. These stations, however, are not always well spaced and can be scattered throughout cities or can even be absent in remote regions.

When direct measurements of weather are not available, researchers have a work-around. They use existing gridded climate data sets (GCDs) at different spatial resolutions that average weather within each grid cell. Unlike readings from monitoring stations, the estimated temperatures in these grid cells are based on a combination of modeled forecasts and climate models as well as on observations (ranging from ground monitors and aircraft to sea buoys and satellite imagery). These GCDs are very useful in large-scale climate studies and ecological research, especially in regions without monitoring stations.
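As a rough picture of how a gridded product assigns a temperature to a particular neighborhood, the short Python sketch below bilinearly interpolates between the four surrounding grid cell centers and compares the result with the nearest station reading (the grid spacing, coordinates, and temperature values are hypothetical and are not taken from any GCD or from the study):

import numpy as np

# Hypothetical 0.25-degree gridded daily mean temperatures (degrees Celsius)
grid_lats = np.array([51.25, 51.50])      # cell-center latitudes
grid_lons = np.array([-0.25, 0.00])       # cell-center longitudes
temps = np.array([[18.2, 18.9],           # rows follow grid_lats, columns follow grid_lons
                  [17.6, 18.4]])

def gridded_estimate(lat, lon):
    """Bilinearly interpolate the gridded temperature to a point inside the four cells."""
    ty = (lat - grid_lats[0]) / (grid_lats[1] - grid_lats[0])
    tx = (lon - grid_lons[0]) / (grid_lons[1] - grid_lons[0])
    t_low = temps[0, 0] * (1 - tx) + temps[0, 1] * tx   # along the lower-latitude row
    t_high = temps[1, 0] * (1 - tx) + temps[1, 1] * tx  # along the higher-latitude row
    return t_low * (1 - ty) + t_high * ty

site_lat, site_lon = 51.42, -0.10   # a hypothetical neighborhood between cell centers
station_temp = 19.1                 # reading from its nearest, possibly distant, station

print(f"gridded estimate: {gridded_estimate(site_lat, site_lon):.2f} C")
print(f"nearest station:  {station_temp:.2f} C")

Differences like the one between these two numbers are what propagate into the temperature-related mortality estimates the study compared.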

But can GCDs be effective in epidemiological studies, for instance, in looking at how adverse temperatures might affect human health and mortality?

In a new study, de Schrijver et al. tested whether GCDs could be useful in studying temperature-related mortality in areas where weather stations are sparse. They compared gridded temperature data with weather station temperatures in two locations—England and Wales and Switzerland—to see whether one data set worked better than the other. These regions have varied topography, heterogeneous temperature ranges, and uneven population distributions, all of which lead to pockets of irregular temperatures within an area.

To understand which temperature data would be most helpful in predicting health risks for communities, the researchers compared deaths from exposure to hot or cold temperatures for both GCDs and weather station data. They used weather station data from each country and a high- and low-resolution GCD (local and regional scales) to see which data were better for predicting risk of death from cold or heat.

The team found that both data sets predicted similar outcomes of health impacts from temperature exposure. However, in some cases, high-resolution GCDs were better able to capture extreme heat compared with weather station data when unequal distribution of the population was accounted for. This was especially the case in densely populated urban areas that experience notable temperature differences within them.

The researchers conclude that in cities and areas with rugged terrain, local GCDs might be better than weather station data for epidemiological studies. (GeoHealth, https://doi.org/10.1029/2020GH000363, 2021)

—Sarah Derouin, Science Writer

Studying Arctic Fjords with Crowdsourced Science and Sailboats

Thu, 06/03/2021 - 12:01

In June 2017, Nicolas Peissel led the 13-meter sailboat Exiles out of port in St. John’s, Newfoundland and Labrador, Canada. The vessel sailed north to Greenland and into the remnants of Tropical Storm Cindy. Peissel and several other crew members are aid workers for Doctors Without Borders, but they were on a 3-month scientific—not medical—expedition aboard Exiles.

The expedition explored the feasibility of crowdsourced science using sailboats to expand data collection in fjords affected by the melting Greenland Ice Sheet. Daniel Carlson, an oceanographer at Germany’s Helmholtz-Zentrum Hereon and science officer for the expedition, sailed on Exiles for a month. After he left, the crew of nonscientists continued collecting data. The expedition log and preliminary results were published in April in Frontiers in Marine Science.

The melting ice in Greenland is increasing the amount of fresh water in fjords, which changes the salinity and mixing of ocean water. Scientists don’t fully know what impact these changes will have on the marine ecosystem.

To study the contribution of meltwater in the ocean, scientists measure the conductivity, temperature, and depth (CTD) of the water column, but reaching these remote fjords to take measurements on research ships is expensive and treacherous. Ships also often carry several research teams with conflicting experimental needs and schedules. These limitations leave gaps in our understanding of the changing Arctic waters.
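A common back-of-the-envelope use of such profiles is to estimate how much meltwater and runoff has diluted the ambient ocean water; the Python sketch below does this from salinity alone (the profile and the reference salinity are invented for illustration and are not measurements from the expedition):

import numpy as np

# Hypothetical fjord CTD profile: depth (m) and practical salinity (unitless)
depth_m = [2, 5, 10, 20, 50, 100]
salinity = np.array([28.5, 30.1, 31.8, 32.9, 33.6, 34.0])

S_REF = 34.0  # assumed salinity of undiluted ambient ocean water outside the fjord

# Freshwater fraction (meltwater plus runoff) relative to the reference water mass
freshwater_fraction = (S_REF - salinity) / S_REF

for z, f in zip(depth_m, freshwater_fraction):
    print(f"{z:4d} m: {100 * f:4.1f}% freshwater")

A fuller analysis would also use the temperature data to help separate glacial meltwater from surface runoff, but even this simple fraction shows how a handful of CTD casts from a sailboat can quantify the freshwater signal in a fjord.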

“Since you’re spending so much money on a research cruise, there’s usually a push to visit as many fjords as possible,” Carlson said, “but with the sailboat, you’re able to just stop and investigate things you find interesting.” Sailboats also require much less fuel, lessening the environmental impact of Arctic research.

Together, the Exiles crew made 147 CTD measurements. Carlson also took aerial photographs of icebergs with a drone to estimate the rate at which they melt. He said this wouldn’t have been possible on a research cruise with tight schedules and timelines.

Crowdsourcing Science in the Arctic

Although Carlson collected much-needed data on changes occurring in fjords as a result of melting ice, the expedition also demonstrated that crowdsourced science is a viable option for expanding Arctic oceanography research.

“We were extremely happy that we could collaborate with a professional scientist in a scientific institution,” said Peissel, who is a coauthor on the paper. “But we also wanted to be the citizens that could produce raw, reliable scientific data, and we proved that.” The crew took 98 CTD measurements after Carlson left.

Caroline Bouchard, a fisheries scientist at the Greenland Institute of Natural Resources who wasn’t involved in the study, also uses sailboats for Arctic research. She appreciates their affordability and versatility and would like to see more people with sailboats taking part in research. “It’s not like you can just make your own thing—you need the instruments—but I think there would be interest from citizen scientists,” Bouchard said.

Exiles sails past the Eqi Glacier in Greenland. Credit: Daniel Carlson

Although it takes experienced sailors to navigate in the Arctic, more sailboats than ever have been heading north, which could bring new opportunities for amateur scientists. Peissel said sailors in the Arctic usually have an intense connection to the sea and nature. “These are the people who are more than likely to say ‘Hey, why don’t you put your instruments on board.’”

Following their study’s success, Carlson and Peissel are planning another expedition to the Arctic in 2022. “The scientific discipline, just like humanitarianism, does not uniquely belong to the scientist or the humanitarian,” Peissel said. “Scientific work was historically, and should continue to be, undertaken by members of the general public.”

—Andrew Chapman (@andrew7chapman), Science Writer
