EOS

Science News by AGU

This Search for Alien Life Begins with the Destruction of Bacteria on Earth

Wed, 04/14/2021 - 12:28

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

Sometimes it takes a little destruction to unlock the secrets of the universe.

In a laboratory at Imperial College London, Tara Salter used high temperatures to destroy samples of bacteria and archaea, leaving behind molecular fragments. With this pyrolysis process, Salter was trying to simulate what might happen to molecules that collide with a spacecraft, like insects hitting a windshield.

Specifically, Salter was simulating a spacecraft flying through geyser-like plumes on the icy moons of the outer solar system. Scientists have observed plumes erupting from the icy crusts of Saturn’s moon Enceladus and Jupiter’s moon Europa, and they want to send a spacecraft through those plumes to investigate what kinds of molecules are being ejected from the buried extraterrestrial oceans.

Any molecule colliding with a spacecraft traveling at speeds of several kilometers per second would be “smashed to smithereens,” Salter said.

Even if microbes are part of what the plume ejects, the sampling spacecraft most likely will not be able to observe whole organisms, only pieces of them. “Being able to piece the organism together from the detection of small parts is the goal” of her research, Salter said. She hopes that her samples of smashed bacteria, and the molecular fragments they leave behind, can help future scientists investigate the possibility of life on one of these ocean worlds.

Salter presented her research at AGU’s Fall Meeting 2020.

Faraway Oceans

In 2005, NASA’s Cassini spacecraft made a spectacular discovery: Beneath many kilometers of ice on Saturn’s moon Enceladus churns a vast ocean of liquid water. Cassini discovered this ocean almost by accident when it flew through the geyser-like plumes of water vapor erupting from Enceladus’s south pole. Cassini not only detected water molecules in the plume; it also appeared that fragments of organic molecules were attached to the flying ice grains.

But Cassini’s instruments were not designed to distinguish among large organic molecules, said Hunter Waite, director of a mass spectrometry program at the Southwest Research Institute in San Antonio.

Now that we know a future spacecraft needs to be able to detect large, complex organic matter, we are prepared. NASA’s next mission to the outer solar system, Europa Clipper, will study another moon that harbors a liquid water ocean: Jupiter’s moon Europa. Numerous observations from the Galileo spacecraft and the Hubble Space Telescope indicate that, like Enceladus, Europa has plumes at its surface. Whether those plumes originate in the moon’s internal ocean or in a subsurface reservoir remains to be seen.

Europa Clipper’s mass spectrometer improves on Cassini’s in that it will be able to detect and determine the composition of large organic molecules, said Waite, who is also a co-investigator on the instrument. That way, scientists will be able to study exactly what material is coming out of the plumes.

Flying Through the Plumes…in the Lab

To simulate a spacecraft flying through Europa’s plumes, Salter took specimens of extremophile bacteria and heated them in a special chamber to 650°C, mimicking the destructive force of slamming into a spacecraft. The heat breaks the molecules down to a certain extent, and what remains is a mixture of fragments. Salter then analyzed these fragments with her own mass spectrometer and created a catalog.

“You can simplify a bacterium into proteins, carbohydrates, and lipids,” among other things, Salter said. In her analyses, she found fragments of amino acids, the fatty acid chains that make up lipids, and oxygen-, hydrogen-, and carbon-containing molecules from carbohydrates.

After analyzing the fragments, Salter created a library of molecular signatures, one she hopes to expand and share with the wider scientific community.

“Work like this can unlock hidden gems in previous data sets, such as the measurements Cassini made of Enceladus’s plumes, and it will also help inform future measurements by missions designed to search for life in these extraterrestrial oceans,” said Morgan Cable, a planetary scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California, who was not involved in the research.

However, “we also need to keep in mind that we might encounter trace amounts of life, where that biosignature spectrum may be hidden beneath a strong ‘abiotic’ signature,” she said.

Salter has more destruction planned. She wants to break down bacterial cells using ultraviolet radiation, to mimic conditions on Europa’s surface, and to heat cells in the presence of water to see how water affects which molecules are preserved.

From the dust of pulverized bacteria, scientists hope to compile a complete library of molecular fragments that could help identify life on another world.

—JoAnna Wendel (@JoAnnaScience), Science Writer

This translation was made possible by a partnership with Planeteando. Esta traducción fue posible gracias a una asociación con Planeteando.

Sediment Mismanagement Puts Reservoirs and Ecosystems at Risk

Wed, 04/14/2021 - 12:27

Dams store water flowing down rivers and streams in reservoirs, providing protection from floods. Dams also serve as sources of electrical power, and they provide water for domestic and irrigation uses and flat-water recreation. By design and default, most dams in the United States also store sediment, indefinitely.

Sediment accumulation behind U.S. dams has drastically reduced the total storage capacity of reservoirs. Sedimentation is estimated to have reduced the absolute water storage capacity of U.S. reservoirs by 10%–35%. Consequently, on a per capita basis, the water storage capacity of U.S. reservoirs today is about what it was in the 1940s–1950s, despite there being more dams [Randle et al., 2019]. This comes as no surprise: More than 40 years ago, D. C. Bondurant warned, “It must be recognized, that with few exceptions, ultimate filling of reservoirs is inevitable” [Vanoni, 1975].

At the same time, reaches downstream of dams have been deprived of sediment, resulting in declines in the health of downstream habitats and organisms [Ligon et al., 1995]. After rivers and streams deposit their sediments into reservoirs, the remaining clear water is more effective at moving sediment in the channel downstream [Kondolf, 1997]. High-energy “hungry water” releases erode downstream channel beds and banks, leading to incised rivers [Williams and Wolman, 1984], accelerated beach erosion [Dai et al., 2008], and oversimplified channels lacking critical habitat features such as backwaters, connected floodplains and wetlands, pools, riffles, and runs [Kondolf and Swanson, 1993].

Managing reservoir sediments in the United States has historically involved dredging, excavation, and removal of sediment to off-site locations. These approaches are expensive and do not restore sediment continuity with downstream river channels. Alternative management approaches have revealed that mobilizing and passing sediment through reservoirs to downstream reaches can maintain or restore both reservoir capacity and downstream ecosystems.

Here we present recommendations to address the escalating issue of sediment trapping in reservoirs. Without action, continuing accumulation of sediment in reservoirs will further reduce reservoir capacities, increase maintenance costs, reduce reservoir operational flexibility, and increase degradation of downstream environments. In rare cases that foretell what a future of sediment mismanagement might look like, failure to deal with captured sediments has led to catastrophic dam failure [Tullos and Wang, 2013].

The 21-meter outlet structure at Paonia Reservoir is shown under construction in 1961. Credit: U.S. Bureau of Reclamation

The Mechanics of Reservoir Sedimentation

Streamflows entering reservoirs are released downstream by way of intakes that are generally located well above the bed of the reservoir, either at the water’s surface on towers or across a surface spillway along the length of the dam. Because sediment transported into a reservoir is heavier than water, it settles to the reservoir bed, reducing the storage space available for water. Many dams also have low-elevation outlets that allow for sediment flushing. These outlets are most common at diversion dams but also exist at many water storage dams, where they were constructed above their respective sediment storage pools. Continued sediment accumulation in reservoirs can either cover or compromise these low-level outlets.

Even when reservoir outlets are built above their respective sediment storage pools, they can become buried in sediments over time. Credit: U.S. Bureau of Reclamation

To prolong reservoir life and recover lost storage volume, low-level outlets or bypass tunnels may need to be constructed to direct newly inflowing or already accumulated sediment through or around a dam. Even with the high costs of modifying existing dams with such features, passing sediment through a reservoir is still less expensive over the life span of the reservoir than dredging and off-site storage [Wang et al., 2018].

Regulatory Obstacles

Efforts to mobilize and route sediment past dams are often delayed by regulatory requirements shaped by the long-standing—though misguided—belief that sediment always negatively affects water quality and increases risks to downstream communities. To increase the sustainability of reservoirs, sediment management regulations need modernizing, informed by the knowledge gained from years of scientific research and monitoring of reservoir and downstream river conditions.

Sediment management in the United States exists within a regulatory environment that is “ever changing and…[continuing] to grow in complexity,” according to a report by the International Commission on Large Dams [2019]. Today, discharging sediment downstream of a dam requires an individual, project-specific federal permit. The 2007 court case Greenfield Mills Inc. et al. versus Robert E. Carter Jr. et al. set the precedent for this requirement, establishing that the flushing of sediments is considered a “discharge of dredged material from a point source” and subjecting the practice to regulation under Section 404 of the Clean Water Act (CWA). In addition to federal regulations, sediment management operations often require additional authorization at the state or local level.

Inconsistent interpretation of federal, state, and local permitting processes makes the application process complex and unpredictable. Factors include variations in how different U.S. Army Corps of Engineers (USACE) districts interpret existing permit frameworks and implement their regulatory programs, reflecting differences in regional conditions and in regulatory perspectives across states. The process is also hampered by a lack of consistent and adequate training and knowledge about sediment transport processes and about interactions between sediment movement, river morphology, and ecosystem response among staff at regulatory and resource agencies (e.g., EPA, U.S. Fish and Wildlife Service, NOAA), dam operators, nongovernmental organizations, and permittees.

In addition, examples of poorly designed and timed sediment discharges from reservoirs have caused legitimate concerns about negative effects on downstream ecosystems [Espa et al., 2016]. Taken together, these issues highlight why current regulatory tools are not favorable for sustainable management of reservoir sediments. However, they also reveal opportunities for modernizing those tools and ameliorating reservoir sediment management.

Achieving Sustainable Reservoirs for the Future

Broadly speaking, the following three key challenges characterize the current regulatory framework for authorizing sediment discharges from reservoirs:

1. The definition of sediment as a pollutant and as fill under Section 404 of the CWA
2. Traditional engineering practices that do not account for current knowledge of geomorphic and ecological processes, as well as a lack of training on and common understanding of these processes
3. Regulations that are simultaneously inflexible (e.g., de minimis) and inconsistent

We identify four interrelated recommendations to address these challenges.

Recommendation 1: Broaden the Interpretation of De Minimis Sediment Release

Individual permits for sediment discharges are not needed when the amount of sediment released is below a de minimis standard. De minimis is a concept established by USACE as a sediment release that approximates the natural load of sediment entering the reservoir. However, it is difficult to flush stored sediment if the load released must be similar in magnitude, composition, and seasonal pattern to that entering a reservoir.

Releases of accumulated sediment loads could be authorized if de minimis standards were based on relevant geomorphic or ecologic criteria and had the goal of preventing degradation of aquatic resources (see recommendation 4). Such a framework would require robust and quantitative understanding of local hydrological, geological, and ecological processes. This type of framework would benefit from establishing a community of practice (COP), and implementing it would require training of permitting staff and practitioners on new processes and tools (recommendation 2).

Recommendation 2: Establish Reservoir Sediment Management Communities of Practice

USACE has previously established COPs to encourage collaboration and efficiency of applications and knowledge transfers across different technical fields (e.g., the Levee Safety COP, which engages a wide range of practitioners and regulators in collecting data, database building, and assessment of tools and policies for increasing levee safety). We recommend establishing a COP for reservoir sediment management. Key efforts of this COP could include the following:

- Developing a reference database of existing reservoir sediment management permits under Sections 401 and 404 of the CWA (Section 401 deals with permitting water discharge events with respect to their effects on water quality).
- Developing a screening tool with specific metrics (e.g., sediment types, management strategy, accumulation and expected release rates). The screening tool would help practitioners and regulators to identify high-risk sites where more careful design and monitoring are needed and to manage impacts to downstream infrastructure and ecosystems. It would also inform the permitting process.
- Convening and supporting experts to review best practices for defining de minimis criteria (recommendation 1) and for designing sediment discharge operations to minimize operational and economic burdens while maximizing ecological benefits.
- Convening permitting staff, resource agencies, and practitioners for instruction and training on the intersections of sediment and ecological processes and on evaluating, mitigating, and communicating potential risks of increasing sediment releases to downstream reaches.

Recommendation 3: Establish Regional General Permits for Regular Downstream Sediment Releases

We recommend that USACE districts issue regional general permits (RGPs) for sediment management that are specific to geographic areas. Current RGPs authorize desilting flood control channels, maintenance dredging of water bodies, beneficial reuse of dredged sediment, ecological restoration activities, and emergency activities. New RGPs that allow regular managed releases of sediment from reservoirs should be established. This update would streamline permitting for projects that cause minimal and predictable adverse environmental impacts to aquatic resources, allow for regular renewals, and motivate application of best practices for sediment management.

For example, in watersheds that are highly erodible and that naturally experience regular, event-driven sediment pulses, an RGP could allow for multiple sediment discharge events spaced out over repeated episodes. Such multiple smaller releases would generally produce smaller downstream impacts than a single, large sediment release. Ultimately, an RGP will be most effective if it is based on a framework that recognizes local characteristics of catchments, reservoirs, and downstream areas.

Frequent, small, and strategic releases of sediment, such as the one that occurs annually from the reservoir at Fall Creek Dam in Oregon (seen here) to support downstream fish passage, can minimize impacts of sediment management on ecosystems and other reservoir functions compared with infrequent releases of large sediment volumes. Credit: Desirée Tullos

Recommendation 4: Adopt a Flexible and Collaborative Approach Based on Local Conditions

Because sediment management at reservoirs is uncommon in the United States, the permitting process for such projects is particularly convoluted and protracted. A collaborative approach to permitting thus would be beneficial [Ulibarri et al., 2017]. Early and frequent communication among regulators, stakeholders, and permittees could facilitate common understanding of reservoir dynamics on the timelines of geomorphic and ecological processes as well as risk identification.

A collaborative approach would streamline permitting and reduce delays in obtaining CWA Section 401 and 404 permits. It would also assist in identifying points of flexibility in the design and permitting processes that best serve the project and the environment. Recent dam removals in the United States have demonstrated regulators’ flexibility to permit activities that allow moderate short-term degradation while maintaining and protecting existing uses of a waterway (e.g., for aquatic habitat, drinking water supply, channel maintenance, recreation). For example, dam removals in the Pacific Northwest have been timed to avoid negative effects on salmon eggs and to take advantage of seasonal weather conditions.

Dam removals are very similar to operations aimed at maintaining reservoir capacity in the type and duration of sediment releases; both generate sediment pulses that peak at the end of drawdown and during subsequent storm events, for example. These pulses ultimately benefit ecosystems by reestablishing natural sediment flows and improving environmental conditions [Bellmore et al., 2019].

Acknowledgment of long-term benefits of sediment releases by regulators can boost flexibility in setting timetables for dam managers to achieve regulatory compliance and for establishing water quality criteria aligned with the principle of antidegradation of aquatic resources (i.e., weighing the pros and cons of a proposed activity that could degrade water quality). For example, the CWA Section 401 permit allowing for removal of the J. C. Boyle Dam on Oregon’s portion of the Klamath River—currently scheduled for initiation in 2023—established a compliance period (the deadline by which standards must be met) of 24 months. This provision will allow dam removal operations to avoid liability for water quality violations during sediment releases immediately following the removal.

Another accommodation that recognizes real-world conditions involves transitioning from using water quality criteria simply based on changes from background concentrations to biologically based criteria, such as suspended sediment concentrations that damage fish gills or lead to lethally low dissolved oxygen. Such criteria balance the short-term degradation of water quality that aquatic organisms and resources can tolerate before suffering permanent degradation with the long-term benefits of restored sediment continuity.

Blending knowledge of local ecosystems, weather, geomorphic factors, and trapped sediment allows dam and natural resource managers to design sediment removal programs that minimize negative impacts to the environment and downstream users.

A Shift in Strategy

Many of the tools that U.S. regulators and managers need for implementing improvements in sediment management already exist. Applying advanced knowledge gained from physical, biological, and environmental sciences will help improve the sustainability of the nation’s constructed reservoirs and its ecosystems. And implementing policy recommendations based on science and practical experience would put the United States more in line with approaches currently being used in Europe and Asia. The shift in regulation strategy proposed here would provide a means to manage sediment that is better able to maintain reservoir integrity in the future under demands associated with climate change, aging infrastructure, and public safety.

Fundamentally, achieving sustainable reservoir management requires acknowledging that sediment is not a pollutant, but is, instead, like water, often a beneficial resource that must be wisely managed. The failure to acknowledge and account for this truth has led to devastating consequences for people and ecosystems in the past, and such consequences will occur more often and become more severe in the future unless we change our approach.

Turf’s Dirty Little Secret

Wed, 04/14/2021 - 12:26

Australian scientists have found that grassy sports fields used for soccer, cricket, and baseball can release a potent greenhouse gas into the environment. A yearlong study at La Trobe University in Melbourne, Vic., suggests that mowing, fertilizing, and applying herbicides to turfgrass sports fields contribute to the release of large amounts of nitrous oxide.

“This study is another indication that urbanization has complex impacts on our environment,” said Amy Townsend-Small, a biogeochemist who was not involved with the research. “Even though most cities are working toward increasing their amount of green space, this doesn’t always help meet climate goals.”

A Greenhouse Gas That Eats Away at the Ozone Layer

Nitrous oxide is the third most emitted greenhouse gas, after carbon dioxide and methane. Although it makes up only 7% of greenhouse gas emissions in the United States, it has 265 times the global warming potential of carbon dioxide. The gas is also the largest source of ozone-depleting substances from humans.
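
As a rough illustration of how a global warming potential is applied (background arithmetic only, not a calculation from the article; the example emission mass is hypothetical), a nitrous oxide emission can be converted to a carbon dioxide equivalent by multiplying by its GWP:

```python
# Minimal sketch: converting an N2O emission to CO2-equivalent using the
# 100-year global warming potential cited above. The emission mass is a
# made-up example, not a figure from the study.

N2O_GWP_100 = 265  # warming impact of N2O relative to the same mass of CO2

def co2_equivalent_kg(n2o_kg: float, gwp: float = N2O_GWP_100) -> float:
    """Return the CO2-equivalent mass (kg) of an N2O emission."""
    return n2o_kg * gwp

print(co2_equivalent_kg(1.0))  # 1 kg of N2O warms roughly like 265 kg of CO2
```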

In soil, nitrous oxide is emitted by microbes digesting chemical compounds for energy. Although the process is natural, humans have cultivated soil conditions that encourage more gas production. Agriculture emits the most nitrous oxide of any sector.

As the world urbanizes, scientists are studying how nitrous oxide emissions are concentrated outside the agricultural realm. A better understanding of how sports fields contribute to emissions could help mitigate those emissions. Although the problem is relatively small, turf’s footprint may be large. In one study, Cristina Milesi of the NASA Ames Research Center calculated that turfgrass covers an area 3 times larger than that of any other irrigated crop in the United States.

Emissions Similar to High-Intensity Farming

In the latest study, David Riches, a research fellow at La Trobe, and his colleagues installed instruments to measure nitrous oxide and methane on campus fields used for soccer and cricket.

“What we found was we got really quite high emissions in the sports field soils which were comparable to [those] of the high-intensity vegetable production systems we’ve previously been working in,” said Riches. The team monitored conditions for 213 days from autumn to spring on one sports field and intermittently on two others.

Applying herbicide to the field caused the largest jump in nitrous oxide emissions. Herbicide likely suppresses new growth and frees up more soil nitrogen for hungry microbes, researchers said.

Aerating, fertilizing, and watering for the oncoming sports season also increased nitrous oxide emissions. Watering decreases the microbes’ access to oxygen, making them produce more nitrous oxide.

Notably, the three sports fields’ emissions were 2.5 times higher than those of an unused lawn nearby. Nitrous oxide averaged around 38 grams of nitrogen per hectare per day at the continuously monitored sports field (data at the other two were intermittent), versus about 9 grams of nitrogen per hectare per day at the lawn.

“You do get these peaks of high emissions in the sports field, which you just don’t get in the lawn,” Riches said.

More Careful Management Could Cut Emissions

One way to reduce emissions could be to water only when a field needs it, Riches said. Another idea might be to dial back the amount of fertilizer and use a slow-release or nitrogen-inhibiting product.

The study was published in Science of the Total Environment in March.

Extrapolating to the rest of Australia, Riches figures that grass playing fields alone do not have a significant impact on greenhouse gas emissions. But the effect could be greater if lawns, parks, gardens, turf farms, roadside vegetation, and other intensively managed green spaces are shown to emit as much as sports fields.

“If you look at all the intensively managed [green spaces] in total, then it might start to become more significant,” Riches said. “Then you might want to do something to mitigate it when you can.”

—Jenessa Duncombe (@jrdscience), Staff Writer

This story is a part of Covering Climate Now’s week of coverage focused on “Living Through the Climate Emergency.” Covering Climate Now is a global journalism collaboration committed to strengthening coverage of the climate story.

Soil Chips Help Scientists Spy on Fungal Navigation

Tue, 04/13/2021 - 12:26

What does a fungus do when confronted with a 10-micrometer-wide corridor? It explores, of course. However, if the passage turns out to be a dead end, fungi species will react differently. Some species retreat, some persist, some wreak havoc. Swedish scientists saw it all happen using so-called “soil chips”—small objects that resemble soil at the scale of the organisms that live in it.

The hypha of a fungus is confronted with an obstacle, followed by a widening of the channel it is exploring. This fungus, Leucoagaricus leucothites, goes around the obstacle and continues on. Other species branch out into the extra space. Credit: Kristin Aleklett

Such “micromodels” are literally opening windows into processes that, until recently, could be “studied [only] with a black box approach,” said Edith Hammer, a soil ecologist at Lund University in Sweden, where the soil chip was developed. “You do some treatments and then measure the effects in bulk, assuming the soil is homogeneous at centimeter scale. But soil is super heterogeneous at the micrometer scale,” she said. “It is probably the most complex habitat we have on Earth. [There are] microhabitats that can be very different just a few microns apart from each other. That is likely the reason why soil is also the most species-dense habitat on Earth.” Only by unraveling the complexities of this habitat can important questions be answered, for instance, about its ability to store carbon.

It’s hard to look inside soil at any scale, let alone at a resolution that allows direct observation of the comings and goings of bacteria and similar-sized life-forms. To get around that, Hammer turned to a technology that routinely creates objects with details measured in micrometers: photolithography, a process used in the fabrication of integrated circuits for computers but which in this case was used to mimic soil composition.

To make a soil chip, a pattern representing the soil structure under investigation is drawn on a “photoresist” layer covering a silicon wafer, which serves as the substrate. After the wafer is developed, with only the desired parts of the photoresist remaining, it serves as the mold for a silicon-based organic polymer, polydimethylsiloxane (PDMS). The result is a flat block of PDMS imprinted with pores and voids of dimensions similar to those found in soil. After covering the PDMS with glass, the soil chip is ready for life-forms to enter, spied upon through a microscope.

Recently, Hammer’s group published its first results on the behavior of fungi in soil chips. Seven species of fungus had their hyphae, filaments with a growth tip only a few micrometers across, penetrate an “obstacle chip” with a labyrinthine interior. All seven were litter-decomposing species, but they turned out to have very different strategies. Some advanced slowly with multiple hyphae in parallel, while others stormed through the corridors at speeds of over a millimeter a day.

“One species we call the zombie fungus,” Hammer said. “It managed to destroy the chip in a few weeks. But to be fair, the chip did not have an optimal bonding between the glass and the polymer.”

Uncovering the Hidden Lifestyle of Fungi

In actual soil, turning left, right, and back are not the only options for a fungus encountering an obstruction. The two-dimensional character of soil chips makes them less realistic models. But Hammer thinks she can compensate for the lack of the up and down direction in soil chips by increasing the connectivity of their labyrinths.

Lynne Boddy, who studies fungi at Cardiff University in the United Kingdom, has followed Hammer’s work and expects these “lovely little chips” to be important tools in her field, which has been held back by the mostly hidden lifestyle of fungi.

Boddy did note that the Lund University team’s soil chips are “starkly geometric.” “But there’s no reason why you couldn’t make chips that are much more similar to the pore structure of soil, or the vessel structure of wood,” she said.

The same observation came from Philippe Baveye, a soil scientist who, like Boddy, was not involved in Lund University’s soil chip research. He himself has worked with devices similar to soil chips, also made out of PDMS. The chip-like devices Baveye worked with were developed by Leslie Shor of the University of Connecticut. These simulate soil in a different way, by making a computer algorithm prescribe a packing of tiny ellipsoidal particles on the chip, which together simulate the intricate structure of sandy loam soil. According to Baveye, this makes the soil chip more closely mimic the pore space of real soil.

According to Hammer, both approaches—the realistic and the geometric—are useful. In her soil chips, she argued, fungi are presented with clear, uniform choices, where researchers can observe what the hyphae do when presented with a sudden widening of their channel, or with a bend of more than 90° that seems to lead them back to where they came from. “The geometrical spatial models enable us to single out the principal rules that govern microbial responses,” she said.

In the future, Hammer does intend to include soil chips that closely model a certain type of soil in her research. “One possibility is using a tomography image of soil, [which] maps the interfaces between solids and air, and using this directly to produce [a pattern] for the chip.”

Her group is also bringing the chips closer to soil in a literal sense. A paper that is still in review will describe what happened when a soil chip was buried in the ground for a time and allowed to become part of the environment.

For future observations of life in soil chips, Hammer plans to go beyond optical microscopy. By replacing the glass lids of the soil chips with suitable materials, her group could study the goings-on inside them with other techniques, such as infrared absorption, Raman spectroscopy, or X-rays.

—Bas den Hond (bas@stellarstories.com), Science Writer

Making the Universe Blurrier

Tue, 04/13/2021 - 12:23

When the European Southern Observatory (ESO) selected Cerro Paranal, a 2,664-meter mountain in Chile’s Atacama Desert, to host its Very Large Telescope (VLT), it touted the location as “the best continental site known in the world for optical astronomical observations, both in terms of number of clear nights and stability of the atmosphere above.”

Cerro Paranal remains one of the best observing sites on the planet. Yet it’s not as pristine as it was at the time of its selection, in 1990. A study released last September showed that temperatures have climbed and jet streams are more troublesome, making the VLT’s observations of distant stars, galaxies, and exoplanets a tiny bit fuzzier.

“The main motivation of this study was to raise awareness among the astronomical community that climate change is impacting the quality of observations,” said Faustine Cantalloube, an astrophysicist at Laboratoire d’Astrophysique de Marseille and lead author of the report.

“As atmospheric conditions influence the astronomical measurements, it is important to be prepared for any changes in the climate,” agreed Susanne Crewell, a coauthor and a professor of meteorology at the University of Cologne. These preparations are especially relevant as ESO is building the Extremely Large Telescope (ELT), a 39-meter behemoth that will be the largest telescope in the world, on a peak about 20 kilometers from Paranal. ELT is expected to be a “workhorse” for decades, said Crewell.

Astronomers are just beginning to consider how those changes are affecting observations or might affect them in the years ahead. Potential problems include reduced “seeing”—the clarity with which a telescope observes the universe—plus greater risk from forest fires and a need for more power-consuming air-conditioning to keep telescope mirrors cool.

“Long term, we’re concerned about how climate change will affect the viability of certain observing sites,” such as Paranal and others in Chile, said Travis Rector, an astronomer at the University of Alaska Anchorage and chair of the American Astronomical Society Sustainability Committee. “Will we enjoy the same quality observing conditions many years down the road?”

Evaluating the VLT as a Test Case

Paranal is the first observatory for which scientists have studied that question. Cantalloube’s team compiled more than 3 decades of weather observations made at the site, including temperature, wind speed and direction, and humidity. The study also included a reanalysis of information from two European climate databases that date to 1980.

The records revealed a temperature increase of 1.5°C over the study period. The change is important because the VLT’s domes are cooled during the day to match the expected ambient temperature at sunset. If the telescope mirrors are warmer than the air temperature, heat waves ripple above them like those above a desert highway on a summer afternoon, blurring the view.

The VLT’s current cooling system was designed to maintain a dome temperature no higher than 16°C; when the telescopes were designed, sunset temperatures exceeded that value only about 10% of the time. In 2020, though, they did so roughly a quarter of the time. As a result, Cantalloube said, air-conditioning capacity, as well as cooling capacity for many telescope instruments, will need to be increased in the future as the temperature continues to rise (perhaps by up to 4°C by the end of the century, according to some models).
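
The design statistic above is a simple threshold-exceedance frequency. A minimal sketch of how such a figure could be computed from a record of sunset temperatures (the values below are invented for illustration, not drawn from the ESO archive):

```python
# Minimal sketch: fraction of sunsets warmer than the 16°C design limit of
# the dome cooling system. The temperature record here is hypothetical.

sunset_temps_c = [12.4, 17.1, 15.8, 18.0, 14.2, 16.5, 13.9, 19.3]
THRESHOLD_C = 16.0

exceedance = sum(t > THRESHOLD_C for t in sunset_temps_c) / len(sunset_temps_c)
print(f"Sunsets above {THRESHOLD_C}°C: {exceedance:.0%}")
```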

The study also found that changes in the jet stream cause periodic increases in wind shear in the upper troposphere, particularly during El Niño events, creating a blurring effect known as a wind-driven halo. The VLT’s four component 8-meter telescopes are equipped with adaptive optics, which use lasers to create an artificial “guide star” in the upper atmosphere and deformable mirrors to compensate for most of the blurring. But turbulence from the wind shear is making it tougher for the system to work. That’s particularly troublesome for efforts to image exoplanets, which require both high resolution and high contrast, the study noted.

“Monitoring meteorological parameters on site is one way to make the best out of the telescope time thanks to an adapted observing schedule,” said Cantalloube. For example, “some observations are less affected by humidity and some more, so if we know in advance the atmospheric humidity content, we can schedule observing programs accordingly.”

Cantalloube said her team is continuing to evaluate the Paranal data while expanding its work to study conditions at major observatories in Hawaii, Arizona, and the Canary Islands.

Threats on the Ground

Rector notes that climate challenges aren’t limited to the quality of the view, though. “The most obvious threat is forest fires,” he said. “In recent years we’ve seen several major fires come near observatories, especially in California.”

Last August, for example, a fire on California’s Mount Hamilton burned one residence and damaged others at Lick Observatory and barely missed some of the telescopes. A month later, another fire threatened Mount Wilson Observatory, near Pasadena. Siding Spring Observatory in Australia lost its lodge for visiting astronomers and other structures in 2013, and the country’s Mount Stromlo Observatory lost several major telescopes in 2003.

The aftermath of a fire at Siding Spring Observatory, Australia, shows up as brown in this false-color image from NASA’s Terra satellite. The observatory forms a small patch of red speckled with white dots near the center. The image was snapped in February 2013, 3 weeks after the fire. Credit: NASA Earth Observatory image by Jesse Allen; data from NASA/METI/ERSDAC/JAROS, U.S./Japan ASTER Science Team

“Many observatories are remote, they have limited access, so defending them against forest fires can be very difficult,” Rector said. “They’re the most vivid threat.”

Proposed Solutions

One proposed solution to climate change could actually cause more problems for astronomy, Rector said. Some climate scientists have suggested that injecting aerosols into the upper atmosphere could reduce the amount of sunlight reaching the surface, perhaps reversing the warming trend. However, that would also reduce the amount of light from stars and other astronomical objects reaching the surface. “Aerosols are probably best saved as a last-ditch Hail Mary,” Rector said.

Cantalloube and others said that astronomers also must reduce their own carbon footprint by reducing travel, cutting back their reliance on energy-guzzling supercomputers, and taking other steps. “Technological developments can cope with these subtle effects due to climate change,” Cantalloube said. “I’m more concerned about the [other] way round: How can we make our observatories greener?”

—Damond Benningfield (damond5916@att.net), Science Writer

This story is a part of Covering Climate Now’s week of coverage focused on “Living Through the Climate Emergency.” Covering Climate Now is a global journalism collaboration committed to strengthening coverage of the climate story.

Migrant Workers Among the Most Vulnerable to Himalayan Disasters

Mon, 04/12/2021 - 12:47

Search operations are still underway to find those declared missing following the Uttarakhand disaster on 7 February 2021.

“As of now [18 March], we have found 74 bodies and 130 people are still missing,” said Swati S. Bhadauria, district magistrate in Chamoli, Uttarakhand, India. Chamoli is the district where a hanging, ice-capped rock broke off from a glacier and fell into a meltwater- and debris-formed lake below. The lake subsequently breached, leading to heavy flooding downstream.

Migrant workers in the Himalayas are increasingly vulnerable to climate-driven disasters exacerbated by corporate negligence.

The disaster is attributed to both development policies in the Himalayas and climate change. And as is common with climate-linked disasters, the most vulnerable sections of society suffered the most devastating consequences. Among the most vulnerable in Chamoli is its population of migrant construction workers from states across India.

Of the 204 people dead or missing, only 77 are from Uttarakhand, and “only 11 were not workers of the two dam companies,” Bhadauria noted. The two dams referred to are the 13.2-megawatt Rishiganga Hydroelectric Project and the 520-megawatt Tapovan Vishnugad Hydropower Plant, which has been under construction since 2005. The flash floods in Chamoli first broke through the Rishiganga project and then, along with debris accumulated there, broke through the Tapovan Vishnugad project 5–6 kilometers downstream.

“Both local people and others from Bihar, Punjab, Haryana, Kashmir, Uttar Pradesh…from all over India work on these two [hydroelectric] projects,” said Atul Sati, a Chamoli-based social activist with the Communist Party of India (Marxist-Leninist) Liberation.

Sati noted that the local community suspects the number of casualties from the Uttarakhand disaster may be higher than reported because not all the projects’ migrant workers—including those from bordering countries like Nepal—have been accounted for by the construction companies and their subcontractors.

The National Thermal Power Corporation is the state-owned utility that owns the Tapovan Vishnugad project. “NTPC has given building contracts to some companies,” Sati explained. “These companies have given subcontracts to other companies. What locals are saying is that there are more [than 204] who are missing. They say there were [migrant] workers from Nepal.”

NTPC and the Kundan Group (the corporate owner of the Rishiganga project) have not responded to repeated requests for comment.

No Early-Warning System

“NTPC did not have a proper early-warning system,” said Mritunjay Kumar, an employee with the government of the east Indian state of Bihar. Kumar’s brother, Manish Kumar, was a migrant worker employed with Om Infra Ltd., an NTPC subcontractor. On the day of the disaster, Manish was working in one of the silt flushing tunnels of the Tapovan project and lost his life in the flooding.

Mritunjay Kumar noted that it “would have taken time” for the floodwater and debris to flow from the meltwater lake to the Rishiganga project and then to the Tapovan project. “Even if workers knew 5 minutes in advance,” he said, “lives could have been saved.”

An advance notice “would have given [Tapovan] workers at least 5–6 critical minutes,” agreed Hridayesh Joshi, an environmental journalist from Uttarakhand who reported from Chamoli after the disaster. “Many people made videos; they shouted and alerted people on site. If there was a robust early-warning system, many more lives could have been saved…even if not all, at least some would have escaped.”

“It is true that this was an environmental, climate change driven disaster. But NTPC had not taken any measures to save their workers from such disasters,” Kumar said. “They [NTPC] hadn’t even installed emergency exits for tunnel workers. The only proper exit was a road which faces the river. If NTPC had installed a few temporary iron staircases, many people could have climbed out.”

Kumar noted that the Tapovan project has been under construction since before the 2013 Kedarnath disaster, in which more than 5,000 people lost their lives as rainfall-driven floods ravaged northern India. “If they [NTPC] knew that such disasters will happen, why didn’t they install early-warning systems?” Kumar asked. “Scientists have been warning about climate change and [dam and road] constructions in the Himalayas from a very long time. Obviously, NTPC was aware.”

—Rishika Pardikar (@rishpardikar), Science Writer

This story is a part of Covering Climate Now’s coverage focused on “Living Through the Climate Emergency.” Covering Climate Now is a global journalism collaboration committed to strengthening coverage of the climate story.

Extreme Rainfall Statistics May Shift as U.S. Climate Warms

Mon, 04/12/2021 - 12:47

As the Earth warms, extreme rainfall events are intensifying, thanks in part to the fundamental thermodynamic properties of air. This intensification will likely affect ecosystems and flooding around the world. However, to prepare for it, communities need a clearer understanding of how the timing, duration, and intensity of rainfall extremes will change.
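
The thermodynamic effect alluded to here is the Clausius–Clapeyron relation, under which the atmosphere’s capacity to hold water vapor rises by roughly 6%–7% per degree Celsius of warming. The article does not name the relation, so the sketch below is background physics only, using Bolton’s standard approximation for saturation vapor pressure rather than anything from the study:

```python
import math

# Minimal sketch: saturation vapor pressure of air vs. temperature, using
# Bolton's (1980) approximation. The ~6%-7% rise per degree Celsius is the
# thermodynamic driver of more intense rainfall extremes mentioned above.
# Background physics only, not the study's own analysis.

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Saturation vapor pressure (hPa) at temperature temp_c (degrees C)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

e20 = saturation_vapor_pressure_hpa(20.0)
e21 = saturation_vapor_pressure_hpa(21.0)
print(f"+1°C raises saturation vapor pressure by {100 * (e21 / e20 - 1):.1f}%")
```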

A new study by Moustakis et al. presents a comprehensive assessment of future changes in the statistics of rainfall extremes across the contiguous United States, confirming that extreme events are likely to intensify, and their duration and seasonal timing will shift.

The analysis draws on hourly precipitation data from 3,119 rainfall stations, as well as from high-resolution climate model simulations that are capable of predicting hourly rainfall at a spatial scale of about 4 kilometers. These simulations operate under a scenario in which global greenhouse gas emissions remain high throughout the 21st century.

The results of the study suggest that rainfall extremes will occur more often; on average, what is now a 20-year rainfall event will become a 7-year event across much of the country. Extreme events will also become more intense, with the greatest intensification predicted to occur in the western United States, the Pacific Coast, and the East Coast.
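
For readers unfamiliar with return periods, the shift from a 20-year to a 7-year event can also be read in probability terms. A minimal sketch of that arithmetic (illustrative only; this is not the study’s methodology):

```python
# Minimal sketch: a T-year event has an annual exceedance probability of
# roughly 1/T, and the chance of at least one such event in n years is
# 1 - (1 - 1/T)**n. The return periods below are the ones quoted above.

def prob_at_least_one(return_period_years: float, n_years: int) -> float:
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** n_years

for T in (20, 7):
    print(f"{T}-year event: {1 / T:.1%} chance per year, "
          f"{prob_at_least_one(T, 30):.0%} chance of occurring within 30 years")
```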

The study also predicts that rainfall extremes will occur more evenly over the course of the year, with the biggest seasonal changes happening in the plains, the Northern Rockies, and the prairies. Meanwhile, the duration of extreme rainfall events is predicted to shorten, with the Pacific Coast experiencing the biggest decrease. These changes could affect the ability of vegetation and soils to absorb water, thus affecting flood risk.

Further observations and model improvements will be needed to refine these predictions, the authors say. Nonetheless, these takeaways from the study could help inform efforts to prepare for future rainfall extremes. (Earth’s Future, https://doi.org/10.1029/2020EF001824, 2021)

—Sarah Stanley, Science Writer

Upward Lightning Takes Its Cue from Nearby Lightning Events

Fri, 04/09/2021 - 13:07

In the chaos of a thunderstorm, upward moving lightning occasionally springs from the tops of tall structures. Scientists don’t fully understand how upward lightning is triggered; it is likely a combination of multiple environmental factors, such as the background electric field and the structure’s height. In a new study, Sunjerga et al. investigate how ambient lightning events near tall structures may trigger upward lightning.

The team created a simplified model of different scenarios that have been observed to occur near a tower before upward lightning. The model was able to explain the mechanism that causes upward lightning to spark from a structure as a result of nearby lightning activity.

According to their simulations, both the relatively slow leader discharge (the precursor paths that lightning will follow) passing above the tower and the much faster return stroke (the bright flash we see as lightning) in the vicinity of the structure can enhance the ambient electric field enough to trigger upward lightning from towers as short as about 30 meters—about 10 stories.

The study confirms the existence of a causal relationship between nearby lightning and upward lightning from towers. Additional research into factors like the local geography, the frequency of each scenario, and the wind speed can help further explain this unusual phenomenon. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD034043, 2021)

—Elizabeth Thompson, Science Writer

A Space Hurricane Spotted Above the Polar Cap

Fri, 04/09/2021 - 13:07

The spectacular light displays of the polar aurorae are no great mystery. When solar wind hits Earth’s magnetosphere, electrons rain down into the upper atmosphere, which causes bursts of color across the sky in high-latitude belts around the planet. But aurorae over the north polar cap, especially during periods when the solar wind is quiet, have puzzled space weather experts for decades. Now an international team of researchers has found an explanation: space hurricanes.

In a recent paper published in Nature Communications, the team described an event that looked remarkably like lower atmosphere storms that slam into our coasts. On 20 August 2014, arms of plasma more than 965 kilometers (600 miles) across spun around a calm center, raining electrons into Earth’s upper atmosphere above the magnetic North Pole.

“The whole concept is surprising and exciting,” said Larry Lyons, a professor at the University of California, Los Angeles, and one of the study’s authors. “I never conceived of the possibility that there would be a spiral-shaped, circular bright aurora in the middle of the polar cap.”

Lead author Qing-He Zhang of Shandong University and his students spent 2 years combing through thousands of auroral images taken by the low-Earth-orbiting satellites of the U.S. Defense Meteorological Satellite Program. They found dozens of cases of what looked like space hurricanes in images collected over the past 15 years, but none so clear as the one that occurred in 2014 and lasted about 8 hours.

John Foster, a research scientist at the Massachusetts Institute of Technology’s Haystack Observatory who was not involved in the study, recalls spotting a similar phenomenon over the pole some 50 years ago, but experts couldn’t explain what they were seeing at the time. “In those days, the spacecraft, even though there were a lot of them up in space, they did not have the kind of instrumentation that you would need to really understand what was taking place,” he said. “What makes this event really special is the wide variety of instrumentation that was available in space to look at the characteristics of this phenomenon.”

Zhang’s team combined a wealth of aurora, plasma, and magnetic field data from the event with a powerful 3D simulation to reproduce the space hurricane using the solar wind and magnetic field conditions on that day in 2014.

Schematic of a space hurricane in the northern polar ionosphere. Credit: Zhang et al., 2021, https://doi.org/10.1038/s41467-021-21459-y

A Tale of Two Types of Hurricanes

Foster cautions that it’s important to remember that although the space hurricane may look a lot like its tropospheric counterpart, the forces driving the two types of hurricanes are totally different.

Space hurricanes are also much less of a risk to humans than the more familiar variety, although “we do have some evidence that it did cause strong and unusual scintillations,” Zhang said. “These are fluctuations of radio waves passing through the ionosphere.” These disturbances could garble satellite communications or navigation. The storm may also heat up and expand the upper atmosphere, changing the density of the highly trafficked region; the change could cause drag and alter the orbit of any satellites or pieces of space debris that pass through it, according to Lyons.

“If you want to know where the space station is going to be a few hours from now, you have to know what kind of an atmosphere it’s going through,” Lyons explains.
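
Lyons’s point about drag follows from the standard aerodynamic drag relation, in which the force on an orbiting object scales linearly with air density. A minimal sketch of that textbook formula (parameter values are invented for illustration and are not from the study):

```python
# Minimal sketch of the drag equation F = 0.5 * rho * Cd * A * v**2, showing
# why a denser, heated upper atmosphere exerts more drag on satellites and
# debris. All values are illustrative, not measurements from the study.

def drag_force_n(rho_kg_m3: float, speed_m_s: float, cd: float, area_m2: float) -> float:
    """Aerodynamic drag force in newtons."""
    return 0.5 * rho_kg_m3 * cd * area_m2 * speed_m_s**2

baseline = drag_force_n(rho_kg_m3=1e-12, speed_m_s=7600.0, cd=2.2, area_m2=10.0)
heated = drag_force_n(rho_kg_m3=2e-12, speed_m_s=7600.0, cd=2.2, area_m2=10.0)
print(f"Doubling upper-atmosphere density multiplies drag by {heated / baseline:.1f}")
```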

For Zhang and his colleagues, the identification of the space hurricane is only the beginning.

“There are several open questions remaining,” he said. “What controls the rotation of space hurricanes? Are these space storms seasonal like their tropical counterparts, perhaps limited to the summer when the Earth’s magnetic dipole is tilted just the right way? And can space hurricanes be forecasted like weather events on Earth?”

—Kate Wheeling (@katewheeling), Science Writer

A Reminder of a Desert’s Past, Before Dingo Removal

Thu, 04/08/2021 - 13:20

As ecologist Mike Letnic trudged up and down the red-orange dunes of the Strzelecki Desert in South Australia, he noticed that his boots sank deep into the sand and his equipment was more likely to be covered in sand when he was on the northern side of what’s known as the Dingo Fence. A 5,614-kilometer barrier, the fence stretches across southeastern Australia and protects sheep flocks from the wild dogs—dingoes are plentiful on the north side of the fence, but very few exist on the southern side.

The contrast intrigued Letnic, a professor at the University of New South Wales’s Centre for Ecosystem Science, and he has dedicated many of his years to studying how the fence and the resulting lack of dingoes on the southern side have affected the desert’s ecosystem. He has documented, for example, how the absence of the large predator has allowed populations of feral cats and foxes to explode, which, in turn, has decimated the native herbivore populations. One such creature is the hopping mouse, which eats the seeds and seedlings of the native shrubbery. In a 2018 study, Letnic and a coauthor flew drones over the dunes and found that the dingoes’ absence on the southern side had allowed shrubs to grow more densely, which altered the dunes’ shape and size. The denser shrub coverage slows the velocity of the wind at ground level and causes the dunes to become taller and the sand to be more compact. “It’s a very windy place,” Letnic said. “And once the shrubs get to a certain density, the wind actually skates across the top of the shrubs.”

A dingo trots by the Dingo Fence. Credit: Nicholas Chu

Abundant Kangaroos Gobble Up Grass

Letnic’s new study shows that the fence has also caused a different vegetation change—one that is so pronounced it can be seen from space. Using 32 years’ worth of satellite imagery, Letnic and Adrian Fisher, a remote sensing specialist at the University of New South Wales, found that native grasses on the southern side had poorer long-term growth than vegetation on the northern side.

The reason stems from the overabundance of kangaroos on the southern side (kangaroos are the preferred prey of dingoes), which has put tremendous grazing pressure on the native grasses. Letnic and Fisher compared the satellite images with weather data and found that after rainfall, vegetation grew on both sides of the fence, but it did not grow as much or cover more land on the southern side.

“It’s a desert, so there’s not much growth in plants. And then it will rain occasionally, and you get a lot of growth, and that’s when we were able to see the difference in the grazing pressure on each side of the fence,” said Fisher.

Reminder of the Past Landscape

Most dingo research has been conducted using either drone imagery or field studies, but the U.S. Geological Survey Landsat program has allowed for further analyses. A NASA satellite has been taking continuous images of the area since 1988. Satellite imagery, often used for crop or forest studies, traditionally looks strictly at photosynthesizing vegetation, such as plants, trees, and grass. Here Fisher used a model to factor in nongreen vegetation, like shrubs, dry grasses, twigs, branches, and leaf litter. According to Fisher, considering nongreen vegetation was necessary for an arid ecosystem. “Australia is mostly desert, and so to look at all that landscape, we need a good way to factor in the brown vegetation, the dry stuff,” he said.
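Fractional cover products of this kind are commonly built by linear spectral unmixing, in which each satellite pixel is treated as a mixture of green vegetation, non-green (dry) vegetation, and bare soil. The sketch below is a minimal illustration of that general technique, not the specific model Fisher used, and the endmember reflectances are invented for the example.

    # Minimal sketch of linear spectral unmixing for fractional cover
    # (green vegetation, non-green vegetation, bare soil), the general idea
    # behind products that include "brown" vegetation. The endmember
    # reflectances below are invented for illustration only.
    import numpy as np
    from scipy.optimize import nnls

    # Columns: green veg, non-green veg, bare soil; rows: spectral bands.
    endmembers = np.array([
        [0.05, 0.20, 0.25],   # red band
        [0.45, 0.30, 0.30],   # near-infrared band
        [0.20, 0.35, 0.40],   # shortwave-infrared band
    ])

    def unmix(pixel, weight=1.0):
        """Return (f_green, f_nongreen, f_bare) for one pixel spectrum.
        A sum-to-one constraint is imposed softly via a weighted row of ones;
        non-negativity comes from NNLS."""
        A = np.vstack([endmembers, weight * np.ones(3)])
        b = np.append(pixel, weight)
        fractions, _ = nnls(A, b)
        return fractions / fractions.sum()   # renormalize to exactly 1

    pixel = np.array([0.12, 0.36, 0.30])     # invented observed reflectances
    print(unmix(pixel))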

How humans alter the food webs of ecosystems is an urgent topic, and one that’s becoming even more difficult to predict because of climate change, said Sinéad Crotty, an ecologist and project manager at the Yale Carbon Containment Lab who was not involved in the new study. Letnic and Fisher, she said, “do a great job of utilizing multiple lines of evidence across spatial scales to demonstrate the effect of removing apex predators on vegetation and geomorphology.”

Letnic and Fisher said their work is an important reminder of how the area’s ecosystem used to be—one that’s easy to overlook because the fence has been around since the 1880s. “In Australia, we’ve been pretty successful at suppressing dingo numbers for more than 100 years,” said Letnic. “And that memory of what it was like before is nearly gone.”

—Nancy Averett (@nancyaverett), Science Writer

Rethinking Oceanic Overturning in the Nordic Seas

Thu, 04/08/2021 - 13:17

In the remote reaches of the northern Atlantic, a major ocean current brings warm surface water from the tropics toward the Arctic and returns cold deep water toward the equator. This flow of warm water, known as the Atlantic Meridional Overturning Circulation (AMOC), has played a fundamental role in maintaining the mild climate of central Europe and Scandinavia as we know it today.

We also know that changes in its strength seem to have contributed to well-known climate events in recent millennia, and it continues to modulate global climate today. For example, a weakened AMOC may have played a role in causing almost 600 years’ worth of frigid winters in Europe and North America. This period, called the Little Ice Age, lasted roughly from 1300 until 1870 and came on the heels of the Medieval Warm Period (circa 950–1250), when temperatures in the Northern Hemisphere were unusually warm.

Nearly half of the AMOC’s poleward flow of warm, salty waters enters the Nordic Seas—comprising the Greenland, Iceland, and Norwegian Seas. Here the water cools and pools north of the undersea Greenland-Scotland Ridge (GSR), which spans from southeastern Greenland across Iceland and the Faroe Islands to northern Scotland, before spilling back into the deep North Atlantic (Figure 1). A host of important questions remains about the dynamics of the ocean near the GSR and the effects of these dynamics on regulating climate. The more we know about the variability and driving mechanisms of the exchange of waters across the GSR, the better we can explain and predict future changes in this system.

Fig. 1. This simplified view (top) shows the surface flows (red arrows) and deep return flows (blue arrows) that make up the large-scale ocean circulation in the North Atlantic. Color bands on the ocean surface indicate average sea surface temperatures from 1900 to 2019 (data are from the Hadley Centre) and highlight the northward extent of warm waters to higher latitudes. The longitude-depth temperature distribution of the ocean (bottom; data are from the World Ocean Atlas 2018) across the Greenland-Scotland Ridge (GSR, white transect line in the top panel) is also shown. The exchange of waters across the GSR is driven by the rapid loss of heat to the atmosphere over the Nordic Seas. This heat loss causes the waters to sink and build a huge reservoir of cold, dense water that spills back into the deep North Atlantic across the GSR, completing the overturning process.

Here we discuss two recent studies we conducted to address these important questions, and we highlight surprises encountered along the way that point to the importance of continuing research and improved ocean monitoring.

Reconstructing AMOC’s History

Scientists have come to realize that the AMOC has two pathways of overturning circulation. One is open ocean convection in the Irminger and Labrador Seas (Figure 1) that produces the upper layer of North Atlantic Deep Water (NADW). The second involves progressive cooling of warm, salty water from the Atlantic in the Nordic Seas. This cooling results in dense water spilling over the GSR back into the North Atlantic—mainly through two passages, the Denmark Strait between Greenland and Iceland and the Faroe Bank Channel south of the Faroes—and forming a lower layer of NADW. Both pathways play important roles in climate variability over a wide range of timescales, although the interplay between the two is not well understood.

Both regions depend upon heat loss to produce water of greater density, but it appears that huge heat losses from the Nordic Seas and the concomitant production and pooling of very dense water behind the GSR are fundamental to maintaining a mild climate in northern Europe. This heat loss produces a healthy supply of NADW that spills back into the global abyss and enables warm, salty water to feed the Nordic Seas [Chafik and Rossby, 2019]. Exploring the intricate processes and routes by which warm, saline North Atlantic water is gradually transformed into this cold, dense water motivates our research.

The strength of the AMOC at the GSR since the mid-1990s has been a major focus of many institutional and international research initiatives, which, among other findings, have helped to reveal the stability of the water mass exchanges (warm inflows and cold deep flows or overflows) across the GSR [Østerhus et al., 2019]. But how did this inflow to the Nordic Seas vary before the 1990s? And how well do we know the deep pathways that the densest water takes before exiting the Faroe Bank Channel to the North Atlantic Ocean?

To investigate the warm upper-ocean flow at the GSR, we used historic hydrographic data dating back to the early 1900s to gain insight into how the Nordic Seas inflow varied since the beginning of modern oceanography [Rossby et al., 2020]. To do this, we constructed a time series of dynamic height difference (which essentially represents the pressure gradient in the upper ocean that determines water flow) between areas north and south of the GSR to measure transport of almost all water entering the Nordic Seas.
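In essence, the dynamic method integrates the specific volume anomaly between two pressure surfaces at each station and converts the horizontal difference of that integral into a geostrophic velocity. A standard textbook form of the calculation, given here as general background rather than as the exact formulation of Rossby et al. [2020], is

    \Delta\Phi = \int_{p_1}^{p_2} \delta(S, T, p)\, dp , \qquad
    \bar{v} = \frac{\Delta\Phi_B - \Delta\Phi_A}{f\,L}

where \Delta\Phi is the geopotential (dynamic height) anomaly obtained by integrating the specific volume anomaly \delta between pressure levels p_1 and p_2, f is the Coriolis parameter, L is the separation between stations A and B, and \bar{v} is the geostrophic velocity relative to the reference level, directed perpendicular to the line joining the stations; transport follows from integrating this velocity over the area of the section.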

This transport time series showed evidence of strong variability in Nordic Seas inflow on multidecadal timescales. We found that the volume of and heat transported in this poleward flow, as measured at the GSR, are strongly coupled to the Atlantic multidecadal variability (AMV), which describes natural patterns of sea surface temperature variability in the North Atlantic that influence climate globally [Zhang et al., 2019].

The AMV affects Nordic Seas inflow because deep convection in the northeast Atlantic translates the surface temperature variations down into the upper layers of the ocean, and these variations shape the ocean’s dynamic height field. Coupling between the inflow of Atlantic water into the Nordic Seas and the AMV was so tight that we could find no evidence for long-term or secular weakening or strengthening of this poleward flow (related to anthropogenic warming, for example). In short, the inflow of warm water to the Nordic Seas has been quite stable over the past century since the start of modern oceanography.

This finding is consistent with a previous study that reported stability of the inflow over the past 2 decades [Østerhus et al., 2019] and supports the conclusion that Nordic Seas overturning circulation has been stable over the past 100 years. This stability is surprising given the extraordinary warming presently underway in the Nordic Seas and Arctic Ocean. Understanding the reasons for this apparent disconnect is important and points to the need for improved and even expanded ocean monitoring because the continued stability of this vital ocean circulation system is not guaranteed in the future. It is also unclear how future change may manifest or which early-warning indicators should be relied upon to forecast change [Østerhus et al., 2019].

Discovering a New Flow Route

Another example of where we have had to revise our thinking regarding the pathways of the deep currents in the Nordic Seas came with the recent discovery of an unknown route by which cold water courses its way through the Norwegian Sea. We identified that this new route directs cold deep flows north of the Faroe Islands to the Norwegian slope before turning them south through the Faroe-Shetland Channel and into the deep North Atlantic (Figure 1) [Chafik et al., 2020]. Previously, only direct communication of water from north of the GSR to the Faroe-Shetland Channel was known [Søiland et al., 2008].

We found that which route water takes north of the GSR and how much is funneled each way depend on the prevailing winds [Chafik et al., 2020]. Under weak westerly wind conditions in the Nordic Seas, the densest water that feeds the Faroe Bank Channel comes primarily from north of Iceland. During strong westerly wind conditions, however, more water seems to originate from along the Jan Mayen Ridge, which is located farther north of Iceland and more in the middle of the Nordic Seas. This wind dependence is curious, considering the strong control that bathymetry can exert on the circulation, and requires more study—it hints at deep ocean variability we had not previously appreciated or recognized.

In the same study, we reported on a previously undiscovered deep rapid flow, or deep jet, called the Faroe-Shetland Channel Jet. Remarkably, this jet flows south along the eastern slope of the channel rather than along the western side as has long been assumed. The deep jet is found to be the main current branch in terms of transport that delivers the densest water to the North Atlantic Ocean via the Faroe Bank Channel. This surprising finding, which countered past observations and thinking, required that we carefully recheck our data and analyses, but ultimately, we decided that this deep jet was a real feature. This view thus alters our previous understanding of the deep circulation in the region and suggests that we do not yet have a firm grasp of the deep circulation of the Nordic Seas and how it varies over time.

Monitoring for Change

In the past few years, several workshops have been held to review and identify gaps in our knowledge of exchange between the Atlantic and the Nordic Seas and to advance our understanding of ocean circulation in this region.

All available observational evidence so far indicates that there is no long-term trend in the Nordic Seas meridional overturning circulation to date [Østerhus et al., 2019; Rossby et al., 2020]. Yet meeting attendees agreed that given the substantial ocean warming and freshening (from water runoff from Greenland and precipitation) taking place at higher latitudes, it is essential to continue monitoring the overturning to assess its role in ongoing and future climate change.

Several existing techniques will be useful for this monitoring effort. Satellite altimetry can be used to study flows at the surface and in the upper layers of the ocean. Vessel-mounted acoustic Doppler current profilers can also probe these flows in the upper ocean. Meanwhile, moored sensor arrays track the variability of deep currents. In addition, Lagrangian techniques, specifically using acoustically tracked subsurface floats that drift with ocean currents, have proven very effective at elucidating pathways [e.g., Søiland et al., 2008] and timescales along which subsurface waters flow and gradually disperse or mix with surrounding waters.

Floats in particular could help address one area of considerable interest, namely, the degree to which fresh water from the Arctic and Greenland Sea can mix with and dilute warm, saline water from the Atlantic. Such dilution could suppress deep temperature- and density-driven convection, thus weakening or shutting down the overturning in the Nordic Seas and, by extension, the deepest component of the AMOC.

However, most scientists no longer think such a shutdown scenario is likely because observations to date indicate that Arctic and Greenland waters tend to remain trapped around and south of Greenland rather than mixing and diluting the Atlantic water flowing north in the Nordic Seas [Intergovernmental Panel on Climate Change, 2019]. Nonetheless, there is broad agreement that the climatic consequences of a potential shutdown of this vital ocean circulation are so enormous that they obligate us to improve our understanding of the Nordic Seas (and generally about the overturning at higher latitudes) rather than to presume we know enough already about the inner workings of the ocean in this region. These concerns help to explain the rapidly growing interest in the dynamics of the Atlantic’s remote northern reaches.

Acknowledgments

L.C. acknowledges support from the Swedish National Space Agency through the Fingerprints of North Atlantic-Nordic Seas Exchanges from Space across Scales (FiNNESS) project (Dnr: 133/17).

After the Dust Cleared: New Clue on Mars’ Recurring Slope Lineae

Thu, 04/08/2021 - 11:30

Recurring slope lineae (RSL) are dark lines that appear on steep slopes, then lengthen, fade, and reappear, typically annually. Proposed explanations for their formation involve either the flow of liquid or dry sediment, with varying triggering mechanisms. McEwen et al. [2021] report a significant increase in the number of RSL detections following the planet-encircling dust storm on Mars in 2018, compared to previous years. The latitudinal and seasonal range in which RSLs were detected was also expanded compared to previous years. These observations raise a new hypothesis about the potential role of dust mobilization and deposition in forming these features. Such a mechanism does not involve flowing water or brines, and if correct, diminishes the likelihood that RSLs represent modern-day habitable zones.

Citation: McEwen, A. S., Schaefer, E. I., Dundas, C. M., Sutton, S. S., Tamppari, L. K., & Chojnacki, M. [2021]. Mars: Abundant recurring slope lineae (RSL) following the planet‐encircling dust event (PEDE) of 2018. Journal of Geophysical Research: Planets, 126, e2020JE006575. https://doi.org/10.1029/2020JE006575

―A. Deanne Rogers, Editor, JGR: Planets

Past Climate Change Affected Mountain Building in the Andes

Wed, 04/07/2021 - 12:05

Climate change can affect the tectonic processes that deform Earth’s surface to build mountains. For instance, in actively deforming mountain ranges such as the North Patagonian Andes, erosion caused by increased rainfall or glaciers might alter the structure of the mountains to such an extent that internal stresses and strains shift and reconfigure, changing how the terrain is molded.

However, although theoretical evidence supports the influence of climate-driven erosion on mountain building, real-world data are lacking. Now García Morabito et al. present new data that support the theorized feedback between climate and tectonic deformation in the North Patagonian Andes.

Previous research has extensively explored the region’s climatic and geologic history. Still, the timing, duration, and spatial patterns of tectonic deformation have not previously been examined with enough precision to draw strong causal connections between climate change and mountain-building processes.

To fill this gap, the researchers conducted field observations in the North Patagonian Andes, with a focus on the foreland basin that lies just east of the mountains and holds signatures of their tectonic history. Key to the analysis was dating of basin rocks and structures according to uranium-lead ratios and beryllium isotope levels. This dating enabled the researchers to analyze deformation at the level of individual faults.
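Uranium-lead ages of the kind used here rest on the standard radioactive decay relation. For the 238U–206Pb system, a common form of the age equation, given as general background rather than as the study's specific methodology, is

    t = \frac{1}{\lambda_{238}}\,\ln\!\left(1 + \frac{{}^{206}\mathrm{Pb}^{*}}{{}^{238}\mathrm{U}}\right) , \qquad
    \lambda_{238} \approx 1.55 \times 10^{-10}\ \mathrm{yr}^{-1}

where t is the age, 206Pb* is the measured radiogenic lead, 238U is the remaining parent uranium, and \lambda_{238} is the 238U decay constant.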

Combined with previously collected data, the new observations revealed a clearer picture of the region’s past: A period of widespread deformation and uplift appears to have occurred from about 13 to 7 million years ago. Then, deformation decreased in the foreland, coinciding with the onset of glaciation in the mountains.

In the past few million years, as glacial erosion intensified, structural reconfiguration occurred, with foreland deformation coming to a halt while fault activity within the mountains increased. These findings match theoretical predictions, supporting the impact of climate change on mountain-building processes. (Tectonics, https://doi.org/10.1029/2020TC006374, 2021)

—Sarah Stanley, Science Writer

Deciphering the Causes of Past Hurricane Activity

Wed, 04/07/2021 - 12:03

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

Forecasts of hurricane frequency in a warming world remain unclear. Although scientists believe that climate change will increase the intensity of storms, the evidence on whether climate will produce more hurricanes in the future is uncertain. For coastal communities, understanding long-term hurricane trends is important: The Congressional Budget Office estimates that tropical cyclones cost the U.S. economy $54 billion per year.

To better understand the role of climate in past and future hurricane activity, Wallace et al. investigated whether climate explains the long-term patterns of hurricane occurrence recorded in sediment cores. Using sandy layers in cores from South Andros Island in the Bahamas as a reference, the authors developed a model to mimic the hurricane patterns recorded in sediments over thousands of years. From the same climate simulation, they generated 1,000 different “pseudorecords,” each representing a theoretical hurricane history at a single location.

Each individual record contained intervals of active and quiet hurricane activity that resembled the actual patterns in the Bahamian sediment cores. If climate were responsible for these intervals, the active and quiet periods should have occurred at roughly the same time in all of the pseudorecords. Instead, the researchers found that the intervals occurred at very different times in each record, leading them to conclude that the hurricane patterns of the past millennium seen in the sediment cores were more likely the result of randomness than of climate variations. This is not to say that hurricanes occur at random, the researchers note, but rather that climate does not clearly explain the pattern seen in any single sedimentary record.
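The logic of comparing many pseudorecords can be illustrated with a toy Monte Carlo experiment. The sketch below is a simplified illustration of the reasoning, not the authors' model: several sites share the same invented climate forcing, yet low event rates leave their smoothed records only weakly synchronized, which is why a single site cannot cleanly implicate climate.

    # Toy illustration of the pseudorecord argument: several sites share one
    # climate signal, but low event rates make individual records look
    # asynchronous. A simplified sketch, not the authors' model.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_sites = 1000, 6

    # A shared "climate" modulation of hurricane frequency (invented).
    climate = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_years) / 250.0)
    base_rate = 0.1   # assumed mean of 0.1 recorded storms per year per site

    # Poisson event counts at each site, all driven by the same climate signal.
    counts = rng.poisson(base_rate * climate, size=(n_sites, n_years))

    # Smooth each record with a 100-year running mean to define active versus
    # quiet intervals, roughly as one might for a sediment core.
    kernel = np.ones(100) / 100.0
    smoothed = np.array([np.convolve(c, kernel, mode="same") for c in counts])

    # Correlation between the smoothed site records: if climate dominated,
    # these would be high; sampling noise keeps them modest.
    corr = np.corrcoef(smoothed)
    off_diag = corr[np.triu_indices(n_sites, k=1)]
    print(f"mean inter-site correlation: {off_diag.mean():.2f}")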

The authors conclude that if randomness shapes individual paleohurricane records, then no record from a single location can implicate climate as the driver of storm patterns. The results highlight the need to compile broader data sets to determine the role of climate in long-term hurricane activity. (Geophysical Research Letters, https://doi.org/10.1029/2020GL091145, 2021)

—Aaron Sidder, Science Writer

This translation was made possible by a partnership with Planeteando. Esta traducción fue posible gracias a una asociación con Planeteando.

Relating Seismicity and Volcano Eruptions

Wed, 04/07/2021 - 11:30

Earthquakes are known to precede and accompany volcanic activity; they can be triggered by multiple mechanisms related to magmatic processes, such as changes in stress and pore pressure. Thus it might be possible to use seismicity to forecast impending volcanic eruptions and to detect ongoing eruptions at remote and submarine locations that may lack permanent close monitoring.

Pesicek et al. [2021] offer a global quantitative perspective on the problem. They examine 870 volcanoes with confirmed Holocene eruptions (those that happened within the past 11,700 years) in relation to earthquakes with magnitude 4 or above within 30 kilometers of the vent. The occurrence of such an earthquake does not by itself tell us much (only 1 percent of events are followed by an eruption, and only 11 percent of eruptions are preceded by such earthquakes), but additional properties of volcanoes and seismicity may have improved predictive power. In particular, more than half of the examined multiple-event swarms with an increased percentage of non-double-couple earthquake mechanisms preceded eruptions.

The results suggest that combining individual volcano properties with statistics from multiple earthquakes may inform and improve multidisciplinary approaches to probabilistic volcano forecasting and eruption detection.

Citation: Pesicek, J. D., Ogburn, S. E., & Prejean, S. G. [2021]. Indicators of volcanic eruptions revealed by global M4+ earthquakes. Journal of Geophysical Research: Solid Earth, 126, e2020JB021294. https://doi.org/10.1029/2020JB021294

―Ilya Zaliapin, Associate Editor, JGR: Solid Earth

Tropical Carbon and Water Observed from Above

Tue, 04/06/2021 - 12:42

Satellite observations show how tropical forest carbon fluxes respond to changes in water availability caused by climate variability. A recent article in Reviews of Geophysics focuses on satellite-derived information on terrestrial carbon and water storage and fluxes, with specific reference to tropical regions. Here, some of the authors answer our questions about what has been observed and what this tells us about the tropical carbon cycle and its interaction with climate variability.

How have human activities in the tropics altered the carbon balance, and what have been some of the major impacts on Earth’s climate?

Tropical forests contribute significantly more than other forests to the year-to-year variability of global terrestrial carbon balance. Human activities have both direct and indirect effects on the tropical forest carbon balance.

Direct effects include large-scale deforestation and fragmentation of tropical forests for clearing land for cultivation and grazing or extracting wood for timber and fire; these are largely propelled by economic drivers.

Indirect effects are mostly due to climate variations that introduce widespread anomalies of rainfall, resulting in severe and frequent droughts. These climate variations further impact the carbon cycling of tropical ecosystems, often by reducing forest productivity and carbon uptake and increasing tree mortality and carbon emissions.

Distribution of above ground live biomass carbon density and uncertainty. Credit: Worden et al. [2021], Figure 3

What insights do observations from satellites offer into the tropical carbon cycle?

Geographically, tropical forests cover a vast region of the globe and, unlike forests in the Northern Hemisphere, remain less intensively managed, with limited access from the ground and the air.

Observations from satellites since the early 1980s have provided information about changes in climate and forest cover, allowing scientists to understand and model changes in the tropical forest carbon cycle. Now, a new generation of satellites equipped with advanced technologies is providing measurements of forest structure, carbon stocks, productivity, and other carbon and water fluxes that have substantially changed our understanding of the role of tropical forests in local and global climate.

New and emerging patterns from these measurements suggest that climate is increasingly becoming as important as deforestation in affecting the capability of tropical forests to take up atmospheric carbon dioxide.

These forests show more vulnerability to water stress than was perceived in the past. This vulnerability is also not uniform across the tropics; it varies across continents and regions, suggesting other hidden factors that moderate how changes in water variability affect carbon cycling.

The top panel is total emissions from forest disturbance, combining the land use activities and fires derived from the Landsat time series. The bottom panel is the average difference between two periods. Credit: Worden et al. [2021], Figure 4

How are the carbon cycle and the water cycle connected?

The tropical carbon and water cycles are composed of many interlinked parts. Carbon is traded for water from the roots through plant stomata, and this transfer of carbon for water in turn depends on radiation, atmospheric humidity and CO2, soil moisture, and nutrients. Carbon is allocated to different parts of a tree (e.g., leaves, trunk, roots), while water is typically stratified in the soil depending on soil type, rooting depth, and amount of rainfall. The allocation of carbon to these different parts of a tree depends on the amount of water and where it is located.

The constellation of satellites now in orbit provides information on many parts of this interlinked system, including photosynthesis, atmospheric humidity, water variability in the soil column, rainfall, evapotranspiration, fires, and the total exchange of carbon dioxide between the surface and the atmosphere. Ecosystem models and measurements of forest structure and hydraulics are needed to piece together the puzzle of this interlinked problem.

From these measurements, we have learned that the carbon and water cycles in the forests of the Amazon are distinctly different from those in the forests of the Congo region and in Asia, although they follow the same basic physical and ecological processes. Satellite observations have played a critical role in quantifying these variations of carbon and water cycles across the tropics.

How might this information be used to predict future trends?

The future distribution of carbon (trees, plants, roots, litter, soil carbon) and water in the tropics depends on the sensitivity of the different carbon and water pools to environmental drivers such as temperature, rainfall, and CO2 fertilization, convolved with the changes in temperature, rainfall, and CO2 that might occur with climate change. Satellite data can help quantify this sensitivity so that, given possible changes in these environmental drivers, we have an estimate of the future state of tropical carbon and water.

What are some limitations of the current modeling system in understanding the carbon and water cycles?

Different models can have the same amount of photosynthesis and respiration but for very different reasons because they allocate carbon and water differently to different pools (e.g. roots, leaves, soil moisture) or because they have slightly different methods for modeling the transfer of carbon and water through these pools. This equifinality leads to different sensitivities of atmospheric CO2 to changes in water, temperature, and the CO2 fertilization effect. Most models do not have the flexibility to change the carbon and water pools and corresponding exchange processes when confronted with new data, especially data that have different information about carbon and water states such as photosynthesis, total water storage, fire emissions, above ground biomass, and net biosphere exchange.

This reduced flexibility is a result of the extensive number of model parameters, which need to be “tuned” to ensure that a model reproduces currently observed atmospheric and terrestrial states such as leaf area index. Consequently, while we can learn about possible combinations of processes controlling the carbon and water cycles from existing state-of-the-art models, it is challenging to learn about the most accurate combination from existing models. Ultimately, optimizing model parameters to ensure that models match the wide array of in situ and satellite measurements is key to solving the equifinality problem. For present-day models, large computational costs and complexity have made parameter “optimization” exceedingly difficult.

What could be done to improve future models examining the relationship between the carbon and water cycles?

Satellite and ground observations, augmented with data collected from aircraft, can now measure a wealth of different carbon and water state variables and fluxes, which can in turn be used to infer the processes and reservoirs that control the carbon and water cycles. However, models are needed that can integrate these new, extensive data sets not just to match the state variables but also to update the processes controlling them. These models need to be set up in such a way that carbon and water processes and reservoirs can be adjusted to fit observations in a statistically robust (Bayesian) manner. New computational methods—including parallel computing and state-of-the-art model-data fusion techniques—are increasingly being used, along with satellite measurements spanning multiple decades, to tackle the challenge of model parameter optimization.
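As a highly simplified sketch of what such model-data fusion can look like, the example below calibrates the single turnover parameter of a toy one-pool carbon model against noisy pseudo-observations with a Metropolis sampler. It is meant only to illustrate the Bayesian adjustment described above; the model, prior range, and "data" are invented and bear no relation to any operational system.

    # Minimal sketch of Bayesian model-data fusion: calibrate the turnover time
    # of a toy one-pool carbon model against noisy pseudo-observations using a
    # Metropolis sampler. Model, prior, and "data" are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def carbon_pool(turnover, n_steps=40, inputs=2.0, c0=50.0):
        """Toy one-pool model, annual steps: C[t] = C[t-1] + inputs - C[t-1]/turnover."""
        c = np.empty(n_steps)
        c[0] = c0
        for t in range(1, n_steps):
            c[t] = c[t - 1] + inputs - c[t - 1] / turnover
        return c

    # Synthetic "satellite" observations from a true turnover of 25 years.
    obs = carbon_pool(25.0) + rng.normal(0, 2.0, 40)
    obs_sigma = 2.0

    def log_posterior(turnover):
        if not 5.0 < turnover < 100.0:           # flat prior on a plausible range
            return -np.inf
        resid = obs - carbon_pool(turnover)
        return -0.5 * np.sum((resid / obs_sigma) ** 2)

    # Metropolis random walk over the single parameter.
    samples, current = [], 40.0
    current_lp = log_posterior(current)
    for _ in range(5000):
        proposal = current + rng.normal(0, 2.0)
        lp = log_posterior(proposal)
        if np.log(rng.uniform()) < lp - current_lp:
            current, current_lp = proposal, lp
        samples.append(current)

    posterior = np.array(samples[1000:])         # discard burn-in
    print(f"posterior turnover: {posterior.mean():.1f} +/- {posterior.std():.1f} years")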

–John Worden (john.r.worden@jpl.nasa.gov, 0000-0003-0257-9549), Sassan Saatchi, and Anthony Bloom, Jet Propulsion Laboratory / California Institute of Technology, USA

Rare Wintertime Thunderstorms Recorded over the U.S. Gulf Coast

Tue, 04/06/2021 - 12:42

As fierce winter storms pummeled much of North America in February, lightning danced over the Gulf Coast. “Thundersnow”—thunderstorm activity during a winter snowstorm—is rare, and researchers are now poring over data from the Houston Lightning Mapping Array network to better understand this elusive phenomenon.

Most thunderstorms tend to occur in spring and summer, and atmospheric science provides an explanation: Warmer conditions are conducive to lifting parcels of air, which transport water vapor upward. This convection is critical to the formation of thunderclouds, said Tim Logan, an atmospheric scientist at Texas A&M University in College Station. “Storms need energy to develop.”

A Boost from the Cold

Because temperatures are lower in winter, there’s less convection. That makes for far fewer wintertime thunderstorms. But they’re possible if something physically forces air upward, said Logan. Advancing cold fronts can provide that boost because they tend to shove air out of the way—and upward—via displacement, he said. “Winter season thunderstorms need dynamical lifting.”

When a winter storm spawns a thunderstorm, the result is known as “thundersnow” or “thundersleet,” depending on the type of precipitation it accompanies. Wintertime thunderstorms are elusive, said Christopher Schultz, an atmospheric scientist at Marshall Space Flight Center in Huntsville, Ala., not involved in the new research. A “very conservative” guess is that they’re about a thousand times less common than their warm-weather counterparts, he said. “It’s a rare phenomenon.”

But earlier this year, Logan and his colleagues had the opportunity to study thundersnow occurring nearly in their own backyards.

Thundersnow in the Lone Star State

Starting just before Valentine’s Day, winter storms swept over a wide swath of North America. They dumped record-setting amounts of snow and ice, sent temperatures plummeting to unprecedented lows, and left hundreds of thousands of people without power. The Houston area was hit on 14 and 15 February. Logan, who was working from home in College Station, monitored reports of thundersnow in the area. “There was lightning observed within 5 miles of my house,” he said.

Logan and his colleagues are keen to understand how wintertime thunderstorms differ from the storms more commonly observed in the spring and summer. To do so, they’ve been analyzing data from the Houston Lightning Mapping Array.

The network, directed by Logan, consists of 12 solar-powered sensors spread around Houston. Antennas detect radio frequency emissions from lightning, and the measurements are then fed into software that pinpoints the altitude, latitude, and longitude of the lightning. “It gives you a three-dimensional view of where the lightning initiates and how it moves through the atmosphere,” said Logan.
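Lightning mapping arrays of this kind locate sources by time-of-arrival multilateration: each station records when the radio pulse arrives, and the source position and emission time that best match those arrivals are found by least squares. The sketch below is a generic illustration of that principle with invented station coordinates and noise levels; it is not the Houston network's actual processing software.

    # Generic time-of-arrival multilateration, the principle behind lightning
    # mapping arrays: solve for the source (x, y, z) and emission time t0 that
    # best explain the arrival times at several stations. The station layout,
    # noise level, and "true" source are invented for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    C = 2.998e8  # speed of light, m/s

    # Invented station positions (x, y, z) in meters across a ~50 km network.
    stations = np.array([
        [0, 0, 10], [40000, 5000, 20], [20000, 35000, 15],
        [-15000, 25000, 30], [10000, -20000, 5], [-25000, -10000, 25],
    ], dtype=float)

    true_source = np.array([12000.0, 8000.0, 9000.0])   # source near 9 km altitude
    arrivals = np.linalg.norm(stations - true_source, axis=1) / C
    arrivals += np.random.default_rng(1).normal(0, 5e-8, len(stations))  # ~50 ns timing noise

    def residuals(params):
        x, y, z, t0 = params
        predicted = t0 + np.linalg.norm(stations - np.array([x, y, z]), axis=1) / C
        return (predicted - arrivals) * C   # scale to meters for better conditioning

    fit = least_squares(residuals, x0=[0.0, 0.0, 5000.0, 0.0])
    print("retrieved source (m):", np.round(fit.x[:3], 1))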

No Lower than Usual

Logan and his collaborators focused on 835 flashes of lightning detected during the February storms by the Houston Lightning Mapping Array. The researchers found that the flashes originated at an altitude of roughly 9 kilometers. That’s surprisingly high, said Logan. Ice, a critical ingredient of thunderstorms, would have been forming at lower than normal altitudes during February’s storm, so it’d make sense if lightning were also occurring at lower altitudes. “It was actually at what’s considered a normal height,” said Logan.

The team also investigated the thunderstorms’ electrical nature using data from both the Houston Lightning Mapping Array network and the National Lightning Detection Network. Lightning can be classified as negative or positive: Negative lightning, by far the most common, transfers a net negative charge. Positive lightning does the opposite.

More Positive in the Winter

Logan and his colleagues found that roughly 30% of the lightning they analyzed was positive. That’s significantly higher than the normal fraction of about 10%. However, that result isn’t wholly surprising, Logan and his collaborators suggested. Wintertime thunderclouds often contain more ice crystals than usual, and those particles tend to take on a positive charge.

But there are downsides to positive lightning. It’s more likely to be associated with severe weather like hail and tornadoes, and it also often delivers a stronger punch, said Schultz. “Positive flashes are generally more powerful.”

The Houston Lightning Mapping Array—and other lightning detection networks—will continue to stand sentry for thundersnow. It’s a fascinating phenomenon, said Logan, but it’s unlikely to be spotted again over the Houston area this century. “To see something like this here over the Gulf Coast is a treat.”

—Katherine Kornei (@KatherineKornei), Science Writer

Chasing Carbon Unicorns

Mon, 04/05/2021 - 13:06

In the past few months, many governments have announced net zero carbon emission targets. These targets update the nationally determined contributions (NDCs) at the heart of the Paris Agreement. Many private corporations, including BP and Shell, have also set net zero targets.

Net zero describes the goal of removing as much carbon dioxide from the atmosphere as is emitted. Net zero buildings, sometimes called zero-energy buildings, achieve this goal on a small scale. Net zero homes, offices, and even factories often use technologies such as heat pumps, high-efficiency windows and insulation, green roofs, and solar panels. Some net zero building concepts are integrated in the Leadership in Energy and Environmental Design (LEED) standards, an international certification program.

The net zero targets outlined by NDCs and corporations are much more ambitious in scale and include nature-based solutions like planting more trees to sequester carbon, developing carbon capture and storage technologies, and investing in carbon offsets (largely defined as a reduction in carbon emissions made by one party to compensate for emissions made by another).

But net zero targets described by NDCs and businesses are “deceptions” and “distractions,” according to a new report by Friends of the Earth International (FoEI). The new report does not evaluate the efficacy of net zero buildings or LEED certification.

“Net zero is a trick because the assumption is that you can emit carbon so long as you have some solution to sequester the carbon,” said Meena Raman, legal adviser and senior researcher at the Third World Network (TWN).

“Corporations, especially those in the Global North that are already making billions off the climate crisis, get to take cover under ‘net zero’ to continue polluting,” added Jaron Browne, organizing director at the Grassroots Global Justice Alliance (GGJ).

The FoEI report was prepared with support from TWN and GGJ.

A recent, unrelated commentary published in Nature supports the same conclusions: “Sometimes the [net zero] targets do not aim to reduce emissions, but compensate for them with offsets.”

The FoEI report notes misrepresentations of science and technology, as well as the prominent presence of politics in determining net zero targets.

Science: The Carbon Cycle

A foundational fallacy in net zero targets, the FoEI report claims, rests in a misrepresentation of the carbon cycle.

The carbon cycle can be divided into two parts based on timescale. One is the biogenic cycle, in which carbon circulates between the atmosphere, land, and oceans. The other is the slower, nonbiogenic cycle in which carbon circulates between fossil fuels stored underground and the atmosphere. The biogenic cycle can occur within hours, days, and years. The nonbiogenic cycle takes hundreds of thousands, even millions, of years.

Net zero targets conflate the two cycles, the FoEI report claims. Targets assume all the carbon that’s already circulating in the atmosphere as well as all the carbon that will be emitted by fossil fuels can be safely and effectively sequestered. In other words, carbon dioxide (CO2) emitted from fossil fuel use is in addition to “the carbon that is already cycling between the active pools. We are putting significant stress on all these pools by pushing them to take up additional fossil CO2.…We cannot just stuff the geosphere (i.e., CO2 from the burning of fossil fuels) into the biosphere,” the report says.

Technology: Carbon Sequestration

Net zero targets rest on carbon capture and storage technologies. These technologies include direct air capture, bioenergy capture, mineralization, and enhanced weathering.

The FoEI report raises fundamental questions about whether such technologies can actually be developed at the required scale, identifying them as “carbon unicorns, fanciful imaginings of how we might solve the climate crisis without needing to eliminate the burning of fossil fuels” and warning that there are “no saviour ecosystems around the planet, nor fairy godmother technologies, that will suck up continued fossil fuel emissions.”

The new report is not the first to identify such uncertainties in determining emission targets. “It is irresponsible to base net zero targets on the assumption that uncertain future technologies will compensate for present day emissions,” read a December 2020 report by a group of 41 scientists.

“Technologies [for negative emissions] exist but not at the required scale,” said Joeri Rogelj, lead author of the Nature commentary and director of the Grantham Institute for Climate Change at Imperial College London. “Technology that captures CO2 and then stores it…needs to capture CO2, clean it so that it’s pure, compress it, and then pump it through pipelines and into the ground. It is cheaper to just put CO2 into the atmosphere. The technology doesn’t exist at scale because putting CO2 into the atmosphere is not penalized.”

Politics: Where Is the Land?

Carbon sequestration technologies require considerable infrastructure and real estate. According to the FoEI report, ecosystem restoration needs to be carried out over 678 million hectares—about twice the land area of India—to achieve net zero emissions by 2030.

“Even if you assume that carbon can be sequestered—and this is a fallacy—where is the land?” Raman asked.

Critics fear the burden will fall on the Global South. Wealthy nations and multinational corporations have created a carbon market in which they can continue to develop emissions-intensive infrastructure. Carbon offsets aim to invest in ecosystem restoration in regions that are already low emitters.

“Carbon offset schemes are being talked about for lands where Indigenous communities and forest-dwelling communities live,” Raman explained. “What does this mean for these peoples’ land rights?”

“There are fears that Indigenous lands will be taken over by governments or private corporations, and strong assertions are being made for [Indigenous] people to determine what will happen to their land,” said Victoria Tauli-Corpuz, a development consultant and former U.N. special rapporteur on the rights of Indigenous Peoples.

Even residents of developed nations are not immune from the effects of the carbon market. “I live very close to the Richmond oil refinery in Northern California,” said Browne, “where Chevron has had devastating impacts on Laotian communities and working-class Black and Brown communities, [such as] high rates of asthma. The refinery has exploded many times. You can also look at communities living in the Bakken region in North Dakota where air and water have been impacted by fracking.”

“So the companies that are polluting our air, land, and water get to continue polluting under the cover of the [carbon] offsets they’re creating through the dispossession and displacement of Indigenous and forest-dependent communities,” Browne said.

Real Zero

Although critical of net zero targets, the FoEI report supports “real zero” targets that entail fossil fuel phase outs and investments in ecosystems and people who are dependent on such ecosystems.

“Local communities have to be allowed to participate [in ecosystem restoration efforts]. This participation can range from having a say in how their lands are going to be used to sharing benefits and incentives in terms of policies and resources to continue using their own systems of protecting their land, forests, or even their marine resources,” Tauli-Corpuz said.

Real zero targets align with the Just Transition movement among climate activists, a “unifying and place-based set of principles, processes, and practices that build economic and political power to shift from an extractive economy to a regenerative economy.”

“Rich countries should have already gotten to real zero,” Raman said. “This is what Just Transition means—that they [the Global North] phase out fossil fuels and finance technologies that can assist developing countries in going low carbon.”

—Rishika Pardikar (@rishpardikar), Science Writer

A Vital Resource Supporting Antarctic Research

Mon, 04/05/2021 - 13:06

Antarctica comprises numerous unique environments, from the high Antarctic plateau to the deep subglacial bed, iceberg-congested coastal waters, and sparse rock outcrops and soils. Overall, the region is a key barometer of global environmental change: Interactions among the ice, ocean, atmosphere, biosphere, and lithosphere have implications for global sea level, ocean-atmosphere circulation, and biodiversity. Research in Antarctica spans efforts to study everything from particle physics at the South Pole to the extremes of life in the frozen ground, and this work is crucial for building scientific knowledge at scales from the cellular to the universal.

Understanding the processes that govern Antarctica’s interacting systems requires that data characterizing often fast-changing environmental conditions be placed into context with longer-term records and that point data from the field be integrated with larger-scale airborne and satellite observations. Given the unique and challenging conditions of the polar regions, physical, chemical, and biological data collected there, including temporal snapshots of environmental states that cannot be reproduced, are typically acquired with substantial logistical effort and financial expense. Preservation of these data is thus a critical need for the present and the future.

A Disjointed Approach, Historically

In the spirit of supporting collective stewardship of Antarctica, signatory countries of the Antarctic Treaty in 1959 agreed to make scientific observations from Antarctica open and freely available to everyone around the world. In 1998, the Scientific Committee on Antarctic Research (SCAR), which represents the international research community, adopted NASA’s Global Change Master Directory (GCMD) to serve as a central catalog of information about Antarctic research data sets. Each country is responsible for hosting its own data resources, but all countries contribute basic metadata describing their collected data to the Antarctic Master Directory (AMD) portals of the GCMD [SCAR, 2011].

In most signatory countries, researchers concentrated at dedicated national research centers such as the British Antarctic Survey, Germany’s Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, the Korea Polar Research Institute, and others conduct Antarctic research. These centers also provide national-scale data management, including registration within the AMD. In the United States, however, Antarctic research is conducted by researchers at universities and national laboratories across the country. This work is coordinated through the U.S. Antarctic Program (USAP) with funding from multiple agencies, including the National Science Foundation (NSF).

A scientist monitors data acquisition aboard the icebreaker R/V NB Palmer in the Southern Ocean near Antarctica during a research cruise (NBP2002) in 2020. Credit: Frank Nitsche

Historically, individual scientists in the United States were responsible for ensuring that their data were publicly accessible and registered within the AMD. Some Antarctic scientists can archive their data in disciplinary data repositories that serve the broader science community, such as the Incorporated Research Institutions for Seismology (IRIS) for seismology data or GenBank for DNA sequencing, but few appropriate disciplinary repositories exist. This lack of repositories often resulted in data being publicly hosted only on individual scientists’ websites or not at all.

Further, AMD registrations from the U.S. academic community were highly heterogeneous. Often, these registrations were completed as part of final funding reports, before publications and data archiving were complete, and scientists lacked incentives to go back and update records at a later date. These limitations led to significant gaps in the preservation and archiving of Antarctic research data sets produced by the U.S. academic community and to incomplete cataloging of these data sets. The resulting information gaps made it difficult for Antarctic researchers to reliably search for, find, and use these data.

Managing Antarctic Research Data

Antarctic research spanning a wide range of disciplines is supported by a variety of field observations, as well as sample collection, laboratory measurements, remotely sensed observations, and model experiments. Most of the resulting data are researcher- or project-based data and are diverse and unique data products. This diversity contrasts with the large-volume, more standardized data collections that form key observational infrastructure for some disciplinary communities, such as seismometer data managed by IRIS. Standards for heterogeneous researcher-based data are minimal, and managing these kinds of data is challenging.

Current best practices for research data stewardship center around a life cycle approach lasting from experiment design through data acquisition, processing, archiving, and publication and extending to data reuse and archiving of derivative products. Considering this data life cycle perspective helps ensure that provenance and integrity of data are maintained, which is essential for supporting the reproducibility and reuse of published research.

Information about the context of a project for which data are acquired is relevant for many aspects of the data life cycle, especially for researcher-based data products, but this information is often missing in data archives. For example, the underlying science goals of a project motivate the types of data that are acquired and how they are processed. Preserving information about the original goals and motivations informs whether data may be suitable for applications other than those for which they were originally intended.

Many Antarctic science projects are conducted as multidisciplinary investigations. Researchers work within shared field camps or on shared research cruises to best leverage the logistically complex planning and high costs of working in Antarctica. The project context of these multidisciplinary efforts links the resulting complementary data sets, describing connections between data types collected and informing understanding of the temporal aspects of the data (e.g., what investigator X measured during snowmobile transect Y) that are relevant for their future reuse.

A Central Home for U.S. Antarctic Research Data

Since its beginnings as a data coordination center in 2007, the USAP Data Center (USAP-DC) has evolved to provide a comprehensive suite of data management services for the NSF-supported U.S. Antarctic research community. It is the only repository supporting the full spectrum of research conducted by NSF’s U.S. Antarctic Program, and it is designed specifically to host researcher-based data products of all sizes and disciplines, as well as to preserve links to other NSF-supported, disciplinary-focused repositories. Hosted data sets at USAP-DC include data from Antarctic studies spanning biological, atmospheric, space, ocean, and solid Earth science research, as well as the collection of glaciology data assembled by the Antarctic Glaciological Data Center, which operated from 1999 to 2016.

USAP-DC assists investigators in life cycle data management (Figure 1) through services that support the following:

- data management planning during NSF proposal creation
- data set submission tools used to gather standardized metadata and detailed information about data acquisition and processing methods
- long-term preservation and data publication with the assignment of digital object identifiers (DOIs) through the DataCite system
- project registration within the AMD and tools to help update information about data sets, field programs, publications, and more

Fig. 1. The USAP-DC takes a life cycle approach to research data management (the top row of blue boxes shows life cycle steps), supported by a variety of data services (below). DMP stands for data management plan.

The data center offers Web interfaces to browse, search, and retrieve data, as well as application programming interfaces that enable others to design their own tools to access data of interest. USAP-DC data sets can also be discovered through external registries, including the AMD and DataOne, and through Google data set searches facilitated by Web-accessible metadata and Schema.org protocols.

In response to Antarctic science community and funding agency management needs, USAP-DC recently designed and launched a new project catalog. This registry of USAP projects and research products is designed to further support life cycle data management. Scientists are encouraged to register their projects when their award is initiated, and project pages are updated throughout their duration as data are archived and as publications become available. Data sets may be submitted to the USAP-DC or to external disciplinary repositories; in the latter case, links to externally hosted data sets are provided on the USAP-DC project pages.

Information on data sets and publications associated with research projects is also harvested through automated tools and Web services whenever possible, for example, using the Crossref system. This automation minimizes burdens on investigators by enabling USAP-DC to identify relevant publications long after a funding award ends without scientists needing to provide publication information themselves.
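As an illustration of this kind of automated harvesting, the snippet below queries the public Crossref REST API for works that Crossref associates with a given funder. The works endpoint and the funder filter are part of Crossref's documented API, but the specific funder identifier used (NSF) and the suggestion that this resembles USAP-DC's internal tooling are assumptions made for the example.

    # Hedged sketch: harvesting publication metadata from the public Crossref
    # REST API, the kind of web service mentioned above. The funder identifier
    # (NSF, 10.13039/100000001) and the idea that this mirrors USAP-DC's own
    # tooling are assumptions for illustration.
    import requests

    def crossref_works_by_funder(funder_doi, rows=5):
        """Return a few works that Crossref links to the given funder DOI."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"filter": f"funder:{funder_doi}", "rows": rows},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["message"]["items"]

    for work in crossref_works_by_funder("10.13039/100000001"):
        title = (work.get("title") or ["(no title)"])[0]
        print(work.get("DOI"), "-", title)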

The consolidation of information about USAP research products over the life cycle of a project will make it simpler for individual researchers, especially in the context of large collaborative research groups, to keep track of data sets and related information produced during their projects.

All projects are registered by USAP-DC within the AMD by the end of the project funding periods so that USAP project information is fully integrated with this registry of other Antarctic data from the broader international community (Figure 2). Although data submission and project registration within the USAP-DC are designed for the NSF-funded U.S. academic community, access to the project catalog and data is open to the entire Antarctic research community.

Fig. 2. Data management for the USAP-DC feeds into the Antarctic Master Directory, which links to data resources from the broader international community.

A Growing Resource

In recent years, the data stewardship community—and, increasingly, journal publishers like AGU—has embraced findable, accessible, interoperable, and reusable (FAIR) data principles [Wilkinson et al., 2016]. These principles are intended as guidelines for data repositories to enhance reusability of their data collections, for example, by designing new infrastructure to optimize data usage, thereby facilitating new scientific insights from existing data.

New approaches for supporting reusability of researcher-based data collections like the USAP-DC Antarctic collection are particularly important, given the great diversity of data types, documentation, and file formats and the uniqueness of much of the content. USAP-DC is further enhancing reusability by ensuring that rich descriptive information about project contexts is also preserved. Another focus is to automate data documentation whenever possible to ease burdens of data submission on scientists and to grow the collection, which increases its value for new science applications.

In the future, more automated data harmonization and synthesis using researcher-based data collections will be possible with emerging approaches for interoperability of nonstandardized data and for automated generation of metadata from text documentation [e.g., Wilkinson et al., 2017]. With ongoing contributions from the Antarctic scientific community, the growing resource of multidisciplinary research data hosted at USAP-DC will be available for these new applications and to support new areas of discovery about this critical region in our rapidly changing world.

Acknowledgments

Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of NSF.

Probing the Age of the Oldest Ocean Crust in the Pacific

Mon, 04/05/2021 - 11:30

The geomagnetic polarity time scale (GPTS) is based on marine magnetic anomalies, the striping pattern of strong and weak magnetic signals recorded by the ocean crust. Strong signals correspond to normal polarity and weak signals to reverse polarity. Adjacent normal and reverse stripes are numbered backward in time from C1 (at the top, i.e., today) to C34, whose normal portion corresponds to a long period of normal polarity in the Mid Cretaceous. Older magnetic stripes are referred to as the M-sequence (Mesozoic sequence), starting with M0 just below the very long C34 anomaly.

The current GPTS extends from today backward in time down to magnetic anomaly M29 (approximately 157 million years ago). Older oceanic crust is rare and typically has subdued magnetic anomaly patterns that are difficult to correlate. Thus, pre-M29 time scales remain controversial because the marine magnetic anomaly data is of rather poor quality. This complicates analysis of important features of the geomagnetic field: the reversal frequency and the expression of the Mesozoic dipole low (also termed Jurassic Quiet Zone).

Tominaga et al. [2021] extend the geomagnetic polarity time scale down to the Mid Jurassic (M44, about 170 million years ago) based on a composite of the Japanese lineation set they published previously, and a new, highly detailed, multiscale magnetic anomaly profile of the Hawaiian lineation set, both from the western Pacific Ocean.

The weak anomaly portion M41-M39 is best expressed in the Japanese profile and is argued to represent the onset and maximum expression of the Mesozoic dipole low, or the core of the Jurassic Quiet Zone, an important long-lasting low-intensity feature of the geomagnetic field. From the midwater reference profile, the Jurassic reversal frequency in the M29-M44 time span (157-170 million years ago) appears to have been about 19 reversals per million years, an extraordinarily high rate that is double previous estimates for that period.

Citation: Tominaga, M., Tivey, M. A., & Sager, W. W. [2021]. A new middle to Late Jurassic Geomagnetic Polarity Time Scale (GPTS) from a multiscale marine magnetic anomaly survey of the Pacific Jurassic Quiet Zone. Journal of Geophysical Research: Solid Earth, 126, e2020JB021136. https://doi.org/10.1029/2020JB021136

―Mark Dekkers, Associate Editor, JGR: Solid Earth
