EOS

Science News by AGU

Meet the Mysterious Electrides

Thu, 02/05/2026 - 14:22

This story was originally published by Knowable Magazine.

For close to a century, geoscientists have pondered a mystery: Where did Earth’s lighter elements go? Compared to amounts in the Sun and in some meteorites, Earth has less hydrogen, carbon, nitrogen and sulfur, as well as noble gases like helium—in some cases, more than 99 percent less.

Some of the disparity is explained by losses to the solar system as our planet formed. But researchers have long suspected that something else was going on too.

Recently, a team of scientists reported a possible explanation—that the elements are hiding deep in the solid inner core of Earth. At its super-high pressure—360 gigapascals, 3.6 million times atmospheric pressure—the iron there behaves strangely, becoming an electride: a little-known form of the metal that can suck up lighter elements.


Study coauthor Duck Young Kim, a solid-state physicist at the Center for High Pressure Science & Technology Advanced Research in Shanghai, says the absorption of these light elements may have happened gradually over a couple of billion years—and may still be going on today. It would explain why the movement of seismic waves traveling through Earth suggests an inner core density that is 5 percent to 8 percent lower than expected were it metal alone.

Electrides, in more ways than one, are having their moment. Not only might they help solve a planetary mystery, they can now be made at room temperature and pressure from an array of elements. And since all electrides contain a source of reactive electrons that are easily donated to other molecules, they make ideal catalysts and other sorts of agents that help to propel challenging reactions.

One electride is already in use to catalyze the production of ammonia, a key component of fertilizer; its Japanese developers claim the process uses 20 percent less energy than traditional ammonia manufacture. Chemists, meanwhile, are discovering new electrides that could lead to cheaper and greener methods of producing pharmaceuticals.

Today’s challenge is to find more of these intriguing materials and to understand the chemical rules that govern when they form.

Electrides at High Pressure

Most solids are made from ordered lattices of atoms, but electrides are different. Their lattices have little pockets where electrons sit on their own.

In normal metals, the outer, or valence, electrons are not bound to any single atom: They are free to move between atoms, forming what is often referred to as a delocalized “sea of electrons.” This is why metals conduct electricity.

The outer electrons of electrides no longer orbit a particular atom either, but neither can they move freely. Instead, they become trapped at sites between atoms called non-nuclear attractors. This gives the materials unique properties. In the case of the iron in Earth’s core, non-nuclear attractors formed at those super-high pressures, 3,000 times the pressure at the bottom of the deepest ocean, and their negative electron charges stabilize lighter elements. The elements would diffuse into the metal, explaining where they disappeared to.
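The pressure comparisons quoted above are simple unit conversions; a quick check, using approximate reference values rather than figures from the study:

```python
# Back-of-the-envelope check of the article's pressure comparisons.
# All values are approximate reference figures, not from the study itself.

STANDARD_ATMOSPHERE_PA = 101_325   # 1 atm in pascals
INNER_CORE_PA = 360e9              # 360 GPa, as quoted in the article
CHALLENGER_DEEP_PA = 1.1e8         # ~110 MPa at the deepest ocean floor

atm_ratio = INNER_CORE_PA / STANDARD_ATMOSPHERE_PA
ocean_ratio = INNER_CORE_PA / CHALLENGER_DEEP_PA

print(f"vs. atmosphere:   {atm_ratio:.2e}")   # roughly 3.6 million times
print(f"vs. deepest ocean: {ocean_ratio:.0f}")  # roughly 3,000 times
```

Both ratios land where the article says: about 3.6 million atmospheres, and about 3,000 times the pressure at the bottom of the Challenger Deep.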

In an experiment, scientists simulated the movement of hydrogen atoms (pink) into the lattice structure of iron at a temperature of 3,000 kelvins (2,727°C), at pressures of 100 gigapascals (GPa) and 300 GPa. At the higher pressure (right) an electride forms, as indicated by the altered distribution of the hydrogen observed within the iron lattice — these would represent the negatively charged non-nuclear attractor sites to which hydrogen atoms bond, forming hydride ions. Duck Young Kim and his coauthors think that the altered hydrogen distribution at higher pressure in these simulations is good evidence that an electride with non-nuclear attractor sites forms within the iron of Earth’s core. Credit: Knowable Magazine, adapted from I. Park et al./Advanced Science 2024

The first metal found to form an electride at high pressure was sodium, reported in 2009. At a pressure of 200 gigapascals (2 million times greater than atmospheric pressure) it transforms from a shiny, reflective, conducting metal into a transparent glassy, insulating material. This finding was “very weird,” says Stefano Racioppi, a computational and theoretical chemist at the University of Cambridge in the United Kingdom, who worked on sodium electrides while in the lab of Eva Zurek at the University at Buffalo in New York state. Early theories, he says, had predicted that at high pressure, sodium’s outer electrons would move even more freely between atoms.

The first sign that things were different came from predictions in the late 1990s, when scientists were using computational simulations to model solids based on the rules of quantum theory. These rules define the energy levels that electrons can have, and hence the probable range of positions in which they are found in atoms (their atomic orbitals).

Simulating solid sodium showed that at high pressures, as the sodium atoms get squeezed closer together, so do the electrons orbiting each atom. That causes them to experience increasing repulsive forces with one another. This changes the relative energies of every electron orbiting the nucleus of each atom, Racioppi explains—leading to a reorganization of electron positions.

This graphic shows alternative models for metal structures. At left is the structure at ambient conditions, with each blue circle representing a single atom in the metallic lattice consisting of a positively charged nucleus surrounded by its electrons. The electrons can move freely throughout the lattice in what is known as a “sea of electrons.” Earlier theories of metals at high pressures assumed a similar structure, with even greater metallic characteristics (top, right), but more recent modeling shows that in some metals like sodium, at high pressure the structure changes (bottom, right) to a system in which the electrons are localized (dark blue boxes) between the ionic cores (small light blue circles)—an electride. This gives the structure very different properties. Credit: Knowable Magazine, adapted from S. Racioppi and E. Zurek/Annual Review of Materials Research 2025

The result? Rather than occupying orbitals that let them delocalize and move between atoms, the electrons are forced by the orbitals’ new shape into the non-nuclear attractor sites. Stuck at these sites, they can no longer carry current, and the solid loses its metallic properties.

Adding to this theoretical work, Racioppi and Zurek collaborated with researchers at the University of Edinburgh to find experimental evidence for a sodium electride at extreme pressures. Squeezing crystals of sodium between two diamonds, they used X-ray diffraction to map electron density in the metal structure. This, they reported in September 2025, confirmed that electrons really were located in the predicted non-nuclear attractor sites between sodium atoms.

Just the Thing for Catalysts

Electrides are ideal candidates for catalysts—substances that can speed up and lower the energy needed for chemical reactions. That’s because the isolated electrons at the non-nuclear attractor sites can be donated to make and break bonds. But to be useful, they would need to function at ambient conditions.

Several such stable electrides have been discovered over the last 10 years, made from inorganic compounds or organic molecules containing metal atoms. One of the most significant, mayenite, was discovered by accident in 2003 when materials scientist Hideo Hosono at the Institute of Science Tokyo was investigating a type of cement.

Mayenite is a calcium aluminate oxide that forms crystals with very small pores—a few nanometers across—called cages, which contain oxygen ions. If a metal vapor of calcium or titanium is passed over it at high temperature, it removes the oxygen, leaving behind just electrons trapped at these sites—an electride.

Unlike the high-pressure metal electrides that switch from conductors to insulators, mayenite starts as an insulator. But now its trapped electrons can jump between cage sites (via a process called quantum tunneling)—making it a conductor, albeit 100 to 1,000 times less conductive than a metal like aluminum or silver. It also becomes an excellent catalyst, able to surrender electrons to help make and break bonds in reactions.

By 2011, Hosono had begun to develop mayenite as a greener and more efficient catalyst for synthesizing ammonia. Over 170 million metric tons of ammonia, mostly for fertilizers, are produced annually via the Haber-Bosch process, in which metal oxide catalysts facilitate the reaction of hydrogen and nitrogen gases at high pressure and temperature. It is an energy-intensive, expensive process—Haber-Bosch plants account for some 2 percent of the world’s energy use.


In Haber-Bosch, the catalysts bind the two gases to their surfaces and donate electrons to help break the strong triple bond that holds the two nitrogen atoms together in nitrogen gas, as well as the bonds in hydrogen gas. Because mayenite has a strong electron-donating nature, Hosono thought it would be able to do the job better.

In Hosono’s reaction, mayenite itself does not bind the gases but acts as a support bed for nanoparticles of a metal called ruthenium. First, the nanoparticles absorb the nitrogen and hydrogen gases. Then the mayenite donates electrons to the ruthenium. These electrons flow into the nitrogen and hydrogen molecules, making it easier to break their bonds. Ammonia thus forms at a lower temperature—300 to 400° C—and lower pressure—50 to 80 atmospheres—than with Haber-Bosch, which takes place at 400 to 500° C and 100 to 400 atmospheres.

In 2017, the company Tsubame BHB was formed to commercialize Hosono’s catalyst, with the first pilot plant opening in 2019, producing 20 metric tons of ammonia per year. The company has since opened a larger facility in Japan and is setting up a 20,000-ton-per-year green ammonia plant in Brazil to replace some of the nation’s fossil-fuel-based fertilizer production. The company estimates that this will avoid 11,000 tons of CO2 emissions annually—about equal to the annual emissions of 2,400 cars.

There are other applications for a mayenite catalyst, says Hosono, including a lower-energy conversion of CO2 into useful chemicals like methane, methanol or longer-chain hydrocarbons. Other scientists have suggested that mayenite’s cage structure also makes it suitable for immobilizing radioactive isotope waste in nuclear power stations: The electrons could capture negative ions like iodine and bromide and trap them in the cages.

Mayenite has even been studied as a low-temperature propulsion system for satellites in space. When it is heated to 600°C in a vacuum, its trapped electrons blast from the cages, providing thrust.

Organic Electrides

The list of materials known to form electrides keeps growing. In 2024, a team led by chemist Fabrizio Ortu at the University of Leicester in the UK accidentally discovered another room-temperature-stable electride made from calcium ions surrounded by large organic molecules, together known as a coordination complex.


He was using a method known as mechanochemistry: “You put something in a milling jar, you shake it really hard, and that provides the energy for the reaction,” he says. But to his surprise, electrons from the potassium he had added to his calcium complex were not donated to the calcium ion. Instead, what formed “had these electrons that were floating in the system,” he says, trapped in sites between the two metals.

Unlike mayenite, this electride is not a conductor—its trapped electrons do not jump. But they allow it to facilitate reactions that are otherwise hard to get started, by activating unreactive bonds, doing a job much like a catalyst. These are reactions that currently rely on expensive palladium catalysts.

The scientists successfully used the electride on a reaction that joins two pyridine rings—carbon rings containing a nitrogen atom. They are now examining whether the electride could assist in other common organic reactions, such as substituting a hydrogen atom on a benzene ring. These substitutions are difficult because the bond between a benzene-ring carbon and its attached hydrogen is very stable.

There are still problems to sort out: Ortu’s calcium electride is too air- and water-sensitive for use in industry. He is now looking for a more stable alternative, which could prove particularly useful in the pharmaceutical industry to synthesize drug molecules, where the sorts of reactions Ortu has demonstrated are common.

Still Questions at the Core

There remain many unresolved mysteries about electrides, including whether Earth’s inner core definitely contains one. Kim and his collaborators used simulations of the iron lattice to find evidence for non-nuclear attractor sites, but their interpretation of the results remains “a little bit controversial,” Racioppi says.

Sodium and other metals in Group 1 and Group 2 of the periodic table of elements—such as lithium, calcium and magnesium—have loosely bound outer electrons. This helps make it easy for electrons to shift to non-nuclear attractor sites, forming electrides. But iron exerts more pulling power on its outer electrons, which sit in differently shaped orbitals. This makes the increase in electron repulsion under pressure less significant and thus the shift to electride formation difficult, Racioppi says.

Electrides are still little known and little studied, says computational materials scientist Lee Burton of Tel Aviv University. There is still no theory or model to predict when a material will become one. “Because electrides are not typical chemically, you can’t bring your chemical intuition to it,” he says.


Burton has been searching for rules that might help with predictions and has had some success finding electrides from a screen of 40,000 known materials. He is now using artificial intelligence to find more. “It’s a complex interplay between different properties that sometimes can all depend on each other,” he says. “This is where machine learning can really help.”

The key is having reliable data to train any model. Burton’s team has actual data only from the handful of electride structures experimentally confirmed so far, but they are also using the kind of quantum-theory-based modeling carried out by Racioppi to create high-resolution simulations of electron density within materials. They are doing this for as many materials as they can; those that are confirmed by real-world experiments will be used to train an AI model to identify more materials that are likely to be electrides — ones with the discrete pockets of high electron density characteristic of trapped electron sites. “The potential,” says Burton, “is enormous.”

—Rachel Brazil (@rachelbrazil.bsky.social), Knowable Magazine

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter. Read the original article here.

Snowball Earth’s Liquid Seas Dipped Way Below Freezing

Wed, 02/04/2026 - 13:53

Earth froze over 717 million years ago. Ice crept down from the poles to the equator, and the dark subglacial seas suffocated without sunlight to power photosynthesis. Earth became an unrecognizable, alien world—a “snowball Earth,” where even the water was colder than freezing.

In Nature Communications, researchers reported the first measured sea temperature from a snowball Earth episode: −15°C ± 7°C. If this figure holds up, it will be the coldest measured sea temperature in Earth’s history.

For water to be that cold without freezing, it would have to be very salty. And indeed, the team’s analysis suggests that some pockets of seawater during the Sturtian snowball glaciation, which lasted 57 million years, could have been up to 4 times saltier than modern ocean water.
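The link between salinity and liquid water below 0°C can be roughed out with the common linear rule of thumb for freezing-point depression of seawater, about 0.054°C per gram of salt per kilogram of water. This is an illustrative approximation, not the study’s calculation, and it understates the depression for concentrated brines, where the relationship turns strongly nonlinear:

```python
# Rough, first-order estimate of seawater freezing point vs. salinity.
# The linear coefficient below is a common rule of thumb near modern
# salinity; it UNDERSTATES the depression for concentrated brines.

DEPRESSION_PER_G_PER_KG = 0.054  # °C per (g salt / kg water), approximate

def freezing_point_celsius(salinity_g_per_kg: float) -> float:
    """Linear estimate of the freezing point of salty water."""
    return -DEPRESSION_PER_G_PER_KG * salinity_g_per_kg

modern = freezing_point_celsius(35)      # modern seawater, ~35 g/kg
brine = freezing_point_celsius(4 * 35)   # 4x saltier, as the study suggests

print(f"modern seawater: {modern:.2f} °C")  # -1.89 °C
print(f"4x brine:        {brine:.2f} °C")   # -7.56 °C (linear estimate)
```

Even this crude estimate pushes the freezing point far below 0°C at quadrupled salinity; reaching something like −15°C without freezing requires the nonlinear behavior of concentrated brines that the simple rule leaves out.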

“We’re dealing with salty brines,” said Ross Mitchell, a geologist at the Institute of Geology and Geophysics of the Chinese Academy of Sciences. “That’s exactly what you see in Antarctica today,” he added, except that snowball Earth’s brines were a bit colder than even the −13°C salty slush of Antarctica’s ice-covered Lake Vida today.

Past Iron

The Sturtian snowball was a runaway climate catastrophe that occurred because ice reflects more sunlight than land or water. Ice reflected sunlight, which cooled the planet, which made more ice, which reflected more sunlight and so on, until the whole world ended up buried under glaciers that could have been up to a kilometer thick.

This unusual time left behind unusual rocks: Rusty red iron formations that accumulated where continental glaciers met the ice-covered seas. To take snowball Earth’s temperature, the team devised a new way to use that iron as a thermometer.

Scientists used information about the iron in formations like this one to estimate the temperature of Earth’s ocean 717 million years ago. Credit: James St. John/Flickr, CC BY 2.0

Iron formations accumulate in water that’s rich in dissolved iron. Oxygen transforms the easily dissolved, greenish “ferrous” form of iron into rusty red “ferric” iron that stays solid. That’s why almost all iron formations are ancient, relics of a time before Earth’s atmosphere started filling with oxygen about 2.4 billion years ago, or from the more recent snowball Earth, when the seas were sealed under ice. Unable to soak up oxygen from the air or from photosynthesis, snowball Earth’s dark, ice-covered seawater drained of oxygen.

Iron-56 is the most common iron isotope, but lighter iron-54 rusts more easily. So when iron rusts in the ocean, the remaining dissolved iron is enriched in the heavier isotope. Over many cycles of limited, partial rusting—like what happened on the anoxic Archean Earth—this enrichment grows, which is why ancient iron formations contain isotopically very heavy iron compared to iron minerals that formed after Earth’s atmosphere and oceans filled with oxygen.
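The cycle-by-cycle enrichment described above is a Rayleigh distillation process: As the light isotope is preferentially removed, the remaining dissolved pool grows heavier. A minimal sketch, using a hypothetical fractionation value rather than anything from the study:

```python
import math

# Rayleigh-style isotope enrichment: because light iron-54 precipitates
# preferentially, the iron still dissolved in seawater grows isotopically
# heavier as more of the pool rusts out.
# eps (per-mil fractionation between precipitate and dissolved iron) is a
# hypothetical illustrative value, not one from the study.

def delta_remaining(delta0: float, eps: float, f: float) -> float:
    """d56Fe (per mil) of the dissolved pool when a fraction f of the
    iron remains dissolved, following the Rayleigh equation:
        delta = delta0 + eps * ln(f)"""
    return delta0 + eps * math.log(f)

eps = -1.0  # per mil; negative because the precipitate is isotopically light
for f in (0.9, 0.5, 0.1):
    d = delta_remaining(0.0, eps, f)
    print(f"fraction remaining {f:.1f}: dissolved pool at {d:+.2f} per mil")
```

The dissolved pool's value climbs as the remaining fraction shrinks, which is why ancient, repeatedly rusted seawater leaves behind isotopically heavy iron formations.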

Snowball Earth’s iron is heavy, too, even more so than iron formations from the distant, preoxygen past. The researchers realized that temperature could be the explanation: Iron minerals that form in cold water end up isotopically heavier. We don’t know exactly how hot it was when the ancient iron formations accumulated, but it was likely warmer than during snowball Earth, when glaciers reached the equator. Using a previous estimate of 25°C for the temperature of Archean seawater, the team calculated that the waters that formed the snowball Earth iron formations would likely have been 40°C colder.
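The temperature dependence that makes this thermometer work can be sketched with the generic form of equilibrium isotope fractionation, which scales roughly as A/T² with T in kelvins. The constant A below is a hypothetical illustrative value, not the study’s calibration:

```python
# Generic temperature dependence of equilibrium isotope fractionation:
# the offset between two phases scales roughly as A / T^2 (T in kelvins),
# so colder water produces a larger fractionation.
# A is an illustrative constant, NOT the study's actual calibration.

A = 0.3e6  # per mil * K^2, hypothetical value for illustration

def fractionation_per_mil(temp_celsius: float) -> float:
    """Approximate equilibrium fractionation (per mil) at a temperature."""
    t_kelvin = temp_celsius + 273.15
    return A / t_kelvin**2

archean = fractionation_per_mil(25)     # warm Archean seawater baseline
snowball = fractionation_per_mil(-15)   # the study's snowball Earth estimate

print(f"at 25 °C:  {archean:.2f} per mil")
print(f"at -15 °C: {snowball:.2f} per mil")
```

Whatever the exact constant, the 1/T² form guarantees that minerals formed in −15°C water carry a measurably larger isotopic offset than those formed at 25°C, which is the signal the team inverted to get a temperature.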

“It’s a very interesting, novel way of getting something different out of iron isotope data,” said geochemist Andy Heard of the Woods Hole Oceanographic Institution, who was not involved in the study. “It’s a funny, backwards situation to be in where you’re using even older rocks as your baseline for understanding something that formed 700 million years ago.”

In part because of that backward situation, Heard thinks the study is best interpreted qualitatively as strong evidence that seawater was really cold, but maybe not that it was exactly −15°C.

The team also analyzed isotopes of strontium and barium to determine that snowball Earth’s seawater was up to 4 times saltier than the modern ocean. Jochen Brocks of the Australian National University, who wasn’t involved in the study, said the researchers’ results align with his own salinity analysis of snowball Earth sediments from Australia based on a different method. Those rocks formed in a brine that Brocks thinks was salty enough to reach −7°C before freezing. Another group reaching a similar conclusion using different methods makes that extreme scenario sound a lot more plausible, he said.

“It was very cool to get the additional confirmation it was actually very, very cold,” he said.

—Elise Cutts (@elisecutts.bsky.social), Science Writer

Citation: Cutts, E. (2026), Snowball Earth’s liquid seas dipped way below freezing, Eos, 107, https://doi.org/10.1029/2026EO260048. Published on 4 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Tsunamis from the Sky

Tue, 02/03/2026 - 14:26
Editors’ Vox is a blog from AGU’s Publications Department.

Meteorological tsunamis, or meteotsunamis, are long ocean waves in the tsunami frequency band that are generated by traveling air pressure and wind disturbances. These underrated phenomena pose serious threats to coastal communities, especially in the era of climate change.

A new article in Reviews of Geophysics explores all aspects of meteotsunamis, from available data and tools used in research to the impacts on coastal communities. Here, we asked the authors to give an overview of these phenomena, how scientists study them, and what questions remain.

In simple terms, what are meteorological tsunamis or “meteotsunamis”?


Meteotsunamis are tsunami-like waves that are not generated by earthquakes or landslides, but by atmospheric processes. Their formation requires a strong air pressure or wind disturbance—typically characterized by a pressure change of 1–3 hectopascals over about five minutes—that propagates at a “perfect” speed, allowing long ocean waves to grow. In addition, coastal bathymetry must be sufficiently complex to amplify the incoming waves.

Meteotsunamis are less well known and, fortunately, are generally less destructive than seismic tsunamis. Nonetheless, they can reach wave heights of up to 10 meters and can be highly destructive. One of the most damaging events occurred on June 21, 1978, in Vela Luka, Croatia, where damages amounted to about 7 million US dollars at the time. Meteotsunamis can also cause injuries and fatalities, as unfortunately occurred on January 13, 2026, during the recent Argentina meteotsunami.

What kinds of hazards do meteotsunamis pose to humans and society?

Meteotsunamis are characterized by multi-meter sea level oscillations and, at times, strong currents. As a result, they can flood waterfront areas and households, while strong currents may break ship moorings and disrupt maritime traffic, as occurred in 2014 in Fremantle, Australia. An even greater danger comes from rip currents, which can sweep swimmers away from shore. A notable example is the July 4, 2003, meteotsunami that occurred under clear skies along the beaches of Lake Michigan and claimed seven lives.

Figure 1. Photos from the 1978 Vela Luka meteotsunami, with labeled eyewitness wave heights and an inventory of household damage. Credit: Vilibić et al. [2025], Figure 12

How do scientists observe, measure, and reproduce meteotsunamis?

Much of the information on meteotsunamis comes from post-event observations. Following exceptionally strong events, scientists often visit affected locations to conduct field surveys, interview eyewitnesses, collect photos and videos, and estimate the extent and height of the meteotsunami along the coast. More precise information comes from coastal tide gauges and ocean buoys, as well as meteorological observations with at least minute-scale resolution.

Unfortunately, standard atmospheric and oceanic observing systems do not commonly operate at such high temporal resolution. For example, one of the oldest national networks—the UK tide gauge network operating for decades—still uses 15-minute sampling intervals. At the same time, most national meteorological services measure atmospheric variables at 10-minute or even hourly resolution, which is insufficient for meteotsunami research. Nevertheless, some oceanic and meteorological networks do provide appropriate sampling intervals, and even data from school-based or amateur networks can be valuable for research.

In addition, numerical modeling of meteotsunamis is now standard practice and includes both atmospheric and oceanic components. However, accurately reproducing meteotsunami-generating atmospheric processes—and thus meteotsunamis themselves—remains challenging. Addressing this issue and developing more accurate, high-resolution models is a key task for the modeling community.

Why has research on meteotsunamis shifted from localized to a global approach?

Figure 2. Map with known occurrences of meteotsunamis. Size of the star is proportional to the meteotsunami intensity. Credit: Vilibić et al. [2025], Figure 4

The strength of meteotsunamis strongly depends on coastal bathymetry. Within a specific bay, wave heights can reach several meters, while just outside the bay they may be only a few tens of centimeters. For this reason, meteotsunamis were historically observed and studied mainly at individual locations, known as meteotsunami hot spots. Over the past few decades, however, advances in monitoring and modeling capabilities, along with easier global dissemination of scientific results, have revealed that the same phenomenon occurs worldwide. Moreover, the recent availability of hundreds of multi-year, minute-scale sea level records has enabled researchers to conduct global studies and quantify worldwide meteotsunami patterns.

What are the primary ways that meteotsunamis are generated?

The generation of a strong meteotsunami requires (i) an intense, minute-scale air-pressure or wind disturbance that propagates over long distances (tens to hundreds of kilometers), (ii) an ocean region where energy is efficiently transferred from the atmosphere to the ocean, for example through Proudman resonance—a process in which long ocean waves grow strongly when the speed of the atmospheric disturbance matches the speed of tsunami waves, and (iii) coastal bathymetry capable of strongly amplifying long ocean waves. Funnel-shaped bays are particularly prone to meteotsunamis. These events can also be generated by explosive volcanic eruptions, such as the Hunga Tonga–Hunga Haʻapai eruption in January 2022, which produced a planetary-scale meteotsunami.
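The “perfect speed” in condition (ii), Proudman resonance, follows from the long-wave (shallow-water) speed c = √(gh): For a given storm speed there is a resonant ocean depth h = U²/g at which the disturbance keeps pace with the waves it generates. A short sketch, with illustrative storm speeds:

```python
import math

# Proudman resonance: an atmospheric disturbance pumps energy into long
# ocean waves most efficiently when its speed U matches the local
# shallow-water wave speed c = sqrt(g * h). Solving for depth gives the
# resonant depth h = U^2 / g. Storm speeds below are illustrative only.

G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(depth_m: float) -> float:
    """Long-wave phase speed (m/s) over water of a given depth."""
    return math.sqrt(G * depth_m)

def resonant_depth(storm_speed_ms: float) -> float:
    """Ocean depth (m) at which a disturbance of this speed resonates."""
    return storm_speed_ms**2 / G

for u in (20, 30, 40):  # plausible squall propagation speeds, m/s
    print(f"U = {u} m/s -> resonant depth ~ {resonant_depth(u):.0f} m")
```

The resonant depths come out in the tens to low hundreds of meters, which is one way to see why meteotsunamis grow over shallow continental shelves rather than the deep open ocean.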

How is climate change expected to influence meteotsunamis?

At present, this is not well understood. Only two published studies exist, and both suggest a possible increase in meteotsunami intensity in the future due to an increased frequency of atmospheric conditions favorable for meteotsunami generation. However, no global assessment is currently available, as climate models are still unable to reliably reproduce the kilometer- or sub-kilometer-scale processes required to simulate meteotsunamis.

What are some of the recent advances in forecasting meteotsunamis?

Some progress has been made, but effective forecasting and early-warning systems for meteotsunamis remain far from operational. Improvements in atmospheric numerical models—currently the main source of uncertainty in meteotsunami simulations and forecasts—are expected in the coming decades, particularly through the development of new parameterization schemes that better represent turbulence-scale processes.

How does your review article differ from others that have covered meteotsunamis?


The most recent comprehensive review of meteotsunamis was published nearly 20 years ago, making this review a timely synthesis of the substantial advances made over the past two decades. In addition, our review introduces a new class of meteotsunamis generated by explosive volcanic eruptions, such as the Hunga Tonga–Hunga Haʻapai event in January 2022. Such events were previously only sporadically noted, as the last comparable eruption occurred in 1883 with the Krakatoa volcano. Finally, recent findings show that meteotsunamis—much like seismic tsunamis—can radiate energy into the ionosphere, where it can be detected using ground-based GNSS (Global Navigation Satellite System) stations. This discovery opens a new avenue for future meteotsunami research.

What are some of the remaining questions where additional research efforts are needed?

Many challenges remain in the observation, reproduction, and forecasting of meteotsunamis. Most are closely linked to technological advancements, such as (i) the need for dense, continuous, minute-scale observations of sea level and meteorological variables across the ocean and over climate-relevant time scales, (ii) increased computational power, since sub-kilometer atmosphere–ocean models require enormous resources, potentially addressable through GPU acceleration or future quantum computing, and (iii) the development of improved parameterizations for numerical models at sub-kilometer scales. Ultimately, extending research toward climate-scale assessments of meteotsunamis is essential for accurately evaluating coastal risks associated with sea level rise and future extreme sea levels, which currently do not account for minute-scale oscillations such as meteotsunamis.

—Ivica Vilibić (Ivica.vilibic@irb.hr, 0000-0002-0753-5775), Ruđer Bošković Institute & Institute for Adriatic Crops, Croatia; Petra Zemunik Selak (0000-0003-4291-5244), Institute of Oceanography and Fisheries, Croatia; and Jadranka Šepić (0000-0002-5624-1351), Faculty of Science, University of Split, Croatia

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Vilibić, I., P. Zemunik Selak, and J. Šepić (2026), Tsunamis from the sky, Eos, 107, https://doi.org/10.1029/2026EO265002. Published on 3 February 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A Mid-Ocean Ridge in the Norwegian Sea Pumps Out Hydrogen

Tue, 02/03/2026 - 13:35

Roughly half a century ago, the burgeoning field of marine cartography revealed a curious sight: Mid-ocean ridges punctuate the seafloor, their geographic highs running over our planet like the seams on a baseball. These features mark where Earth’s tectonic plates are diverging and magma is upwelling.

Researchers sent a remotely operated vehicle (ROV) to a mid-ocean ridge system deep in the Norwegian Sea and discovered unusually high levels of molecular hydrogen dissolved in the hydrothermal fluids there. That hydrogen, which can help fuel microbial activity, is likely arising from the degradation of organic matter, the team concluded. These results were published in Communications Earth & Environment.

Pulling Apart

Many of our planet’s mountain ranges are built by the convergence of tectonic plates. But there are also regions on Earth where tectonic plates are diverging. In those places, magma from the planet’s interior is rising toward the surface. Many of those so-called spreading sites happen to be located in ocean basins, and the result is a mid-ocean ridge: a range of underwater volcanoes.

Thanks to their volcanic origin and underwater locales, mid-ocean ridges are characterized by a chemically potent amalgam of seawater, seafloor sediments, and magmatic material. But relatively few mid-ocean ridge systems have been explored in detail, partly because many lie beneath thousands of meters of water. “There’s still much more to learn about these systems,” said Alexander Diehl, a geochemist at MARUM – Center for Marine Environmental Sciences at the University of Bremen in Germany.

In 2022, a team led by MARUM researchers studied the Knipovich Ridge system off the coast of Svalbard. This mid-ocean ridge is known for being particularly slow spreading—its tectonic plates are diverging at only about 14 millimeters per year. (Fingernails grow about twice as fast.) Slow-spreading sites tend to get less research attention than fast-spreading sites, said Diehl, because the latter tend to have larger supplies of upwelling magma and therefore more hydrothermal venting.
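For scale, here is a back-of-the-envelope sketch of what a 14-millimeter-per-year spreading rate means over geologic time. The rate is the one quoted in the article; everything else is simple unit arithmetic for illustration.

```python
# Rough scale of slow seafloor spreading at the Knipovich Ridge.
# The rate below is from the article; the rest is illustrative arithmetic.

SPREADING_MM_PER_YEAR = 14  # quoted divergence rate

def divergence_km(years: float) -> float:
    """Total plate divergence, in kilometers, after `years` of spreading
    at a constant rate (mm -> km conversion: divide by 1e6)."""
    return SPREADING_MM_PER_YEAR * years / 1e6

# Even over a million years, the plates separate by only ~14 km --
# one reason slow-spreading ridges build less magmatic plumbing.
print(divergence_km(1_000_000))  # 14.0
```

This assumes a constant rate, which real ridges do not maintain; it is meant only to convey the scale of "slow spreading."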

The 2022 cruise aboard the R/V Maria S. Merian revealed previously unknown fluid escape sites—including iconic black smokers—and an array of microbes that thrived in the utter absence of sunlight. Researchers used an ROV to collect hydrothermal fluids emanating from four vent sites along the Knipovich Ridge. Unfortunately, however, the sampling devices aboard the vehicle were not gas tight, and some of the dissolved gases escaped. “The concentrations of volatiles were not quantified correctly,” said Diehl.

A Second Chance

“They maintain pressure inside the sampler not only during recovery but also in the laboratory.”

But 2 years later, scientists got a second chance to visit the Knipovich Ridge. Diehl was one of the researchers who joined a 2024 cruise, again aboard R/V Maria S. Merian, to revisit the slow-spreading site. This time, the team brought gas-tight devices known as isobaric fluid samplers. “They maintain pressure inside the sampler not only during recovery but also in the laboratory,” said Diehl.

Diehl and his colleagues collected 160-milliliter samples of hydrothermal fluids from several vent sites on the Knipovich Ridge at a depth of roughly 3,000 meters, then analyzed the samples on board the R/V Maria S. Merian. The team recorded high levels of silica, alkaline pH levels, and low concentrations of metals like iron and manganese, consistent with other hydrothermal systems where fluids circulate through sediments. But to their surprise, they also noted unusually high levels of molecular hydrogen: more than twice the highest concentration ever recorded in any sediment-hosted hydrothermal vent.

Hydrogen is important to many life-forms in the deep ocean that don’t receive sunlight, said Jeff Seewald, a geochemist at the Woods Hole Oceanographic Institution in Woods Hole, Mass., not involved in the research. “A lot of organisms can use it.” (Seewald developed the concept for the isobaric fluid samplers that Diehl and his colleagues used on their 2024 cruise.)

A Double Whammy

Finding so much hydrogen on the Knipovich Ridge baffled Diehl and his team. High concentrations of hydrogen typically arise in hydrothermal systems dominated by ultramafic rocks from the mantle, whereas the vents that Diehl and his colleagues studied were surrounded by terrestrial sediments sloughed off from the fjords of Svalbard.

Diehl and his team ran computer simulations and found that the high concentrations of molecular hydrogen could be explained by terrestrial sediments. The culprit, the researchers concluded, was the degradation of organic matter entrained in those sediments. Those reactions likely played a role in producing much of the hydrogen the team measured.

“You could potentially generate a significant amount of hydrogen, which could then be utilized by microbes.”

The hydrothermal system on the Knipovich Ridge is a powerhouse of hydrogen production, Seewald said. Finding similar systems on ocean worlds could have implications for life beyond Earth, he added. “You could potentially generate a significant amount of hydrogen, which could then be utilized by microbes.”

In the future, Diehl hopes to join another cruise to return to the Knipovich Ridge. It’s a fascinating site to visit, even if only vicariously, he said. “It’s a lot of fun to sit behind the pilots of the ROV.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2026), A mid-ocean ridge in the Norwegian Sea pumps out hydrogen, Eos, 107, https://doi.org/10.1029/2026EO260045. Published on 3 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0

Nationwide Soil Microbiome Mapping Project Connects Students and Scientists

Tue, 02/03/2026 - 13:33

Just 1 gram of soil can host billions of microorganisms and thousands of species of bacteria, fungi, and viruses, some of which drive essential processes like nutrient cycling. Because soil is home to nearly 60% of all living organisms, from microbes to mammals, some researchers have described it as the most biodiverse habitat on Earth. Soil microbes can also affect human health, including by harboring pathogens and contributing to the development of antibiotic resistance.

As the climate continues to change, soil and its many inhabitants are facing changes, too. Yet by some estimates, about 99% of soil microorganisms have not yet been studied.

“Soil is one of the last frontiers on Earth.”

“Soil is one of the last frontiers on Earth,” said biologist Ava Hoffman, a senior scientist at the Fred Hutch Cancer Center in Washington State.

A group of educators, researchers, and students from dozens of institutions has teamed up to create the first-of-its-kind soil microbiome map of the United States. Though the effort is in its preliminary stages, researchers have already cataloged more than 1,000 previously unknown strains of bacteria and other microbes. The team discussed the work in a commentary published in Nature Genetics.

By collecting samples from 40 sites across the country and analyzing them with DNA sequencing tools used in human genomic study, researchers are working to build a broader understanding of the microbial “dark matter” in the soil under our feet. At the same time, the project is connecting faculty and students into a nationwide network of soil researchers.

Soil Brings Us Together

During the early days of the COVID-19 pandemic, when community and connection were lacking, the members of the Genomic Data Science Community Network (GDSCN) met virtually. They wanted to create a research project that would excite faculty and students about genetics and data without requiring too much lab equipment, and they wondered how that might be done.

It would be done by sampling soil, said Hoffman, one of the study’s authors. “It was really a way to get faculty from all over the place involved and able to answer the questions they were interested in.”

The GDSCN created the BioDiversity and Informatics for Genomics Scholars (BioDIGS) initiative to address some of the knowledge gaps in soil biodiversity as well as train students and faculty in genomic data science by including participants from a range of institutions, from research-focused universities to community colleges.

Students at United Tribes Technical College collect soil samples at their campus in Bismarck, N.D. Credit: Emily Biggane

To take part in the project, participants are sent preassembled soil collection kits. Participants obtain permission to sample soil from their chosen sites—such as college campuses, parks, urban corridors, hiking trails, and spaces with local significance—and follow a specific protocol for sample collection. Students and faculty members then capture the GPS coordinates and images from each site and choose 16–24 sampling spots within a 100-meter area.

After collecting the soil, participants send their samples to Johns Hopkins University. From there, the samples are routed to labs at the Johns Hopkins School of Medicine and Cold Spring Harbor Laboratory in New York for genome sequencing and to the University of Delaware for chemical testing. Resulting data are uploaded to national research databases.

“One thing that is important is to bridge that disconnect between a sample as a data point on screen and its place of being: where it came from, how it got to the lab, and its story,” said cellular and molecular biologist Emily Biggane, one of the study’s authors and a research faculty member at the United Tribes Technical College’s Intertribal Research and Resource Center in North Dakota. “That connection is really important for our students. The land is something that’s honored and celebrated. Our students are very interested in learning about the soil that supports us.”

Unearthing Information

The soil sites sampled in the project ranged from the playgrounds and parks of Baltimore to a former Superfund site in Georgia, from urban Seattle to land under development at a college campus in Bismarck, N.D. “Understanding how different clades of bacteria vary across all our sites and how they vary with things like heavy metal concentration and pH and climate—that’s been pretty cool to see,” Hoffman said.

Continued sampling across these sites—and others that may become part of later incarnations of the project, as it continues to grow—can also help researchers understand how soil microbial communities respond to the effects of climate change. “Repeated sampling across sites in North America may help us to discover fragile soil ecosystems where microbial communities are undergoing rapid change,” Marie Schaedel, a soil microbiologist at Oregon State University who was not involved in the research, said in an email to Eos.

“At the end of the day, documenting soil biodiversity is not a problem that a single scientist can solve. We need a ton of people to do this.”

“Citizen science research like this benefits both science and society. It increases the amount of data on microbiomes in diverse soil habitats,” said Schaedel. “It also has the potential to motivate the next generation of researchers by making the research accessible and personal.”

While this project advances understanding of soil biodiversity, education is an important aspect of the work as well. More than 100 students participated in the first round of soil collection and research. Through hands-on sampling, data analysis, and interdisciplinary collaboration, students are gaining an understanding of the ways that ecology, climate, and human health intersect through soil, Hoffman said. The more microbial and bacterial genomes that are assembled, the greater the chance of discovering the next pathogen or the next cure, she added. “At the end of the day, documenting soil biodiversity is not a problem that a single scientist can solve. We need a ton of people to do this.”

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2026), Nationwide soil microbiome mapping project connects students and scientists, Eos, 107, https://doi.org/10.1029/2026EO260046. Published on 3 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0

Visualizing and Hearing the Brittle–Plastic Transition

Tue, 02/03/2026 - 13:30
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Geophysical Research: Solid Earth

The deformation of Earth materials can occur either in a “brittle” manner, mediated by fractures whose propagation radiates elastic waves, or through “intracrystalline plasticity,” governed by the motion of crystalline defects and generally considered to be largely aseismic. However, within the “brittle–plastic transition,” these mechanisms are expected to coexist. Moreover, if intracrystalline defect propagation is sufficiently rapid and accompanied by stress release, it may also theoretically generate elastic waves.

O’ Ghaffari et al. [2026] present the first experiments in which optical, mechanical, and acoustic measurements are acquired simultaneously during the propagation of intracrystalline defects (twin boundaries) in calcite single crystals. High-speed imaging, reaching up to 12,500 frames per second, is combined with multiple ultrasonic sensors sampling up to 50 million samples per second, allowing deformation processes to be resolved across a wide range of spatial and temporal scales.

The experiments capture the evolution of both brittle microcracks and crystal-plastic twins as they propagate through the crystal. Direct comparison of image sequences and acoustic records demonstrates that these two deformation mechanisms generate distinct ultrasonic signals. In particular, subtle differences in waveform characteristics are linked to the physical nature of the defect source. This distinction provides a new basis for separating brittle and plastic deformation signals in acoustic emission data. The results have important implications for laboratory studies and for interpreting acoustic monitoring data in geological and other semi-brittle materials.

Citation: O’ Ghaffari, H., Peč, M., Cross, A. J., Mittal, T., & Mok, U. (2026). Brittle and crystal-plastic defect dynamics of calcite single crystals. Journal of Geophysical Research: Solid Earth, 131, e2025JB032846. https://doi.org/10.1029/2025JB032846

Marie Violay, Associate Editor, JGR: Solid Earth

Text © 2026. The authors. CC BY-NC-ND 3.0

The 2 February 2026 landslide on the Ionian motorway between Arta and Amfilochia

Tue, 02/03/2026 - 07:55

An unusual failure has occurred on a cut slope adjacent to a key road in Greece.

On 2 February 2026 a major, fascinating landslide occurred on the A5 Ionian motorway between Arta and Amfilochia in Greece. The location appears to be [39.07754, 21.09861]. The news site ekathimerini has a story providing the details, which includes this extraordinary image of the aftermath of the landslide:-

The 2 February 2026 landslide on the Ionian motorway in Greece. Image from ERT via ekathimerini.

I believe that the Google Earth image below shows the configuration of the site in 2023:-

Google Earth image showing the site of the 2 February 2026 landslide on the Ionian motorway in Greece.

So, this is a large cut slope that appears to have been formed in about 2015 (based on Google Earth imagery). The failure is quite complex, with most of the landslide moving as a large block (which has fractured in the late stage of movement). There is a large displacement on the far side of the landslide (in the photograph view), so there has been some rotation around an approximately vertical axis. The landslide does not appear to have been conventionally rotational.

To me, this suggests failure on an existing plane of weakness in the slope. The news report indicates that the landslide occurred after heavy rainfall.

This is a Google Streetview of the landslide site from September 2023:-

Google Streetview image showing the site of the 2 February 2026 landslide on the Ionian motorway in Greece.

It appears that the slope has rockbolts, which suggests that there was an awareness of the potential for instability. Perhaps they were insufficiently long to prevent this failure? The presence of the rockbolts may explain why the landslide moved as a predominantly intact block, though.

The Ionian motorway is now closed. There are similar slopes along the road, so the investigation of this failure may have wider implications.

Return to The Landslide Blog homepage Text © 2026. The authors. CC BY-NC-ND 3.0

Coral Diversity Drops as Ocean Acidifies

Mon, 02/02/2026 - 13:44

At a natural underwater laboratory off the coast of Papua New Guinea, researchers examined what happens to a diverse reef ecosystem as it experiences gradually increasing levels of ocean acidification. They found that as the pH decreased, complex branching corals, soft corals, and young corals died off. In their place grew hardy boulder corals and noncalcifying algae.

One thing the team didn’t find: a specific tipping point at which corals began to die off.

“That was something we really hoped to be able to detect from the data,” said Sam Noonan, a coral reef ecologist at the Australian Institute of Marine Science (AIMS) in Townsville and lead researcher on a new study reporting the work. “Do you have this increase in acidification and everything seems fine, and then species start falling off a cliff? But that was not the case at all. With every little increase, we saw a smooth decline.”

These observations, which took place near a volcanic seep that leaks carbon dioxide (CO2) into the ocean from the seafloor, provide a preview of how reefs around the world could respond as the ocean absorbs increasing quantities of atmospheric CO2.

Researchers placed instruments like this one at 37 locations along the volcanic seep to measure the water’s pH. Credit: © AIMS | Katharina Fabricius, CC BY 3.0 AU

A Natural Coral Laboratory

The ocean is the world’s largest carbon sink. As atmospheric CO2 concentrations continue to rise, the ocean absorbs more and more of that carbon, which makes seawater more acidic. Oceanographers and marine ecologists have observed for decades that falling marine pH levels disturb delicate marine ecosystems, like coral reefs, around the world.
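Because pH is a logarithmic scale, even small drops translate into large changes in hydrogen ion concentration. A minimal sketch of that relationship (the 0.1-unit decline used here is the commonly cited change in mean surface-ocean pH since preindustrial times, not a figure from this article):

```python
def hydrogen_ion_ratio(delta_ph: float) -> float:
    """Factor by which hydrogen ion concentration [H+] increases
    when pH falls by `delta_ph` units (pH = -log10[H+])."""
    return 10 ** delta_ph

# A ~0.1-unit pH drop corresponds to roughly a 26% increase in [H+].
increase_pct = (hydrogen_ion_ratio(0.1) - 1) * 100
print(round(increase_pct))  # 26
```

The takeaway is that "slightly more acidic" on the pH scale masks a substantial chemical shift.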

Coral reef scientists have observed in laboratory settings that acidic seawater makes it harder for corals to build the calcium carbonate skeletons that support complex branching corals.

“Even the most advanced of these experiments, however, cannot fully capture the incredible complexity of a real-world coral reef, where biodiverse flora and fauna are interacting in an ever changing array of environmental conditions,” said Ian Enochs, a coral ecologist at NOAA’s Atlantic Oceanographic and Meteorological Laboratory in Miami.

To overcome those limitations, Noonan and his AIMS colleagues traveled to Milne Bay on the southeastern coast of Papua New Guinea, which is home to a diverse and thriving coral reef ecosystem. It’s also home to a volcanic seep that releases nearly pure CO2 gas from vents in the seafloor.

A reef like this one is “a natural laboratory that allows us to understand how real coral reefs respond to acidification.”

A reef like this one is “a natural laboratory that allows us to understand how real coral reefs respond to acidification,” Enochs said. Enochs was not involved with the new research.

The scientists spent more than a decade measuring the ambient properties of the seawater throughout the reef and documenting, via a proxy called aragonite saturation, how acidity changes on the basis of proximity to a seep. Aragonite saturation levels across the seep match values predicted to occur by 2100 under a wide range of carbon emission scenarios.
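Aragonite saturation state is conventionally defined as Ω = [Ca²⁺][CO₃²⁻]/K′sp, where K′sp is the solubility product of aragonite: values above 1 favor skeleton building, values below 1 favor dissolution. A sketch of that definition, with illustrative round numbers that are textbook-scale values, not measurements from this study:

```python
def aragonite_saturation(ca: float, co3: float, ksp: float) -> float:
    """Aragonite saturation state Omega = [Ca2+][CO3 2-] / K'sp.
    Omega > 1: supersaturated (calcification favored);
    Omega < 1: undersaturated (aragonite tends to dissolve)."""
    return ca * co3 / ksp

# Illustrative values (mol/kg and mol^2/kg^2): surface seawater
# [Ca2+] ~ 0.0103, a typical surface [CO3 2-] ~ 200e-6, and an
# aragonite K'sp ~ 6.5e-7 near 25 degrees C.
omega = aragonite_saturation(ca=0.0103, co3=200e-6, ksp=6.5e-7)
print(round(omega, 2))  # 3.17
```

As CO2 invades seawater it consumes carbonate ions, lowering [CO₃²⁻] and dragging Ω down, which is why aragonite saturation works as an acidification proxy.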

The team set up 37 monitoring stations at locations along the reef that experience gradually rising levels of CO2. Those stations measured seawater properties like temperature, light exposure, current, and, of course, acidity. Divers documented coral diversity, the abundance of juvenile corals, and the types of algae that grew around each of the stations.

In laboratory experiments, “you have a control reef, and then you have an acidified reef, and it’s just A versus B,” Noonan said. “In this study, we have 37 stations across this gradient to look at community change on a continuum. There’s no data out there like that.”

In locations along the reef where ocean pH was at ambient levels, like this location hundreds of meters away from the volcanic seep, the reef exhibited high structural complexity, abundant branching corals and soft corals, and many small young corals. This location was used as a control site. Credit: © AIMS | Katharina Fabricius, CC BY 3.0 AU

At stations more than 500 meters (1,640 feet) from the volcanic seep, the reef hosted a diverse array of complex branching corals, soft-bodied corals, and juvenile corals. Closer to the seep, stations recorded progressively lower pH levels, and the complex and delicate corals died off. The only surviving corals were hardier boulder corals (genus Porites), which have thick layers of tissue between the water and their skeletons. There were also fewer juvenile corals and more noncalcifying algae as acidity rose.

“You can visually see it when you’re swimming around these systems,” Noonan said, and the data back up those observations. “It seems that some species are more susceptible than others. Those with a really high surface area and a thin tissue layer seem to be really affected.”

“Those species that are most affected seem to be the most ecologically important.”

“The problem is those species that are most affected seem to be the most ecologically important,” he added. “They’re the ones that provide shelter for the literally millions of species that live on coral reefs. All the fish and little crustaceans, they all rely on these things for habitat, and they’re the ones that are really starting to drop out first.”

These results were published in Communications Biology in November 2025.

An Ongoing Problem

“This paper is important because it offers another glimpse into the future of reefs under acidification, one that is entirely independent from prior experiments and other investigations of similar sites,” Enochs explained. “What the authors found, however, is remarkably similar to what we’ve observed in our experimental tanks, and at other naturally acidified sites from all over the world.”

“It’s the similarity of these stories that gives these findings the greatest power, parallel lines of evidence all pointing to the same thing.”

“It’s the similarity of these stories that gives these findings the greatest power, parallel lines of evidence all pointing to the same thing,” Enochs added.

Millions of people depend on reef ecosystems to support fisheries, feed coastal communities, protect coastal infrastructure from waves and storm surge, and sustain tourism and local economies. What’s more, “lower coral cover means less shelter for the exceptional biodiversity of a reef, and a loss of species, many of which are still unknown to science,” Enochs said. “When I read this paper and I see how acidification impacts these reefs, I think about what it could mean for other reef ecosystems and the communities they support.”

Noonan said that this volcanic seep is a simple proxy for ocean conditions under a future climate scenario, but it’s not a perfect one. Sunlight and temperature were pretty constant across the reef, which was good for isolating the effects of CO2 but not realistic for most reef ecosystems.

Future work could consider those additional variables to see whether there is a true acidification tipping point for corals. But Noonan also brought up a more concerning possibility.

“This has been ongoing since the Industrial Revolution, so perhaps there were tipping points and we’re already past them.”

“This has been ongoing since the Industrial Revolution, so perhaps there were tipping points and we’re already past them,” he suggested. There’s no way to know, as scientists lack data on past ocean acidification.

Regardless, “these changes are ongoing and occurring now,” he added. “We’re starting to detect significant, statistical changes in these communities at [acidification] values that we’re expecting within the next 20 to 30 years on coral reefs. It’s not end of the century stuff.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2026), Coral diversity drops as ocean acidifies, Eos, 107, https://doi.org/10.1029/2026EO260047. Published on 2 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0

How the Rise of a Salty Blob Led to the Fall of the Last Ice Age

Mon, 02/02/2026 - 13:39

There are a few things scientists know for sure about how Earth grows warmer: For instance, when there’s more carbon dioxide (CO2) in the atmosphere, that CO2 traps heat. Conversely, during ice ages, less CO2 was present in Earth’s atmosphere.

“One of the fundamental questions in our field was, ‘Where did that CO2 go during ice ages, and where did it come from when the planet warmed?’”

“One of the fundamental questions in our field was, ‘Where did that CO2 go during ice ages, and where did it come from when the planet warmed?’” said Ryan Glaubke, a paleoceanographer and postdoctoral researcher at the University of Arizona.

Scientists had their suspicions: The ocean was the obvious culprit because it’s enormous and is known to exchange CO2 with the atmosphere. But for CO2 to be stored in the ocean for long periods, it would need to be in cold, salty, dense water far beneath the ocean’s surface. Until now, scientists had no way to prove that salinity levels in the deep ocean were linked to changes in atmospheric CO2 over the scale of ice ages.

Now, new research published in Nature Geoscience seems to confirm what many researchers have long thought was the case: A giant “blob” of salty ocean water kept carbon dioxide locked deep in the ocean during the last ice age, and the blob released that CO2 during an upwelling event 18,000 years ago.

Unusual Upwelling

During his graduate studies at Rutgers University, Glaubke and his fellow researchers collected sediment cores from the seafloor. Sediment cores are long, thin cylinders of mud with successive layers that reflect periods in Earth’s history.

Normally, when scientists collect sediment cores, they use them to learn about past conditions near the ocean’s surface. Single-celled creatures called foraminifera (or forams, for short) live and build their shells near the ocean’s surface. When these creatures die and sink to the ocean floor, their shells become part of the seafloor sediment and provide a record of the composition of the upper ocean.

This team, however, gathered sediment cores from an unusual site on the boundary of the Indian and Southern oceans. In this spot, off the coast of Western Australia, waters from deep in the ocean upwell to the surface.

“It’s really hard to look at the bottom of the ocean from the surface,” said Liz Sikes, a paleoceanographer at Rutgers, a coauthor of the paper, and Glaubke’s former Ph.D. adviser. “But the thing is, these planktic forams are in a place in the ocean where the water that’s at the surface has just returned to the surface and it still retains most of its deep-water qualities.”

Gathering sediment cores from this location meant the scientists could gain an understanding not just of how the upper ocean changed in the past but of how the waters that rose from the bottom of the ocean had also changed.

“What we found, rising from the deep ocean to the surface, was not only this geochemical fingerprint for old carbon that remained at the bottom of the ocean, but at the exact same time, we see this increase in upper ocean salinity by around 2 parts per thousand, which is a very large scale change,” Glaubke said. “That is one of the really important contributions of this paper, I think, which is that it provides this support for this ‘salty blob’ kind of retention mechanism.”
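One reason a roughly 2-part-per-thousand salinity jump counts as "very large scale" is density: saltier water is heavier, and density contrasts drive deep-ocean circulation. A linearized sketch using a typical haline contraction coefficient (the coefficient and reference density below are standard textbook values, not numbers from the paper):

```python
RHO0 = 1027.0   # reference seawater density, kg/m^3 (typical value)
BETA = 7.6e-4   # haline contraction coefficient, per psu (typical value)

def density_increase(delta_s: float) -> float:
    """Approximate seawater density change (kg/m^3) for a salinity
    change of `delta_s` psu, from a linearized equation of state:
    delta_rho ~ rho0 * beta * delta_S."""
    return RHO0 * BETA * delta_s

# A 2 psu salinity increase adds roughly 1.6 kg/m^3 -- substantial in
# an ocean where contrasts of order 1 kg/m^3 shape the overturning.
print(round(density_increase(2.0), 2))  # 1.56
```

This linear approximation ignores temperature and pressure effects; it is meant only to show why salinity anomalies of this size can reorganize deep circulation.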

From Glacial to Interglacial

Patrick Rafter, a chemical oceanographer who did not contribute to this paper but was involved with measuring the radiocarbon levels in the collected sediment cores, said he was already convinced that salinity must play an important role in the rate of global ocean overturning, so the results were “not surprising” to him. He noted that the study was rigorous and careful, in that the researchers replicated their anomalous findings with multiple planktic species.

“It’s like any kind of mystery: The more evidence you get supporting it, the more likely you are to think maybe it’s real.”

“It’s like any kind of mystery: The more evidence you get supporting it, the more likely you are to think maybe it’s real,” he said. “So far, the evidence that exists suggests this is a solid finding that we should consider when trying to explain glacial-interglacial climate change.”

Furthermore, the upwelling waters of the Southern Ocean help sustain a global conveyor belt of currents, including the Atlantic Meridional Overturning Circulation. During an ice age, these currents tend to be more sluggish. The strengthening of these currents is an important piece in moving the planet out of an ice age.

“We make the argument that not only is this water mass releasing carbon to the atmosphere and kind of warming the planet, but the salt that then gets entrained in the global conveyor belt probably played a really important role in flipping that switch from glacial mode to interglacial mode,” Glaubke said. “So there’s this dual contribution that the salty blob might be making to ending the last ice age.”

—Emily Gardner (@emfurd.bsky.social), Associate Editor

Citation: Gardner, E. (2026), How the rise of a salty blob led to the fall of the last ice age, Eos, 107, https://doi.org/10.1029/2026EO260044. Published on 2 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0

The 28 January 2026 landslide at the Rubaya coltan mine complex in the Democratic Republic of Congo

Mon, 02/02/2026 - 07:35

The Landslide Blog is written by Dave Petley, who is widely recognized as a world leader in the study and management of landslides.

Whilst I was skiing in the French Alps last week, there were a couple of significant landslides. The highest profile event was the vast and intriguing landslide at Niscemi in Sicily (located at [37.14176, 14.38524]), which is a failure on a very large scale. The rear scarp is about 1.2 km long, for example.

In terms of loss of life, the more consequential event occurred at the Rubaya coltan mine complex in the Democratic Republic of Congo on 28 January 2026 (approximate location is [-1.55938, 28.88349]). Reuters has a good report about this event – obtaining good information is very challenging as the area is not controlled by the government. The mining news site Discovery Alert reports that at least 227 people were killed and 20 were injured, but further people were thought to be buried in the debris. It is likely that the final death toll will never be determined.

This is the second massive landslide at the Rubaya complex in less than a year – a landslide on 19 June 2025 is thought to have killed over 300 people.

Al Jazeera has a report from May 2025 that describes the desperate conditions under which the artisanal miners at Rubaya work.

APT has a video on Youtube that apparently shows the aftermath of the landslide at Rubaya:-

This is a still from that video:-

The aftermath of the 28 January 2026 landslide at the Rubaya mining complex in the DRC. Still from a video posted to Youtube by APT.

Assuming that the landslide is the large area on the centre right of the image, it is easy to see how mining on the lower slope can trigger instability. Note also the area on the left of the image, where there is a large tension crack across the slope.

As yet, satellite imagery of the Rubaya area is not available since the landslide, so it is not yet possible to identify the exact location of the failure. I will keep an eye on this over the coming days.

Return to The Landslide Blog homepage Text © 2026. The authors. CC BY-NC-ND 3.0

Partial Shutdown Over DHS Funding Ensnares Education, Health

Sat, 01/31/2026 - 05:01
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

Update, 3 February: After the House of Representatives voted 217 to 214 to approve the appropriations package earlier today, President Trump has signed the legislation into law, ending the partial government shutdown. The package includes five spending bills—which fund FEMA, the Department of Education, and the Department of Health and Human Services among other agencies—and a 2-week extension of DHS funding.

31 January: The U.S. government entered a partial shutdown Saturday at 12:01 Eastern time after Congress failed to resolve a showdown over funding for the Department of Homeland Security (DHS) and restrictions on Immigration and Customs Enforcement (ICE). The DHS appropriation was tied into a six-bill package that also included funding for the Departments of Defense, Education, Health and Human Services (HHS), Housing and Urban Development, Labor, State, Transportation, and Treasury.

Senate leaders and the White House struck a deal late Thursday evening to split the DHS spending bill away from the other five bipartisan appropriations bills. Friday evening, the Senate passed the amended appropriations package ahead of the shutdown deadline; it will continue to negotiate the DHS bill for 2 weeks.

However, any changes to the spending bills, including splitting them apart, also need to be passed by the House of Representatives, which is in recess until Monday 2 February. Until the House votes on the five-bill package, the agencies included in that package will remain shut down, as will DHS. (ICE will continue to operate during the shutdown due to money allocated in the One Big Beautiful Bill Act of 2025.)

“We may inevitably be in a short shutdown situation…but the House is going to do its job,” House Speaker Mike Johnson (R-La.) told reporters Thursday evening, suggesting that the House will act quickly to pass the amended five-bill package and avoid significant financial impacts.

What’s Shut Down for Science?

DHS runs the Federal Emergency Management Agency (FEMA), which is currently helping coordinate state-level responses to the massive winter storm that impacted millions of people across southern and eastern U.S. states over the past week. The DHS spending bill, which includes FEMA funding, has not been agreed upon or passed. Experts have said that FEMA would have enough money in its Disaster Relief Fund to continue to respond to storm-related impacts during a partial shutdown, at least for a few weeks.

During the most recent shutdown, which lasted 43 days this past fall, the Department of Education furloughed 87% of its employees. Under its shutdown contingency plan, the department states that it will continue to disburse Pell Grants and Federal Direct Student Loans, and borrowers will still be required to make payments. States, schools, and other grantees will be able to access funds. However, no new grants will be issued, and its barebones Office for Civil Rights will pause reviews and investigations.

 

During the fall 2025 shutdown, HHS furloughed 41% of its employees. According to its contingency plan, the department will maintain the minimal level of readiness for all health hazards, including pandemics and extreme weather response. Drug and medical device reviews will continue, as will disease outbreak monitoring and support to Medicare, Medicaid, and other healthcare programs. Data collection, validation, and analysis, grant oversight, and some CDC communications will cease.

Democrats are pushing for increased oversight and restrictions on ICE’s activities throughout the country after federal agents killed two people, Renee Macklin Good and Alex Pretti, in Minneapolis in January and engaged in other actions toward immigrants that have sparked national outrage. Democrats’ immigration demands have not been agreed to by Republicans or the White House.

Appropriations bills funding other science-related agencies, including the Environmental Protection Agency, National Science Foundation, NASA, NOAA, and the U.S. Geological Survey, have already become law. These agencies will continue to run during the current partial government shutdown.

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2026. AGU. CC BY-NC-ND 3.0

Pollution Is Rampant. We Might As Well Make Use of It.

Fri, 01/30/2026 - 14:21

When representatives of 197 countries ratified the Montreal Protocol to phase out ozone-depleting substances in 1987, they probably didn’t anticipate creating a new method for estimating the age of groundwater.

But the Montreal Protocol paved the way for a chemical called trifluoroacetic acid, or TFA, to become widespread in the atmosphere, and therefore in rainwater. Because the concentration of TFA has increased steadily since 1987, it’s a helpful tool for gaining a rough idea of how recently an aquifer has been recharged—which is what is meant by “groundwater age.”

Using TFA as a quick and easy tracer is one of several research techniques that rely on the massive amounts of anthropogenic material that enter the environment every moment of every day. Scientists are using pollution to study processes both small-scale and worldwide, from the history of a single bird’s nest to the history of humans on this planet.

Novel Tracers

TFA is one of thousands of per- and polyfluoroalkyl substances (PFAS), which are also known as “forever chemicals” because they take thousands of years to degrade. Fortunately, TFA seems to be much less toxic than the long-chain PFAS, such as perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA), that have been associated with human health problems.

TFA’s omnipresence is a side effect of the move away from using ozone-depleting chlorofluorocarbons (CFCs) in refrigerants. The alternative refrigerants, originally thought to be less harmful than CFCs, have consequences of their own, however, making this a case of what scientists have called “a regrettable substitute.”

Cyclists ride in front of a bus on a rainy evening in Copenhagen, Denmark, where scientists have used the concentration of trifluoroacetic acid (TFA) in rain to estimate the age of groundwater. Credit: Kristoffer Trolle/Wikimedia Commons, CC BY 2.0

When modern refrigerants evaporate into the atmosphere, they break down into TFA, which then falls to the ground in the rain, explained environmental geochemist Christian Nyrop Albers from the Geological Survey of Denmark and Greenland.

Groundwater becomes drinking water, so part of Albers’s job is to screen groundwater for pollutants. But to convince politicians they need to regulate a pollutant, he and his colleagues need to show that the substance is entering groundwater because of how it’s used today, not in decades past. So they need to know how old the groundwater is.

More sophisticated methods “are not always very easy to use, or they are very expensive or time-consuming.”

“There are many sophisticated methods for that, but they are not always very easy to use, or they are very expensive or time-consuming,” Albers said. The gold standard is to measure the decay of a substance called tritium into helium, but only a few labs in the world have the capacity to do the test, and the water sample must be stored for 6 months to see the decay.

Measuring TFA is not as precise as measuring tritium decay, and those using the technique have to be cognizant of any farms in the area, because agricultural chemicals can also release TFA into the groundwater and affect results. But measuring TFA is fast and easy, so “we use it on a regular basis now,” Albers said. He and his colleagues recently published the method, and a research group in Germany has begun using it, too.
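The logic of the method reduces to a lookup against a deposition history: because TFA in rain has risen steadily since 1987, a sample's concentration can be interpolated onto that curve to estimate a recharge year. A minimal sketch follows; the calibration values and the function name are hypothetical, for illustration only, and are not the published Danish calibration.

```python
# Illustrative sketch of the TFA dating idea (not the published method).
# Hypothetical calibration: TFA concentration in rainwater (ng/L) by year,
# rising steadily since the Montreal Protocol took effect.
CALIBRATION = [(1987, 5.0), (1995, 40.0), (2005, 120.0), (2015, 250.0), (2024, 400.0)]

def estimate_recharge_year(sample_tfa_ng_per_l: float) -> float:
    """Estimate when an aquifer was recharged by linearly interpolating the
    sample's TFA concentration onto the calibration curve."""
    years = [y for y, _ in CALIBRATION]
    concs = [c for _, c in CALIBRATION]
    if sample_tfa_ng_per_l <= concs[0]:
        return years[0]  # at or below pre-1987 background: "old" water
    if sample_tfa_ng_per_l >= concs[-1]:
        return years[-1]  # at or above the most recent tie point
    for (y0, c0), (y1, c1) in zip(CALIBRATION, CALIBRATION[1:]):
        if c0 <= sample_tfa_ng_per_l <= c1:
            frac = (sample_tfa_ng_per_l - c0) / (c1 - c0)
            return y0 + frac * (y1 - y0)

print(round(estimate_recharge_year(80.0)))  # prints 2000
```

In practice, results would also need the agricultural-source caveat above: nearby farms can add TFA to groundwater and bias the apparent age young.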

In general, PFAS in the environment are the “subject of huge amounts of discussion,” said environmental radiochemist Andy Cundy from the University of Southampton, who was not involved in developing the method. “As the measurement of PFAS becomes more routine, I think we will see more and more people using PFAS as tracers,” he added.

Plastic Cuts Both Ways

Among the nests Auke-Florian Hiemstra analyzed was a common coot’s nest containing plastic dating back to the 1990s. Credit: Auke-Florian Hiemstra

More than 460 million metric tons of plastic are produced each year, with that number growing all the time. When it’s used as food packaging, plastic often comes with an expiration date stamped on it. Auke-Florian Hiemstra of the Naturalis Biodiversity Center in Leiden, Netherlands, is a nidologist, or a scientist who studies birds’ nests. He used those expiration dates to trace the history of birds’ nests found along the canals in Amsterdam. In the past, carbon-14 dating has been applied to some very old nests, but using plastic proved to be a far easier process.

Scientists used trash to date the construction of birds’ nests in Amsterdam. Credit: Auke-Florian Hiemstra

“This one bird nest that we found turned out to be like a history book,” Hiemstra said. The trash within it ranged from face masks from the COVID-19 pandemic to a candy bar wrapper advertising the 1994 FIFA World Cup. Of course, a piece of plastic’s expiration date doesn’t correspond exactly to the date when a bird incorporated it into its nest, but finding several pieces from the same time frame is suggestive. To increase confidence in the method, the researchers integrated their findings with the archives of Google Street View, which showed the presence of the nest at various points in time.

But even as plastic opens opportunities to estimate the ages of some natural materials, it may make it harder to tell the ages of others. That’s because plastic is derived from long-dead plants and animals that have negligible amounts of the carbon-14 isotope that’s used for carbon dating. Plastic carbon may dilute natural carbon and make materials appear older than they are.

This could be problematic for the study of ocean processes. One way of measuring how long it’s been since water was at the surface relies on carbon-14 dating. If 1% of the carbon in a sample of water is from microplastics—a conservative estimate given that up to 5% of ocean carbon is from plastic in some samples—then that would make the sample appear 64 years older than it actually is, calculated environmental oceanographer Shiye Zhao from the Japan Agency for Marine-Earth Science and Technology.
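The dilution effect follows from the standard radiocarbon age relation t = -tau * ln(F), where F is the measured fraction of modern carbon and tau is the carbon-14 mean life. A minimal sketch of the arithmetic is below; the function is illustrative, the mean life used is the conventional Libby value, and the exact offset depends on assumed constants and sample corrections, which is why it differs slightly from the ~64-year figure quoted above.

```python
import math

# Conventional (Libby) radiocarbon mean life, in years.
LIBBY_MEAN_LIFE = 8033.0

def apparent_age_shift(dead_carbon_fraction: float) -> float:
    """Extra apparent radiocarbon age caused by diluting a sample with
    14C-free ("dead") carbon such as petroleum-derived plastic.

    Mixing in a fraction f of dead carbon scales the measured fraction of
    modern carbon F by (1 - f), so the apparent age t = -tau * ln(F) shifts
    by -tau * ln(1 - f), regardless of the sample's true age."""
    return -LIBBY_MEAN_LIFE * math.log(1.0 - dead_carbon_fraction)

# With these constants, 1% dead carbon adds roughly 80 years of apparent age.
print(round(apparent_age_shift(0.01)))  # prints 81
```

Because the shift grows with the dead-carbon fraction, ever-increasing microplastic loads would steadily inflate apparent water ages, which is the future scenario Zhao warns about.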

Ocean circulation proceeds over thousands of years, so adding 64 years doesn’t change the overall picture by very much. But the amount of plastic is always increasing, so “think about this in a future scenario,” Zhao said. Especially in plastic hot spots, the material could obscure the study of ocean circulation substantially.

“That could be an issue as more microplastics enter the ocean,” said Cundy.

The Anthropocene

Anthropogenic pollution can help scientists understand how nature is responding to other aspects of human influence.

We’re living in a period that’s colloquially called the Anthropocene because markers of human activity are obvious in environmental records worldwide. Although no formal date has been agreed upon, scientists have proposed a range of dates for when the Anthropocene began. One definition suggests that the period began in the mid-20th century and is marked by many human-made substances, such as plastic, that are evident in geological strata, including ice and sediment cores. But one of the most ubiquitous and reliable candidate markers for the start of the Anthropocene is plutonium-239. Atomic bomb tests conducted in the 1940s and 1950s were the main sources of plutonium-239, which went flying into the atmosphere and around the globe, depositing a layer across Earth and “labeling the entire planet,” said Cundy.

Having a marker for when anthropogenic activities began to affect the geological record is a powerful research tool.

Having a marker for when anthropogenic activities began to affect the geological record is a powerful research tool because it provides a benchmark against which scientists can measure how nature has responded since, said environmental geochemist Agnieszka Gałuszka from Jan Kochanowski University of Kielce, in Poland.

In a study of pollen in paleoecological records from across North America, for example, scientists looked at how the diversity of plant species has changed since the mid-20th century and compared that with previous time periods. They found that rates of species appearing and disappearing have been higher than at any other time since the end of the last ice age, about 13,000 years ago. That’s probably because of land use changes, as well as the introduction of pests and invasive species to the continent, all driven by humans.

Likewise, in a study of peatlands in the Izery Mountains of Europe, researchers investigated how coal burning has affected microorganisms since the mid-1960s. By analyzing microbial communities, scientists discovered that amoebae picked up titanium, aluminum, and chromium from inorganic coal residue and incorporated these elements into their shells. “It was quite shocking news to all of us,” Gałuszka said.

Identifying pollutants as markers of the plausible start of the Anthropocene has led scientists to ask, “What has been the change over time?” said Cundy. “And, importantly, what have been the causes of that change over time? Is it human induced, or is it natural?”

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2026), Pollution is rampant. We might as well make use of it., Eos, 107, https://doi.org/10.1029/2026EO260039. Published on 30 January 2026. Text © 2026. The authors. CC BY-NC-ND 3.0

Our Ocean’s “Natural Antacids” Act Faster Than We Thought

Fri, 01/30/2026 - 14:21
Source: AGU Advances

Earth’s ocean absorbs carbon dioxide from the atmosphere, helping to temper the impact of climate change but increasing ocean acidity. However, calcium carbonate minerals found in the seabed act as a natural antacid: Higher acidity causes calcium carbonate to dissolve and generate carbonate molecules that can neutralize the acid.
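The “natural antacid” behavior described here is the textbook carbonate dissolution equilibrium, in which dissolved carbon dioxide converts solid calcium carbonate into dissolved bicarbonate:

```latex
\mathrm{CaCO_3\,(s) + CO_2\,(aq) + H_2O \;\rightleftharpoons\; Ca^{2+} + 2\,HCO_3^{-}}
```

Rising CO2 pushes the reaction to the right, consuming seafloor carbonate and generating alkalinity, which is the buffering process the study quantifies on continental shelves.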

For many years, researchers have thought of this carbonate dissolution buffer mainly as a very slow process because most ocean carbonates lie in deep-ocean sediments. There, the effects of their dissolution won’t reach the atmosphere for hundreds or thousands of years—long after many effects of acidification are already felt by ecosystems.

However, calcium carbonate also exists in more than 60% of the seabed of the shallower waters of continental shelves. New research by van de Velde et al. suggests that shelf carbonate dissolution may play a previously underappreciated climate feedback role on much faster timescales.

To explore the potential importance of shallow carbonate dissolution, the researchers analyzed high-precision ocean carbonate chemistry observations collected over 25 years in continental shelf waters off the southeastern coast of New Zealand.

They found that in the study area, calcium carbonate buffering has occurred in shallow shelf waters for at least the past 25 years and that this climate feedback process operates on annual to decadal timescales—orders of magnitude faster than in the deep ocean. Additional biogeochemical modeling suggested that this continental shelf carbonate dissolution is driven by an increase in dissolved carbon dioxide resulting from anthropogenic carbon dioxide emissions.

Similar dissolution feedback may occur in continental shelf waters around the world, in which case, shelf carbonate dissolution may have been accelerating globally since the 1800s. Furthermore, the researchers calculated that this process could account for up to 10% of the current discrepancy between state-of-the-art model predictions of ocean carbon dioxide uptake and real-world measurements.

Further research will be needed to explore the global role of shelf carbonate dissolution and how it should be incorporated into climate models. Such knowledge could have key implications for proposed efforts to combat climate change by deliberately boosting ocean alkalinity, the authors say. (AGU Advances, https://doi.org/10.1029/2025AV001865, 2025)

—Sarah Stanley, Science Writer

Citation: Stanley, S. (2026), Our ocean’s “natural antacids” act faster than we thought, Eos, 107, https://doi.org/10.1029/2026EO260013. Published on 30 January 2026. Text © 2026. AGU. CC BY-NC-ND 3.0

Alligators May Boost Carbon Storage in Coastal Wetlands

Thu, 01/29/2026 - 14:17

The vital role apex predators play in maintaining healthy ecosystems is well-documented, but research published in Scientific Reports suggests predators might also influence the global carbon cycle. The study found that across coastal wetlands in the southeastern United States, soils store more carbon where American alligators are present, linking predator recovery to enhanced carbon retention in some of the planet’s most efficient natural carbon sinks.

Wetland carbon storage (so-called “blue carbon”) is facilitated by wetlands’ waterlogged, oxygen-poor soils, which slow decomposition and allow organic material to accumulate over time. Scientists know that when wetlands are drained or degraded, stored carbon can be released into the atmosphere as carbon dioxide. Less well understood is how biological interactions within these habitats shape carbon dynamics. The new study adds to a growing body of evidence showing that animals—particularly apex predators—can influence vegetation, soils, sediment flows, and nutrient cycles at scales large enough to affect the planet’s carbon budget.

“What we found was a positive correlation between alligator abundance and carbon sequestration in specific habitats,” said Christopher Murray, an ecologist at Southeastern Louisiana University and lead author of the study. “Where we have more alligators, from small populations to much larger populations, we actually see higher carbon sequestration.”

Across the alligator’s native range, wetlands stored an average of 0.16 gram more carbon per square centimeter in the top 10 centimeters of soil when alligators were present.

Murray and his colleagues at Southeastern and the Louisiana Universities Marine Consortium analyzed soil carbon data from the Smithsonian’s Coastal Carbon Network. From that database, the team selected 649 continuous soil cores from tidally influenced wetlands in 13 states. They compared those carbon measurements with data on alligator presence, density, and nesting patterns assembled from state wildlife agencies and long-running monitoring programs.

Across the alligator’s native range, wetlands stored an average of 0.16 gram more carbon per square centimeter in the top 10 centimeters of soil when alligators were present. That surface layer reflects relatively recent carbon accumulation over roughly the past 6 decades. This period overlaps with the recovery of alligator populations following the Endangered Species Preservation Act of 1966.

The researchers attribute the observed patterns to a combination of physical ecosystem engineering and trophic cascades, or actions by predators that reverberate through multiple layers of a food web. As apex predators, alligators may suppress herbivore populations that otherwise damage vegetation and disturb soils, potentially allowing denser plant growth and greater carbon burial. Alligators also modify wetland landscapes directly. By digging dens, carving channels, and creating small ponds, they reshape hydrology, redistribute sediments and nutrients, and create localized microhabitats where organic carbon can accumulate and persist.

Trophic Effects

At a continental scale—spanning a wide range of coastal wetland types across multiple states—the study found no statistically significant difference in carbon storage between sites with and without alligators. The authors suggest that this reflects substantial ecological variability across regions, including differences in vegetation, geomorphology, hydrology, and food web structure, which can mask the influence of any single predator species when ecosystems are analyzed collectively.

An American alligator rests on a fallen tree. Research suggests that wetlands within the alligator’s native range store more carbon in surface soils when alligators are present. Credit: Emil Siekkinen

“Originally, I was surprised by that finding,” said Murray. The team’s original hypothesis predicted higher carbon sequestration wherever alligators were present, consistent with trophic cascade theory. The absence of a clear continental-scale signal, Murray said, made it obvious to him, “later on, that there’s a different apex predator that is working in those habitats.”

When the analysis was narrowed to the alligator’s native range, thereby reducing ecological variability, the pattern became clearer. At these regional scales, wetlands with alligators consistently stored more carbon, suggesting that in ecosystems where they occupy the top trophic position, alligators may exert a detectable influence on wetland carbon dynamics.

“Apex predators like crocodilians have a critical role in the function of our world.”

“This study is important because it links an apex predator directly to wetland soil carbon stocks, moving beyond theory to show that food web structure can shape carbon outcomes at ecosystem scales,” marine ecologist and Blue Carbon Lab director Peter Macreadie, who was not involved in the study, wrote in an email. “It also challenges prevailing blue carbon approaches by showing that long-term carbon storage depends not only on vegetation and sediments, but on maintaining intact trophic interactions.”

Such trophic effects help explain how sea otters maintain kelp forests by controlling sea urchins and why wolves have been linked to forest regeneration through changes in large herbivore behavior. The alligator study suggests that similar processes may operate in coastal wetlands, where predator presence supports vegetation growth, soil stability, and carbon retention.

The study does not establish causation, and Murray emphasized that long-term exclusion experiments would be needed to directly test how changes in alligator populations affect carbon accumulation over time. Even so, the findings suggest that predator recovery may have consequences for the climate that are rarely considered in conservation planning. Murray said that the implications of this work extend beyond carbon accounting, however. “Apex predators like crocodilians have a critical role in the function of our world,” he said. “And they should be respected rather than feared.”

—Emil Siekkinen, Science Writer

Citation: Siekkinen, E. (2026), Alligators may boost carbon storage in coastal wetlands, Eos, 107, https://doi.org/10.1029/2026EO260038. Published on 29 January 2026. Text © 2026. The authors. CC BY-NC-ND 3.0

Insights for Making Quick Clay Landslides Less Quick

Thu, 01/29/2026 - 14:17

In countries of the far north, a particular kind of natural disaster can strike almost without warning. Quick clay landslides, in which previously solid soil suddenly liquefies, can carry away houses and farms and bury towns and roads. The slides occur when salts leach out from clay soils that were previously beneath sea level, eventually bringing the soils’ stability beneath a critical threshold and making them vulnerable to potential triggering events.

“If we can understand how these salts are doing it, maybe we can find something else that does the same thing.”

Striking examples in Norway include buildings sliding sideways into the sea near the northern town of Alta in 2020 and the Verdal landslide in 1893, in which 3 square kilometers of land broke loose in the central Norwegian municipality, killing 116 people and burying 105 farms. Quick clay (often called sensitive clay in North America) is also found in Alaska, as well as Canada, Finland, Russia, and Sweden, where governments often attempt to stabilize at-risk soils. Doing so can be expensive and environmentally harmful, leading researchers to seek better ways of making quick clay safe again.

In new research published in the Journal of Colloid and Interface Science, Norwegian researchers dove down to the microscopic scale to provide new insights into how different kinds of salts contribute to the mechanical strength of quick clay. The findings could reveal novel ways to make at-risk soils safe from slides, said study coauthor Astrid de Wijn, a materials scientist at the Norwegian University of Science and Technology (NTNU).

“If we can understand how these salts are doing it, maybe we can find something else that does the same thing,” she said.

For Want of Salt

The key statistic for quick clay risk is the marine limit—the line dividing soils that were previously below sea level from those that remained above it. In high-latitude countries like Norway, melting glaciers at the end of the last ice age, around 10,000 years ago, caused a process of unburdening and uplift called isostatic rebound that brought some previously submerged areas above water. The marine limit varies from place to place but can be more than 200 meters above current sea levels in the south of Norway and includes significant portions of the country.

The soil “will behave like sort of a sour cream. It just pours out of the landslide crater.”

Soils beneath the marine limit were infused with salts from the sea, which they’ve gradually lost over time from groundwater leaching. Those salt ions act as electrochemical binders between clay molecules, helping strengthen them, said Jean-Sébastien L’Heureux, a geotechnical engineer and technical expert on quick clay at the Norwegian Geotechnical Institute who was not involved with the research.

Without the salts to hold them, the microscopic particles of clay look more like a house of cards, stacked haphazardly with nothing binding them together. It is in this state that regular clay becomes quick clay, where even small perturbations like minor earthquakes or construction projects can cause devastating landslides. Previously solid soil “will behave like sort of a sour cream,” L’Heureux said. “It just pours out of the landslide crater.”

The main way to prevent such catastrophes is to stabilize the soil, a process that to date has typically involved injecting lime and cement to act as a binder. The technique is effective but environmentally unfriendly because of the large amounts of carbon dioxide (CO2) it creates. Coming up with an equally effective, more sustainable method is the goal of the Sustainable Stable Ground (SSG) project run by NTNU, which de Wijn and her coauthor, NTNU chemist Ge Li, are part of.

Using molecular dynamics simulations that re-create how clay molecules act at the nanoscale, the two researchers were able to compare how different salt cations affected the clay’s strength. The key difference was between divalent cations like magnesium (Mg2+) and calcium (Ca2+), carried by salts such as magnesium chloride (MgCl2) and calcium chloride (CaCl2), and monovalent ones like sodium (Na+) and potassium (K+), carried by sodium chloride (NaCl) and potassium chloride (KCl), Li said. Divalent cations enhance interactions between clay particles to a greater extent and stick out more, increasing friction. That means they enhance clay strength more than monovalent cations do and could offer a blueprint for future chemical stabilizers in quick clay.

In Search of Better Solutions

Finding a truly effective, affordable, and sustainable means of stabilizing quick clay will likely take some time, however. Priscilla Paniagua, a geotechnical engineer at the Norwegian Geotechnical Institute not affiliated with the paper, noted that simply adding more salt, as some projects have attempted to do, is unlikely to be effective, as current technology makes it difficult to scale. What’s more, the salt will simply leach out from the soils again, Li noted.

Some teams have proposed using materials like biochar or ash to stabilize soils, approaches that work well in the lab but have yet to be scaled up, Paniagua said. Another issue is that some proposed stabilization methods would increase only the remolded strength of quick clay, or its strength after it has liquefied and begun moving.

“It means that it won’t be quick [clay], but…you’re not increasing the full stability of the slope,” L’Heureux said. Such approaches would mitigate the impact of a quick clay landslide but wouldn’t prevent it from occurring.

Though challenges remain, Li and de Wijn remain hopeful that a better solution for quick clay is possible. Li said their modeling work is informing small-scale lab experiments testing how various materials affect soil strength. New proposals for stabilizers include polymers that enhance clay binding and even CO2 injected into the soil to help lime solidify, de Wijn said.

Today, better maps of quick clay landslide risk give local governments and developers more information about where it’s safe to build and where it isn’t. But with many soils destabilized, scientists note, the risk of landslides remains.

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2026), Insights for making quick clay landslides less quick, Eos, 107, https://doi.org/10.1029/2026EO260040. Published on 29 January 2026. Text © 2026. The authors. CC BY-NC-ND 3.0

As Some Soils Warm, Microbes Stockpile Essential Nutrients

Wed, 01/28/2026 - 14:07

As high-latitude soils warm, microbes in the soil change how they handle nutrients like nitrogen. Normally, these microbes are nitrogen recyclers, pulling it from the soil and turning it into inorganic forms—like ammonium and nitrates—that plants can absorb. But a new study published in Global Change Biology suggests that with rising temperatures, microbes are changing their strategy. They take up more nitrogen for themselves while reducing the amount they release back into the environment. This change alters the flow of nitrogen through the ecosystem, potentially slowing vegetation growth and affecting the rate at which our planet warms.

These findings come from experiments carried out in subarctic grasslands near Hveragerði, Iceland. In 2008, earthquakes rerouted groundwater in an area that had been warmed by geothermal gradients, creating patches of soil heated between 0.5°C and 40°C above normal temperatures. The event turned the region into a natural laboratory where researchers could study how ecosystems respond to long-term warming under natural conditions.

Earlier research in this location had already shown that in warming soils, microbes become highly active while plants are dormant. As a result, nitrogen-containing compounds released into the soil by the microbes were lost, either by leaching into groundwater or by escaping into the atmosphere as the potent greenhouse gas nitrous oxide.

An abandoned greenhouse near the experimental sites in Iceland serves as a reminder that climate change is having an especially strong effect on high-latitude soils. Credit: Sara Marañón Jiménez

In this work, scientists added nitrogen-15 to the soil, which they could track to determine how much the plants had used up and what they did with it. Researchers found that after the initial nutrient loss, microbes became more conservative in their handling of nitrogen, recycling nitrogen internally rather than absorbing more from the ground. At the same time, microbes stopped releasing ammonium, a nitrogen-rich by-product of their normal metabolism that is usable by plants—the microbial equivalent of urine, said study coauthor Sara Marañón Jiménez, a soil scientist at the Centre for Ecological Research and Forestry Applications in Spain.

Nitrogen Heist

This change in nitrogen cycling has important consequences for the whole ecosystem. On the one hand, it has a positive effect because it prevents further nitrogen loss.

“The study shows that nitrogen is not released as inorganic nitrogen, but it seems to go directly in an organic loop,” said Sara Hallin, a soil microbiologist at the Swedish University of Agricultural Sciences in Uppsala who was not involved in the study. “You could say that it’s a positive aspect, and so it’s more beneficial for the ecosystem if that nitrogen is sort of retained.”

“If microorganisms start immobilizing nitrogen, it could lead to competition between microbes and plants.”

On the other hand, microbes’ nutrient-hoarding behavior might reduce nitrogen availability for plants. “There’s a delicate feedback between plants that take nitrogen, make photosynthesis, and put carbon in the soil as organic matter and microorganisms that take this organic matter, recycle it, and release nitrogen in forms the plants can use,” Marañón Jiménez said. “If microorganisms start immobilizing nitrogen, it could lead to competition between microbes and plants.”

The team is now working on a study to determine what exactly happens to soil at the very early stage of warming, before nutrients have been lost. “This way we hope to recover the first chapters, to see what we’ve been missing,” Marañón Jiménez said.

To this end, they transplanted bits of normal soils into heated areas to study the process in detail from the very beginning. “Soils exposed to [soil] temperature increases showed the same nutrient loss after 5 years [as] after 10 years,” Marañón Jiménez said, suggesting that most of the nutrient loss occurs early on.

A Greenhouse Time Bomb

Climate models may be underestimating how the loss of nitrogen and carbon from cold soils is contributing to global warming, researchers said. Disruptions to nutrient cycling at these latitudes could represent a previously overlooked source of greenhouse gas emissions.

Arctic soils store massive amounts of carbon, built up over thousands of years from plant material that microbes cannot fully break down. This partially decomposed organic matter accumulates, forming one of the largest carbon reservoirs on Earth. As temperatures rise, scientists expect microbes to become more active, accelerating decomposition and releasing much of this stored carbon into the atmosphere as carbon dioxide.

“As biomass is lost from the microbial mass, that means there’s less storage capacity for carbon and nitrogen in the soil, leading to poorer soils where plants can’t grow as well.”

Researchers had hoped warmer temperatures would allow plants to grow more vigorously, absorbing some of the extra carbon released by Arctic soils.

The new findings call this idea into question. “It’s a chain reaction,” Marañón Jiménez explained. “As biomass is lost from the microbial mass, that means there’s less storage capacity for carbon and nitrogen in the soil, leading to poorer soils where plants can’t grow as well, and plants cannot compensate emissions by absorbing more carbon.”

Studying these geothermally heated soils could yield confusing results, though. “It’s not really the way global warming works,” Hallin said. Global warming includes increases in air temperature, she explained, whereas the plants in the current study had only their root system in a warmer climate, not their aboveground shoot system. “That could potentially cause some effects [the researchers] are not accounting for,” she said.

Finally, the authors of the new study also warn that not all soils have the same response to warming. The Icelandic soils in this study are volcanic and rich in minerals, unlike the organic peat soils that dominate many Arctic regions. Deep peatlands in Scandinavia and northern Russia store vast amounts of carbon and may behave differently, highlighting the need for similar long-term studies across a wider range of Arctic landscapes.

—Javier Barbuzano (@javibar.bsky.social), Science Writer

Citation: Barbuzano, J. (2026), As some soils warm, microbes stockpile essential nutrients, Eos, 107, https://doi.org/10.1029/2026EO260043. Published on 28 January 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Which Countries Are Paying the Highest Price for Particulate Air Pollution?

Wed, 01/28/2026 - 14:06
Source: GeoHealth

Polluted air causes an estimated 7 million deaths worldwide each year, according to the World Health Organization. Much of the mortality comes from PM2.5, particulate pollution smaller than 2.5 micrometers in diameter that can enter the lungs and bloodstream and cause respiratory and cardiovascular problems. In addition to particles emitted directly into the atmosphere, ammonia (NH3), nitrogen oxides (NOX), and sulfur dioxide (SO2), which are emitted by factories, ships, cars, and power plants, are all precursors that can contribute to the formation of PM2.5. The effects of particulate pollution are not evenly distributed, however.

Oztaner et al. model the consequences of air pollution across the Northern Hemisphere by region, offering a more granular look at where targeted mitigation policies could be the most beneficial. Using the multiphase adjoint model of EPA’s Community Multiscale Air Quality (CMAQ) modeling platform, the authors assessed the benefits of mitigating various pollutants from the perspective of both lives and money saved. Monetary values of air pollution impacts were calculated using a well-established method employed by international agencies, although the method introduces ethical concerns because it assigns values to lives partly based on different countries’ per capita gross domestic products (GDP).

Overall, they found that a 10% reduction in all modeled emissions could save 513,700 lives and $1.2 trillion each year in the Northern Hemisphere.

The largest mortality reductions came from China and India, where cutting emissions would save 184,000 and 124,000 lives, respectively, each year. The largest cost savings were found in China, followed by Europe and North America. Health benefits also varied by type of emissions and sector. NH3 is relatively more harmful in China, whereas NOX is relatively more harmful in Europe. Across the Northern Hemisphere, the agricultural sector contributes most to particulate and precursor pollution, with a 10% reduction in agriculture-related emissions projected to save 95,000 lives and an estimated $290 billion. This is followed by the residential and industrial sectors.

The authors note that caution is warranted when comparing results across similar studies, in part because the link between pollutant concentrations and health outcomes is not always linear and in part because different regions may have different methodologies when accounting for emissions by sector. Also, their study focuses only on PM2.5-related mortality and does not consider other pollutants, such as ozone. Overall, they suggest their work offers a meaningful reference for comparing the effects of different pollutant mitigation strategies in the Northern Hemisphere. (GeoHealth, https://doi.org/10.1029/2025GH001533, 2026)

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2026), Which countries are paying the highest price for particulate air pollution?, Eos, 107, https://doi.org/10.1029/2026EO260026. Published on 28 January 2026. Text © 2026. AGU. CC BY-NC-ND 3.0

Wildfire Smoke Linked to 17,000 Strokes Annually in the United States

Tue, 01/27/2026 - 15:25

Smoke from wildfires may be responsible for 17,000 strokes each year in the United States, new research suggests.

The study, published in European Heart Journal, examined various sources of particulate matter smaller than 2.5 micrometers in diameter (about 30 times smaller than the width of a human hair). Also known as PM2.5, such particles are so small that they can be inhaled and enter the bloodstream, where they have been linked to an array of health effects, including decreased lung function, cardiovascular diseases, and even neurological disorders. But the new study seems to indicate that PM2.5 from wildfires is particularly harmful.

“The longer you’re exposed to smoke, the greater your stroke risk.”

Scientists examined a cohort of about 25 million people over the age of 65 who were covered by Medicare, a federal health insurance program. Between 2007 and 2018, about 2.9 million of those people experienced a stroke. The researchers calculated the average amount of wildfire smoke, as well as nonsmoke PM2.5, that each study participant was exposed to over the course of each year on the basis of participants’ zip codes.

After 1, 2, or 3 years of exposure to nonsmoke PM2.5, the participants’ risk of stroke didn’t change much.

“But for smoke, this picture is very different,” said Yang Liu, a health and environmental scientist at Emory University and corresponding author of the paper. “It’s like you are seeing some kind of a dose-response effect: The longer you’re exposed to smoke, the greater your stroke risk.”

More specifically, the study found that an increase of 1 microgram per cubic meter in the average concentration of wildfire smoke was associated with a 1.3% increase in stroke risk. Researchers found that Medicaid-eligible individuals (those with limited income and resources who qualify for the program) were especially vulnerable to the effects of wildfire smoke.
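That per-unit association can be illustrated with a back-of-the-envelope calculation. The sketch below assumes a log-linear dose-response model, in which each additional microgram per cubic meter multiplies risk by the same factor; the 1.3% figure comes from the study, while the functional form and the example exposure are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: assumes a log-linear dose-response in which each
# additional 1 µg/m³ of long-term wildfire smoke PM2.5 multiplies stroke
# risk by 1.013 (the 1.3% per-unit increase reported in the study).

def relative_stroke_risk(delta_pm25: float, pct_per_unit: float = 1.3) -> float:
    """Relative risk after `delta_pm25` µg/m³ more average smoke exposure."""
    return (1 + pct_per_unit / 100) ** delta_pm25

# A hypothetical smoky period that raises average smoke PM2.5 by 3 µg/m³:
increase = (relative_stroke_risk(3.0) - 1) * 100
print(f"{increase:.1f}% higher risk")  # prints "4.0% higher risk"
```

Under this assumed form, exposure increases compound multiplicatively rather than adding linearly, consistent with the dose-response pattern Liu describes.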

Unique Harms of Wildfire Smoke

The researchers input air quality data from several sources, including satellites, ground-based air monitors, and low-cost sensors such as PurpleAir devices, into a machine learning framework. The framework was used to estimate the daily wildfire smoke PM2.5 and nonsmoke PM2.5 concentrations across the contiguous United States at a 1-kilometer resolution. The team then used this information to calculate the average exposure rates within each zip code over 1, 2, and 3 years.
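The aggregation step described above (daily modeled concentrations averaged into long-term exposures per zip code) can be sketched roughly as follows. The zip codes and daily values below are invented for illustration; the study itself used model estimates at 1-kilometer resolution averaged over 1-, 2-, and 3-year windows.

```python
# Hypothetical sketch of the exposure-averaging step: group daily smoke
# PM2.5 estimates by zip code, then take the long-term mean per zip code.
from collections import defaultdict
from statistics import mean

# (zip code, daily smoke PM2.5 in µg/m³) records; values are invented
daily_records = [
    ("80301", 0.4), ("80301", 2.1), ("80301", 0.9),
    ("97201", 5.2), ("97201", 7.8),
]

by_zip = defaultdict(list)
for zip_code, pm25 in daily_records:
    by_zip[zip_code].append(pm25)

# Average exposure per zip code over the whole record
avg_exposure = {z: mean(vals) for z, vals in by_zip.items()}
print(avg_exposure)
```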

Their model and subsequent analyses of the findings were also designed to control for other factors that could affect stroke risk, including meteorology (extreme heat can increase stroke risk), access to care, Medicaid eligibility, and substance abuse disorders.

Jennifer Stowell, a geohealth scientist at the University of Maryland, said this was an “important” study.

“I really like where this paper has gone because they’ve characterized exposure slightly differently,” she said. “Rather than looking at more acute exposure, they looked at up to 3 years of exposure prior to a stroke. Also, other studies, for the most part, rely on emergency department data. So the fact that this is data in addition to that, from doctors’ offices and all sorts of things, is a big plus.”

The study did not establish the reason for the link between wildfire smoke exposure and stroke risk, but previous studies have suggested that inhaling pollutants can cause oxidative stress that affects the function of the endothelial cells (those lining the blood and lymphatic vessels) and of the cardiovascular system as a whole.

The study’s findings are also in line with previous research: A 2021 study suggested that PM2.5 from wildfires is up to 10 times more harmful than PM2.5 from other sources, such as ambient pollution.

“It all comes down to what [materials] wildfires are burning,” Stowell said. “There is a lot of organic matter, chemicals, and particles that we don’t normally see in air pollution from traffic or from industry that can be emitted during a fire. This is especially true if that fire burns any sort of man-made structures. Then, you start getting some highly toxic, synthetic emissions that we don’t normally breathe.”

Only a Small Part of the Picture

In a world where wildfires are growing both more frequent and more severe, Liu said he hopes a study like this will help guide future research, noting the importance of a large-scale epidemiological study to complement lab-based research.

“Policymakers can look at the disease burden numbers and say, ‘Wow, it may be worthwhile to spend more money on firefighting, or forest management, because it’s a huge disease burden.’”

“I think its real burden is going to be much, much larger than what we show in this paper.”

Liu said he wasn’t at all surprised by his team’s findings because stroke is only one part of the overall picture of how smoke affects overall health.

“I think its real burden is going to be much, much larger than what we show in this paper,” Liu said. In fact, he noted that the study focuses only on the fee-for-service Medicare population and doesn’t account for the more than 40% of the Medicare population enrolled in private insurance.

“So even for the overall Medicare population, or just the elderly population in the U.S., we are underreporting the burden, maybe by half,” he said.

—Emily Gardner (@emfurd.bsky.social), Associate Editor

Citation: Gardner, E. (2026), Wildfire smoke linked to 17,000 strokes annually in the United States, Eos, 107, https://doi.org/10.1029/2026EO260042. Published on 27 January 2026. Text © 2026. AGU. CC BY-NC-ND 3.0

What Americans Lose If Their National Center for Atmospheric Research Is Dismantled

Tue, 01/27/2026 - 14:15

Americans hold few everyday expectations of science, but those expectations are fundamental: We expect the weather forecast to be right, we expect science and technology that allow weather hazards to be anticipated within reason, and we expect public services to protect our lives and livelihoods from such hazards—floods, fires, tornadoes, and hurricanes.

NCAR is not just another research center. It is purpose-built critical infrastructure designed to integrate observations, modeling, supercomputing, and applied research in ways that no single university, agency, or contractor can replicate on its own.

Well, the fulfillment of those expectations is in real doubt now that the Trump administration plans to dismantle the National Science Foundation’s (NSF) National Center for Atmospheric Research (NCAR), a federally funded institution that underpins critical science that Americans rely on. Administration officials have argued that NCAR’s work can simply be redistributed to other institutions without loss. But NCAR is not just another research center. It is purpose-built critical infrastructure designed to integrate observations, modeling, supercomputing, and applied research in ways that no single university, agency, or contractor can replicate on its own.

Although Congress rejected the administration’s proposed funding cuts to NSF, the most recent spending bill did not include explicit language protecting NCAR as a unified entity.

As a result, the center remains vulnerable—not through outright defunding, but through fragmentation. The administration could try to cut the interagency contracts that NCAR relies on to fund its staff, lay off personnel, and relocate critical capabilities. NSF has already outlined plans to restructure NCAR, including moving its supercomputer to another site and transferring or divesting research aircraft it operates. Such actions would hollow out the institution itself, breaking apart integrated teams, disrupting continuity in projects, and weakening the unique collaborative model at NCAR that accelerates scientific progress in weather, water, climate, and space weather.

This distinction matters. NCAR’s value does not lie solely in the science it produces, but in how that science is organized, sustained, and shared across the nation.

The following are five of the many ways Americans will lose the benefits of scientific research if plans to dismantle NCAR unfold, and two ways we can work to prevent it.

1. Air Travelers Will Lose Protection

Every day, millions of Americans board airplanes expecting to arrive safely at their destinations. What most passengers never see is the science working behind the scenes to keep flights safe through better understanding of atmospheric conditions such as turbulence and microburst winds.

Turbulence alone is the leading cause of injuries on U.S. commercial flights and cargo operations, and NCAR research has played a central role in reducing that risk by improving how turbulence is detected, predicted, and avoided. NCAR scientists helped develop advanced forecasting techniques that allow pilots and dispatchers to reroute aircraft away from dangerous air currents before passengers are ever put at risk.

In addition to improving safety, NCAR research has reduced the roughly $100 million financial strain that severe turbulence places on the U.S. aviation system every year through aircraft damage, inspections, medical costs, and delays.

NCAR’s contributions to aviation safety extend well beyond turbulence. In the 1970s and 1980s, NCAR scientists led research that identified and explained microbursts, a poorly understood weather phenomenon consisting of powerful downdraft winds produced by thunderstorms. Microbursts had caused multiple fatal airline crashes during takeoff and landing, and NCAR findings convinced the Federal Aviation Administration (FAA) and international aviation authorities to develop radar warning systems to detect these threats. Since these tools have been deployed, fatal U.S. airline crashes caused by microbursts have effectively been eliminated.

Dismantling NCAR and moving this work elsewhere would break the integrated system that makes aviation safety research effective in the first place. NCAR uniquely brings together long-term observational data, advanced modeling, specialized instrumentation, and direct operational partnerships with agencies like the FAA under one roof. Fragmenting that capacity across multiple institutions would disrupt decades of trusted, public service relationships with the aviation community, making it harder and slower to translate research into real-world protections for pilots and passengers. With millions of people in the sky every day, this is not a risk we should take.

2. Food Security and the U.S. Agricultural Economy Will Be Put at Risk

Agriculture contributes hundreds of billions of dollars annually to the U.S. economy, and food security remains a national priority, making NCAR’s research crucial to this weather-sensitive sector. Drought, heat waves, and floods are recurring stresses that affect what crops farmers can grow, as well as food prices for consumers.

NCAR’s long-standing collaborations, integrated modeling and computing capacity, and role as a trusted public service institution are what allow farmers to rely on consistent, decision-ready information year after year.

NCAR research is directly relevant to food security. For example, NCAR scientists are working in conjunction with universities in Kansas and Nebraska and the U.S. Department of Agriculture to develop CropSmart, a next-generation system that aggregates weather forecasts, crop data, soil conditions, and other inputs into actionable, decision-ready information for farmers, agribusinesses, and agricultural officials. Early projections from CropSmart suggest that if advanced decision support systems like this were adopted on even half of irrigated farms in a state like Nebraska, farmers could save up to 1 billion cubic meters of water and $100 million in irrigation energy costs annually while also cutting about a million tons of greenhouse gas emissions per year.

If NCAR is broken up, we lose this economic opportunity and the myriad ways it supports U.S. agriculture. NCAR’s long-standing collaborations, integrated modeling and computing capacity, and role as a trusted public service institution are what allow farmers to rely on consistent, decision-ready information year after year.

All the agricultural tools housed, supported, or innovated by NCAR would be put at risk, leaving farmers with fewer early warnings, less reliable guidance, and greater exposure to weather extremes. These losses would translate to the food on our tables having a higher price tag, which inevitably increases food insecurity, already a significant problem in the United States.

3. U.S. National Security and Military Readiness Will Be Weakened

The U.S. military depends on weather and climate intelligence to operate safely, effectively, and strategically. From flight operations and naval deployments to training exercises and base infrastructure, weather conditions shape nearly every aspect of defense readiness. When forecasts are wrong or incomplete, missions can be delayed, equipment can be damaged, and personnel and our national defense are put at risk.

Accurate environmental intelligence reduces risk, lowers costs, and strengthens national security.

NCAR’s research and operational tools provide the environmental intelligence that defense planners, operators, and test authorities rely on to keep us safe. Accurate, NCAR-enhanced forecasts have saved the U.S. Army millions of dollars by reducing weather-related test cancellations and avoiding needless mobilization costs. NCAR weather forecasting tools have been used for defense-related purposes, including anti-terrorism support at the Olympic Games, protection of the Pentagon, support for firefighters, and analysis of exposure of our military personnel to toxins.

The strategic value of this work is reflected in the breadth of defense agencies that rely on NCAR today. NCAR maintains active partnerships and contracts with the Air Force, the Army Corps of Engineers, the National Ground Intelligence Center, the Defense Threat Reduction Agency, and the Army Test and Evaluation Command. These relationships exist for a simple reason: Accurate environmental intelligence reduces risk, lowers costs, and strengthens national security.

Dismantling NCAR is a national security threat. Defense agencies rely on specialized, mission-critical environmental products and expertise that are developed, maintained, and refined through streamlined, long-standing relationships with NCAR scientists. These capabilities cannot be replaced quickly without disruption, and even short gaps in trusted weather and environmental intelligence would increase operational risk for current and future missions. Protecting NCAR is an investment in military readiness, operational efficiency, and the safety of those who serve.

4. Americans in Disaster-Prone Areas Will Have Less Time to Prepare for, and Evacuate from, Extreme Weather

Since 1980, weather hazards have cost the United States thousands of lives and more than $3.1 trillion. In 2025 alone, disasters cost nearly 300 lives and $115 billion in damages to homes and businesses. And these weather hazards are expected to worsen because of our changing climate.

A 2010 study from the National Academies of Sciences, Engineering, and Medicine found that public weather forecasts and warnings deliver roughly $31.5 billion in annual economic benefits in the United States. These gains in preparedness and economic benefit would not have been possible without sustained scientific research from NCAR.

Hurricane forecasting provides a clear example of how NCAR research has secured the safety and mitigated the economic losses of residents and businesses. Since 1980, hurricanes have caused nearly $3 trillion in damages in the United States.

For decades, NCAR scientists have worked to develop and refine instruments and methods to collect real-time hurricane observations and improve our understanding of storm behavior. By the 1980s, data and modeling advances emerging from NCAR research were being used operationally by NOAA, contributing to a roughly 20%–30% improvement in the accuracy of hurricane track forecasts compared to earlier decades.

NCAR continues to enhance forecasting capabilities for hurricanes, as well as their associated flood risks, through the center’s sophisticated flood risk model. Today, the model is used operationally by the National Weather Service in more than 3,800 locations serving 3 million people.

If NCAR’s role in advancing forecast science is weakened by dismantling it, these gains in disaster preparedness will be put in jeopardy. Forecast improvements do not happen automatically; they require sustained research, coordination, and testing. If NCAR’s research capabilities to develop and improve weather forecasting disappear, the United States will face a major public safety risk.

5. Americans Lose a Unique Source of National Pride

NCAR was never designed to serve a select few. It was built with public investment to serve the nation as a whole.

NCAR was never designed to serve a select few. It was built with public investment to serve the nation as a whole. From its founding, NCAR embraced the idea that understanding the Earth system—its atmosphere, oceans, land, and ice—requires collaboration across institutions, disciplines, and generations, not isolated efforts working in parallel.

That collaborative model is embedded in how NCAR operates. It is stewarded by a consortium of more than 120 colleges and universities across the United States, representing a wide range of regions, institutional types, and scientific strengths. This structure allows knowledge, tools, and expertise to flow across the country, connecting large research universities with smaller institutions, federal agencies with academic scientists, and fundamental research with real-world applications for the public and private sectors. The result is a shared national capability that no single institution could sustain on its own.

There is something deeply American in that collaborative vision, a belief that publicly funded science should be openly shared, collectively advanced, and used to strengthen the common good. NCAR represents what is possible when a nation chooses to invest in science as a public good.

For more than 6 decades, NCAR has shown that open, collaborative science can save lives, support economic resilience and national defense, and expand opportunity across generations. Preserving and celebrating NCAR means choosing a future where shared knowledge, innovation, and public-serving science continue to thrive.

What We Must Do Now

This moment demands more than concern—it requires action.

First, NSF is requesting feedback regarding its intent to restructure NCAR. Feedback “will be used to inform NSF’s future actions with respect to the components of NCAR and to ensure the products, services, and tools provided in the future align with the needs and expectations of stakeholders to the extent practicable.”

Respond, and inform NSF about the value and benefits of all of NCAR, not only its constituent parts. Readers can submit comments through 13 March.

Second, Congress ultimately holds the authority to fund and protect NCAR, and lawmakers need to hear clearly that dismantling it would put the health, safety, and financial stability of Americans at risk. By October 2026, Congress will address the funding of NSF for next year; we must actively and consistently reach out to our congressional representatives now and throughout the year.

Readers can contact their members of Congress through easy-to-use resources provided by AGU and the Union of Concerned Scientists.

Author Information

Carlos Martinez (cmartinez@ucs.org) is a senior climate scientist with the Climate & Energy Program at the Union of Concerned Scientists.

Citation: Martinez, C. (2026), What Americans lose if their National Center for Atmospheric Research is dismantled, Eos, 107, https://doi.org/10.1029/2026EO260041. Published on 27 January 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0

Rocks Formed by Microbes Absorb Carbon Day and Night

Tue, 01/27/2026 - 14:14

On every continent, unassuming rocks covered in a thin, slimy layer of microbes pull carbon from the air and deposit it as solid calcium carbonate rock. These are microbialites, rocks formed by communities of microorganisms that absorb nutrients from the environment and precipitate solid minerals. 

“We’re going to learn some critical information through this work that can add to our understanding of carbon cycling and carbon capture.”

A new study of South African coastal microbialites, published in Nature Communications, shows these microbial communities are taking up carbon at surprisingly high rates—even at night, when scientists hypothesized that uptake rates would fall. 

The rates discovered by the research team are “astonishing,” said Francesco Ricci, a microbiologist at Monash University in Australia who studies microbialites but was not involved in the new study. Ricci said the carbon-precipitating rates of the South African microbialites show that the systems are “extremely efficient” at creating geologically stable forms of carbon.

The study also related those rates to the genetic makeup of the microbial communities, shedding light on how the microbes there work together to pull carbon from the air.

Microbes that rely on photosynthesis live primarily in the top layer of a microbialite, while microbes with metabolisms that don’t require sunlight or oxygen reside deeper within. Credit: Thomas Bornman

“We’re going to learn some critical information through this work that can add to our understanding of carbon cycling and carbon capture,” said Rachel Sipler, a marine biogeochemist at the Bigelow Laboratory for Ocean Sciences in Maine. Sipler and her collaborator, Rosemary Dorrington, a marine biologist at Rhodes University in South Africa, led the new study.

Measuring Microbialites

Over several years and many visits to microbialite systems in coastal South Africa, Sipler and the research team measured different isotopes of carbon and nitrogen to study the microbial communities’ metabolisms and growth rates. They found that the structures grew almost 5 centimeters (2 inches) vertically each year, which translates to about 9–16 kilograms (20–35 pounds) of carbon dioxide sequestered every year per square meter (10.7 square feet) of microbialite. 
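As a rough consistency check on those figures, the reported carbon dioxide mass can be converted to elemental carbon using the ratio of the molar masses of carbon and CO2 (about 12/44). This conversion is not from the paper; it is a simple unit check on the numbers above.

```python
# Rough unit check (not from the study): convert the reported CO2
# sequestration range to elemental carbon via the C/CO2 molar-mass ratio.
CO2_PER_M2_PER_YR = (9.0, 16.0)   # kg CO2 per m² per year, from the article
C_FRACTION = 12.011 / 44.009      # mass fraction of carbon in CO2 (~0.273)

carbon_range = tuple(round(x * C_FRACTION, 1) for x in CO2_PER_M2_PER_YR)
print(carbon_range)  # prints (2.5, 4.4)
```

Under this conversion, each square meter of microbialite would lock away roughly 2.5 to 4.4 kilograms of carbon per year in mineral form.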

Results showed the microbialites absorbed carbon at nearly the same rates at night as they did during the day. Both the nighttime rates and the total amount of carbon precipitated by the system were surprisingly high, Ricci said.

 “Different organisms with different metabolic capacities work together, and they build something amazing.”

The traditional understanding of microbialite systems is that their carbon capture relies mostly on photosynthesis, which requires sunshine, making the high nighttime rate so surprising that Sipler and the team initially thought it was a mistake. “Oh, no, how did we mess up all these experiments,” she remembers thinking. But further analysis confirmed the results.

It makes sense that a community of microbes could work together in this way, Ricci said. During the day, photosynthesis produces organic material that fuels other microbial processes, some of which can be used by other organisms in the community to absorb carbon without light. As a result, carbon precipitation can continue when the Sun isn’t shining.

 “Different organisms with different metabolic capacities work together, and they build something amazing,” Sipler said.

Future Carbon Precipitation

The genetic diversity of the microbial community is key to creating the metabolisms that, together, build up microbialites. In their experiments, the research team also found that they were able to grow “baby microbialites” by taking a representative sample of the microbial community back to the lab. “We can form them in the lab and keep them growing,” Sipler said.

The findings could inform future carbon sequestration efforts: Because carbon is so concentrated in microbialites, microbialite growth is a more efficient way to capture carbon than other natural carbon sequestration processes, such as planting trees. And the carbon in a microbialite exists in a stable mineral form that can be more durable across time, Sipler said.

Additional microbialite research could uncover new metabolic pathways that may, for example, process hydrogen or capture carbon in new ways, said Ricci, who owns a pet microbialite (“very low maintenance”). “They are definitely a system to explore more for biotechnological applications.”

Sipler said the next steps for her team will be to continue testing the microbial communities in the lab to determine how the microbialite growth rate may vary under different environmental conditions and to explore how that growth can be optimized. 

“This is an amazing observation that we and others will be building on for a very long time,” she said.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

Citation: van Deelen, G. (2026), Rocks formed by microbes absorb carbon day and night, Eos, 107, https://doi.org/10.1029/2026EO260037. Published on 27 January 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
