Eos: Science News by AGU

Rapidly Increasing Chance of Record-Shattering Heat Extremes

Tue, 08/03/2021 - 12:40

In recent years, heat waves have broken long-standing records by large margins. Europe experienced an extreme heat wave in 2003 that killed more than 70,000 people, and in spring 2020, Siberia saw exceptional temperatures. Now a new study published in Nature Climate Change has found that the probability of extreme record-shattering events is increasing at an alarming rate. These events are unprecedented in the observational record and nearly impossible without climate change. The researchers warned that many places in the world have not yet seen anything close to the intensity of heat waves now possible but should expect them in the coming decades.

Sudden Record-Shattering Events

As the climate warms, you would expect heat waves to break previous records, but not necessarily by large margins. But when Erich Fischer at the Institute for Atmospheric and Climate Science at ETH Zürich in Switzerland and his colleagues looked at large climate model ensembles, they found that simulated events in the near future broke historic records by very large margins. Somewhat surprisingly, the simulations often did not show the intensity of heat waves steadily increasing. Instead, the simulations showed stagnant decades with unbroken or marginally broken records, followed by a sudden record-shattering event.

Image of the Pacific Northwest’s heat forecast on 25 June 2021. Credit: Felton Davis, CC BY 2.0

For example, a heat wave over central North America simulated by the models hit temperatures 18°C higher than the summer mean temperature for 1986–2005. The hottest week of the simulated event broke previous average weekly temperature highs in the simulation by more than 5 standard deviations, smashing records by massive margins. That simulated event is also remarkably similar to the extreme heat event in June that swept through the Pacific Northwest of the United States and Canada. During that heat wave, temperature anomalies were 16°C–20°C higher than normal maximum temperatures for the time of year, according to a new study produced by Sarah Kew and colleagues for World Weather Attribution.

Kew, who is at the Royal Netherlands Meteorological Institute and was not involved in the research with Fischer and colleagues, said that the Nature Climate Change paper is very well timed given recent heat events (which occurred after the paper was written). She added that it is “really uncanny” how similar the simulated central North America event is to the Pacific Northwest heat wave. “[The authors] said that this kind of thing can happen in the near future; well, it just did happen. It acts as a very strong warning of what we can expect,” said Kew.

In high-emissions scenarios, weeklong heat extremes that break records by 3 or more standard deviations are 2–7 times more probable in 2021–2050 and 3–21 times more probable in 2051–2080, compared with the past 3 decades, the researchers found. Their analysis suggested that an event as extreme as the one simulated over central North America is expected to occur once every 2 decades in the northern midlatitudes after 2050. Overall, record-shattering events—those that break records by 3 or more standard deviations—are expected to occur about every 6–37 years somewhere in the northern midlatitudes.
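The standard deviation framing used in the study can be made concrete with a small sketch. The temperatures below are invented for illustration (they are not from the paper); the point is simply how a record's margin is expressed in standard deviations of a historical baseline.

```python
import statistics

# Hypothetical weekly mean temperatures (°C) for the hottest week of each
# summer in a 30-year baseline period -- illustrative numbers, not real data.
baseline = [24.1, 23.5, 25.0, 24.3, 23.8, 24.9, 25.2, 23.9, 24.6, 24.0,
            25.1, 24.4, 23.7, 24.8, 25.3, 24.2, 23.6, 24.7, 25.0, 24.5,
            23.9, 24.1, 25.2, 24.6, 23.8, 24.9, 24.3, 25.1, 24.0, 24.4]

sd = statistics.stdev(baseline)      # sample standard deviation of the baseline
previous_record = max(baseline)      # hottest week on "record"

def record_margin_in_sd(new_value):
    """How many baseline standard deviations the new value exceeds the old record by."""
    return (new_value - previous_record) / sd

# A week averaging 28.5°C would shatter this record by several standard
# deviations -- the kind of margin the study calls "record-shattering."
margin = record_margin_in_sd(28.5)
```

Under this definition, an event only counts as record-shattering when its margin over the previous record (not over the mean) exceeds the chosen threshold of 3 standard deviations.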

“Record-shattering heat extremes regularly occur in large ensemble simulations,” Fischer said. “While their probability is small for a specific location, the odds that an event occurs somewhere in the northern extratropics is large and currently rapidly increasing.” He warned that places that have not seen recent increases in heat wave intensity, such as the central and eastern United States, are particularly prone to such events and should expect to see new heat records in the coming decades. He argued that such events need to be taken seriously because their impacts tend to be largest when such temperature extremes first occur, owing to a lack of adaptation and preparedness.

The simulations showed that these record-shattering events are not caused by new climate mechanisms. Instead, there are extreme variations of common heat wave drivers in the months before, such as an unusually warm spring, low rainfall, low soil moisture, and reduced evaporative cooling. This scenario looks exactly like what led to the recent Pacific Northwest heat wave, according to Kew. She cautioned, however, that further investigation is needed before that can be said for certain.

What is more, Fischer told Eos that the probability of record-shattering heat events depends on the warming rate and not on the level of warming. “If we were to stabilize temperatures at 1.5°, 2°, or 2.5°C, their probability would quickly decline again after a few decades,” he explained.

“It is a clear message to cut emissions,” Kew said.

—Michael Allen (michael_h_allen@hotmail.com), Science Writer

The Great Unconformities?

Tue, 08/03/2021 - 11:30

The Great Unconformity remains one of the most alluring mysteries in the Earth sciences. Widespread across North America and present in other parts of the world, the knife-sharp contact between Precambrian crystalline rocks and overlying Phanerozoic sediments can represent a billion years (a quarter of Earth’s history) of missing time.

Precisely because of this missing record, the formation age and mechanism of the Great Unconformity have proven elusive. Formation of the Great Unconformity has been linked to all major tectonic, climatic, and biologic events of the Late Proterozoic–Early Paleozoic, including the assembly and breakup of Rodinia, the Snowball Earth glaciation, and the Cambrian explosion of life.

Thermochronology records the cooling of rocks as they are exhumed to the surface of the Earth and thereby provides one of the few direct proxies for erosion events at geological timescales. Applying thermochronology in “deep time” is challenging, however, because small differences in the kinetics that define how daughter products are retained between the different crystals analyzed can lead to strongly varying ages.

Sturrock et al. [2021] leverage these differences, using apatite (U-Th)/He thermochronology and quantitative thermal-history modeling to constrain the thermal and erosional history of a large tract of the Central Canadian Shield. They convincingly show that the Great Unconformity there formed after 650 million years ago and link its formation to kilometer-scale erosion in response to mantle plume–related uplift.

Previous work, in part by the same group, has shown that erosion leading to formation of the Great Unconformity in other parts of North America (Wyoming, the Colorado Front Range, and the Ozarks) is substantially older, occurring between approximately 850 and 700 million years ago. Thus, there may not be one but several Great Unconformities. This study adds to a growing body of work refuting hypotheses that the Great Unconformity formed in a single worldwide event linked to major Earth crises, such as the Snowball Earth glaciation.

Citation: Sturrock, C. P., Flowers, R. M., & Macdonald, F. A. (2021). The late Great Unconformity of the central Canadian Shield. Geochemistry, Geophysics, Geosystems, 22, e2020GC009567. https://doi.org/10.1029/2020GC009567

—Peter van der Beek, Editor, Geochemistry, Geophysics, Geosystems

In a Twist, a Greek Volcano Ruled by the Sea

Mon, 08/02/2021 - 17:00
Santorini is a collection of five islands about 200 kilometers southeast of the Greek mainland. Credit: NASA/GSFC/METI/ERSDAC/JAROS

For thousands of years, the Greek volcano Santorini has blasted, bubbled, and burned in the Aegean Sea. Now scientists suspect that rising and falling sea levels are behind the volcano’s fiery bursts. The findings reveal a novel connection between the planet’s molten innards and its climate.

Sea levels retreat when the planet grows large ice sheets and glaciers; ice ages have much lower sea levels than interglacial periods.

Researchers from the United Kingdom and Sweden found that these lower sea levels tend to disrupt Santorini’s volcanic slumber. During the past 360,000 years, the volcano, officially known as Thira and historically known as Thera, has erupted more than 200 times. All but three of those eruptions happened during or just following periods of low sea levels.

Since most volcanoes on Earth sit within or near oceans, Santorini’s tale could apply to other volcanoes around the world.

Santorini’s Cliffs

The cliffs on Santorini reveal layers of whitish ash deposits from past volcanic eruptions. The largest white layer in the middle distance is Santorini’s Vourvoulos eruption from 126,000 years ago. Credit: Ralf Gertisser/Keele University

Santorini has had a violent past—explosive eruptions have shattered the volcano into slivers of islands.

The most recent explosive eruption, in the 1600s BCE, sent 100 cubic kilometers of material into the air, 4 times that of the 1883 eruption of Krakatoa. The volcano’s caldera collapsed into the sea and flooded, leaving an 11-kilometer-wide crater. (The cataclysm may have inspired Plato’s story of Atlantis, too.)

Over the past 50 years, geologists have discovered mounting evidence that the comings and goings of ice sheets revved up volcanoes in Iceland, the western United States, France, Germany, and Chile. The ice sheets bore down on Earth’s crust, but when they melted away, the crust decompressed and fractured. Magma shot up the cracks and fueled eruptions.

Sea level, the new paper argues, has the same effect on Earth’s crust. “The only thing that’s different is in one case you have ice, and in the other case you have water,” said Earth scientist Chris Satow from Oxford Brookes University, who led the research.

But finding evidence of sea level’s effect on volcanoes has been much harder—until now. A quirk of Santorini’s landscape gave scientists a unique chance to connect the pieces.

Millions of tourists flock to the volcano’s cliffs overlooking the turquoise bay annually, and Satow and his team did the same—but to sample layers of volcanic ash. Eruptions leave unique chemical fingerprints of iron, silica, potassium, sodium, and other elements buried in ash layers. “Not many other volcanoes have got this amazing record on display for us to see and investigate,” said Satow.

The researchers measured the chemical fingerprints of each ash layer and matched them with layers in marine sediments. Crucially, the marine sediments also contained records of sea level rise and fall over time.

Satow and eight others published the research in the journal Nature Geoscience today.

Stifled By the Sea

A computer model of Santorini showed that reducing sea levels to 40 meters below present-day levels changes the amount of tensile stress in the roof of the volcano’s magma chamber, which sits just 4 kilometers under the surface. Because there is less water pushing down on Earth’s crust, the crust decompresses and allows fractures to form. As sea levels continue to decrease, down to 70 to 80 meters below present-day levels, the crust pulls apart more, allowing fractures to reach the surface and feed eruptions. Credit: Oxford Brookes University
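The unloading described here can be estimated to first order from the weight of the removed water column. This is a back-of-the-envelope sketch, not the study’s stress model; the seawater density is a standard reference value, and the 40-meter drop is the figure discussed above.

```python
# Removing a column of seawater reduces the vertical load on the crust by
# roughly rho * g * delta_h (hydrostatic pressure of the removed column).
RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
G = 9.81                # m/s^2, gravitational acceleration

def unloading_pa(sea_level_drop_m):
    """Pressure reduction (Pa) from a given drop in sea level."""
    return RHO_SEAWATER * G * sea_level_drop_m

# A 40 m drop unloads the crust by roughly 0.4 MPa -- small compared with
# lithostatic pressure at 4 km depth, but evidently enough to matter.
delta_p = unloading_pa(40.0)
```

The takeaway is that even a modest-seeming stress change of a few tenths of a megapascal, applied over the whole seafloor above a shallow magma chamber, can tip the balance toward fracturing.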

The results could explain recent behavior at Santorini. The volcano threatened to erupt as recently as 2011–2012 when new magma flooded the volcano’s shallow magma chamber. “The fact that an eruption did not happen may be due to the sea levels being high,” Satow said.

But major eruptions can still happen; Santorini is one of the world’s Decade Volcanoes, sites identified in light of their history of large, destructive eruptions and proximity to densely populated areas. “The large volumes of magma involved [in explosive eruptions] could by themselves create the required fractures in the crust, even without the help of low sea levels,” Satow said. The massive event that took place in the 1600s BCE, nicknamed the Minoan eruption after the region’s distinct Bronze Age civilization, was one of three eruptions that blew during periods of high sea levels.

Climate change is melting ice sheets and boosting sea levels, but it’s too early to know how that could affect volcanic activity. A study on the volcanic Caribbean island of Montserrat, for instance, proposed that rapid sea level rise could amp up volcanic activity, the opposite of the effect seen at Santorini.

“We need more of these detailed and comprehensive studies to get a complete picture,” said Julie Belo, a scientist at the GEOMAR Helmholtz Centre for Ocean Research Kiel who did not participate in the work.

Next, Satow hopes to investigate greenhouse gas emissions from volcanoes. “It would be really interesting to know if the amount of carbon dioxide that volcanoes worldwide produce is also related to sea level change,” Satow said.

—Jenessa Duncombe (@jrdscience), Staff Writer

Volcanic Tremor and Deformation at Kīlauea

Mon, 08/02/2021 - 13:22

Kīlauea in Hawaii is the best-monitored volcano in the world. The 2018 eruption was the largest in some 200 years, providing researchers with a plethora of new data to understand the volcano’s plumbing and behavior. Two new studies dig into data on volcanic tremor and deformation to better characterize the events leading up to and following the 2018 eruption.

In one study, Soubestre et al. used data from a permanent seismic network and tiltmeter located at Kīlauea’s summit and derived models of tremor source processes to examine how volcanic tremors related to the disappearance of a lava lake and subsidence in Halema‘uma‘u Crater at the beginning and throughout the 2018 eruption. Here the authors used a seismic network covariance matrix approach to enhance coherent signals and cut out noise to detect and locate the volcanic tremor sources.

The team detected three previously unknown tremor sources, including a long-period tremor in the period preceding the eruption that was associated with radiation from a shallow hydrothermal system on the southwest flank of Halema‘uma‘u Crater. The team also picked up two sets of gliding tremor in early and late May. Models show that the first set was linked to the intrusion of a rock piston into the hydrothermal system and the second to changes in the gas content of magma within a dike below the crater affected by a dozen collapse events.

The second study focused on the period following the 2018 eruption. Here Wang et al. used GPS and interferometric synthetic aperture radar data to examine deformation around the caldera associated with the volcano’s known reservoirs—the shallow Halema‘uma‘u reservoir (HMM) and the deeper South Caldera reservoir (SC)—after the eruption ended in August of 2018. They documented inflation on the northwestern side of the caldera and deflation on the southeastern side of the caldera, indicating that the summit magma chambers are hydraulically distinct. The concurrent East Rift Zone (ERZ) inflation indicated dynamic magma transfer between the summit and the ERZ.

The authors presented a new physics-based model that uses differential equations to describe reservoir pressure and magma flux between the volcano’s reservoirs to simulate potential magmatic pathways of connectivity between the reservoirs and the ERZ. They used a dynamic inversion of the postcollapse GPS time series of surface displacement to estimate the conductivity of potential magmatic pathways.
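As a rough illustration of this kind of lumped-parameter approach, the sketch below couples two reservoirs through a single pathway whose flux is proportional to the pressure difference between them. All names and values here are hypothetical; the authors’ actual governing equations, conductivity estimates, and dynamic inversion are in the paper.

```python
# Minimal two-reservoir sketch: flux q between reservoirs is proportional to
# their pressure difference (conductivity k), and each reservoir's pressure
# changes at a rate q / storage, where storage stands in for the product of
# magma compressibility and reservoir volume. Illustrative only.

def simulate(p_hmm, p_sc, k, storage_hmm, storage_sc, dt, steps):
    """Forward Euler integration of dP/dt = q / storage, q = k * (P_other - P)."""
    history = [(p_hmm, p_sc)]
    for _ in range(steps):
        q = k * (p_sc - p_hmm)          # flux from the deeper SC toward HMM
        p_hmm += dt * q / storage_hmm   # HMM pressurizes as magma arrives
        p_sc -= dt * q / storage_sc     # SC depressurizes as magma leaves
        history.append((p_hmm, p_sc))
    return history

# Hypothetical initial pressures (Pa) and storage capacities: the two
# reservoirs relax toward a common pressure over time.
hist = simulate(p_hmm=1.0e6, p_sc=5.0e6, k=1e-2,
                storage_hmm=1e3, storage_sc=2e3, dt=100.0, steps=2000)
```

In an inversion like the one described, the conductivity of each candidate pathway would be treated as a free parameter and adjusted until the simulated surface displacements match the GPS time series.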

The team found that the primary connective pathway in the postcollapse period that best fits the GPS data is a shallow connection between the HMM and the ERZ. The study doesn’t rule out a direct pathway between the SC and ERZ reservoirs but suggests that if it exists, it was significantly less active over the study period.

Together, these studies help to create an increasingly clear picture of the plumbing and processes governing Kīlauea’s activity in 2018. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB021572 and https://doi.org/10.1029/2021JB021803, 2021)

—Kate Wheeling, Science Writer

Brazil’s Antarctic Station Rises from the Ashes

Mon, 08/02/2021 - 13:19

On 15 January 2020, when Brazilian scientists, navy officers, and politicians celebrated the inauguration of the new Comandante Ferraz Antarctic Station in Antarctica, it was like closing a painful chapter in Brazil’s history on the continent.

Almost 8 years earlier, in February 2012, the research facility was destroyed by a fire that claimed the lives of two navy lieutenants, Carlos Alberto Figueiredo and Roberto dos Santos. Located at Admiralty Bay on King George Island, the facility had been operational since 1984 and housed researchers working with PROANTAR (Programa Antártico Brasileiro, the Brazilian Antarctic Program). The country, caught by surprise, received the news of the fire with shock.

The following year, the Brazil Institute of Architects and the Brazilian Navy organized a contest to choose the project for the building that would replace the incinerated station.

The project chosen from more than a hundred proposals from all over the world came from Estúdio 41, a Brazilian architectural office based in Curitiba, the capital of Paraná State. “We put together a multidisciplinary team of about 15 experts in several areas, from wind resistance to geotechnics to thermal insulation, to help us think of how to respond to the harsh environmental conditions in Antarctica. As some of the competing offices had already constructed other research facilities in the continent, we knew winning would be a tough call. So getting it was really exciting,” said architect Emerson Vidigal, a member of Estúdio 41’s team.

The team spent 2 years—from 2013 to 2015—working on the project before China National Electronics Import & Export Corporation, a Chinese construction company, started building the station. “We spent a year on research, looking at similar buildings in Antarctica, and we were lucky to have been able to learn in detail from the Indian research station Bharati. Talking to the engineers of Kaefer, the German construction company that put Bharati together, gave us a deeper understanding of what we were facing. Our partners from the Portuguese engineering office AfaConsult were also crucial in the process, as it was much more an engineering challenge than an architectural one,” Vidigal added.

Bigger and Better

At 4,500 square meters, the new research facility has almost twice the area of the old station and can house 64 people. The steel structure is clad with an exterior of polyurethane and lined with an insulating interior of mineral wool. “Between the external and internal layers there is a 60-centimeter buffer for temperature transition with air at 10°C on average, which helps save energy for heating,” said Vidigal.

The new Comandante Ferraz station took almost 5 years to construct. Credit: Estúdio 41

As the station’s assembly had to take place during the austral summer, when ships can reach Admiralty Bay, the logistics of transporting construction machinery, workers, and preassembled structures had to be carefully planned. Almost 5 years and roughly $100 million later, the station was ready.

To glaciologist Jefferson Simões, a researcher at the Federal University of Rio Grande do Sul and vice-president of the international Scientific Committee on Antarctic Research, the investment has been worth the time and effort. “Snow and frozen soil would accumulate in front of doorsteps of the old structure, sometimes making it difficult to get in and out. It is very good that the new building is elevated from the soil so the wind can blow snow away underneath,” he said.

Five of Comandante Ferraz’s 17 planned laboratories (those focused on microbiology, molecular biology, chemistry, microscopy, and common use) are ready. These spaces are equipped with instruments that range from DNA readers to ultrafreezers and water purifiers.

Wim Degrave, coordinator of FioAntar (a research project from the Oswaldo Cruz Foundation that looks for Antarctic pathogens that could threaten human, animal, and environmental health), was at the station in late 2019 to assemble the microbiology laboratory. For him, the new station will enable a significant upgrade for research.

“Usually, we had to process soil, water, plant, lichen, and other samples at a research vessel, freeze them, and wait until the ship was back in Rio de Janeiro many months later to start doing research. This isn’t ideal, since some less stable microorganisms such as viruses can deteriorate. Now we’ll be able to isolate and analyze fresh samples at the station. Not only the quality of research will be better, but it will also be possible to work the whole year in a continuum between sampling and analysis, gaining a lot of time,” he explained.

Even research groups who will not work directly at Comandante Ferraz will benefit from it. “This station is a source of pride for Brazil and its science,” said paleontologist Alexander Kellner, coordinator of Brazil’s PaleoAntar project, which conducts paleontological research in Antarctica. Kellner’s team often goes to James Ross Island, southeast of the Antarctic Peninsula, to look for frozen fossils. “An icebreaker would be a great addition to the new station,” he added. “We would be able to do research in the whole continent.”

https://eos.org/wp-content/uploads/2021/07/comandante-ferraz-project-videos-mute.mp4

Credit: Estúdio 41

https://eos.org/wp-content/uploads/2021/07/comandante-ferraz-project-videos-sound1.mp4

Credit: Estúdio 41

A Strategic Place

One aspect on which most researchers agree is that a research station in Antarctica is strategic in geopolitical, as well as scientific, terms. “Only the countries that are doing research down there will have a say in the future of the continent,” Simões emphasized.

“But a lot of it will depend on funding for research projects, which are quite scarce in Brazil now,” he added.

To him, research in the Antarctic is far from being a luxury. Many projects focus on climate change, air pollution, the carbon cycle, and myriad other studies that directly affect life on Earth, as well as policy. For instance, Simões said, “by looking at some ice cores a few years back, we could clearly detect uranium pollution from mining in Australia in recent decades, as well as arsenic due to copper mining in Chile.”

Simões said Brazil’s research planning in Antarctica is being restructured. As all projects were halted during the pandemic, scientists are seeking resources that stretch beyond 2022. “We don’t have a perspective for funding after that yet. The research station cannot become a white elephant. If the government granted us just a million dollars a year, we’d be able to perform miracles,” Simões said.

“A small fraction of the billion-dollar fund the congress is trying to approve to finance political campaigns (the electoral fund) would do a great good for Brazilian research,” Kellner added.

—Meghie Rodrigues (@meghier), Science Writer

Simulating 195 Million Years of Global Climate in the Mesozoic

Fri, 07/30/2021 - 13:32

The Mesozoic, which stretched from about 252 million to 66 million years ago, was a pivotal period in Earth’s history. In addition to being the age of the dinosaurs, it was when the supercontinent Pangaea began to separate into the fragmented continents we’re familiar with today. Together with elevated levels of carbon dioxide and the brightening Sun, tectonic changes influenced the global climate, producing warm and humid greenhouse conditions. A detailed understanding of the factors that drove Mesozoic climate trends will not only provide insight into Earth’s history but also help scientists study the consequences of human-caused warming of our planet.

One approach to investigating past climates is using numerical models. In a new study, Landwehrs et al. performed an ensemble of climate simulations covering a period from 255 million to 60 million years ago in 5-million-year time steps. They adjusted specific parameters in different runs to dissect the sensitivity of past climates to paleogeography, atmospheric carbon dioxide levels, sea level, vegetation patterns, the Sun’s energy output, and variations in Earth’s orbit.

The authors found that global mean temperatures during the Mesozoic were generally higher than preindustrial values. They also observed a warming trend, driven by increasing solar luminosity and rising sea levels. Ocean areas typically reflect less solar radiation than land; accordingly, the researchers found that higher sea levels and flooding of continental areas coincided with warmer global mean temperatures. Concurrent with this general trend, fluctuations in atmospheric carbon dioxide produced warm and cool anomalies in global mean temperature. The authors note that this finding does not mean that human-induced global warming should be ignored; modern climate change is happening much faster than changes in Earth’s history.

The ensemble of climate simulations provides insight into other aspects of long-term Mesozoic climate change as well. Overall, the authors identified a transition from a strongly seasonal and arid Pangaean climate to a more balanced and humid climate. To aid additional analyses of Mesozoic climate trends, the authors shared their model data online. (Paleoceanography and Paleoclimatology, https://doi.org/10.1029/2020PA004134, 2021)

—Jack Lee, Science Writer

Soil Saturation Dictates Africa’s Flood Severity

Fri, 07/30/2021 - 13:31

In the summer of 2020, deadly floods ravaged Africa, affecting nearly a million people and killing hundreds. However, the physical causes of floods across the continent’s diverse climates and terrains remain severely understudied. Lacking a broad network of water gauges, researchers have focused primarily on specific countries or single bodies of water. “The large extension of ungauged areas [has prevented] significant studies [from being conducted both] quantitatively and qualitatively,” said Mohamed El Mehdi Saidi of Cadi Ayyad University in Morocco.

That has now changed, thanks to a 2-year project by an international team to curate the most complete hydrological data set for the African continent to date. This massive compilation combines on-the-ground and remote sensing measurements covering nearly 400 stream gauges and more than 11,000 flood events spanning at least 3 decades. The team’s analysis, the first continent-wide study of flood drivers in Africa, suggested that the largest yearly floods are more strongly linked to regions’ annual peaks in soil moisture than to annual peaks in precipitation. The findings, the first of their kind, were published in June in Water Resources Research.

An 11,000-Piece Puzzle

Other research teams have conducted several continent-wide studies of flood drivers across the United States, Europe, and Australia. Higher data coverage of stream flows and flooding patterns across these landmasses has led to a stronger understanding of when and why damaging floods occur. These continents, however, differ drastically from Africa climatically and geographically, leading scientists to suspect that the triggers of African floods are unique.

Africa’s largely arid climate, with the Sahara covering 25% of the landmass, is part of that equation. “You additionally have this ability to study a climate largely free of snow, which is a complicating factor when studying floods,” said infrastructure engineer Conrad Wasko of the University of Melbourne in Australia, who was not involved in the study. With deadly floods becoming increasingly frequent in Africa as climate change worsens, hydrologists felt compelled to improve their data collection across the continent’s widely varying river basins.

This map shows the distribution of measurement stations across Africa. Note the sparseness of data in the central and northeastern regions of the continent. Credit: Tramblay et al., 2021, https://doi.org/10.5194/essd-13-1547-2021

The team’s African Database of Hydrometric Indices (ADHI), published in Earth System Science Data, includes hydrological parameters from watersheds across Africa spanning 33 years on average. Given the sparseness of data across the continent, the team took laborious steps to ensure that the records from different sources were of similar quality. “The most important thing was to manually and visually check each [measurement] independently,” said Yves Tramblay, a hydrologist at the French National Research Institute for Sustainable Development and lead author of the study.

For regions lacking in ground observations, the scientists incorporated Climate Hazards group Infrared Precipitation with Stations (CHIRPS), a series of remote sensing estimates from a hybrid satellite and ground data set, to obtain a homogeneous average of precipitation across all of Africa. They validated these measurements with gauged data when possible. The team’s thorough approach impressed Wasko: “Within engineering, we have a predisposition to collect [on-the-ground] data. New technologies, like remote sensing, are becoming essential to understanding hydrology in remote areas,” he said.

A New Flood Driver Takes the Stage

The ADHI data set allowed the team to compare the timing of several parameters relevant to floods. To determine which ones aligned most strongly with the largest floods each season, they isolated the dates when floods occurred and rigorously compared them with the timing of heavy rainfall and of high soil moisture using directional statistics—methods that treat dates as positions on a circle, so that the turn of the year is handled correctly. The analysis revealed that high soil moisture levels showed a stronger correlation with the onset of flooding than other parameters did, most notably rainfall.
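The core idea behind the directional statistics the team used can be sketched in a few lines: dates are mapped onto a circle so that a flood on 30 December and one on 2 January count as nearly simultaneous rather than nearly a year apart. The dates below are invented for illustration.

```python
import math

def circular_mean_day(days, year_length=365.25):
    """Mean day of year computed on the circle (handles the Dec/Jan wraparound)."""
    # Map each day of year to an angle on the unit circle.
    angles = [2 * math.pi * d / year_length for d in days]
    # Average the sine and cosine components, then recover the mean angle.
    sin_mean = sum(math.sin(a) for a in angles) / len(angles)
    cos_mean = sum(math.cos(a) for a in angles) / len(angles)
    mean_angle = math.atan2(sin_mean, cos_mean) % (2 * math.pi)
    return mean_angle * year_length / (2 * math.pi)

# Hypothetical flood dates clustered around the new year: days 360, 362, 3, 5.
mean_day = circular_mean_day([360, 362, 3, 5])
# An ordinary arithmetic mean would give ~182 (mid-year); the circular mean
# stays near the actual cluster at the turn of the year.
```

The same machinery lets the seasonal timing of floods be compared against the timing of peak rainfall and peak soil moisture, which is how the study identifies which driver each flood regime tracks.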

When ground already contains a lot of water, heavy rainfall mostly runs off the surface rather than being absorbed into soils—greatly increasing the chances that even modest precipitation will create floods. “It was kind of surprising because we always thought that soil moisture in arid catchments was not a strong driver, but we find that overall it’s still a valuable one,” Tramblay said. “A common assumption is that floods are driven by extreme precipitation events. That’s true: Floods are always caused by precipitation. But the difference in soil moisture conditions before a flood event can strongly modulate its magnitude.”

The new approach invites further research on floods across Africa, Tramblay noted. The team plans to continue conducting targeted studies across many sites to “get a much clearer picture of the differences [in flood drivers] at a regional scale,” he said.

Tramblay also hopes the work will help future scientists and emergency planners across Africa prepare for each year’s flood season by having a better grasp of whether a region might be particularly susceptible. “[This is] an incentive to not look only at extreme rainfall when you’re doing flood projections, but to look at other land surface variables, including soil moisture, vegetation coverage, and change in land use,” said Tramblay. “There’s more recognition that if we’re going to be forecasting floods and designing infrastructure [to mitigate them], it’s not just rainfall we need to be thinking about, but all the other factors that affect flooding.”

—Ellis Avallone (@ellantonia_), Science Writer

Why Study Geysers?

Fri, 07/30/2021 - 13:30

Each year, millions of tourists visit geysers around the world, marveling at the jets of water spouting high into the air from subterranean reservoirs. Fascination with these rare features is nothing new, of course: Written records of their occurrence date back to the 13th century at least, and for more than 2 centuries, scientists have been improving our understanding of Earth’s geysers.

The English word geyser originates from geysir, a name given by Icelanders in the 17th century to intermittently discharging hot springs. The name descends from the verb gjósa, which means to gush or erupt. Natural geysers are rare—fewer than a thousand exist today worldwide, and only a handful of fossil examples are known from the geological record. About half of Earth’s geysers are located in Yellowstone National Park in the United States. Other large geyser fields include the Valley of Geysers in the Kamchatka Peninsula of Russia, El Tatio in Chile, and Geyser Flat at Te Puia, Rotorua, in New Zealand.

In 1846, French mineralogist Alfred Des Cloizeaux and German chemist Robert Wilhelm Bunsen formulated an early model to explain geyser eruptions based on field measurements of temperature, chemistry, and circulation and eruption patterns at Geysir in Iceland. Since then, scientific knowledge of geysers has advanced significantly [Hurwitz and Manga, 2017], providing valuable insights into volcanic processes, the origin and environmental limits of life on Earth (and potentially elsewhere, including on Mars), and similar geysers on icy outer solar system satellites. Demonstrating these connections, geologist and planetary scientist Susan Kieffer wrote the following in a perspective on her research career:

“[M]y initial idea of studying Old Faithful geyser as a volcanic analog [sic] led me to work not only on the dynamics of eruption of Mount St. Helens in 1980 but also on geysers erupting on Io (a fiery satellite of Jupiter), Triton (a frigid satellite of Neptune), and Enceladus (an active satellite of Saturn).”

Continuing research into the inner workings of geysers will help us further understand and protect these natural wonders and will reveal additional insights about volcanism on and off Earth.

Like Volcanoes, but More Accessible

Similar to volcanoes, geysers are transient features with periods of activity and dormancy. Geyser eruption patterns can change following large earthquakes, shifts in climate, and variations in the geometry of their conduits and subsurface reservoirs. Eruption processes of geysers, which can be driven by geothermal heating and the formation of vapor bubbles, are also akin to those operating in volcanoes.

Eruption processes of geysers, which can be driven by geothermal heating and the formation of vapor bubbles, are akin to those operating in volcanoes. The model developed by Des Cloizeaux and Bunsen showed that as water rises toward the surface and pressure decreases, the water boils and forms bubbles. The bubbles, in turn, lower the density of the rising mixture and thus the pressure beneath it, promoting further boiling. Decreasing pressure similarly causes changes in magma that underpin key volcanic processes, such as melt generation in the mantle and the formation of bubbles in magma that drive eruptions.
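The pressure feedback Des Cloizeaux and Bunsen described can be illustrated with a short calculation: hydrostatic pressure raises the boiling point at depth, so water heated above 100°C can sit stably in a conduit until it rises and flashes to steam. The sketch below uses the integrated Clausius-Clapeyron relation with textbook constants; none of the numbers come from the article.

```python
import math

# Boiling temperature of water vs. depth in a water-filled conduit,
# from the Clausius-Clapeyron relation (textbook constants, illustrative only).
P0 = 101_325.0        # surface pressure, Pa
T0 = 373.15           # boiling point at P0, K
L_VAP = 40_660.0      # molar latent heat of vaporization, J/mol
R_GAS = 8.314         # gas constant, J/(mol K)
DPDZ = 1000.0 * 9.81  # hydrostatic pressure gradient of water, Pa/m

def boiling_point_c(depth_m):
    """Boiling temperature (deg C) at a given depth in a water column."""
    P = P0 + DPDZ * depth_m
    T = 1.0 / (1.0 / T0 - (R_GAS / L_VAP) * math.log(P / P0))
    return T - 273.15

# Water at ~110 deg C sits stably 10 m down (local boiling point ~120 deg C)
# but flashes to steam as it rises and the boiling point falls toward 100 deg C.
```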

Because geysers have smaller eruptions and erupt more frequently than volcanoes, they provide useful natural laboratories to study eruption processes and test new monitoring technologies. Volcanic eruptions are sometimes preceded by magma movement that is difficult to monitor because of the large spatial scales and long timescales involved. In contrast, measurements of fluid movement, for example, can be made relatively easily through many geyser eruption cycles, providing data that can be used to improve the interpretation of volcanic phenomena. Measurements and video observations can also be collected within the conduits of active geysers—a feat that is impossible at active volcanoes.

An array of instruments (foreground) measures seismic tremor around geysers at El Tatio in Chile. Credit: Shaul Hurwitz, U.S. Geological Survey

Signals such as seismic tremor—sustained ground vibrations that are common prior to and during volcanic and geyser eruptions—can be very informative for monitoring subsurface processes at active volcanoes and geysers. Tremor in volcanoes can last for days, weeks, or even longer leading up to volcanic eruptions [Chouet and Matoza, 2013]. Tremor may be caused by degassing of magma and by the movement of fluids within a volcanic edifice. However, identifying fluid types (gas, liquid water, magma) and the processes responsible for episodes of tremor is challenging because of the geometric complexities and sizes of volcanic systems.

Seismometers deployed around the iconic Old Faithful and Lone Star geysers in Yellowstone have detected tremor caused by continuous bursts of rising steam bubbles, analogous to bubbles forming and bursting in a teakettle. Thus, by analogy, such measurements of tremor in geyser systems can help elucidate processes that generate volcanic tremor.

Tracking tremor signals in time and space using dense arrays of seismometers also has illuminated the subsurface structure of volcanoes and geysers [Eibl et al., 2021; Wu et al., 2019]. The locations of tremor sources around Strokkur Geyser in Iceland, and Old Faithful, Lone Star, and Steamboat in Yellowstone, for example, indicate that these geysers’ reservoirs are not located directly beneath their vents. Tilting of the ground surface around Lone Star Geyser and a geyser at El Tatio, as well as video observations in the conduits of geysers in Kamchatka, also indicate reservoirs that are not aligned below the geysers’ vents. This type of reservoir, in which liquid and steam bubbles accumulate and pressure builds prior to an eruption, is called a bubble trap and might be a common feature of many geysers [Eibl et al., 2021].

Carolina Muñoz-Saez inserts pressure and temperature sensors into a geyser conduit at El Tatio in northern Chile. Seismometers that measured seismic tremor throughout many eruption cycles are visible in the background. These experiments were conducted in coordination with the communities of Caspana and Toconce. Credit: Max Rudolph, University of California, Davis

Laboratory experiments of geysers have shown how heat and mass transfer between laterally offset reservoirs and conduits control eruption patterns [Rudolph et al., 2018]. Geophysical imaging has similarly revealed that although most volcanic vents are located directly above their magma reservoirs, many reservoirs are laterally offset from their associated volcanic edifices [Lerner et al., 2020].

A striking example of an offset magma reservoir was highlighted in a 1968 study of the Great Eruption of 1912 in Alaska [Curtis, 1968], in which magma erupted from Novarupta volcano, but collapse occurred some 10 kilometers away at Mount Katmai, where most of the magma that erupted at Novarupta had been stored. Mapping of such laterally offset magma storage systems, as well as detailed physical knowledge of how they work as gleaned from studies of and experiments with geysers, may help scientists design better volcano monitoring networks.

Earth Tides, Earthquakes, and Climate Change

Eruptions at geysers and volcanoes are controlled by delicate balances in heat supply and gas and fluid flows within their systems, and by the tortuous pathways that liquid water, steam, and magma take to the surface—balances that can be affected by external forces. Documenting whether geysers and volcanoes respond to tides and earthquakes provides opportunities to quantify their sensitivity to changes in physical stress in the subsurface and to help evaluate whether they are poised to erupt [Seropian et al., 2021].

Past studies have suggested, on the basis of statistical correlations, that small forces exerted by Earth tides can trigger volcanic eruptions. However, statistical tests of tidal influence on volcanic eruptions are limited because of the rarity of eruptions from a single volcano. In contrast, the thousands of geyser eruptions that occur annually form a much broader sample pool on which to base statistical tests. One such evaluation uncovered a lack of correlation between Earth tides and the intervals between geyser eruptions, a finding that suggests that a correlation between Earth tides and volcanic eruptions is also unlikely.

In Yellowstone, some geysers stopped erupting, whereas others started, after the magnitude 7.3 Hebgen Lake earthquake in Montana in 1959. Although tides might not affect geyser eruptions, regional and even very distant large earthquakes can. Written accounts document renewed activity of Geysir following large earthquakes in southern Iceland in 1294. In Yellowstone, some geysers stopped erupting, whereas others started, after the magnitude 7.3 Hebgen Lake earthquake in Montana in 1959. The magnitude 7.9 Denali earthquake in Alaska in 2002 affected eruptions of some Yellowstone geysers 3,000 kilometers away.

Earthquakes can also promote volcanic unrest and eruptions. Establishing causal relations between earthquakes and eruptions is challenging because few active volcanoes occur in any given area, and changes in the subsurface can take longer to manifest as an eruption. However, geysers erupt more frequently than volcanoes, which again points to the utility of studying geysers as volcanic analogues.

Precipitation trends and climate changes can affect geysers as well. Eruption intervals at Old Faithful Geyser have changed in the past, and it even ceased erupting in the 13th and 14th centuries because of a severe drought. How often geysers erupt may also change in response to seasonal and decadal changes in precipitation, which affect the supply of groundwater that feeds the eruptions.

Volcanoes also display slight seasonal patterns in their eruptions, and they respond to changing climate. As air temperatures warm, for example, glaciers covering volcanoes melt, which in turn reduces pressure on underlying magma. Pressure reduction causes gas bubbles to form, and the buoyant mixture of magma and bubbles is then more primed for eruption.

On longer timescales, rates of volcanism vary over glacial cycles, with more eruptions and larger volumes of magma erupted as glaciers retreat. In line with this observation, we know from dating sinter deposits and from geologic mapping that most geyser fields were inactive during Earth’s last glacial period (which ended between ~20,000 and 12,000 years ago) when they were covered by ice [Hurwitz and Manga, 2017].

Origins and Limits of Life on Earth and Mars

A recent geyserite deposit from northern Waiotapu, in New Zealand’s Taupo Volcanic Zone, shows fingerlike formations. Similar formations have been found in silica-rich deposits on Mars. Credit: Kathleen A. Campbell, University of Auckland

Sinter deposits form when hot water erupting from geysers cools and evaporates rapidly at the surface, causing dissolved silica to precipitate as opaline or amorphous (noncrystalline) solids. High-temperature, vent-related sinter that forms in surge and splash zones around or near erupting geysers is termed geyserite. Around geysers and in downslope pools and discharge channels, the complex sedimentary structures preserved in sinter reflect physical, chemical, and biological processes occurring in hot spring subenvironments. For example, sinter textures produced in hot spring fluid outflows record temperature and pH gradients across a given geothermal field, from vents to discharge channels to pools, and from terraces to marsh settings.

Sinter typically entombs both biotic (e.g., microbes, plants, animals) and abiotic (e.g., weathered sinter fragments, volcanic ash, detritus) materials. Geyserite, in particular, serves as an archive of conditions in Earth’s hottest environment on land (up to about 100°C) and of extreme thermophilic (high temperature–adapted) life therein [Campbell et al., 2015].

Research on modern hot springs suggests that extended hydration and dehydration cycles in geyser outflow channels can give rise to prebiotic molecular systems, which hints at a possible role for geysers in the origin of life on Earth. Research on modern hot springs suggests not only that they can host extant life, but also that extended hydration and dehydration cycles in geyser outflow channels can give rise to prebiotic molecular systems that display fundamental properties of biology, such as enclosed, cell-like structures composed of lipids and polymers [Damer and Deamer, 2020]. This observation hints at a possible role for geysers in the origin of life on Earth billions of years ago. Indeed, inferred geyserite deposits associated with rocks containing microbial biosignatures have recently been reported in approximately 3.5-billion-year-old hydrothermal sedimentary deposits in Western Australia [Djokic et al., 2017].

On Mars, silica-rich deposits detected by the Spirit rover amid Columbia Hills in Gusev Crater closely resemble fingerlike sinter textures on Earth. This site was proposed as a landing site for the NASA Mars 2020 mission, which will cache samples for eventual return to Earth. Although the Perseverance rover was instead sent to explore deltaic deposits in Jezero Crater, the digitate silica structures at Columbia Hills remain as biosignature candidates that may one day be collected and brought to Earth for in-depth verification of their origin. Therefore, sinters remain a key target in the search for ancient life on Mars, particularly from the time in its history when volcanoes and liquid water were active at the surface—about the same time that life was taking hold in hot water here on Earth.

In addition to benefiting our understanding of what constitutes life and where it can thrive, advanced biotechnology has also benefited from geyser studies. In 1967, microbiologist Thomas Brock and his student Hudson Freeze isolated the bacterium Thermus aquaticus from the hot waters of Yellowstone’s geyser basins. Later, biochemist Kary Mullis identified an enzyme, named Taq polymerase, in a sample of T. aquaticus that was found to replicate strands of DNA in the high temperatures at which most enzymes do not survive. This discovery formed the basis for developing the revolutionary polymerase chain reaction (PCR) technique in the 1980s (for which Mullis shared the 1993 Nobel Prize in Chemistry). PCR is now the workhorse method used in biology and medical research to make millions of copies of DNA for various applications, such as genetic and forensic testing. Recently, PCR also became widely used for COVID-19 testing.

Exploring for Energy and Mineral Deposits

Sinter deposits can also inform exploration for geothermal energy, helping locate resources, as well as for mineral deposits. Whereas currently active hydrothermal systems provide energy for electricity generation, industry, and agriculture, giant fossil hydrothermal systems host many of the world’s most productive precious metal mining operations [Garden et al., 2020]. Such epithermal ore deposits form in the shallow subsurface beneath geothermal fields as high-temperature fluids—both magmatic and meteoric in origin—gradually deposit valuable metals including gold, silver, copper, and lithium.

Geyserites form at the surface emission points of rising hot fluids tapped from deep reservoirs and can point to completely concealed subsurface ore deposits [Leary et al., 2016], thus informing exploration for mineral resources; they may also contain traces of precious metals themselves.

Geysers in the Solar System

Studies of physical processes in easily observable geysers on Earth can also guide and constrain models proposed to explain eruptions elsewhere in our solar system. The geysers of the icy outer solar system satellites Enceladus (Saturn), Triton (Neptune), and Europa (Jupiter) are similar to Earth’s geysers in that changes of state of materials (e.g., melting and vaporization) drive mixtures of solids and gases to erupt episodically.

NASA’s Cassini spacecraft took this image during its survey of the southern hemisphere geysers on Saturn’s moon Enceladus. The four fractures from which the geysers erupt, referred to as tiger stripes, are approximately 135 kilometers long and cross Enceladus’s south pole. Credit: NASA/JPL/Space Science Institute

At the south pole of the ice-covered ocean world Enceladus, some 100 geysers erupt from four prominent fractures, delivering water from a habitable ocean into space and supplying ice particles to Saturn’s E ring. At Triton, the largest of Neptune’s 14 moons, NASA’s Voyager 2 spacecraft detected surface temperatures of −235°C and geysers that erupt sublimated nitrogen gas. Whether eruptions currently occur on Europa remains debated.

As on Earth, studying physical controls on geyser location, longevity, and eruption intervals on these other worlds can improve our understanding of interactions between their interiors and their surface environments.

Engaging the Public in Research and Conservation

Visitors on a boardwalk watch an eruption of Grand Geyser in the Upper Geyser Basin of Yellowstone National Park in June 2012. Credit: Jim Peaco, National Park Service

New sound and visual approaches developed to convey complex patterns in geyser systems may help identify relationships between volcanic signals that might otherwise be overlooked. Tourists and amateur enthusiasts are captivated by the views and sounds of geyser eruptions. These spectacular events also provide public showcases for curiosity-driven scientific research. For example, new sound and visual approaches developed to convey complex patterns in geyser systems could provide valuable educational tools and may also help identify relationships between volcanic signals—such as surface deformation and seismicity indicating preeruptive activity—that might otherwise be overlooked.

Characterizing the sources of thermal water feeding geyser eruptions and mapping the subsurface hydraulic connections between geyser fields and adjacent areas are needed to protect and preserve these natural wonders from human impacts. Geothermal energy production and hydroelectric dam siting have drowned more than 100 geysers or driven them to extinction in New Zealand and in Iceland, for example, and geyser eruptions completely ceased at Steamboat Springs and Beowawe in Nevada owing to exploitation of geothermal resources. In contrast, some dormant geysers in Rotorua, New Zealand, resumed erupting a few decades after geothermal extraction boreholes were shut down.

Geysers are curious and awe-inspiring natural phenomena, and they provide windows into a broad range of science questions. They deserve both our wonder and our protection.

Acknowledgments

We thank the communities and agencies that enabled research on land they own or manage (Amayras Communities of Caspana and Toconce in El Tatio, Chile; Environment Agency of Iceland for research near Strokkur; the Department of Conservation, Wai-O-Tapu Thermal Wonderland, the Ngati Tahu–Ngati Whaoa Runanga Trust, and Orakei Korako Geothermal Park and Cave in New Zealand; and the National Park Service in the United States for research in Yellowstone). We thank Wendy Stovall, Lauren Harrison, and Mara Reed for constructive reviews. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

Understanding and Anticipating Induced Seismicity

Fri, 07/30/2021 - 13:28

Many activities required to power and sustain human society have the potential to create earthquakes, a phenomenon known as ‘induced seismicity’. Historically, the first observations of induced seismicity were connected to mass displacement during mining, such as gold mining in South Africa as early as 1894 and coal mining.

Many activities required to power and sustain human society have the potential to create earthquakes. Another cause is reservoir impoundment for water supply and power generation, such as the M 5 event at Lake Mead behind the Hoover Dam in 1939 (Foulger et al., 2018). The largest detected induced event to date was the M 6.3 Koyna earthquake in 1967 (Gupta et al., 2015). This trend is set to continue, with more than 3,500 dams presently being built or planned (Zarfl et al., 2019).

Meanwhile, components of renewable energy technologies such as wind turbines, solar cells, and batteries require a variety of minerals and metals; thus, new mines will be developed in the forthcoming decades to extract these needed resources (Mining Equipment Market Share & Growth Report, 2020-2027).

Induced seismicity can also be caused by underground industrial activities such as geothermal energy, geological sequestration of CO2 (In Salah Project, microseismicity), exploitation of unconventional hydrocarbon reservoirs (Blackpool, M 2.3), and storage of gas in geologic formations (CASTOR UGS project offshore Spain, M 4) to cover seasonally varying energy demand. These operations require more invasive techniques for fluid injection and production, and they increase the risk of seismic activity, sometimes in close proximity to urban centers (e.g. Suckale, 2010; Ellsworth, 2013). Consequently, certain regions such as Oklahoma and Alberta have been experiencing a long-term increase in induced earthquake rates well beyond historic levels, with some events causing local damage and destruction (Ellsworth, 2013).

Evolution of the number of earthquakes of magnitude greater than 3 between 1975 and 2020 in the midcontinental United States. The data come from the ANSS catalog of the USGS (2017). The statistics show the dramatic increase in the number of earthquakes attributed to the injection of significant amounts of wastewater into the Arbuckle formation in Oklahoma. This figure is an update of versions by Rubinstein and Mahani (2015) and Ellsworth (2013).

Research is urgently needed to assess whether human-induced seismic hazards can be controlled, and whether an improved understanding of the underlying physical processes may help mitigation efforts. Project developers and regulators thus face increasing public awareness and concern about the damaging potential of induced seismicity. Owing to public opinion pressure, human-induced earthquakes have often forced rapid termination of subsurface activities (for example, coal mining in the German Ruhr Area, the Basel geothermal project in Switzerland, and the CASTOR Underground Gas Storage in Spain). Research is therefore urgently needed to assess whether human-induced seismic hazards can be controlled, and whether an improved understanding of the underlying physical processes may help mitigation efforts.

One of the key outstanding questions is whether it is possible to control or reduce induced seismicity rates and maximum magnitudes in order to mitigate the resulting risk to society and critical infrastructures (e.g. McGarr 2014; Galis et al., 2017; McGarr and Barbour, 2018; Kwiatek et al., 2019; Shapiro et al., 2013; Shapiro et al., 2011; van der Elst et al., 2016). There is a clear need for improved seismic hazard assessment and operational forecasts of induced seismicity for different types of subsurface operations and tectonic settings.

General mechanisms controlling the occurrence of induced seismicity include elastic loading and unloading and modification of pore fluid pressure and stress conditions due to fluid injection and production in reservoirs and surrounding rocks (Segall, 1992). Although pore pressure increase and effective stress reduction have traditionally been thought to produce most induced events, recent observations suggest a more complex suite of mechanisms. Sudden rate changes and long-range effects highlight the importance of the interplay between fluid pressure and solid stress (e.g. Rudnicki, 1986; Segall and Fitzgerald, 1998; Altmann et al., 2014; Goebel et al., 2017). Shallow faults may promote aseismic deformation, which can trigger seismic events at larger distances (Cornett et al., 1997; Bourouis et al., 2007; Guglielmi et al., 2015). Fluid flow can trigger earthquakes and potentially lead to cascading failure of fault systems, producing a series of seismic events (Llenos and Michael, 2013; Sumy et al., 2014).
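The traditional pore pressure mechanism can be stated compactly with the Coulomb failure criterion: a fault slips when shear stress exceeds the frictional resistance set by the effective normal stress (normal stress minus pore pressure), so injection can destabilize a fault with no change in tectonic loading. The sketch below uses illustrative stress values, not figures from any cited study.

```python
# Coulomb failure with effective normal stress: slip occurs when
# tau >= mu * (sigma_n - p) + c. Illustrative values only (MPa).
mu, c = 0.6, 0.0            # friction coefficient, cohesion
tau, sigma_n = 28.0, 70.0   # shear and normal stress resolved on the fault

def coulomb_stress(p):
    """Coulomb failure function; a value >= 0 means the fault is at failure."""
    return tau - mu * (sigma_n - p) - c

# Stable at hydrostatic pore pressure (p = 20 MPa), but a modest pressure
# increase from injection (p = 25 MPa) pushes the fault past failure.
stable = coulomb_stress(20.0)    # -2.0 MPa: below failure
unstable = coulomb_stress(25.0)  # +1.0 MPa: at failure
```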

A new cross-journal special collection in JGR: Solid Earth and Earth and Space Science, entitled Understanding and anticipating induced seismicity: from mechanics to seismology, solicits papers that contribute to the understanding of induced seismicity at different spatial and temporal scales.

A new cross-journal special collection solicits papers that contribute to the understanding of induced seismicity at different spatial and temporal scales. Induced seismicity research involves many different disciplines from geomechanics and engineering to seismology and geodesy, requiring a broad suite of analytical, numerical, and statistical analysis tools to improve theoretical understanding and disaster mitigation. The special collection encourages contributions on current scientific challenges in an effort to advance physical process understanding at different scales from laboratory to mesoscale injection tests and reservoir to regional-scale models and observations.

—Birgit Müller (Birgit.mueller@kit.edu, 0000-0002-5668-1437), Karlsruhe Institute of Technology, Germany; Mai-Linh Doan, ( 0000-0002-6437-9756), Université Grenoble-Alpes, France; Thomas Goebel ( 0000-0003-1552-0861) The University of Memphis, USA; Yajing Liu ( 0000-0002-5323-8077), McGill University, Canada; Patricia Martínez-Garzón ( 0000-0003-4649-0386), GFZ German Research Centre for Geosciences, Germany; Tom Mitchell ( 0000-0003-0809-1528), University College London, UK; and Ilia Zaliapin ( 0000-0001-6257-0517), University of Nevada, USA

Eddy Killing in the Ocean

Thu, 07/29/2021 - 13:40

Eddies encourage the ocean’s absorption of carbon dioxide from the atmosphere and help regulate the planet’s climate. Now, scientists have more details about how these ephemeral ocean features die.

Eddies are circular currents that wander around the ocean like spinning tops, ranging from tens to hundreds of kilometers in diameter. They mimic weather systems in the atmosphere and serve as feeding grounds for sharks, turtles, and fish. Eddies often spin off major ocean currents and typically die within a matter of months.

Some fundamental questions in physical oceanography center around the life cycle of eddies: What gives rise to them, and how do they die? “It’s a big puzzle that’s been long-standing in the community,” said fluid dynamicist Hussein Aluie from the University of Rochester, N.Y.

Aluie and his colleagues found that when it comes to eddy killing, the planet’s winds are partly to blame.

Their innovative analysis of satellite data suggests that wind sucks energy out of the ocean from features smaller than 260 kilometers—features that include most eddies. Wind continually extracts about 50 gigawatts of energy from eddies around the world. The team published their research in Science Advances in July.

“Fifty gigawatts is equivalent to detonating a Hiroshima nuclear bomb every 20 minutes, year-round,” said first author Shikhar Rai, a doctoral student at the University of Rochester. “It is equivalent to operating 50 million microwave ovens continuously throughout the year.”
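Those equivalences hold up to a rough check, assuming a ~15-kiloton-of-TNT yield for the Hiroshima bomb and ~1 kilowatt per microwave oven (both figures are assumptions for this sketch, not numbers from the study):

```python
# Back-of-envelope check of the quoted equivalences.
KT_TNT_J = 4.184e12                        # joules per kiloton of TNT
hiroshima_j = 15 * KT_TNT_J                # ~6.3e13 J (assumed 15 kt yield)
power_gw = hiroshima_j / (20 * 60) / 1e9   # one bomb every 20 minutes -> ~52 GW
microwaves = 50e9 / 1000                   # ovens sustained by 50 GW at 1 kW each

print(f"{power_gw:.0f} GW")      # ~52 GW, close to the quoted 50 GW
print(f"{microwaves:.0e} ovens") # 5e+07, i.e., 50 million ovens
```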

Although it’s long been suspected that wind zaps eddies of their spin, the latest study provides a seasonal signal and an estimate of wind power loss in major currents. Although wind may be a killer of eddies, it supercharges larger-scale ocean circulation. Wind adds about 970 gigawatts of energy to features larger than 260 kilometers, the recent research found.

Eddies boost ocean heat intake, ocean mixing at the surface, and the exchange of gases with the atmosphere, so calculating these processes relies on accurate depictions of eddies in computer models.

Blowing in the Wind

Eddies likely form from interconnected physical forces in the ocean that include density-driven motion from water of different temperatures or salinities.

Wind destroys ocean eddies by applying stress to the ocean’s surface and slowing eddies’ spin to the point of extinguishing them. Because wind stress depends on the difference between the speed and direction of the wind and those of the ocean’s surface flow, the net effect of wind on an eddy is to slow its rotation rather than to speed it up.
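This asymmetry can be demonstrated numerically: for a uniform wind blowing over an idealized rotating eddy, the standard bulk stress formula (stress proportional to the relative velocity times its magnitude) does net negative work on the eddy's rotation. All parameter values below are illustrative, not taken from the study.

```python
import numpy as np

rho_a, Cd = 1.2, 1.3e-3                  # air density (kg/m^3), drag coefficient
u_wind = np.array([8.0, 0.0])            # uniform 8 m/s westerly wind

x = np.linspace(-50e3, 50e3, 201)        # 100 km x 100 km grid, meters
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
Omega, R_e = 1e-5, 40e3                  # eddy rotation rate (1/s) and radius (m)
inside = r < R_e
u_o = np.where(inside, -Omega * Y, 0.0)  # solid-body eddy surface currents, m/s
v_o = np.where(inside,  Omega * X, 0.0)

# Relative-wind ("eddy killing") stress: tau = rho_a * Cd * |du| * du
du, dv = u_wind[0] - u_o, u_wind[1] - v_o
speed = np.hypot(du, dv)
tau_x, tau_y = rho_a * Cd * speed * du, rho_a * Cd * speed * dv

# Net rate of work done by the wind on the eddy's motion, summed over the eddy:
work = (tau_x * u_o + tau_y * v_o)[inside].sum()
# The flank moving downwind feels weaker stress than the flank moving upwind,
# so work < 0: the wind drains the eddy's kinetic energy.
```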

Eddy killing happens year-round, but the effects are particularly strong in winter, when winds grow stronger because of storms, according to the new study.

Most eddies come from western boundary currents like the Gulf Stream in the Atlantic and the Kuroshio in the Pacific, and the latest results reveal just how much energy relative to the total input wind removes from these currents’ eddies: 50% from the Gulf Stream and a whopping 90% from the Kuroshio.

“The movement of the ocean is critical in regulating the climate of the Earth,” Aluie said. Eddies can affect the trajectories of major currents: For example, eddies are widely believed to play a crucial role in causing the warm waters of the Gulf Stream to curve away from the eastern United States, keeping the climate of Canada, Greenland, and the Labrador Sea cold.

The research adds to the building evidence that wind stifles eddies. Chris Hughes, a professor of sea level science at the University of Liverpool and author of a 2008 study that found that wind sucked 60 gigawatts of energy from the ocean, said, “It’s nice to see this confirmed independently and some new diagnostics shown.”

A Blurred Photograph

Coarse-graining analysis subtracts a blurred version of data (right) from a precise version (left). Credit: Paul Green/Unsplash

The research team used an emerging method in physical oceanography to conduct the new work. Typically, researchers study how the ocean changes over time. But in the latest analysis, the scientists looked at differences over space, not time.

The latest study “represents a novel application of the newly developed coarse-grain method,” said physical oceanographer Xiaoming Zhai of the University of East Anglia, who was not involved in the research.

Coarse-graining analysis can be explained with a simple example, said Aluie. Imagine a flower in a photograph. If you blur the photograph, you can’t see the texture of the flower’s petals, the grains of pollen on its anthers, or the edges of the sepals. If you now take the unblurred photo and subtract the blurry one from it, you get only the fine details of the flower.

The new study used measurements taken between 1999 and 2007 by NASA’s QuikSCAT satellite scatterometer. By “blurring” the satellite data and subtracting the blurred version from the original, Rai and his colleagues used coarse-graining analysis to isolate the details of small-scale ocean flow, which included eddies. The method allowed them to pinpoint the 260-kilometer cutoff.
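A minimal sketch of this decomposition, using a simple moving-average blur in place of whatever filter kernel and scale the study actually employed:

```python
import numpy as np

def box_blur(field, k):
    """(2k+1) x (2k+1) moving-average low-pass filter, periodic boundaries."""
    out = np.zeros_like(field)
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            out += np.roll(np.roll(field, di, axis=0), dj, axis=1)
    return out / (2 * k + 1) ** 2

rng = np.random.default_rng(0)
field = rng.standard_normal((90, 180))   # stand-in for a gridded ocean field

coarse = box_blur(field, 2)              # the "blurred photo": large scales only
fine = field - coarse                    # the fine details: eddy-scale flow

# The decomposition is exact (coarse + fine reconstructs the field),
# and blurring removes variance (coarse is smoother than field).
```

In the study, the filter scale plays the role of the 260-kilometer cutoff separating eddy scales from the larger-scale circulation.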

Sadly, QuikSCAT died in 2009, but an upcoming NASA mission, Surface Water and Ocean Topography (SWOT), along with wind data from other satellite missions could provide Rai and others with higher-quality data soon.

The team will continue to use spatial techniques like coarse-grain analysis in future work, which will include a look into the other side of an eddy’s life cycle: its birth.

—Jenessa Duncombe (@jrdscience), Staff Writer

Peculiar Planets Prefer Perpendicular Paths

Thu, 07/29/2021 - 13:40

Just like the planets of our solar system, most exoplanets tend to orbit their star in the same direction that the star spins. But when they don’t, exoplanet orbits overwhelmingly prefer to be perpendicular. This new understanding of planetary orbits, published in Astrophysical Journal Letters, raises questions about which planets can become misaligned from the direction that their star spins and how the orbits get that way in the first place.

From a Certain Point of View

When seeking to explain strange exoplanet phenomena, the most useful point of comparison is our own solar system. We know more about it than any other of the thousands of planetary systems discovered to date. The dynamics of the solar system are relatively neat and tidy: The orbits of the eight planets all sit very neatly in the same plane, that plane lines up almost exactly with the Sun’s equator, and the whole system rotates in the same direction.

Within the solar system, the largest angle of misalignment between a planet’s orbit and the Sun’s equator—which defines the plane of the Sun’s spin—is Earth’s at just over 7°. Exoplanet scientists have been able to make similar measurements of the spin-orbit alignment within other planetary systems. “Is 7° a small value or a large value?” asked Simon Albrecht, an astronomer at Aarhus University in Denmark and lead author on the recent study. “The jury on that is still out.”

“That alignment in our solar system is part of what led us to believe that planets form out of a disk that’s around the star,” added astrophysicist and coauthor Rebekah Dawson of Pennsylvania State University in University Park. The prevailing theory of planet formation posits that a large cloud of dust and gas collapses under its own gravity to create a star in the center. The leftover material flattens out into a disk that coalesces into one or more planets. In that simplified model, all of the star- and planet-forming material swirls in the same direction, which should make the resulting star and planets all spin in a common direction.

However, “we have known for over a decade that there are planets that are not orbiting in the same plane as their star,” Dawson explained. Although most exoplanets orbit in the same direction as the star’s spin (prograde) and with a very small angle between spin and orbit (near 0°), there are plenty whose orbits don’t follow suit, including some whose orbits are tilted relative to the star’s spin and others that travel completely backward (retrograde, at 180°). “The angle between the planet’s orbit and the star’s spin was some of the first three-dimensional information that we started to get about other planetary systems.…We have to imagine something that’s different or more complicated than the history that we’ve naively invoked for our solar system.”

Astronomers can calculate the angle of inclination between the exoplanet’s orbit and the star’s spin by measuring the transit of the planet in different wavelengths and comparing the different transit profiles, a method called the Rossiter-McLaughlin effect (Figure 1).

Fig. 1. If a star’s spin axis is not pointed toward Earth, some of the light from the star will appear to be moving toward observers (blueshifted), and some of the light will appear to move away from observers (redshifted). Here this apparent movement is represented by the stars (large circles) colored blue and red as they spin from left to right (dashed arrow). Exoplanets (black circle with white halo) will block varying amounts of blueshifted and redshifted light as they transit the star (solid arrow). The pattern of how much of the bluer or redder light is blocked over time, known as the Rossiter-McLaughlin effect, can reveal the direction of the planet’s orbit relative to the star’s spin. Credit: Kimberly M. S. Cartier

Usually, however, astronomers can measure only one dimension of a star’s 3D spin—the component of the spin that’s pointed at Earth. “That can tell you that something is misaligned but not by how much,” Dawson said. How much of the star’s total spin we can see and measure depends on the geometry of our vantage point: If a star’s spin axis points directly at Earth, we would measure no spin at all and see no planetary misalignment. To understand the physical reasons why planetary systems are misaligned, it’s not the perceived angle of misalignment that matters, but the true one.

A recent mathematical advancement helped Albrecht and his team calculate our viewing angle for 57 stars that host misaligned planets. With that additional information, the researchers determined that the planets’ misalignments weren’t as random as previously thought. In fact, they found that a significant number of the true misalignment angles were close to 90°, meaning that the planets orbit their stars from pole to pole rather than across the star’s equator.

More Questions Than Answers

For now, the data on perpendicular planets are outpacing the theories that explain them. There’s no obvious commonality that groups these stars and planets together that might explain why misaligned planets end up on polar orbits: The stars range from hot to cold, the planets range from Neptune mass to more massive than Jupiter, and the planetary orbits range from very close in to quite far away.

“The biggest thing these planets have in common is that we can measure this [viewing angle] for them,” Albrecht said. There are no models of planetary dynamics that predict a preference for perpendicular planets, he explained, because, quite simply, no one knew that their models needed to explain it.

Regardless, Albrecht and his team offered a few potential ideas to start with, although they acknowledged that no one theory can yet explain all of the perpendicular planetary systems they analyzed. Three of the proposed explanations rely on the gravity of another object—the star, an unseen planet, or the planet-forming disk—tugging a planet’s orbit into a 90° misalignment; the fourth theory invokes a magnetic interaction during planet formation.

J. J. Zanazzi, a postdoctoral researcher at the Canadian Institute for Theoretical Astrophysics in Toronto, said that the team “did a great job summarizing the primary theories which can lead to their very exciting result that spin-orbit misalignments come in two flavors,” well aligned or perpendicular. “All the mechanisms have different strengths and weaknesses, and each mechanism fails to explain some part of [the] observation.” Zanazzi was not involved with this research.

The good news, Zanazzi said, is that “all of the astrophysical mechanisms which have been proposed make specific predictions when the mechanism does not work.…For me, a big thing observers can do in the near future is look for companion planets or stars which can cause the required tilts.” If they fail to find any that fit the bill, such a pattern would narrow down the potential explanations.

Moreover, Albrecht said, as theorists begin to refine their models to explain a cluster of polar orbits, those models can help guide the observers toward the right planetary systems to take a closer look at. Will polar orbits be more prevalent around cool stars or hot stars? Will perpendicular planets be found mostly in multiplanet systems or as loners? More observations, new theories, and time will tell.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

Irtysh River Drove Arctic Sea Ice Expansion 3 Million Years Ago

Thu, 07/29/2021 - 13:38

During the late Pliocene, Arctic sea ice began to expand rapidly. The new ice created changes to sea level, albedo, the thermohaline circulation, and a host of other factors that still drive the planet’s climate today. But piecing together what caused the ice to expand rapidly has remained an elusive goal for scientists.

Now, a new study by Ma et al. shows that the sea ice expansion coincided with the formation of Siberia’s Irtysh River 2.77 million years ago. Previous work has shown that the Irtysh River was once a series of inland rivers that drained into a large paleolake in the Junggar Basin, located in northwestern China. But at some point, the basin burst, and the Irtysh began to flow northward toward the sea.

By analyzing neon-21 isotopes along with the aluminum-26/beryllium-10 ratio, the researchers determined the timing of this critical event. These cosmogenic isotopes can be used to date rock and sediment samples: They are produced while a sample is exposed to cosmic rays at the surface, and if the sample is later buried, the different nuclides decay at different rates, recording how long the sample has been shielded from cosmic rays. With this technique, the scientists reconstructed much of the Junggar Basin’s geologic history and inferred when the Siberian-Arctic river system began supplying fresh water to the Arctic Ocean.
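The logic of two-nuclide burial dating fits in a few lines. The half-lives and surface production ratio below are commonly used literature values, and this is the generic burial-age formula, not the study's exact analysis:

```python
import math

HALF_LIFE_AL26 = 0.705e6   # years, aluminum-26
HALF_LIFE_BE10 = 1.387e6   # years, beryllium-10
lam_al = math.log(2) / HALF_LIFE_AL26
lam_be = math.log(2) / HALF_LIFE_BE10

SURFACE_RATIO = 6.75       # typical 26Al/10Be production ratio at the surface

def burial_age(measured_ratio):
    """Years of burial implied by a measured 26Al/10Be ratio.

    After burial, 26Al decays faster than 10Be, so the ratio falls
    exponentially and its logarithm measures the burial duration.
    """
    return math.log(SURFACE_RATIO / measured_ratio) / (lam_al - lam_be)

# Round trip: a sample buried for 2.77 million years would show this ratio,
# and the formula recovers the burial duration from it.
ratio = SURFACE_RATIO * math.exp(-(lam_al - lam_be) * 2.77e6)
assert abs(burial_age(ratio) - 2.77e6) < 1.0
```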

The new water provided by the Irtysh created a layer of fresh water roughly 9 meters thick in the Kara Sea, which lies off of western Siberia. The scientists say this sudden influx of fresh water would’ve suppressed vertical mixing and reinforced the stratification of the water column. In combination, these changes created more sea ice in the Arctic, which then drove a series of albedo-based feedbacks, creating colder temperatures and yet more ice. The results show what an incredible impact even a single freshwater input can have in driving sea ice formation and the planet’s climate at large. (Geophysical Research Letters, https://doi.org/10.1029/2021GL093217, 2021)

—David Shultz, Science Writer

Scientists Uncover the Seasonality of COVID-19

Thu, 07/29/2021 - 13:38

As the novel coronavirus has raced around the world, experts have wondered whether it would behave like influenza and other respiratory viruses, spiking in the winter and abating in the summer. Now, more than a year into the pandemic, researchers have enough data to confirm the seasonality of COVID-19 and determine which environmental factors may be driving it.

Of course, environmental factors alone cannot fully explain the spread of COVID-19; social and biological factors, such as population density and social distancing policies, also play a role. To isolate the impact of the environment, Choi et al. examined data on COVID-19 prevalence and environmental variables between 1 March 2020 and 13 March 2021 across five countries—Canada, Germany, India, Ethiopia, and Chile—that had relatively consistent social controls throughout the study period.

Previous studies have linked seasonal spikes of viruses like influenza, which spread by virus-laden droplets, to low humidity. But Choi and colleagues note that this pattern holds only in temperate regions; in the tropics, influenza peaks during the wet season. To account for this disparity, the team also looked at air drying capacity (ADC), defined as the rate of decrease with time of droplet surface area, given ambient temperature and humidity. Essentially, it predicts the fate of droplets under specific temperature and humidity conditions.
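As a rough illustration of why temperature and humidity must be considered together, consider a simple evaporative-demand proxy built on the Tetens saturation vapor pressure approximation. This is a generic sketch of the droplet-drying physics, not the ADC formula used by Choi and colleagues:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Tetens approximation for saturation vapor pressure over water, in pascals."""
    return 610.78 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def drying_proxy(t_celsius, relative_humidity):
    """Proportional to the vapor deficit driving droplet shrinkage.

    Demand rises steeply with temperature (more vapor the air can hold)
    and falls as relative humidity approaches saturation.
    """
    return saturation_vapor_pressure(t_celsius) * (1.0 - relative_humidity)

# Hot, dry air dries droplets faster than cold or humid air:
assert drying_proxy(30, 0.3) > drying_proxy(5, 0.3)   # warmer -> faster drying
assert drying_proxy(30, 0.3) > drying_proxy(30, 0.9)  # drier -> faster drying
```

A single combined quantity like this can behave consistently across temperate winters and tropical monsoons, where temperature or humidity alone point in opposite directions.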

The team compared COVID-19 rates with the daily mean temperature, specific humidity, ultraviolet radiation, and ADC across a wide range of climate zones. Much like influenza, COVID-19 peaked in the winter months in the countries with temperate climates—Canada, Germany, and Chile—when temperature and humidity were at a low. But in the tropical countries, cases peaked during the summer monsoons, when humidity was at a high, suggesting that temperature and humidity considered separately can’t explain the seasonality of respiratory viruses like influenza and COVID-19. The seasonal values of ultraviolet (UV) radiation and ADC, however, were consistent with fluctuations in COVID-19 prevalence across all five countries, with high ADC and UV linked to low prevalence and vice versa.

Understanding the seasonality of the virus will be critical for future efforts to combat its spread, as experts have cautioned that people may need annual booster shots to protect against the virus and its emerging variants. (GeoHealth, https://doi.org/10.1029/2021GH000413, 2021)

—Kate Wheeling, Science Writer

Subduction Initiation May Depend on a Tectonic Plate’s History

Wed, 07/28/2021 - 12:39

This is an authorized translation of an Eos article.

Subduction zones, where one tectonic plate slides beneath another toward the mantle, are a fundamental component of plate tectonics. But the earliest stage of the process—subduction initiation—remains somewhat mysterious to scientists because the geologic record of most subduction is obscured and overprinted by the extreme forces at work. The only way to understand how subduction zones begin to form is to look at young examples on Earth today.

This schematic shows the tectonic setting of the Puysegur margin about 16 million years ago. Strike-slip motion juxtaposed oceanic crust from the Australian plate against thinned continental crust from the Pacific plate. Plate collision near New Zealand’s South Island forced the Australian oceanic plate beneath the Pacific continental plate, producing subduction at the Puysegur Trench. Credit: Brandon Shuck

In a new study, Shuck et al. used a combination of seismic imaging techniques to build a detailed picture of the Puysegur Trench off the southwestern coast of New Zealand, where the Pacific plate to the east overrides the Australian plate to the west. The Puysegur margin is tectonically very active and has undergone several transitions over the past 45 million years, from rifting to strike-slip motion to incipient subduction. The margin’s well-preserved geologic history makes it an ideal place to study how subduction begins. The team’s analysis of the seismic structure suggests that subduction zone formation begins at existing weaknesses in the crust and depends on differences in lithospheric density.

The conditions necessary for the subduction zone’s formation began about 45 million years ago, when the Australian and Pacific plates started pulling apart. During this period, extensional forces caused seafloor spreading and formed new, dense oceanic lithosphere in the south. In the north, meanwhile, the thick, buoyant continental crust of Zealandia was merely stretched and slightly thinned. Over the next several million years, the plates rotated, and strike-slip deformation carried the dense oceanic lithosphere northward, where it collided with the low-density continental lithosphere, prompting the start of subduction.

The researchers suggest that the density differences in the lithosphere, combined with weaknesses along the strike-slip boundary inherited from earlier tectonic phases, facilitated subduction. The team concludes that strike-slip motion may be a key driver of subduction zone formation because it can efficiently bring disparate sections of heterogeneous lithosphere together along plate boundaries. (Tectonics, https://doi.org/10.1029/2020TC006436, 2021)

—David Shultz, Science Writer

This translation was made by Wiley.


Evolving the Geodetic Infrastructure

Wed, 07/28/2021 - 12:38

The shape and position of Earth are constantly changing. Geodesy is the branch of geophysics that studies these properties—our planet’s size, orientation, and gravity—which are crucial for answering important Earth and space sciences questions: How will sea levels rise in the coming decade? What are the precise orbits of satellites? What are the patterns in a volcano’s magma migration? How is elevation determined? Pursuing these questions requires maintenance and improvement of the geodetic infrastructure—the instruments, software, and expertise that provide precise measurements.

“It’s like a freeway system or something—it’s really fundamental,” said David Sandwell, a marine geophysicist at Scripps Institution of Oceanography in San Diego. Sandwell chaired the committee behind a report addressing geodetic infrastructure, released in 2020 by the National Academies of Sciences, Engineering, and Medicine (NASEM). The report provides recommendations to ensure that researchers will be able to continue using geodetic approaches to tackle diverse Earth science questions, from sea level change to weather models to geological hazards. These research areas were highlighted in an earlier decadal survey funded by NASA, NOAA, and the U.S. Geological Survey (USGS).

Terrestrial Reference Frame

The primary need for the geodetic infrastructure is to define a terrestrial reference frame, a set of 3D coordinates organized around Earth’s center of mass. The more accurate and stable this reference frame is, the more precisely satellite orbits can be determined, providing scientists with better data sets.

The terrestrial reference frame relies on four techniques. Very long baseline interferometry (VLBI) uses radio signals from distant quasars to measure Earth’s orientation in space and its scale. Satellite laser ranging (SLR) relies on short laser pulses sent to satellites; the return times can be used to trace satellite orbits and calculate Earth’s center of mass and scale. The Global Navigation Satellite System (GNSS), which includes GPS, and the Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) system provide additional global measurements. Raw data from these systems are combined and analyzed to produce the International Terrestrial Reference Frame (ITRF). The ITRF is a group effort, the report describes: “All parties involved work in an open international collaborative environment to provide the most accurate reference frame for science and applications.”
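The core arithmetic of satellite laser ranging is simple: half the round-trip light time gives the distance. The pulse timing below is an assumed example, not a value from the report:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def slr_range_m(round_trip_seconds):
    """One-way station-to-satellite distance from a two-way laser time of flight."""
    return C * round_trip_seconds / 2.0

# A ~40 ms round trip corresponds to a satellite roughly 6,000 km away,
# about the altitude of the LAGEOS ranging satellites.
assert abs(slr_range_m(0.040) - 5_995_849.16) < 1.0
```

Repeating this measurement thousands of times from stations around the globe is what lets analysts trace orbits and solve for Earth's center of mass.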

The report recommends the deployment of more VLBI and SLR stations and the establishment of more receivers that can interface multiple GNSS systems. Such improvements would provide more coverage and improve the accuracy and stability of the terrestrial reference frame.

Geodetic infrastructure like this is critical for determining the precise orbits of satellites described in the decadal survey, said Shin-Chan Han, a geodesist at the University of Newcastle in Australia. “Such precise orbit is mandatory to achieve all the identified important science problems,” he said. Han reviewed the NASEM report but wasn’t directly involved in its preparation.

Monitoring Land Subsidence

Geodetic infrastructure is also crucial in hydrology. GPS and interferometric synthetic aperture radar (InSAR), a satellite-based technique for measuring land deformation, have transformed how scientists study land elevation changes due to groundwater removal and recharge.

“I just can’t imagine waking up one morning and saying, oh, the GPS constellation isn’t working anymore,” said Michelle Sneed, a USGS hydrologist who was part of the committee that authored the NASEM report. One area that Sneed monitors is the San Joaquin Valley, an agricultural region in central California that has changed dramatically because of groundwater pumping for irrigation. From 1925 to 1977, the land surface in this area subsided about 9 meters because of compaction. Sneed and colleagues used continuous GPS and InSAR to assess land subsidence in the west central San Joaquin Valley and explored potential risks to the California Aqueduct.

The techniques also indicated that Coachella Valley has stabilized, likely because of projects that increased recharge and reduced reliance on groundwater. “The integration of these different geodetic techniques…adds different pieces to the stories,” Sneed said. “InSAR is this great spatial tool. But if you want a daily value of the land surface at any one point, then you need continuous GPS.”

Additional Infrastructure

Maintaining geodetic infrastructure faces significant challenges, the NASEM report notes. Making the software for processing raw geodetic data widely available is one such challenge, and compensating for an aging workforce is another. “I’m concerned about a shortage [in the] geodesy workforce,” Han said.

Maintenance and enhancement of geodetic infrastructure will be crucial for addressing the Earth science questions outlined in the decadal survey. As described in the report, “the international geodetic infrastructure is the largely invisible foundation of Earth system science and applications.”

—Jack Lee (@jackjlee), Science Writer

SnowSchool Spans the States

Wed, 07/28/2021 - 12:37

Imagine young students bundling up in winter clothing, strapping on snowshoes, and trekking to a site with thick snowpack where a volunteer instructor cuts out a refrigerator-sized block of snow. If the block stays coherent, the instructor asks the kids to jump on it until it fails, making them tumble into a flurry of snow. Together, the teacher and students measure the density and dimensions of the snow block to calculate its weight, which can be nearly as heavy as a car. By experiencing this mini avalanche, the students might begin to fathom what a real one might feel like.
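The students' weight calculation is a one-liner: density times volume. The density and block dimensions here are illustrative assumptions, not numbers from the article:

```python
# Settled snowpack is commonly a few hundred kg per cubic meter (assumed value).
density_kg_m3 = 400.0

# A "refrigerator-sized" block, in meters (assumed dimensions).
width_m, depth_m, height_m = 1.2, 0.8, 1.8

mass_kg = density_kg_m3 * (width_m * depth_m * height_m)
# Several hundred kilograms -- the same order of magnitude as a small car.
assert 500 < mass_kg < 1000
```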

This snow stability test is among the many experiments that SnowSchool, a nationwide program run by the nonprofit Winter Wildlands Alliance, uses to seed K–12 students’ interest in science and outdoor education. The curriculum integrates the local ecology for each of 81 active sites across the United States and uses snow as the medium to engage students, said Kerry McClay, SnowSchool’s national director.

More than half of SnowSchool’s students come from underserved populations, including numerous Title I schools—schools with at least 40% enrollment from low-income families. “Every community where a SnowSchool site is located is different,” McClay said. Some sites serve tribal communities on reservations. Another site ferries students from Oakland, Calif., to the Sierra Nevada, a rather lengthy trip that takes at least 3 hours.

Because SnowSchool is heavily subsidized by grants and fueled by donations, participation is often free, said McClay. The program provides gear, including snowshoes and winter clothing for students; resources and training for volunteers; and curricula for teachers. Interested schools simply need to apply and provide buses to transport students to their SnowSchool sites.

The more than 35,000 students in the program might explore how snowpack forms and melts, build igloos, or track wildlife, said McClay. A favorite activity of the students—many of whom have never seen the deep snowpack of the mountains—is sliding on their bellies through drifts. This fusion of fun, science, and snow “ignite[s] that sense of wonder and lets kids explore…with their curiosity in the driver’s seat,” said McClay.

Snow Stratigraphy

An especially illuminating experiment, he said, begins with the classic kid activity of digging a hole in snow. The instructors and students begin by digging a trench through the snowpack, down to where the snow meets the ground, sometimes 6 feet deep (nearly 2 meters), said HP Marshall, a snow scientist and professor at Boise State University who helps design materials and train volunteers for SnowSchool. “It’s like looking at tree rings,” he explained, except instead of years, each layer in the snow pit signifies a discrete weather event. The students learn to identify the previous night’s soft snow, last week’s snowstorm, and last month’s ice crust left by a rainy day.

Researcher Kelsey Dean examines snow crystals with a macroscope while working in a snow pit in Fraser Experimental Forest, Colo. Credit: Kelly Elder

Then the students get macroscopes—like microscopes, but with a large viewing area—and they look at the changing snow crystals. “It’s like a whole other universe,” said Marshall.

Digging the trench serves as a window into ice core climate research, said Marshall, and lets instructors start discussions about how scientists study climate change. In the world of climate research, scientists drill kilometers down, extracting deep ice cores that help researchers see what the climate was like and how it changed hundreds of thousands of years back.

Another SnowSchool project is a crowdsourced science initiative conducted in collaboration with NASA’s SnowEx program. In this project, students from SnowSchool collect snow data on the ground that will ultimately help calibrate satellite data.

No Snow, No SnowSchool?

Near Boise, Idaho, the flagship SnowSchool site at the nonprofit Bogus Basin recreation area and ski resort beckons. At this location, students often come from predominantly Latinx agricultural communities and typically have not spent much time in a snowy environment, said Marshall. By focusing the curriculum on water availability, he viscerally links water to everyday life for students steeped in cultivating crops. Students learn the role snow plays in the water cycle, which gives them tools to talk about snow and water with their families. “Snow water resources,” Marshall said, “are so impacted by climate change.”

With the uptick in extreme events, the snowpack atop mountains is more variable and melts faster, said McClay. “Eighty percent of our water is coming from melted snow,” he said. Students see trends with snow-sourced data and begin to consider the repercussions for water supply, irrigation, agriculture, or fires. “The list goes on.”

Unfortunately, Marshall admitted, “people that live too far from the mountains can’t really engage with this program.” For these communities, “the SnowSchool organization put a lot of effort into videos and online material,” in part as a response to travel restrictions imposed by the COVID-19 pandemic.

In his visits to classrooms, Marshall has found that even a cooler filled with snow excites kids. “They want to have snowball fights, [or] see how long they can stick their hands in [it],” he said. McClay is hopeful that as SnowSchool expands, students everywhere can engage in the program—as long as there’s access to snow.

“SnowSchool,” said McClay, “is not as effective without snow.”

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Exploring the Dramatic Shift in Ice Age Duration

Wed, 07/28/2021 - 12:36

For the past 1.2 million years (the Late Pleistocene), ice ages have occurred in cycles lasting roughly 120,000 years. Before this period (the Early Pleistocene), the cycles lasted only about 41,000 years. The cause of this change in ice age duration, known as the Mid-Pleistocene Transition, is unknown. A recent article published in Reviews of Geophysics examines possible explanations for the Mid-Pleistocene Transition, and we asked the authors about the transition and the leading hypotheses for its cause.

What makes the Mid-Pleistocene Transition (MPT) particularly fascinating to study?

Studying glacial cycles, and particularly the MPT, requires you to think about all the possible ways the different components of the Earth’s climate can interact. A growing ice sheet will reflect more sunlight back into space, cooling down the climate. A colder climate means colder oceans, which absorb more CO2 from the atmosphere. Ice sheets cause erosion, which creates airborne dust and influences the flow of the ice. Some of this dust can rain down on top of the ice sheet, creating dirty snow that absorbs more sunlight, or it can be blown into the oceans, where it can fertilize algae growth and increase CO2 drawdown.

These interactions between the ice sheets, the global climate, the oceans, the carbon cycle, and even the solid Earth are fascinating. A change in any one of these physical systems will affect all others, which is a profound realization when thinking about current climate change.

How did the cycle of ice ages differ between the Early Pleistocene and Late Pleistocene?

The Pleistocene (the last 2.8 million years of Earth’s geological history) is distinguished by the presence of glacial cycles (“ice ages”): periodic, dramatic climate changes that caused vast ice sheets to appear and disappear over large parts of North America and Europe.

During the Early Pleistocene, these glacial cycles occurred roughly every 41,000 years. This makes sense, because the cycles are caused by small changes in Earth’s orbit, which also occur every 41,000 years. The MPT marks the transition to the Late Pleistocene, when the glacial cycles took much longer (about 100,000 years on average). Understanding how a 41,000-year change in Earth’s orbit can lead to a 100,000-year climate response is one question we tried to answer; the other is why it did so only during the Late Pleistocene and not the Early Pleistocene.

What are the various explanations for the MPT?

There are two groups of theories. The first is the “global cooling plus non-linear feedbacks” group, which says that ice sheets respond non-linearly to changes in climate. The larger ice sheets of the cold Late Pleistocene created their own cold climate environment, making them more resistant to climate warming. This allowed them to survive some of the warm interglacial periods, growing even larger during the next cold phase. This didn’t happen during the Early Pleistocene because the world was warmer then, so the ice sheets never reached the size required to survive a warm period.

The second group is the “erosion” theories. Ice sheets slowly grind away the land underneath them, scraping away the soil until nothing remains but bare rock. Ice slides more easily over soil than over rock, so soil-based ice sheets tend to “flatten out” compared with rock-based ice sheets. Also, as mentioned before, soil-based ice sheets create airborne dust, which can lead to dirty snow (which absorbs more sunlight) and oceanic algae fertilization (which draws CO2 out of the atmosphere). In this theory, the MPT marks the moment when the last soil was eroded away in northern North America and Europe, and these processes ceased.

Does the available evidence appear to more strongly support one explanation over the others?

It’s likely that all of these mechanisms at least played a role, but determining how much of a role is tricky due to the lack of detailed and conclusive data. Since the erosive action of ice sheets tends to remove evidence of earlier ice sheets, it’s difficult to figure out what the older ones looked like. Data on the state of the Earth’s climate or the composition of the atmosphere are also limited. Our most valuable source of information is the ice core record, but that only extends to 800,000 years ago – not long enough to cover the MPT.

What different research approaches could be used to resolve the question of the MPT’s cause?

Currently, a team of scientists is drilling into the Antarctic ice sheet to produce a core that, if all goes well, should go back well over a million years. We’re very excited to see what comes out of that project, particularly in terms of CO2 concentration.

At the same time, the ice-sheet modelling community is working on improving the physics of sliding and oceanic melting. Although most of the focus is on near-future ice-sheet retreat, the outcomes are also important for the MPT, since these processes were important in those times as well. And the more accurately our models can reproduce ice-sheet evolution in the past, the better they will be at predicting the future.

—Constantijn J. Berends (c.j.berends@uu.nl; 0000-0002-2961-0350), Roderik S. W. van de Wal (0000-0003-2543-3892), and Lucas J. Lourens (0000-0002-3815-7770), Utrecht University, The Netherlands

Code-Switching and Assimilation in STEM Culture

Wed, 07/28/2021 - 12:35

Picture a young weather enthusiast walking across the stage to receive their meteorology degree. They feel pride in this culmination of their years of hard work. They also recall how that hard work appeared to others. Friends and family called them “proper” during visits home from school, creating a distance that lingered. Their colleagues and peers frequently offered their own unsolicited impressions:

“You are so articulate!”

“You need to be more professional…”

“You cannot show up like that.”

“You are not like those other Black people.”

Or in another common story, an early-career scientist reflects on the cost of their profession. They earned a degree, but they had to permanently relocate for school and the only career opportunities available to them. Visiting home and family is emotionally exhausting because it is a constant reminder of what was given up to focus on those limited opportunities. They raise a new family away from their abuelitos, missing out on making tamales with their tías or dancing to cumbia at their cousin’s quinceañera. As they slowly lose their grasp of their native language, they fear their children will also lose that deep connection with their Latino heritage. Sí se puede, but is it worth it?

On the surface these stories may sound and feel similar to most of us who pursued higher education or careers in academia. Who hasn’t felt inadequate, had trouble finding their place in a new environment, or ultimately felt as though they did not belong? The difference we authors want to express is that although the situations and experiences may sound similar, the consequences of these experiences for Black, Indigenous, and People of Color (BIPOC) professionals in geosciences are very different. Additional stress, emotional labor, and baggage cause long-lasting trauma for BIPOC professionals. We feel this trauma. It is visceral. And it bubbles to the surface even as we write this article. Pursuing careers in this extremely white dominated field requires us, more often than not, to assimilate either internally or externally to the culture, to code-switch. In the process, we lose our authenticity.

This assimilation, however, is counterproductive to the creation of a richly diverse and inclusive scientific community that is prepared to address the questions of our modern world, and more importantly, it is deeply disrespectful and harmful to the BIPOC scientists whom the community boasts about recruiting. We are asking our colleagues to form a better awareness of code-switching, why BIPOC scientists perform it, and how we can address the deficiencies in our community that require it.

Code-Switching and Identity Shifting

The term code-switching originates from linguistics, meaning “the practice of alternating between two or more languages or varieties of language in conversation.” The concept of code-switching has evolved to describe the changes in speech, appearance, and behaviors by an individual to adjust to the norms of the dominant culture in a given space. We have all code-switched at some point, but for BIPOC it can be a mandatory coping strategy to protect ourselves from judgment, discrimination, hypervisibility, and tokenism [Dickens et al., 2019].

Initiatives to increase the number of BIPOC in science, technology, engineering, and mathematics (STEM) have been working, if slowly, and now these folks are attempting to exist and thrive within the white-centric environments of academic institutions, scientific laboratories, and private industries. Wanting to fit in and be comfortable, BIPOC learn to assimilate to cultural norms by “deemphasizing a negatively-valued identity and replacing it with a positively-regarded identity,” also known in psychology literature as identity shifting [Dickens and Chavez, 2018, p. 761].

These shifts can be intentional or unintentional as we evaluate the level of risk associated with the possibility of making white people uncomfortable. For example, we’re asked to participate in a diversity panel, again. Should we express to our white colleagues that we feel used as a prop or just stay quiet and humbly accept the invitation? Will we be seen as problematic or ungrateful? Even through editing of this writing, we authors felt conflicted over appeasing the editors and staying true to our story, appreciating the critique yet not wanting to lose our voice. The risk can range from feeling embarrassed or worried you made a bad impression to being harassed and fearing for your life and safety.

Some BIPOC grow up in segregated communities, only learning to identity shift after they leave home and experience predominantly white spaces such as a university, a scientific conference, or an internship at a national lab. Even within historically Black colleges and universities, academic spaces where Black culture is championed, a Black STEM student may still feel like an outcast if they feel the need to hide their perceived nerdy self to belong, as nerdiness is stereotypically associated with whiteness.

The Cost and Consequences of Code-Switching

The inner turmoil created by shifting identities can often manifest as physical and mental ailments [Dickens and Chavez, 2018]. Code-switching is exhausting, taking up mental capital that should be devoted to our research. We just want to be scientists, without having to separate our culture from our profession, and to be able to present ourselves authentically without needing to constantly account for potentially negative reactions from others.

Instead, we live with that constant nagging in the back of our minds, reminding us that we have to say the right things, react the right way, and behave in a manner that draws attention away from the obvious difference we present. When we are not able to blend in, we falsely believe that we don’t belong and fear being called out as incompetent. This phenomenon is often called imposter syndrome. This explanation, however, identifies the person feeling it as the responsible party—the one who needs to change. Imposter syndrome is, instead, a scapegoat that takes focus away from addressing the culture of bias and systemic racism that exists for women and BIPOC scientists [Tulshyan and Burey, 2021].

For BIPOC in geoscience, those feelings are compounded because of the more extreme cultural isolation that exists in the field compared with other STEM fields. In the geosciences, we are often not just one in a historically excluded group but the only one in our field or lab.

What does a reliance on code-switching force us to give up? We become accustomed to adjusting to norms within our professional workplace (e.g., at the office, at conferences, during a field campaign or expedition) or, often, in the neighborhoods we’re required to move to. Those adjusting behaviors start to become unconscious, even dominant. We start to lose our native and colloquial language and cultural norms. Returning home can make us feel like outsiders looking in. We lose the thread that connects us with the people we grew up with and the people who raised us. Ultimately, we are left to wonder where we fit in.

We are never white enough in our professional environments but become too white in our home communities. Some of us self-exclude and choose not to be seen altogether, not wanting to lose ourselves or be the representative of an entire race of people. But by choosing to stay in the shadows, we also lose the opportunities and recognition that make any profession worth pursuing.

Professionalism Versus Assimilation

We are taught to be professional, but let’s consider the origins of present-day professional standards. In the broadest sense, the concept of professionalism encompasses the conduct by which one is expected to present oneself in formal settings, often customized to one’s discipline.

For geoscientists, these settings include job interviews, research seminars, conferences, classes, labs, and field campaigns. The standards are taught by mentors and in professional development seminars that focus on how to modify people’s behavior rather than how to evaluate, modernize, or fix the many problems in the culture. We persist in perpetuating professional standards that were established by white men many decades ago when women and BIPOC were not represented. Ethnically and culturally traditional attire, hairstyles, and vernacular were inconceivable when present-day professionalism was defined. Some scholars contend that this bias in professional standards is a form of white supremacy.

BIPOC resort to code-switching to boost their perceived professionalism—we assimilate [McCluney et al., 2019]. Code-switching, then, becomes a barrier to true inclusivity [Goldstein Hode, 2017], which should be the ultimate goal of modern professional behavior based in mutual respect and ethical integrity. Inspired by the perspectives of Halsey et al. [2020], we strongly believe we should elevate and celebrate the people within our scientific communities, not ask them to assimilate.

A Path Forward Isn’t Easy

The need for code-switching will persist until we can eradicate the systemic, institutional, and personal racism against which we need a shield. The onus should be on the larger community, not on BIPOC alone, to develop strategies that lead us to modern-day professionalism that is inclusive and respectful of everyone.

How can we collectively create an inclusive community and environment where we can each be our authentic selves? It’s not easy, and we don’t have all the answers. It requires us all to challenge professional standards.

Professionalism should require mutual respect, not assimilation to a single specific set of behaviors. Everyone, but especially those in leadership or supervisory positions, should seek out and recommend professional development opportunities on cultural competencies. Look around your workplace and take steps to evaluate and assess the culture and climate, then use these data to modernize your policies and practices to focus on equitable inclusion. Understand and listen to the variety of experiences of the people around you, in particular those of your BIPOC colleagues. Accept BIPOC colleagues for who they are. By doing so, you’ll show everyone around you how to change the culture rather than changing the people. By working together, we will become better together.

Ultimately, the need for code-switching negatively affects the individual BIPOC professional as well as the entire science community. As challenging as it can be, we are passionate about the science we pursue and desire to contribute to it. But the more we assimilate, the less diverse our science and our ideas become. This lack of diversity makes code-switching and the persistence of the institutions that require it a detriment to the advancement of our knowledge of our rapidly changing world.

To our BIPOC friends, peers, and colleagues: We carry hope in each other, knowing that we can look across the conference table, the poster session, or the Zoom room and be able to lock eyes and feel comfort and community. We want future generations to be empowered to show up as their authentic selves and focus their time and effort on great science, without interference and the additional labor of code-switching.

Acknowledgements

The authors would like to thank Deanna Hence, Rebecca Haacker, Rosimar Ríos-Berríos, Talea Mayo, and Valerie Sloan for their encouragement, support, and helpful contributions to this article.

The Dune Universe Inspires Titan’s Nomenclature

Tue, 07/27/2021 - 11:34

This is an authorized translation of an Eos article.

Frank Herbert’s Dune tells the story of Paul Atreides, the son of a noble family sent to the hostile desert planet Arrakis to oversee the trade of a mysterious drug called melange (nicknamed “spice”), which grants those who consume it supernatural abilities and longevity. Betrayal, chaos, and political infighting ensue.

Imagine you are on Arrakis, surrounded by an ocean of sand. The air is unbreathable, the sky hazy, the landscape mysterious. Sand for miles, as far as the eye can see. You know that several hundred kilometers away lies a vast network of canyons that, from above, look as though they were carved by enormous worms.

Before you get too excited, you should know that this is not the famous desert planet of the Dune novels.

No, this Arrakis is closer to our own world.

This Arrakis is just a billion kilometers from Earth, on a world orbiting Saturn.

We have even landed a spacecraft nearby.

If you have not yet guessed, this Arrakis, officially called Arrakis Planitia, belongs to the second-largest moon in our solar system, Titan. Arrakis is a vast plain of undifferentiated sand, but not sand as we know it. Titan’s sand is made of large organic molecules, which would make it softer and stickier, said Mike Malaska, a planetary scientist at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, Calif.

All of Titan’s features (pictured here in ultraviolet and infrared by the Cassini orbiter) are named after places in Frank Herbert’s Dune novels. Credit: NASA/JPL/SSI

Malaska likes to imagine that Titan’s hydrocarbon sand, which is actually known as tholin, or complex organic gunk, could double as the infamous spice at the center of Dune’s sprawling story arc.

In the Dune books, the spice smells like cinnamon, whereas the tholin on Titan “probably smells like bitter almonds…and death,” Malaska said.

Arrakis is not the only name from the Dune novels adorning Titan’s geologic features. All of Titan’s named undifferentiated plains and labyrinths (canyon-like features carved into the surface) are named after planets from the Dune series. There is Buzzell Planitia, named for the “punishment planet” used by an ancient order of women with supernatural abilities. There is Caladan Planitia, named for the home planet of Dune’s main hero, Paul Atreides. There is Salusa Labyrinthus, named for a prison planet. And more.

“I’m amazed [at] how much Titan resembles the description of Arrakis,” Malaska said. Beyond the vast plains of hydrocarbon sand stretching across Titan’s surface, the moon’s complex climate of methane storms and rain feels like something out of Dune. “Titan is Dune.”

And, of course, there are the dunes. Titan’s dune fields encircle the moon’s equator for 16,000 kilometers. The moon has more dunes than Earth has deserts.

Rosaly Lopes, another planetary scientist at JPL, was one of the first people to see Titan’s dunes. She and other members of the Cassini team were analyzing images from one of the spacecraft’s first Titan flybys, back in 2005, when they saw strange curved features on the surface.

“When we first saw the dunes, we didn’t know they were dunes,” Lopes said. It was not until a later Cassini flyby that they confirmed Titan had dunes all the way around its equator.

In fact, it was Lopes who first suggested, in 2009, naming Titan’s plains and labyrinths after planets of the Dune universe, although she does not remember exactly how the idea came about. It made sense, she said, considering Titan’s dunes.

Planetary scientists do not name features until there is a scientific need to, Lopes said. A theme must be chosen first, whether mythical birds for interesting areas on the asteroid Bennu or fire gods for volcanoes on Jupiter’s moon Io (Lopes named two of them, Tupan and Monan, after deities of Indigenous cultures in her home country of Brazil). There are other literary features in the solar system, such as the craters of Mercury, which are named after famous artists and writers.

Although Herbert was originally inspired by the sand dunes of the Oregon coast, Malaska imagines that Herbert, and his many readers, might also have been picturing Mars, the only desert planet we knew of at the time Dune was published, in 1965. In fact, that same year, NASA made its first successful flyby of Mars with its Mariner 4 spacecraft, and humanity got a close-up look at the Red Planet.

But Titan’s dune fields are unique in the solar system, and it is fitting that this mysterious moon bears names from a revolutionary science fiction universe.

—JoAnna Wendel (@JoAnnaScience), Science Writer

This translation by Daniela Navarro-Pérez (@DanJoNavarro) of @GeoLatinas and @Anthnyy was made possible by a partnership with Planeteando.

Testing on the Tundra: NASA Snow Program Heads North

Tue, 07/27/2021 - 11:33

Seasonal snowpack covers 46 million square kilometers annually—31% of Earth’s land area—but that number is shrinking. Snowpack is accumulating later, melting earlier, and retreating at an even faster rate than Arctic sea ice. This reduction in snowpack has implications for water locally and climate globally.

“Snow is an enormous regulator of heat on Earth because of its high reflectivity,” said Matthew Sturm, group leader of the Snow, Ice and Permafrost Group at the University of Alaska Fairbanks Geophysical Institute. “The Earth gets rid of enormous amounts of heat by painting itself white in the winter, and that’s going away.”

Just how substantial will changes brought by shrinking snowpack be? SnowEx, a multiyear NASA research program, hopes to find out. SnowEx has tested sensors in Western states since 2017; this winter the research continues in Alaska, a state with applicable infrastructure, experience, and plenty of snow.

Matthew Sturm, who specializes in tundra research, probes for snow depth using a GPS-enabled automatic depth probe. Credit: Matthew Sturm

A Satellite for Snow

Every 10 years, an independent panel assesses NASA’s satellite fleet and recommends research areas where needs are currently unmet. The 2017 decadal survey suggested snow (and, specifically, snow water equivalent) as a possible mission focus for NASA’s Explorer program.

“[Snow water equivalent] is a critical component of hydrologic cycling and the Earth’s energy balance, but it’s really difficult to measure,” said Carrie Vuyovich, project scientist for SnowEx Alaska and a research physical scientist at NASA’s Goddard Space Flight Center. Field observations provide valuable data, but only in limited areas. “It means a huge amount of landscape is missing information,” she said. “Satellites are really the ideal observers to cover that amount of area.”

To prepare for a potential satellite mission, SnowEx scientists are developing and refining aircraft-mounted sensors adaptable across a range of conditions. There’s no guarantee of a satellite launch—“It’s a competitive process,” said Vuyovich—but the snow science community can prepare for the potential opportunity by testing sensors calibrated to the temporal and spatial intricacies of snow.

Tundra Crust and Taiga Woods

A snow-focused satellite should work in all regions, from deep mountain powder to dense tundra crust. Sensors must also react to complicated conditions: wet snow, deep snow, snow covered by trees. SnowEx’s mobility allows it to refine algorithms and accuracy in various conditions, and Alaska is essential for testing those abilities.

SnowEx tundra research will take place in the Toolik/Imnavait area of northern Alaska. The area is home to unique snow formations like sastrugi, snow ridges shaped by wind erosion. Credit: Matthew Sturm

“The large fraction of global snowpack is here at higher latitudes,” said Svetlana Stuefer, an associate professor of civil and environmental engineering at the University of Alaska Fairbanks. As deputy project scientist for SnowEx Alaska, Stuefer is helping coordinate the campaign and identifying locations that represent the world’s two largest snow biomes: Arctic tundra and boreal forest, also called taiga.

“Tundra and taiga take up a lot of room, but they pose two different problems,” said Sturm, a senior adviser to SnowEx Alaska.

Tundra snow is shallow, stratified, and often located on permafrost. Remote sensors must recognize and respond to those conditions. Taiga is more complex. “Sorting out what’s on the ground and what’s on the trees is very difficult,” said Sturm. Snow suspended from tree branches reflects light (which is good for climate control) but may sublimate into the atmosphere without contributing to groundwater. A snow-focused satellite would need sensors attuned to both climate and water issues.

“I’m excited to see where [SnowEx] goes. They have a lot of challenges ahead of them, but I think it can be an important tool,” said Daniel Fisher, a senior hydrologist with the U.S. Department of Agriculture’s Alaska Snow Survey not involved in the project. “I don’t think [remote sensing] will ever be a silver bullet, but I do think it will play an important role in understanding and measuring the snowpack across the state,” he said.

Fixing the Data Drought

SnowEx scientists plan to fly lidar and stereophotogrammetry sensors in Alaska this winter. Another aircraft will carry the Snow Water Equivalent Synthetic Aperture Radar and Radiometer (SWESARR), a specialty SnowEx instrument developed at Goddard to calculate snow water equivalent (SWE) using active and passive microwaves. Field staff will measure snow conditions on the ground to compare observations.

Charlie Parr, a research technician at the University of Alaska Fairbanks, measures the density of layers of snow. Credit: Matthew Sturm

Better snow data could benefit a range of interests, from road crews to flood forecasters to subsistence trappers. Increased SWE data would particularly help water managers; one in six people relies on seasonal snowpack for drinking water.

Then there are the recreationalists, like backcountry skiers who scan avalanche reports while brewing their morning coffee.

“Right now, operationally, we are extremely reliant on point-based observations,” said Andrew Schauer, a lead forecaster for the Chugach National Forest Avalanche Information Center not involved in SnowEx. Avalanche centers are challenged by a lack of data, he said, but aerial observations could fill that gap if updated quickly. “I’m excited to see what becomes of the SnowEx program,” he said.

By preparing sensors for all winter conditions, SnowEx scientists hope to be ready should NASA ask for a mission proposal. Research in Alaska is an important step to reaching that goal.

“[SnowEx Alaska] positions us to be competitive,” said Sturm. “I don’t think there’s any question that a satellite for snow would help humanity.”

—J. Besl (@J_Besl), Science Writer
