Eos: Earth & Space Science News

Heat Waves Are Blowing in the Wind

Wed, 10/02/2019 - 11:32

Between 2001 and 2010, over 136,000 people died because of heat waves, representing an increase of more than 2,000% compared to the previous decade. And as Earth continues to warm, scientists say extreme heat waves will only get worse.

Understanding the causes of heat waves is a key factor in our ability to predict them, which in turn enables the prompt implementation of emergency response plans. Understanding these causes, especially those that are related to land conditions, could potentially help us lessen the severity of heat waves in the future as well.

Researchers from Ghent University in Belgium and Wageningen University in the Netherlands have identified an important new contributor to heat wave severity. Using data from the “mega–heat waves” in western Europe in 2003 and in Russia in 2010, researchers determined that these heat waves were exacerbated by advected heat from upwind drought-wracked regions.

Dominik Schumacher, the lead author of the study, explained to Eos that there are three main sources of heat during a heat wave. First, heat can come from below, rising up from the Sun-warmed ground. Second, heat can come from above, from a layer of air called the free troposphere directly above the planetary boundary layer. Third, said Schumacher, “you can also have heat coming in from the side, or horizontally. This is the part that we investigated more closely. Up until now, this wasn’t really looked at too much.”

Previous research had already shown that local soil conditions play an important role in mediating the first source of heat, the heat coming from the ground. “When the soils dry out, even more energy goes into heating the air above, and less energy goes into evaporation,” said Schumacher.

Researchers wanted to know if dry soils also played a role in the third contributor to heat waves, the horizontal transport of heat from other places on Earth. By using a Lagrangian trajectory model, Schumacher said, “we can trace the air back in time, we can find out where the air came from that is residing over our heat wave region now. Where was that air 5 days ago? Even more importantly, we can also analyze if the air gains or loses heat on the way to our heat wave region.”
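
The core idea of this kind of back-trajectory analysis can be sketched in a few lines: step an air parcel backward in time through a wind field and record where it has been. The snippet below is a minimal, idealized illustration (the toy wind function and simple Euler stepping are assumptions for demonstration), not the trajectory model used in the study, which also tracks the heat each parcel gains or loses along its path.

```python
import numpy as np

def back_trajectory(start, winds, hours=120, dt=3600.0):
    """Trace an air parcel backward in time through a horizontal wind field.

    start : (x, y) arrival position in meters
    winds : function (x, y, t) -> (u, v) wind components in m/s
    hours : how far back to trace (120 h = 5 days)
    dt    : time step in seconds
    """
    x, y = start
    t = 0.0
    path = [(0.0, x, y)]
    for _ in range(int(hours * 3600 / dt)):
        u, v = winds(x, y, t)
        x -= u * dt          # step backward: subtract the wind displacement
        y -= v * dt
        t -= dt
        path.append((t / 3600.0, x, y))
    return np.array(path)    # columns: hours before arrival, x, y

# Toy example: a steady 10 m/s westerly wind.
path = back_trajectory((0.0, 0.0), lambda x, y, t: (10.0, 0.0))
print(path[-1])  # 5 days earlier the parcel was roughly 4,300 km to the west
```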

In the case of the 2010 heat wave in western Russia, unusual weather patterns brought warm air from the east and southeast into the heat wave region. These regions were in the middle of a drought and had extremely low soil moisture. Researchers estimate that this soil dryness was responsible for 30% of the heat that was transported from the drought region to the heat wave region in western Russia. Thus, drought and soil conditions in far-off places can substantially worsen heat waves.

Wind can augment a heat wave by transferring heat from a region suffering from drought. Credit: Modified from Schumacher et al., 2019, https://doi.org/10.1038/s41561-019-0431-6

Physical geographer David Keellings of the University of Alabama, who was not involved with the new research, said that heat waves in which a substantial amount of heat is advected from another location happen not just in Europe but across North America and Asia as well. Thus, this research could help us understand heat waves that occur across a substantial part of the globe.

Mitigating Heat Waves

Understanding how land conditions like soil moisture contribute to heat waves could also provide important clues about how to mitigate heat waves in the future.

Schumacher is involved in a project called Dry-2-Dry, which is headed by Ghent University’s Diego Miralles and is in the process of investigating how land management practices could ameliorate droughts and heat waves. In particular, Schumacher said that the positioning of irrigated cropland upwind of certain areas could help to lessen some types of heat waves.

Keellings said that understanding global (in addition to local) contributors to heat waves is important for improving seasonal heat wave predictions, that is, predicting the likelihood of heat waves in a given summer, not just whether one will occur in the next week or so.

“We’re looking at things that go on around the world, whether it’s sea surface temperature or broad-scale movements in the atmosphere or soil moisture and drought or land cover change or influences of urbanization—all of these things are going on at large scales—and trying to see what is their relationship to the chance of a heat wave happening,” he said.

Keellings said that not only are heat waves becoming more frequent and lasting longer but preliminary work from his lab suggests that they are also covering larger areas than they used to. And these worsening heat waves will have serious consequences. Although the number of deaths caused by heat waves is certainly the most troubling, heat waves have also grounded airplanes in the United States, disrupted train travel in the United Kingdom, and even interfered with electricity output from nuclear reactors in France and Germany.

Keellings said that the human element of heat waves is a major motivator for his research. “What drives me to investigate heat waves is not just that they’re fascinating events from a climate science perspective…but also what ultimately drives this is that heat waves are hugely linked to human health.”

—Hannah Thomasy (@hannahthomasy), Freelance Science Writer

Human Activity Outpaces Volcanoes, Asteroids in Releasing Deep Carbon

Tue, 10/01/2019 - 17:06

Of the 1.85 billion billion metric tons of carbon that exist on Earth, 99.8% exists belowground, according to new reports on deep carbon.

The research estimates that human activity annually releases into the atmosphere around 40 to 100 times as much carbon dioxide as does all volcanic activity. That’s also a slightly higher rate of carbon emission than Earth experienced just after the asteroid impact that likely killed the dinosaurs, the researchers found.

Carbon “provides the chemical foundation for life…and it plays a disproportionate role in Earth’s uncertain, changeable climate and environment,” Deep Carbon Observatory (DCO) executive director Robert Hazen said in a statement. Scientists with DCO led the studies on Earth’s carbon that were published today in Elements.

“We cannot understand carbon in Earth—we cannot place the changeable surface world in context—without the necessary baseline provided by deep carbon research,” Hazen said.

A Mostly Steady State

The new reports summarize 10 years of field data collection, lab experiments, and computer modeling of the origin of Earth’s carbon, how it circulates throughout the Earth system, and extreme events that can upset Earth’s carbon balance. The research estimates that Earth holds a total of 1.85 billion gigatons of carbon, although estimates of total carbon content of the core and lower mantle are speculative and likely to change with future research.

“It’s almost like a forensic detective story, putting together lots of bits of evidence using a wide range of techniques to come up with a planetary carbon budget,” DCO volcanologist Marie Edmonds of the University of Cambridge in the United Kingdom said at a 1 October press conference.

More than 90% of the carbon that exists above the crust resides in the deep ocean and marine sediments. The atmosphere contains only 1.4% of all above-surface carbon, mostly in the form of gaseous carbon dioxide (CO2).

Fig. 1. The annual rate of carbon exchange with the atmosphere from geologic ingassing and outgassing processes, in units of petagrams, or gigatons, of carbon per year (Pg C/y). “Org carbon” is organic carbon, and “MOR” is mid-ocean ridges. Credit: Deep Carbon Observatory

With few exceptions over the past 500 million years, Earth has maintained a balanced carbon cycle, returning to the ground about as much carbon as it outgasses. Silicate weathering is the fastest way to return carbon belowground, with smaller contributions from organic carbon burial, ocean crust uptake, and subduction (see Figure 1).
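
To see why silicate weathering pulls carbon back underground, it helps to recall the idealized Urey reaction, in which atmospheric CO2 reacts with a calcium silicate mineral and the carbon is ultimately locked away as carbonate rock. This is a textbook simplification offered for context, not a reaction taken from the new reports:

```latex
\mathrm{CaSiO_3} + \mathrm{CO_2} \;\longrightarrow\; \mathrm{CaCO_3} + \mathrm{SiO_2}
```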

In the past 500 million years, four volcanic eruptions created large igneous provinces (LIPs) that each released massive quantities of CO2 over tens of thousands of years. These LIPs caused the above-ground quantity of CO2 to spike to about 170% of its steady state value, which led to warmer surface conditions, more acidic oceans, and mass extinctions.

Likewise, large impact events, including the Chicxulub impact 65 million years ago, released large quantities of carbon from the subsurface into the atmosphere.

“The Chicxulub event…greatly disrupted the budget of climate-active gases in the atmosphere, leading to short-term abrupt cooling and medium-term strong warming,” DCO scientists Balz Kamber and Joseph Petrus said in a joint statement.

The Volcanic Details

Volcanic regions—including fractures, faults, soil, lakes, mid-ocean ridges, and active vents—outgas 280–360 million metric tons of CO2 per year through direct venting and diffuse emissions. In a steady state carbon cycle, this is the largest contributor to aboveground carbon. Other varied geologic processes outgas an additional 20–40 million metric tons of CO2 per year. Widespread regions like Yellowstone, the East African Rift, and China’s Tengchong volcanic field can also have significant diffuse CO2 emissions.

Before Costa Rica’s Poás volcano, seen here, erupted, researchers spotted a significant change in the composition of its emissions. Credit: Katie Pratt, University of Rhode Island

“We have achieved a much more complete picture of volcanic carbon dioxide degassing on Earth, reinforcing the importance of active volcanoes,” said U.S. Geological Survey geologist Cynthia Werner. But the degassing studies also led to the discovery “that the subtle release over large hydrothermal provinces and areas of continental rifting are also dominant regions of planetary outgassing.”

Continuously monitoring outgassing rates and compositions could also serve as a new eruption forecasting tool. At seven active volcanoes—including Italy’s Etna and Stromboli and the United States’ Kīlauea and Redoubt—the ratio of CO2 to sulfur dioxide changed significantly months or years prior to large eruptions, according to these studies. In some cases, the outgassing change happened before precursor quakes and ground deformation.
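
At its simplest, that forecasting signal is a ratio watched over time. The sketch below shows the general idea with invented numbers; the measurement values, baseline window, and threshold are illustrative assumptions, not the procedures used at any observatory.

```python
import numpy as np

# Illustrative gas measurements (arbitrary units); real monitoring relies on
# calibrated spectrometer or in situ multigas sensor data.
co2 = np.array([400.0, 420.0, 410.0, 650.0, 900.0, 1100.0])
so2 = np.array([200.0, 210.0, 205.0, 190.0, 180.0, 170.0])

ratio = co2 / so2
baseline = ratio[:3].mean()   # average over an assumed quiet period

# Flag measurements where the CO2-to-SO2 ratio climbs well above baseline,
# the kind of shift reported months to years before some large eruptions.
elevated = ratio > 2 * baseline
print(np.round(ratio, 2), elevated)
```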

Fig. 2. The annual rate of carbon exchange with the atmosphere from large-scale perturbations compared with the sum of all geologic outgassing processes, in units of petagrams, or gigatons, of carbon per year (Pg C/y). “LIP” is large igneous province. Credit: Deep Carbon Observatory

Humanity’s Outsized Carbon Impact

By far the largest disruptor of Earth’s steady state carbon cycle is anthropogenic outgassing. The new reports indicate that humans emit around 10 gigatons of CO2 into the atmosphere each year (see Figure 2). That flux is more than 10 times the rate at which natural geologic processes return it belowground, according to the studies.

The rate of anthropogenic carbon emissions is higher than that from extinction-level impacts and large outpourings of magma and is 40–100 times higher than the emission rate from all natural outgassing phenomena. Researchers noted that Earth is responding to human emissions with all the hallmarks of the massive carbon perturbations of the past: hotter surface temperatures, disruptions to the hydrologic cycle, ocean hypoxia and acidification, and mass extinction.

“To secure a sustainable future, it is of utmost importance that we understand Earth’s entire carbon cycle,” Edmonds said in a statement. She added in the press conference, “Earth will rebalance itself, but it will take 100,000 years.”

These reports were released ahead of the Deep Carbon 2019 conference in Washington, D.C.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

Can We Tell If Faults Grew During or Between Earthquakes?

Tue, 10/01/2019 - 11:30

Using an innovative approach, Preuss et al. [2019] provide insights into the propagation of faults both between and during earthquakes. By using a model set-up that joins long-term elasto-viscous-plastic deformation with earthquake rupture simulation, the authors are able to study faulting from fault initiation through stages of both aseismic growth between earthquakes and co-seismic growth during earthquakes. The innovative physics of the model permits the emergence of seismic slip after years of aseismic quasi-static fault growth.

These results provide insight into the potential development of fault bends, which are often observed along strike-slip faults. Because the angle of fault propagation changes as the fault slip speed evolves, cycles of earthquakes may produce a change in fault orientation along propagating faults. The findings of this study have interesting implications for the evolving behavior of active faults, which are often decorated with bends, branches, and secondary faults.

Citation: Preuss, S., Herrendörfer, R., Gerya, T. V., Ampuero, J.‐P., & van Dinther, Y. [2019]. Seismic and aseismic fault growth lead to different fault orientations. Journal of Geophysical Research: Solid Earth, 124. https://doi.org/10.1029/2019JB017324

—Michele Cooke, Associate Editor, JGR: Solid Earth

This Is How the World Moves

Tue, 10/01/2019 - 11:18

It didn’t exactly crack open the world, the presentation Princeton University’s Jason Morgan gave at AGU’s Spring Meeting in Washington, D.C., in April 1967. However, Morgan’s research leading up to the meeting had proved there were, in fact, cracks in the world.

“It seems extraordinary that, in this hall packed with the best geophysicists and geologists in the United States, nobody got excited or even interested by the implications of Morgan’s ideas. They were too new, too different from anything which had been done,” wrote Xavier Le Pichon in the journal Tectonophysics in 1990, when he aimed to “reconstruct what happened during those exciting six months.”

Morgan had, of course, proved the theory of plate tectonics through seafloor spreading measurements. “The evidence presented here favors the existence of large ‘rigid’ blocks of crust” that explained continental drift, he wrote in the conclusion of his landmark paper, “Rises, Trenches, Great Faults, and Crustal Blocks,” published in March 1968 in AGU’s Journal of Geophysical Research. (Morgan’s paper, held up in peer review, came several months after Nature published similar results from Dan McKenzie and Robert Parker, who were largely credited as the first scientists to verify plate tectonics. Le Pichon eventually found a copy of the outline Morgan had scrambled to finish the night before his 1967 talk—neither Morgan nor several colleagues had retained their copies. The document showed that Morgan should be credited for the feat.)

For AGU’s Centennial, this month in Eos we’re celebrating all the scientists who have been fascinated by the idea, since it was first proposed by Alfred Wegener in 1912, that the continents shift underneath our feet, constantly reshaping the planet.

Our cover story features scientists studying striking formations in Borneo. Mount Kinabalu in the Malaysian state of Sabah is a spectacular example of plate tectonics at work. The granite mountain formed when magma rising from the active subduction zone below squeezed between two rock strata and then cooled rapidly—due to mechanisms still not entirely understood—about 7 million years ago. Several other strange landforms nearby were also created around this time, and then, about 5 million years ago, the subduction underneath Sabah just…stopped. The scientists, as they write, want “to understand why subduction ceased and how the landforms of Sabah may be related to deeper processes in the mantle.”

Some of our biggest questions about tectonics and seismic hazards could be answered if scientists could better understand the Alaska Peninsula subduction zone. Read about a large group of researchers who launched the Alaska Amphibious Community Seismic Experiment in May 2018, deploying a huge array of seismometers that stretches from onshore far out into the water and uses innovative ocean bottom seismometers that work together to take integrated observations. The group collected its instruments in August and is making the data freely available as quickly as they are recovered.

What’s next for the field of tectonics? We might ask Jacqueline Austermann of Columbia University. AGU’s Tectonophysics section recently honored her with its Jason Morgan Early Career Award. Our hearty congratulations go to all the 2019 section awardees and named lecturers, as well as to AGU’s Union medal, award, and prize recipients, and, finally, the warmest of welcomes to our 2019 class of AGU Fellows. We look forward to honoring your achievements in San Francisco at Fall Meeting 2019.

With this much knowledge and passion among our members, we look forward to feeling the metaphorical ground shift beneath us soon once more.

—Heather Goss (@heathermg), Editor in Chief, Eos

Louise Kellogg: Geoscientist, Mentor, Science Communicator

Tue, 10/01/2019 - 11:17

This article is part of a Centennial series recognizing eminent Earth and space scientists. Our series presents scientific journeys, as well as “family portraits” of the luminaries and their scientific progeny—the students, postdocs, and collaborators who have received inspiration, encouragement, and guidance from these leading lights of science.

Learn more about the lasting legacies of Warren M. Washington, Mary Pikul Anderson, and Louise Kellogg.

 

Louise Kellogg, a geoscientist who studied Earth’s interior, may be best known in some circles for designing a sandbox.

The Augmented Reality Sandbox that Kellogg and her colleagues created to teach the public about Earth science is just one example of her commitment to teaching and outreach, a passion she balanced with lauded research recognized by AGU, the American Academy of Arts and Sciences, and the American Association for the Advancement of Science (AAAS), among other organizations.

Louise Kellogg was instrumental in the creation of the Augmented Reality Sandbox. Credit: Mike Petersen/USACE

Kellogg, a professor of Earth and planetary sciences at the University of California, Davis, died on 15 April at the age of 59 from complications from breast cancer.

Kellogg grew up in Silver Spring, Md. She earned both her undergraduate and graduate degrees—in engineering physics/philosophy and engineering physics/geological sciences, respectively—from Cornell University in Ithaca, N.Y.

Donald Turcotte, a geophysicist and Kellogg’s graduate adviser, remembers her as smart and driven. She was the only woman in Cornell’s undergraduate physics program, he recalls, and she did just fine despite male students refusing to work with her on problem sets.

“I basically recruited her to be my graduate student,” said Turcotte, who was then the chair of the geology department. In 1983, Kellogg began working with Turcotte studying convective mixing in Earth’s mantle.

One year before Kellogg defended her Ph.D. dissertation, she and Turcotte traveled to France at the request of Claude Allègre, a leader in the field of isotope geochemistry working at the Institut de Physique du Globe de Paris. Allègre, along with Turcotte, had recently developed a “marble cake” model of the mantle in which strips of subducted oceanic crust stretch and thin over time. The two senior scientists were looking for someone to quantitatively justify their model, which was where Kellogg came in.

“Claude was extremely happy with the work that she did,” said Turcotte.

Allègre’s praise helped Kellogg secure her first job after graduate school, a postdoctoral position at the California Institute of Technology (Caltech) in Pasadena. There, she worked with another renowned geochemist, Gerald “Jerry” Wasserburg, studying how helium is transported through the mantle.

Kellogg’s experience with both geophysics and geochemistry research differentiated her from other scientists, said Robin Reichlin, director of the Geophysics and Cooperative Studies of the Earth’s Deep Interior program at the National Science Foundation (NSF).

“Louise was one of the early thinkers who wanted to learn to combine geochemistry with geophysics,” said Reichlin, who knew Kellogg for over 20 years.

Career as a Mentor

While she was working with Wasserburg, Kellogg also spent time mentoring more junior scientists.

Scott King, a geophysicist now at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, was a graduate student at Caltech when Kellogg was a postdoc. He remembers Kellogg reading over his research statements when he was searching the job market. “We were actually even applying for some of the same jobs,” King said.

Today, King pays Kellogg’s generosity forward by baking “planet cakes”—hemispherical cakes with layers that represent the Earth’s inner core, outer core, mantle, and crust—and bringing them to schools to engage students in Earth science.

Kellogg would have approved of these cakes, King said. “She was always looking out for ways to help people understand about the Earth’s deep interior, so I’m sure she would have liked them.”

After spending 2 years at Caltech, Kellogg accepted a position as an assistant professor in the Geology (now Earth and Planetary Sciences) Department at the University of California, Davis, in 1990.

A little more than a decade later, by which time Kellogg had received tenure at Davis and was department chair, she would help hire Turcotte, her thesis adviser at Cornell.

“I was fed up with the snow—I was 70 years old,” said Turcotte. “She arranged for me to get a half-time professorship.”

The Kellogg Model and Earth Rhythms

At Davis, Kellogg continued to study Earth’s interior, specifically, its mantle. Kellogg summed up why the mantle matters at AGU’s Fall Meeting 2018 in a special plenary session focused on scientific breakthroughs of the past 100 years.

“[Convection in the mantle] cools our planet, drives the motion of the tectonic plates, creates the ocean basins and the continents, and, by shaping the surface, makes possible life on Earth,” Kellogg said during the session.

Kellogg’s most cited publication, “Compositional Stratification in the Deep Mantle,” was published in Science in 1999. Along with collaborators at the Massachusetts Institute of Technology, Kellogg proposed a model of Earth’s interior in which the mantle experiences strong chemical stratification at a depth of roughly 1,600 kilometers. Because different minerals are characterized by different densities, this stratification results in a “boundary” between the upper and lower mantle. The location of this boundary proposed by Kellogg and her colleagues is roughly 3 times deeper than previously suggested. This model satisfies the essential geochemical and geophysical observations, Kellogg and her colleagues wrote in Science.

Even now, 2 decades after it was published, this model is still frequently referenced.

“I still see the Kellogg model put up on screens at meetings all the time,” said Reichlin, who worked with Kellogg through NSF. “It’s had a hugely influential place in our thinking about the Earth’s deep interior.”

The Kellogg model of Earth’s interior, above, has “had a hugely influential place in our thinking about the Earth’s deep interior.” Credit: UC Davis

Kellogg’s research garnered awards and recognition from AGU, NSF, AAAS, then president George H. W. Bush, and the University of California, among others.

Dawn Sumner, an Earth scientist at the University of California, Davis, remembers Kellogg as an accomplished scientist and also a generous mentor.

“She did a huge amount of mentoring for early-career faculty,” said Sumner. “She was someone who was always concerned about the well-being of other people.”

Sumner, who was hired at Davis a few years after Kellogg, became close friends with the more senior scientist. Sumner remembers frequently asking Kellogg for advice about the tenure process and how to work effectively with colleagues who were strongly opinionated. “She was always willing to talk,” said Sumner.

Kellogg and Sumner also shared an interest beyond science: Both of the women enjoyed modern dance, and they took classes together at a dance studio close to the Davis campus. Fittingly, one of the classes Kellogg and Sumner enrolled in was called Earth Rhythms.

A Leader in the Geosciences

The geoscience community benefited in many ways from Kellogg’s service. From 2010 to 2019, she was the director of Computational Infrastructure for Geodynamics, an organization that develops and shares geophysics-related software. She also served as the associate editor of the Journal of Geophysical Research (1992–1995), the tectonophysics editor of Eos (1993–1995), the editor of Reviews of Geophysics (2001–2004), and the Tectonophysics section secretary (1998–2000).

Kellogg was strongly committed to training the next generation of scientists.

In 2003, she helped found the Cooperative Institute for Deep Earth Research (CIDER). A self-described “inter-disciplinary synthesis center, research incubator, and research framework for tackling the fundamental question of the nature of global geodynamic processes,” CIDER became CIDER-II (the Cooperative Institute for Dynamic Earth Research) in 2012.

Notable among CIDER-II programs is a multiweek summer program that brings together junior and senior scientists for lectures, tutorials, and workshops. The 2019 program—the 13th—was held at Berkeley with over 75 participants from the United States, Chile, the United Kingdom, Australia, France, Germany, China, and Sweden.

Kellogg was also an early promoter of data visualization in the Earth sciences. Scott King, Kellogg’s colleague from Caltech, remembers talking with her about a meeting session she attended that focused on color palettes for displaying data. “It was a session she really found inspiring,” said King.

In 2002, Kellogg and other researchers started brainstorming what would become the W. M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES); she served as its director from 2004 until her death.

The showpiece in KeckCAVES consists of three 3- × 2.4-meter (10- × 8-foot) walls and a 3- × 2.4-meter floor that display three-dimensional images. A wide variety of data can be projected, including measurements related to planetary geology, plate tectonics, and oceanography. Credit: UC Davis

The showpiece of the center, housed in the Earth and Physical Sciences Building on the University of California, Davis, campus, is a facility known as the CAVE. It consists of three 3- × 2.4-meter (10- × 8-foot) walls and a 3- × 2.4-meter floor that display three-dimensional images. A wide variety of data can be projected, including measurements related to planetary geology, plate tectonics, and oceanography.

“The creative science that emerged from that collaboration changed my view of science,” Sumner wrote on a web page called Memories of Louise set up by the Davis Department of Earth and Planetary Sciences.

KeckCAVES, in partnership with another University of California, Davis–based research center and other science centers, also developed a portable version of its three-dimensional data visualization technology. The result, the Augmented Reality Sandbox, combines the tactile nature of sand with a projection system that lays down a color-coded elevation map, topographic contour lines, and simulated water on whatever sand formations users create.

“With its hands-on, interactive technology, the Augmented Reality Sandbox is well suited for education and outreach,” Kellogg said in 2016.

Instructions for creating the Augmented Reality Sandbox are freely available online, and the team that developed it estimates that over 700 copies have been made. (Confession: I played with an Augmented Reality Sandbox at my local science museum in Portland, Ore.!)

Kellogg’s numerous contributions to the geoscience community—her award-winning science, the mentorship she provided to student and early-career researchers, and her support for data visualization and science communication—will live on in spirit. The programs and organizations she helped found—the Cooperative Institute for Dynamic Earth Research, KeckCAVES, Computational Infrastructure for Geodynamics, and others—will continue.

“She put her efforts into so many areas,” said NSF’s Reichlin. “She’s contributed so much to the science enterprise.”

—Katherine Kornei (@katherinekornei), Freelance Science Journalist

Jupiter’s Galilean Moons May Have Formed Slowly

Mon, 09/30/2019 - 11:16

Although the four major moons of Jupiter were identified over 400 years ago by Galileo Galilei, their formation has remained a mystery. Any successful model must explain all of the observed characteristics of the satellites, but most are able to explain only some. However, in a new model suggesting a slow formation over 10 million years, scientists claim to be able to explain many of the characteristics simultaneously.

“In classical models, satellites accrete kilometer-sized bodies quickly. On the other hand, in our model, satellites accrete roughly 10-centimeter-sized particles slowly,” said Yuhito Shibaike, a researcher at the University of Bern in Switzerland and lead author of a new study on the technique. “Our scenario is unique to simultaneously reproduce the disparate properties of the Galilean satellites—mass, orbits, composition, and internal structures.”

Scientists agree that the Galilean moons formed out of the dusty disk left over after Jupiter’s formation. But the specifics, like how the initial moon seeds formed and how they reached their current orbits, are debated. The new model, detailed in a paper accepted to the Astrophysical Journal, applies a well-tested mechanism for planet formation, called pebble accretion, to satellite formation.

“After we started to work in this scenario, we realized the similarity between the Galilean system and TRAPPIST-1 system,” Shibaike said, referring to the nearby planetary system with seven known terrestrial planets. “The comparison of the two systems, and their formation scenarios, was helpful to develop our Galilean satellite formation scenario.”

Making Moons

The model proposes a slower formation period than many previous models. The seeds that would ultimately become the moons were first formed in the disk of gas left over from the Sun’s formation.

When Jupiter, coalescing out of the same disk of material, reached 40% of its current mass, the seeds were gravitationally captured by a disk of gas surrounding the infant planet. Over the next million or so years, the seeds quickly migrated closer to Jupiter, where they then slowly grew in size.

The innermost accreting infant satellite, which would grow up to become Io, was first stopped by a cavity in the disk. The cavity was created by Jupiter’s strong magnetic field, which removed material from the inner 420,000 kilometers surrounding the planet. By around 4.5 billion years ago, the migrations of the other moons also halted as they were captured at distances that resonated with the orbit of Io.
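
Today’s orbits still carry the signature of that resonance chain: Io, Europa, and Ganymede sit in the 4:2:1 Laplace resonance, which can be checked from their present-day orbital distances using Kepler’s third law. The quick calculation below uses rounded modern values purely as an illustrative check; it is not taken from the new study.

```python
# Present-day semi-major axes of the inner three Galilean moons (kilometers).
a = {"Io": 421_800, "Europa": 671_100, "Ganymede": 1_070_400}

def period_ratio(a_outer, a_inner):
    """Orbital period ratio from Kepler's third law: P is proportional to a**1.5."""
    return (a_outer / a_inner) ** 1.5

print(round(period_ratio(a["Europa"], a["Io"]), 3))        # about 2.0
print(round(period_ratio(a["Ganymede"], a["Europa"]), 3))  # about 2.0
# Each moon takes roughly twice as long to orbit Jupiter as the one inside it,
# the signature of capture into a 2:1 resonance chain (Callisto sits outside it).
```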

New research proposes that Jupiter’s Galilean moons were formed from seeds that migrated quickly into their current orbits and then grew slowly over millions of years (Myr). In the illustration above, CJD is the circum-Jovian disk from which the moons developed, CSD is our solar system’s circumstellar disk, and PIM identifies the pebble isolation mass of the small particles (pebbles) that accreted to form the moons. Credit: Yuhito Shibaike

Delving into Differences

This slow accretion method is able to explain the differences in composition between the moons, which have different fractions of ice and rock in their interiors.

Io, which formed closest to Jupiter, is nearly all rock.

Europa began by accreting rock, but as the snowline (the distance at which water vapor and other volatiles condense) shrank around Jupiter, Europa’s nursery became cooler. By the end of the moon’s formation, it had accreted a small fraction of ice.

Ganymede and Callisto, which always remained beyond the snowline, are both about half rock, half ice.

Though Ganymede and Callisto are approximately the same size and were formed in similar environments, they ended up with different interiors. Ganymede has a fully differentiated interior with a hot, metallic core heated by the radioactive decay of an isotope of aluminum. Callisto’s interior is only partially differentiated and composed of rock and ice.

The new model explains the differences in the two moons through a delay in Callisto’s formation. Callisto is proposed to have started forming hundreds of thousands of years after Ganymede, when more of the aluminum isotope had already decayed away.

Although the model is able to explain many of the characteristics of the moons, there are some things it can’t yet explain, such as the orbit of Callisto, which currently doesn’t orbit in resonance with the other Galilean moons.

“Shibaike et al.’s study is definitely a good alternative to the previously proposed models. The fact that not many generations of satellites could have formed around Jupiter and that the migration of the moons should have been stopped seems robust. However…I believe that there is still a lot to understand on the formation of the Galilean satellites,” said Thomas Ronnet, a researcher at Lund Observatory in Sweden who was not involved with the new study.

—Mara Johnson-Groh, Freelance Science Writer

Nuclear Winter May Bring a Decade of Destruction

Fri, 09/27/2019 - 11:36

The thought of nuclear war may conjure images of looming mushroom clouds, duck and cover drills, or local radiation fallout. These immediate effects are terrifying, but scientists say the fallout of a nuclear war would likely last well beyond the initial explosions.

In a new paper in the Journal of Geophysical Research: Atmospheres, researchers detail a decade of destruction following a nuclear war between the United States and Russia. Smoke from smoldering cities is projected to make its way to the stratosphere, where it will trigger a nuclear winter.

The team used new climate models to approximate just how long—and how severe—the nuclear winter might be if Russia and the United States engage in nuclear conflict. They estimate a decade-long winter could linger after the explosions, wreaking havoc on temperatures, sunlight, and precipitation worldwide.

A Dark Decade of Winter

If nuclear war broke out between the United States and Russia, the global repercussions would extend beyond politics and trigger major climatic trauma—specifically, a nuclear winter. A nuclear winter would occur in the aftermath of nuclear blasts in cities; smoke would effectively block out sunlight, causing below-freezing temperatures to engulf the world.

To weigh the intensity of a nuclear war between two well-armed nations, the team considered the current arsenals of the two countries. They noted that important metropolitan areas—population centers or cities with strategic value, for example—would likely be targeted. And if those cities were hit, everything would likely burn.

“Even asphalt can burn at the temperatures these bombs get to,” said Joshua Coupe, an atmospheric science doctoral candidate at Rutgers University. He said these urban fires would burn everything in sight, producing “a very dirty, sooty, smoke.”

This soot, or black carbon, is the key factor in producing a nuclear winter. “What’s important about black carbon is it absorbs radiation very, very efficiently,” said Coupe. He explained that the lofted black carbon absorbs radiation and heats up and “the air surrounding it becomes very buoyant and it’s able to lift [the soot] into the stratosphere.”

If the black carbon stayed in the troposphere, it could eventually be removed by precipitation, but Coupe said that once black carbon is in the stratosphere, it can last for years, triggering a long-term, global climate response.

There are other nonnuclear events that can trigger aerosol releases into the stratosphere, including volcanic eruptions and wildfires, but Coupe noted that neither produces the same effects as soot released as a result of nuclear war.

“Volcanoes produce sulfate aerosols that don’t absorb much radiation,” he said, and “because they don’t absorb as much, their lifetime is somewhere between 1 and 2 years.”  Comparatively, he said their research shows nuclear-produced soot lasts up to a decade.

Wildfire soot can also reach the stratosphere, but Coupe noted that wildfires produce black carbon on a much smaller scale than what would result from dedicated nuclear attacks on cities. For example, the researchers state that a 2017 forest fire in British Columbia injected a few tenths of a teragram of black carbon into the stratosphere (1 teragram is 1 billion kilograms). In contrast, they estimated a nuclear war would blast 180 teragrams of soot into the atmosphere.

Climate Modeling Black Carbon

The team used climate modeling to predict what might happen to Earth after the influx of an enormous amount of black carbon into the stratosphere.

Coupe said their Whole Atmosphere Community Climate Model version 4 (WACCM4) has a much higher resolution than the models used in previous studies, allowing the researchers to add more detail to predictions. Specifically, WACCM4 allowed the team to simulate higher altitudes, up into the stratosphere—an important step in capturing how soot lofts, said Coupe.

The team also used an aerosol module called Community Aerosol and Radiation Model for Atmospheres (CARMA) to better represent how airborne particles might grow and stick together. Coupe said using CARMA allowed the team to treat the aerosol particles more realistically.

They found that after simulated nuclear blasts, almost the entire Northern Hemisphere was engulfed in stratospheric soot within the first week. In 2 weeks, soot had invaded the Southern Hemisphere.

“That’s the power of black carbon,” said Coupe. “It lofts very high very quickly and it spreads very, very fast.”

The researchers looked at changes in global average temperatures, radiation, and precipitation over a 15-year period following a nuclear war. Coupe said their results can be summed up in one word: grim.

“Our research shows that in this U.S./Russia nuclear war scenario, nuclear winter would happen,” he said, adding that the models show an almost 10°C reduction in global mean surface temperature, extreme changes in precipitation, and a 90% reduction in the growing season across many parts of the midlatitudes.

To put things into perspective, Coupe said that the temperature change from preindustrial times to today was only 1°C. “But in nuclear winter, it approaches 10°C below the climatological mean after 2 or 3 years.”

Solar radiation, important not only for surface temperatures but also for photosynthesis, drops precipitously. Within the first couple of years of a nuclear winter, “there’s around a 75% decrease in surface radiation—which is substantial,” said Coupe.

Precipitation rates don’t fare any better: Global averages drop about 58% after soot is injected into the stratosphere. Rainfall patterns also shift, including the weakening or disappearance of monsoons and new rainfall over desert regions.

Worst-Case Scenario

What exactly happens during a nuclear winter is a complex scenario, said Jon Reisner, a numerical modeler at Los Alamos National Laboratory. Reisner was not involved in the study but researches how nuclear weapons can affect global climate.

“The impact on climate from a nuclear exchange is still an unresolved issue,” Reisner said. He added that the researchers’ predictions appeared to be on the upper end of the spectrum for global cooling. “They’re assuming the worst, worst-case scenario,” said Reisner.

Reisner said he thinks the researchers are “exaggerating how much soot is being produced from fires” and noted that soot produced from urban fires is not well understood. “The big question is: What is the actual fuel loading?” He noted the intensity and duration of a fire can also affect soot production.

Although he thinks more work needs to be done to better define global climate effects, Reisner noted “at the end of the day, the direct impacts [of a nuclear war] will be significant—you can’t downplay those.”

A U.S.-Russia nuclear conflict reaches far beyond the two warring nations, said Coupe. “There are dire consequences if nuclear weapons were used, not just for countries involved, but for the rest of the world.”

Coupe hopes that their study will help inform governments and the military in their risk assessment of using nuclear weapons. “The decisions between generals could affect the entire world for years,” he said.

—Sarah Derouin (@Sarah_Derouin), Freelance Journalist

Golden State Blazes Contributed to Atmospheric Carbon Dioxide

Fri, 09/27/2019 - 11:36

In October 2017 in California, more than 4,850 square kilometers and 10,000 structures were burned by about 9,000 wildfires, according to a 4 August Earth and Space Science study investigating the causes and impacts of the fires.

Yuan Wang, a coauthor on the study, emphasized the urgent need to better understand the drivers and effects of wildfires. Wang is an atmospheric scientist at the California Institute of Technology (Caltech) studying human impacts on weather systems and climate. He was also the 2016 recipient of AGU’s James R. Holton Junior Scientist Award.

A First for California

Wang and his team calculated how much the fires contributed to atmospheric carbon dioxide, a first in the study of California wildfires. Using data from the Orbiting Carbon Observatory-2 (OCO-2) satellite, they determined that after the October 2017 fires, atmospheric carbon dioxide levels increased by 2 parts per million. The environmental effect creates a feedback loop: An increase in atmospheric carbon dioxide contributes to greater drought and wildfire-prone conditions, which in turn lead to more wildfires and subsequent increases in atmospheric carbon dioxide.

Future climate models need to account for how wildfires contribute to carbon dioxide levels, which requires the use of similar satellite measurements, Wang said.

Since this particular satellite was launched in 2014, few wildfire studies have used its data to examine the contribution of wildfires to atmospheric carbon dioxide, according to Jonathan Jiang, supervisor of the aerosol and cloud group for the Earth science section of NASA’s Jet Propulsion Laboratory. (Jiang wasn’t involved with the study, but Wang was previously a postdoctoral researcher working for him.)

“The identified impacts on the atmospheric greenhouse gas concentrations should bring much attention on future research of wildfires, and also impact policy making,” Jiang wrote in an email to Eos.

Probing Factors Contributing to Wildfires

The researchers analyzed a variety of meteorological and atmospheric data—including surface temperature and pressure, liquid and ice water contents of clouds, precipitation, and wind measurements—to determine what caused the fires. In many cases, human factors triggered the fires, but environmental conditions worsened their magnitude, Wang said.

Multiple factors created wildfire-prone conditions. January-to-April precipitation “triggered a massive growth of weeds/vegetation,” which later dried out and fueled the fires, the researchers write.

Increased drought conditions also contributed to the likelihood of wildfires. A 1.7-kelvin increase in California’s mean surface temperature over the past 39 years, along with decreases in precipitation and cloud water path (the total amount of water in a cloud area), worsened droughts, researchers noted. High pressure over the Pacific Ocean and intensification of the Santa Ana winds further aggravated drought conditions. The analysis yielded negative correlations between surface temperature and both cloud water content and precipitation.

Researchers didn’t find a clear relationship between El Niño, the Pacific Decadal Oscillation, and summer precipitation values. However, their analysis of atmospheric vertical motion data detected negative velocity anomalies, which suggests the presence of strong sinking air when the fires occurred. This air behavior “enhances the atmospheric stability and does not favor the formation of precipitation.”

The findings are consistent with previous theories, Wang said.

Wang’s coauthor Andy Li, first author on the study, is a senior at Clements High School in Sugar Land, Texas. He participated in the research through Caltech’s Summer Research Connection, a program for science teachers and high school students.

“I consider [the study] to be a very big achievement for a high school student,” Wang said.

—Rachel Crowell (@writesRCrowell), Freelance Science Journalist

600 Years of Grape Harvests Document 20th Century Climate Change

Fri, 09/27/2019 - 11:35

Climate change isn’t just captured by thermometers—grapes can also do the trick.

By mining archival records of grape harvest dates going back to 1354, scientists have reconstructed a 664-year record of temperature traced by fruit ripening. The records, from the Burgundy region of France, represent the longest series of grape harvest dates assembled up until now and reveal strong evidence of climate change in the past few decades.

Science with Grapes

As far back as the 19th century, scientists have been using records of grape harvest dates to track climatic changes.

“Wine harvest is a really great proxy for summer warmth,” said Benjamin Cook, a climate scientist at the NASA Goddard Institute for Space Studies in New York not involved in the research. “The warmer the summer is, the faster the grapes develop, so the earlier the harvest happens.”

But there are potential pitfalls to using this method, said Thomas Labbé, a historian specializing in the Middle Ages at the Leibniz Institute for the History and Culture of Eastern Europe in Germany and the Maison des Sciences de l’Homme de Dijon in France. For instance, he said, some studies have compiled grape harvest dates from vineyards in different locations. That’s problematic because of climatic differences due to latitude—for Northern Hemisphere vineyards, grapes growing farther south tend to ripen earlier than grapes located farther north. Other investigations have relied on secondary sources of grape harvest dates riddled with transcription errors, said Labbé.

Vineyards of an Ancient City

Now, Labbé and his colleagues have assembled a 664-year record of grape harvest dates for one French city using information gleaned from original sources.

The city of Beaune is an excellent site for long-term analysis of grape harvests, said Labbé. Rows of pinot noir, sauvignon, and gamay grapes have dotted its slopes for centuries and still do so today. (Dijon, 45 kilometers northeast and the capital of the Burgundy region, isn’t as good a site. It’s undergone pronounced urbanization since the 19th century and has accordingly lost many of its vineyards.)

“The vineyards still surround the city, so we could extend the series to the present day,” said Labbé.

The researchers mined original sources (such as medieval accounts of wage payments to vineyard laborers, city council records, and newspaper reports) to determine when Beaune’s grapes were harvested each year from 1354 to 2018. When data were missing from archival records, the researchers used harvest dates from Dijon and adjusted them to account for the capital’s more northerly location.

They were careful to analyze each date in the context of history. For instance, the harvests of 1636 and 1637 were “certainly disorganized” by warfare and an outbreak of plague, the scientists concluded. All in all, Labbé and his colleagues recovered harvest dates ranging from 16 August to 28 October.

Outliers “Become the Norm”

Labbé and his colleagues showed that the dates of Beaune’s grape harvests correlated strongly with both instrumental temperature records from Paris and tree ring–based temperature reconstructions from western Switzerland. These correlations demonstrate that grape harvest dates indeed are an accurate proxy for local temperature, Labbé and his team concluded. “It’s possible to reconstruct temperature backward,” he said.
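
In practice such a reconstruction amounts to a linear calibration: regress observed summer temperatures against harvest-date anomalies over the years where both exist, then apply the fitted relationship to the centuries that have only harvest dates. The snippet below is a generic sketch of that approach with made-up numbers, not the calibration used in the study.

```python
import numpy as np

# Overlap period: years with both a harvest-date anomaly (days relative to a
# reference date; negative = earlier harvest) and an instrumental summer temperature.
harvest_anom = np.array([5, -2, 8, -10, -6, 0, -12, 3], dtype=float)      # days (illustrative)
summer_temp = np.array([17.8, 18.4, 17.5, 19.2, 18.9, 18.1, 19.5, 18.0])  # deg C (illustrative)

# Calibrate: fit temperature as a linear function of harvest-date anomaly.
slope, intercept = np.polyfit(harvest_anom, summer_temp, 1)
print(f"{abs(slope):.2f} deg C of summer warmth per day of earlier harvest")

# Reconstruct temperature for years that have only harvest dates.
early_anomalies = np.array([12.0, 4.0, -3.0])   # hypothetical medieval values
print(intercept + slope * early_anomalies)
```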

The scientists found that the series of grape harvest dates could be clearly divided into two regimes. Grapes were, on average, picked on 28 September or later before 1988. But from 1988 onward, grapes were harvested roughly 13 days earlier.

“Hot and dry years in the past were outliers, while they have become the norm since the transition to rapid warming in 1988,” Labbé and his team wrote in their paper, which was published in August in Climate of the Past.

It’s not surprising that the climate is warming, said Labbé. “The surprise is rather that the grape harvest date series reflects so well the temperature trend of the last 30 years. Global warming is very, very visible.”

—Katherine Kornei (@katherinekornei), Freelance Science Journalist

Standardizing the Surge of Paleoclimate Data

Fri, 09/27/2019 - 11:33

Paleoclimatology, the study of ancient climates, has entered the era of big data, with massive quantities of digital information pouring into databases from research groups all over the world. The effort required to make all these data compatible is staggering, taking up to 80% of researchers’ time in some cases. Terminology is often poorly defined among different data sets, so that even common phrases like “the present” may not mean the same thing across studies. (Some researchers use “the present” to mean the time since 1950, whereas others use it more narrowly to indicate the year a study was published.)
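
A concrete example of why a shared convention matters: ages reported in “years before present” only line up across data sets if everyone agrees on what “present” means. The helper below is a hypothetical illustration of making that reference year explicit; it is not part of the reporting standard itself.

```python
def bp_to_calendar_year(age_bp, present_year=1950):
    """Convert an age in 'years before present' to a calendar year (CE).

    Making present_year explicit removes the ambiguity: 1950 is the common
    radiocarbon convention, but some data sets use the publication year instead.
    """
    return present_year - age_bp

print(bp_to_calendar_year(500))                     # 1450 CE under the 1950 convention
print(bp_to_calendar_year(500, present_year=2019))  # 1519 CE if "present" meant 2019
```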

Now Khider et al., building off discussions held during an international workshop in 2016 as well as a data platform called LinkedEarth, have published the first effort to standardize this abundance of paleoclimate data, called the Paleoclimate Community Reporting Standard. The standards will allow researchers to curate and access data sets in one online hub while also creating a common vocabulary to describe the data, following the findable, accessible, interoperable, and reusable, or FAIR, principles.

The project includes input from the wider paleoclimate community and creates tailored guidelines for data sets drawn from a wide variety of sources, such as historical documents, ice cores, lake sediments, and tree rings, among many others. Now that the standards have been established, the challenge is to adopt them, the authors write. Funding agencies and publishers could incentivize adoption by requiring that researchers use the online hub and guidelines, they suggest. (Paleoceanography and Paleoclimatology, https://doi.org/10.1029/2019PA003632, 2019)

—Emily Underwood, Freelance Writer

A Thermochemical Recording Mechanism of Earth’s Magnetic Field

Fri, 09/27/2019 - 11:30

An important aspect in studies of the Earth’s past magnetic field strength is how chemical changes in rocks, arising for example from metamorphic overprinting, may have affected magnetic remanence and hence paleointensity estimates.

A new laboratory study by Shcherbakov et al. [2019] combining petrological and magnetic methods scrutinizes the acquisition of thermochemical remanence within titanomagnetite, one of the most abundant iron oxides present in igneous rocks. Samples from São Tomé island and the Red Sea Rift, heated to 570 °C and subsequently cooled back to room conditions in a controlled magnetic field, reveal the formation of ilmenite lamellae within titanomagnetite grains. This chemical alteration process leads to the creation of Ti-poor intragrain regions, of composition and Curie temperature representative of magnetite, with the interesting property that the obtained paleointensity estimates are like those predicted from a strictly thermal remanence.
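
For context, absolute paleointensity determinations of this kind rest on the standard Thellier-type proportionality between the natural remanence a sample has lost and the remanence it regains in a known laboratory field. This is the textbook relation, not a formula specific to the new study:

```latex
B_{\text{ancient}} \;\approx\; \frac{\text{NRM lost}}{\text{TRM gained}} \times B_{\text{lab}}
```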

The results contribute to a long-standing debate among paleomagnetists about how to treat exsolution lamellae in iron oxides, which can give experimentally superior results in a paleointensity experiment but whose thermochemical remanence has made them suspect. This opens up a new set of potential targets for paleointensity studies and may have application to older rocks that have undergone thermochemical alteration and metamorphism, such as rocks from the Precambrian eon.

Citation: Shcherbakov, V. P., Gribov, S. K., Lhuillier, F., Aphinogenova, N. A., & Tsel’movich, V. A. [2019]. On the reliability of absolute palaeointensity determinations on basaltic rocks bearing a thermochemical remanence. Journal of Geophysical Research: Solid Earth, 124. https://doi.org/10.1029/2019JB017873

—Bjarne S. G. Almqvist, Associate Editor, JGR: Solid Earth

Indigenous Knowledge Puts Industrial Pollution in Perspective

Thu, 09/26/2019 - 12:03

Giant Mine was once among the world’s most productive gold mines. Until it closed in 2004, the mine produced 220,000 kilograms of gold and more than a thousand times that amount of arsenic.

Highly toxic arsenic trioxide dust was a by-product of the roasting process used to extract gold from ore at Giant Mine, a facility on the shores of the Great Slave Lake in Canada’s Northwest Territories. During its earliest days in the 1940s and 1950s, Giant Mine’s smokestack was pumping out as much as 7,400 kilograms of arsenic each day, without any environmental protection or regulation.

Arsenic was carried on the wind and settled across local forests, lakes, and rivers.

Almost immediately, it affected people who lived nearby. Fish caught in certain lakes were linked to arsenic poisoning. Local precipitation was so contaminated that in 1951, a child died after eating snow riddled with arsenic.

Today, debate continues over acceptable levels of arsenic detected in areas surrounding Giant Mine.

Reductive Dissolution

Some arsenic released by Giant Mine was sequestered in sediments on lake bottoms, but some researchers say climate change could cause it to be released.

Longer, warmer summers result in more plant and algal growth, which reduces the amount of oxygen in the lake. Lower oxygen levels create the conditions for a chemical reaction in which sequestered arsenic is released.

“When plant or algal growth reduces the amount of available oxygen, the arsenic can be released through a process called reductive dissolution. It’s liberated to a dissolved form and can enter the water,” said Jennifer Galloway, a research scientist with the Geological Survey of Canada and adjunct assistant professor at the University of Calgary.
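
In general terms, the arsenic in these sediments is held on iron (and manganese) oxyhydroxides; when oxygen runs low, microbes use those oxides to oxidize organic matter, dissolving the host minerals and freeing the sorbed arsenic. A textbook-style, simplified version of that reductive dissolution reaction (not taken from the study) is

```latex
\mathrm{CH_2O} + 4\,\mathrm{FeOOH} + 8\,\mathrm{H^+} \;\longrightarrow\; \mathrm{CO_2} + 4\,\mathrm{Fe^{2+}} + 7\,\mathrm{H_2O},
```

with any arsenic adsorbed to the iron oxyhydroxide released into the overlying water as the solid dissolves.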

To better understand the historic climate of the region and how changes in climate are affecting the sequestration of metalloids like arsenic in the region’s lakes, Galloway and Tim Patterson, a professor of geology at Carleton University, needed a historical baseline more robust than what they could obtain using paleoecological tools such as dendrochronology and fossil records. Such tools aren’t able to discern key information like the date that lakes froze or whether autumn precipitation fell as rain or as snow.

Collaboration with Métis and Dene

In research recently published in the Polar Knowledge Canada journal Aqhaliat, Galloway and Patterson looked to two indigenous communities for help with their 3-year project. Each group contributed a distinct type of traditional knowledge to the collaboration. The Yellowknives Dene First Nation drew on oral traditions of ecological knowledge, whereas the North Slave Métis brought written archival records with detailed weather data.

The Métis are of mixed First Nations and European ancestry, which positioned them to serve as a sort of cultural interpreter between the two groups during Canada’s fur trade era.

“Hudson’s Bay Company trading posts were often staffed by Métis people,” said Shin Shiga of the North Slave Métis Alliance, an organization that represents Métis people in the Great Slave Lake region. “Métis were able to communicate with both European fur traders and indigenous people. At trading posts, they kept detailed records of weather conditions that have been preserved in the archives.”

The North Slave Métis Alliance’s archives are a repository of daily weather information like temperature, precipitation, and wind speed, as well as the dates that lakes froze and thawed.

Métis also contributed their knowledge of more recent climate conditions to the project.

“A lot of Métis people work on the land today,” said Shiga. “Whether they are building roads or working in construction, they’re always watching what’s happening on the land.”

The Yellowknives Dene First Nation community sits on the shores of Great Slave Lake, but the traditional lands of the Dene stretch from the mountains of the Yukon to the shores of Hudson Bay. The community’s traditional knowledge holders shared their knowledge of local conditions through interviews.

The Yellowknives observed that not only had the timing of the freeze-up changed but the way that lakes were freezing was changing too. Historically, the freeze-up had occurred quickly: A thin veneer of ice would form in October, and within the course of a few weeks the ice would grow thick enough to drive a truck on. In recent years, the lakes have been experiencing numerous freeze-thaw cycles. Autumn precipitation that once fell as snow now falls as rain, melting ice when it lands.

This rain keeps water open longer in the fall and means there’s less ice to melt in spring, so the water can warm more quickly. The warming lengthens the growing season for plants and algae, which allows sequestered arsenic to be released.

“Traditional knowledge told us that freeze-up had changed from mid-October to early November,” said Galloway. “And that it didn’t happen quickly as it used to. That’s really important for the mobility of elements. Late fall precipitation falling as rain instead of snow can profoundly impact chemical cycling in lakes.”

—Ty Burke, Science Writer

Jennifer Galloway and Tim Patterson would like to acknowledge the contributions made to their research by Yellowknives Dene First Nation, North Slave Métis Alliance, Polar Knowledge Canada, and the government of the Northwest Territories.

Explosive Volcanic Eruption Powered by Water-Saturated Magma

Thu, 09/26/2019 - 12:00

Some volcanoes erupt almost gently, lava moving placidly down their sides. Others erupt explosively, launching plumes of ash high into the atmosphere and triggering gale-force pyroclastic flows. Kelud, an active stratovolcano in Indonesia, does both.

Scientists have now shown that the explosivity of Kelud’s last eruption, in 2014, was likely driven internally by volatile-triggered overpressure rather than an influx of magma. These results shed light on the causes of explosive eruptions, the most dangerous type of volcanic activity, the research team suggests.

Kelud (also known as Kelut), on the highly populated island of Java, has erupted both explosively and effusively eight times in the past 100 years, killing thousands of people. “It’s particularly active and also particularly unpredictable because it changes its eruption style,” said Mike Cassidy, a volcanologist at the University of Oxford in the United Kingdom.

Kelud’s most recent eruption, on 13 February 2014, launched a Plinian column 26 kilometers into the atmosphere. The explosive eruption blew down trees, caused roofs to collapse under thick layers of ash, and triggered the closure of multiple airports in Southeast Asia. There were few fatalities, however, thanks to a massive evacuation effort from Indonesia’s Center for Volcanology and Geological Hazard Mitigation.

Studying the 2014 Eruption

Using preeruption satellite imagery, seismic records, and samples of pumice and ash collected after the 2014 eruption, Cassidy and his colleagues analyzed Kelud’s magma system. They began by looking for evidence of ground deformation prior to the eruption, which is associated with moving magma and fluids. They didn’t find any. Furthermore, they found little to no preeruptive seismic activity, a puzzle because ground shaking often precedes eruptions.

“All of these things that we associate with big explosive eruptions just didn’t happen,” said Cassidy.

In the laboratory, Cassidy and his collaborators analyzed Kelud’s pumice to determine the temperature and pressure under the volcano. They combined coarsely crushed samples of Kelud pumice with water and carbon dioxide (volatiles that had been lost during the eruption), heated the material, and placed it under high pressure for several days.

The scientists conducted different experiments at various temperatures, pressures, and volatile ratios. “We do this for multiple experiments and compare this to chemical composition of the natural pumices and ash,” said Cassidy. “The best matches of the chemical compositions of the experiments to the natural products then represent the conditions that the magma was stored at.”

Cassidy and his team found that Kelud’s magma was stored at roughly 1,000°C–1,050°C and 50–100 megapascals. These conditions correspond to a depth of about 2–4 kilometers. That’s shallower than the magma reservoirs of other volcanoes, which typically exist 4–9 kilometers below the surface. “Kelud’s shallow storage depth may have added to the lack of pre-warning,” said Cassidy.
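That pressure-to-depth conversion follows from the lithostatic relation z = P/(ρg). Assuming a crustal density of about 2,700 kilograms per cubic meter (a typical value used here only for illustration; the study may have adopted a different density), the arithmetic works out as

    z(50\ \mathrm{MPa}) \approx \frac{5\times10^{7}\ \mathrm{Pa}}{2700\ \mathrm{kg\,m^{-3}}\times 9.8\ \mathrm{m\,s^{-2}}} \approx 1.9\ \mathrm{km},\qquad
    z(100\ \mathrm{MPa}) \approx 3.8\ \mathrm{km}

which matches the quoted storage depth of roughly 2–4 kilometers.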

Cassidy and his colleagues furthermore showed that magma from Kelud’s explosive eruption in 2014 was more water rich than magmas from previous effusive eruptions.

That’s an important clue about the nature of this eruption, said Cassidy. Water-rich magmas contain volatiles, which increase the melt’s buoyancy and drive its ascent. Water is particularly effective at driving the upward movement of magma because it comes out of solution at a range of pressures.

“Those volatiles will act as hot air balloons and drag the magma upwards,” said Cassidy.

Overpressure caused by volatiles was the likely trigger of Kelud’s explosive eruption in 2014, Cassidy and his team suggest. “We attribute the greater explosivity of the 2014 eruption to its water-saturated nature,” the researchers concluded in their study, which was published in Geochemistry, Geophysics, Geosystems in July.

It’s important to study a “Jekyll-and-Hyde” volcano like Kelud, said Shane Cronin, a volcanologist at the University of Auckland in New Zealand not involved in the research. These results pave the way for new statistical forecasting models that combine “multiple independent pathways and processes to eruption.”

Cassidy and his team are looking forward to studying other volcanoes that erupt explosively and effusively, like Chile’s Volcán Quizapu and Indonesia’s Anak Krakatau. “We’re trying to understand where Kelud sits in the context of other eruptions.”

—Katherine Kornei (@katherinekornei), Freelance Science Journalist

Climate Refugees, Thinned Forests, and Other Things We’re Reading

Thu, 09/26/2019 - 11:59

America’s Great Climate Exodus Is Starting in the Florida Keys. The scale of managed retreat required to relocate residents of coastal areas vulnerable to impending sea level rise is difficult to comprehend. “By the end of the century, 13 million Americans will need to move just because of rising sea levels,” according to one estimate. The migration is under way in some parts of the United States, but progress is slow going.

—Timothy Oleson, Science Editor

 

Forest Thinning Projects Won’t Stop the Worst Wildfires. So Why Is California Spending Millions on Them? I was surprised to learn that Paradise, Calif., had already thinned much of its surrounding forest before the Camp Fire. And yet this line of defense couldn’t stop the tragedy that killed 86 people. If thinning forests isn’t necessarily the most important line of defense against fires, why is California investing so much money in it? This article gives me a lot to think about.

—Jenessa Duncombe, Staff Writer

 

“Glass Pearls” in Clamshells Point to Ancient Meteor Impact.

These microspherules may hint at a previously unknown meteorite impact. Credit: Kristen Grace/Florida Museum of Natural History

A million years ago—maybe much longer—a meteor hit Earth, launching debris into the air; microscopic droplets of molten material cooled into glass and eventually settled into clamshells. An undergrad science student on summer fieldwork came across the tiny “weird” objects and, 15 years later, published work tracing the likely origin of these “glass pearls.” Science is fantastic.

—Heather Goss, Editor in Chief

 

The Mirpur Earthquake in Pakistan: Images of Lateral Spreading. As pointed out by Dave Petley in The Landslide Blog, the Mirpur earthquake was small but shallow, which caused a larger area to experience “high peak ground accelerations.” This violent surface shaking causes liquefaction (surface soil layers lose stiffness and coherence due to the shaking), which leads to the lateral spreading seen in photos from the area of the epicenter in Pakistan.

—Liz Castenson, Editorial and Production Coordinator

 

‘Oumuamua, Meet Borisov.

C/2019 Q4 (Borisov) is one of the first interstellar comets ever identified. This image, obtained using the Gemini North Multi-Object Spectrograph from Hawaii’s Mauna Kea, combines four 60-second exposures in the r and g bands (filters). Blue and red dashes are background stars, which appear to streak because of the motion of the comet. Credit: Gemini Observatory/NSF/AURA

Two years ago, the first interstellar object, 1I/‘Oumuamua, paid a quick visit to our solar system. Amateur astronomer Gennady Borisov spotted the second. Unlike its predecessor, 2I/Borisov—“2I” stands for “second interstellar”—is on its way into the solar system, and astronomers will have months to study its properties.

—Kimberly Cartier, Staff Writer

 

Unfurling the Waste Problem Caused by Wind Energy. What the heck do you do with the fiberglass blades from old wind turbines that have reached the end of their service life?

—Nancy McGuire, Contract Editor

 

Transient 2.



Lightning lights up the Midwest in this breathtaking 3-minute video documenting 2 years of storm chasing.

—Caryl-Sue, Managing Editor

Gas Bubble Forensics Team Surveils the New Zealand Ocean

Thu, 09/26/2019 - 11:58

In July 2018, our research group set sail for the Calypso hydrothermal vent field in New Zealand’s Bay of Plenty aboard the New Zealand flagship R/V Tangaroa. The voyage was dubbed Quantitative Ocean-Column Imaging (QUOI). Our mission: Find bubbles—lots of them—and record everything they told us about themselves.

The vent field (Figure 1) did not disappoint: It emitted diffuse streams of carbon dioxide and discrete seeps of methane. Our ship’s echo sounders sent out acoustic waves, and we recorded the echoes that these bubbles generated in response. The ways that these echoes varied, depending on the depth of the bubbles and the angles and frequencies of the incoming acoustic waves, provided a wealth of information.

The world’s oceans are full of bubbles. They come from natural sources like dissolving organic matter and gases seeping from great depths below the seafloor and from human sources like leaky oil and natural gas wells and pipelines. Many of these bubbles rise to the surface and release greenhouse gases like carbon dioxide (CO2) and methane (CH4) into the atmosphere.

Some research expeditions have collected preliminary information on bubble streams discovered by chance while using echo sounders to map seafloor morphology or to search for fish aggregations, or while using seismic equipment for subseafloor investigations. Our group wanted to dive deeper into the information that ocean bubbles provide. We wondered whether we could use acoustics to fingerprint bubbles in the ocean.

Fig. 1. The Quantitative Ocean-Column Imaging (QUOI) mission visited the Calypso hydrothermal vent field off the northern coast of New Zealand in the Bay of Plenty. The black and white inset shows a wider view of northern New Zealand, with the area of the larger figure outlined by the black box.

Gathering Clues from Bubbles

Many scientific, industrial, and environmental enterprises make use of the ability to detect, characterize, and quantify bubbles generated from liquid or gaseous features in the ocean, but observations enabling us to measure the volume or identify the types of gases that are released from the seafloor are scarce. Such observations are important as inputs into global carbon cycle models, in helping monitor deep-sea wells and pipelines for potential gas leaks, and in mitigating the associated environmental and economic risks.

Modern echo sounders have been able to image bubble streams in the ocean and trace their movement and behavior as they rise through the water column. To date, analysis of such acoustic data has been largely restricted to localizing and mapping sources of bubbles on the seafloor [e.g., Skarke et al., 2014]. Although there has been some recent progress, it has been more difficult to use echo sounders to quantify and characterize bubble streams (numbers of bubbles, geometry, and composition) and, ultimately, the associated volumetric and mass flow of gas [e.g., Greinert and Nützel, 2004; Weber et al., 2014; Veloso et al., 2015; Weidner et al., 2019].

Differentiating between gas types such as CH4 and CO2, the two most common greenhouse gases released at the seafloor, solely from their acoustic response remains a major challenge. Recent advances in echo sounders, including in our ability to calibrate and use them over a wide range of frequencies and to visualize and analyze their data, have provided a technological arsenal that we can bring to bear on this challenge.

Fig. 2. The QUOI voyage deployed a variety of acoustic devices over the Calypso hydrothermal vent site to study acoustic flares associated with gas bubbles. Red stars indicate very active flares of interest (FOI). Crossed circles indicate failed sampling stations. Brown dots show where sediment samples were collected to ground truth data on seafloor backscatter associated with the flares.

The 3-week QUOI voyage on board R/V Tangaroa used this arsenal to generate a unique data set over the Calypso hydrothermal vent field (Figure 2). The voyage benefited from a 3-year Catalyst: Seeding project, funded by the Royal Society of New Zealand. The project, which launched in April 2017, established a consortium of experts in marine geology, geophysics, and acoustics from seven countries. The data set is now enabling the team to advance multidimensional acoustic analysis of the water column, a kind of marine acoustic forensics.

How Do You Characterize a Bubble?

One of the most remarkable of the numerous gas seepages over the New Zealand continental shelf is the Calypso hydrothermal vent field [Stoffers et al., 1999]. Vents emit large bubble streams that can show up on ships’ sonars as gas flares (also called gas plumes). Yet many previous surveys only serendipitously imaged the seeps. These images enable qualitative observations, but gas bubble streams cannot be characterized with conventional oceanographic surveys, which provide only partial physical information on individual bubbles.

A synthetic image of the Calypso hydrothermal vent field showing sites where streams of gas bubbles rise from the seafloor. Credit: Erin Heffron

Our project enables us to move beyond locating and defining the shape of the flares to assess the bubble size distribution, differentiate between CH4 and CO2 gases (both of which are known to be released at the Calypso vents), and eventually develop rapid, cost-effective methods of estimating the bubbles’ greenhouse gas flux through the water column.

The intensity of a bubble’s acoustic signal—and whether it generates an echo on a ship’s sonar screen—is related to the bubble’s acoustic scattering cross section, which quantifies the ability of a bubble to reflect the acoustic energy of an incoming acoustic wave back to the echo sounder. This physical (measurable) parameter depends on the geometry, resonance frequency, and damping properties of the bubbles, as well as on the nominal frequency and incident angle of the incoming wave [Clay and Medwin, 1977].
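In the textbook treatment [e.g., Clay and Medwin, 1977], the backscattering cross section of a bubble of radius a peaks sharply at the bubble’s resonance frequency, which for a near-surface bubble is approximated by the Minnaert relation. The expressions below are the standard forms, shown here for illustration rather than as the exact formulation used by the QUOI team:

    \sigma_{bs} = \frac{a^{2}}{\left[(f_{R}/f)^{2}-1\right]^{2}+\delta^{2}},\qquad
    f_{R} \approx \frac{1}{2\pi a}\sqrt{\frac{3\gamma P}{\rho}}

Here f is the ensonification frequency, δ is the damping constant, γ is the ratio of specific heats of the gas, P is the ambient pressure, and ρ is the water density. Measuring how echo strength varies with frequency and angle therefore constrains both bubble size and gas properties.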

The need to understand how these physical parameters influence acoustic measurements highlights specific requirements for quantitative description of water column gas bubbles: calibration of the acoustic devices used to study the bubbles [Ladroit et al., 2018]; multifrequency measurements of the backscatter strength of individual bubbles within a plume [Le Gonidec et al., 2002], including how a bubble’s frequency response changes with the ensonification angle (the angle between the sonar beam and the vertical at the point where the beam reaches the bubble); and visual observations for ground truthing. We carried out all of these experiments in 180-meter water depths during the QUOI voyage.

The design of the QUOI voyage included very large overlaps of the sounder seafloor footprints, with up to 95% overlay of adjacent swaths (Figure 3). This enabled us to generate high-resolution images of both the seafloor and the water column, as well as full ensonification of bubbles—immersing them in sound waves with incidence angles from the vertical to a grazing angle of 60°. We were thus able to observe bubble flares from a range of incidence angles. We conducted a robust and complete calibration of all systems at the start of the survey to enable cross correlation between the multiple echo sounder systems. The swath echograms revealed the presence of multiple acoustic flares, and the map of the sum of all echoes within the water column projected on the seafloor highlighted the importance of both the geometry and intensity of the flares and the spatial distribution of the vents on the seafloor (Figure 4).

Fig. 3. Sketch of the QUOI acquisition protocol over a bubble seep to assess the frequency and angular dependency of the bubble backscatter strength using multibeam and single-beam echo sounders simultaneously in the range of 18–200 kilohertz. The image demonstrates the high swath overlap on the seafloor.

Fig. 4. Swath echograms recorded by a multibeam echo sounder (illustrated in the inset) over a gray scale map of the seafloor reflectivity (backscatter). The elongated red patches on the seafloor correspond to the vertical echo integration of water column backscatter data: They highlight positions and extensions of flares on the seafloor (processed using the SonarScope software by Institut Français de Recherche pour l’Exploration de la Mer (Ifremer)).

Our Technological Arsenal

Fully characterizing the bubbles required us to explore different echo sounder deployment methods and data acquisition procedures. We used multiple synchronous echo sounders to assess the backscatter strength of targets at different frequencies:

EM302 and EM2040 multibeam echo sounders collected echo amplitudes athwart and along track simultaneously at 30 and 200 kilohertz. An EK60 vertical echo sounder operated simultaneously at 18, 38, 70, 120, and 200 kilohertz. A wideband EK80 transducer mounted on a rotatable system emitted pulses in the frequency range of 84–270 kilohertz at different incidence angles.

The voyage deployed three technologically and scientifically innovative systems. A mobile unit with a split-beam echo sounder that operates at 38 or 120 kilohertz was deployed directly on the seafloor to ensonify bubble streams horizontally. This generated contrasting images from a different point of view—sideways rather than top down. For 5 days, an autonomous hydrophone placed in the center of a large flare field recorded bubble sounds. We also deployed a synthetic seep generator (or bubble maker) on the seafloor to establish a control data set for many of the bubble parameters. The seep generator creates bubbles with known properties and at known rates and sizes. It allows us to develop an empirical model to which data from natural sites can be compared and to develop a quantitative approach to estimate bubble flux rates.

We collected sediment and water samples for ground truthing of the acoustic signals across the entire Bay of Plenty survey site (Figure 2, brown dots). Video transits allowed us to directly observe most of the active vents identified from high-amplitude echo integration of water column acoustic data (Figure 5, large red spot).

Fig. 5. Example of a vertical echo integration in the Calypso hydrothermal vent field. The red patches identify active vent fields.

This original, diverse, and multifaceted oceanographic survey has provided us with a data set from which we can develop robust water column surveying methodologies. We anticipate that this will lead to new protocols in identifying, quantifying, and classifying gas bubble streams in the ocean water masses. The QUOI experiments will advance understanding of active geological systems and the relationship the seabed has with the atmosphere.

Acknowledgments

This project is funded by the Royal Society of New Zealand Catalyst: Seeding fund. The QUOI voyage on board R/V Tangaroa was funded by the National Institute of Water and Atmospheric Research. V.L. was supported by the Marine Biodiversity Hub through funding from the Australian government’s National Environmental Science Program.

How Are Microplastics Transported to Polar Regions?

Thu, 09/26/2019 - 11:55

In recent years, the prevalence of plastics in the world’s oceans has become a major environmental issue. From drinking straws and water bottles to single-use plastic bags and much smaller particles, up to 12.7 million metric tons of plastic waste are dumped into the oceans each year, with devastating consequences for seabirds and marine wildlife and potentially harmful effects on human health.

Because ocean currents can carry this waste thousands of kilometers from where it enters the water, plastic pollution has quickly become a global issue. Debris, including microplastics (particles <5 millimeters across), has been found at the poles, in sea ice, and—most visibly—in enormous sea surface garbage patches trapped by recirculating oceanic gyres.

Although transport models have successfully described the distribution of microplastics observed floating on the ocean surface, these buoyant bits account for a small fraction of the total expected plastics volume. Turbulent mixing, biofouling, and a decrease in particle size appear to cause a large proportion of microplastics to sink, but what happens to these particles deeper in the water column, and whether they can account for the prevalence of plastic in the polar regions, is not yet known.

Now Wichmann et al. have assessed how subsurface currents disperse microplastics. Using the Parcels framework, the team modeled the trajectories of 1 million virtual microplastic particles around the globe over a 10-year period. After running the same simulations for four types of particles located at depths ranging from the surface to 120 meters, the researchers compared the resulting microplastic distributions and their transport pathways between different oceanic regions.
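The study relied on the Parcels framework for this particle tracking. The short sketch below is not Parcels code but a plain NumPy illustration, under simplified assumptions (an invented velocity field, a fixed depth, no mixing or biofouling), of the underlying idea: advecting virtual particles through a prescribed current field with a Runge–Kutta step.

    import numpy as np

    # Toy stand-in for an ocean velocity field (m/s). A real study would
    # interpolate currents from an ocean model; Parcels handles that step.
    def velocity(lon, lat, t):
        u = 0.10 * np.cos(np.radians(lat))   # zonal component (illustrative)
        v = 0.05 * np.sin(np.radians(lon))   # meridional component (illustrative)
        return u, v

    def step_rk4(lon, lat, t, dt):
        """Advance particle longitudes/latitudes (degrees) by one RK4 step."""
        R = 6.371e6  # Earth radius, meters

        def tendency(lon_, lat_, t_):
            u, v = velocity(lon_, lat_, t_)
            dlon = np.degrees(u / (R * np.cos(np.radians(lat_))))  # deg/s
            dlat = np.degrees(v / R)                               # deg/s
            return dlon, dlat

        k1x, k1y = tendency(lon, lat, t)
        k2x, k2y = tendency(lon + 0.5 * dt * k1x, lat + 0.5 * dt * k1y, t + 0.5 * dt)
        k3x, k3y = tendency(lon + 0.5 * dt * k2x, lat + 0.5 * dt * k2y, t + 0.5 * dt)
        k4x, k4y = tendency(lon + dt * k3x, lat + dt * k3y, t + dt)
        lon = lon + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        lat = lat + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        return lon, lat

    # Release 1,000 virtual particles (the study tracked 1 million, at several
    # fixed depths down to 120 meters) and advect them hourly for one model year.
    rng = np.random.default_rng(42)
    lon = rng.uniform(-180.0, 180.0, 1000)
    lat = rng.uniform(-60.0, 60.0, 1000)
    dt = 3600.0  # seconds
    for n in range(24 * 365):
        lon, lat = step_rk4(lon, lat, n * dt, dt)

In the actual simulations, the velocity fields come from a global ocean model and particles are tracked at the several release depths noted above, which is what allows the submerged pathways to be compared with the familiar surface routes.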

The results indicate that submerged microplastics are controlled by different dynamics and therefore follow very different routes than particles found floating on the surface. The simulations suggest that near-surface currents carry large quantities of microplastics from subtropical and subpolar regions toward the poles—a finding that may explain why this material is commonly detected even in those remote regions.

In addition to presenting a plausible explanation for how microplastics can be transported from low latitudes to Earth’s polar regions, this study offers novel insights into subsurface oceanic transport as well as the distribution of plastic particles in marine ecosystems. (Journal of Geophysical Research: Oceans, https://doi.org/10.1029/2019JC015328, 2019)

—Terri Cook, Freelance Writer

Grim Report on Climate Change Impacts on Oceans and Cryosphere

Wed, 09/25/2019 - 19:26

A new scientific report provides a dire picture of the impacts of climate change on the world’s oceans and cryosphere and their ecosystems; people who depend on their natural resources; and those at increased risk from sea level rise, extreme weather, and other threats associated with climate change.

The report, released just 2 days after the United Nations Climate Action Summit, is not entirely pessimistic. It also underlines how measures including immediate, ambitious, and coordinated actions to curb greenhouse gas emissions could still limit climate change and its related impacts.

“The oceans and cryosphere, the frozen parts of the planet, might feel very remote to some people, but they impact all of us,” said Hoesung Lee, chair of the Intergovernmental Panel on Climate Change (IPCC), at a 25 September briefing to release its Special Report on the Ocean and Cryosphere in a Changing Climate. The report follows several other recent IPCC special reports, including its report on limiting global warming to 1.5°C above preindustrial levels.

“This report is showing that if greenhouse gas emissions continue to increase, global warming will drastically alter the ocean and the cryosphere,” Lee said. “However, if we reduce emissions sharply, consequences for people and their livelihoods will still be challenging, but they will be potentially more manageable for those who are most vulnerable. The report reveals the benefits of ambitious and effective adaptation for sustainable development. Conversely, there may be escalating costs and risks associated with delayed action.”

The report states, “Enabling climate resilience and sustainable development depends critically on urgent and ambitious emissions reductions coupled with coordinated sustained and increasingly ambitious adaptation actions.”

The report cites other vital measures to address climate change, including education and climate literacy, monitoring and forecasting, financing, sharing of data, addressing social vulnerability and equity issues, and intensifying cooperation and coordination among governing authorities.

Rapidly Changing Oceans and Cryosphere

Among the report’s findings about the oceans is that “it is virtually certain that the global ocean has warmed unabated since 1970 and has taken up more than 90% of the excess heat in the climate system.”

The report also states that over the 21st century, the global ocean “is projected to transition to unprecedented conditions,” with increased temperatures, upper ocean stratification, further acidification, increased marine heat waves, and other factors. The impacts of the changing oceans on people include coastal communities being exposed to tropical cyclones and extreme sea level rise. The acceleration in recent decades of global mean sea level is due to increasing rates of ice loss from the Greenland and Antarctic Ice Sheets, along with continued glacier mass loss and ocean thermal expansion, the report notes.

Findings about the cryosphere are equally grim. “Over the last decades, global warming has led to widespread shrinking of the cryosphere” with mass loss from ice sheets and glaciers, reductions in snow cover and Arctic sea ice extent and thickness, and increased permafrost temperature. The impact of the shrinking cryosphere in the Arctic and high-mountain regions has significantly affected people, with threats to food security and water resources.

Responses from Environmental Groups

Environmental groups applauded the report for outlining the threats posed by climate change, and they called for more action to address them.

Miriam Goldstein, managing director for energy and environment policy and director for ocean policy at the Center for American Progress, said that “the report offers more scientific evidence of the impact of climate change and the urgent need for real solutions.”

George Leonard, chief scientist with the Ocean Conservancy, said that “the world needs to take ambitious climate action now,” including reducing emissions, advancing responsible ocean renewable energy, protecting ecosystems, and preparing for changes to come.

“We Are in a Race Between Two Factors”

“We are in a race between two factors,” IPCC chair Lee said at the briefing. “One is the human ecosystem’s capacity to adapt, and the other one is the speed of impact of climate change.”

Lee said that this report and other recent IPCC reports indicate that “we may be losing in this race.”

He added that “we need to take immediate and drastic action to cut emissions right now, and especially right from the next year, if we want to achieve this carbon neutrality in the mid of the next century.”

—Randy Showstack (@RandyShowstack), Staff Writer

Atlantic Circulation Consistently Tied to Carbon Dioxide

Wed, 09/25/2019 - 12:32

As temperatures and atmospheric compositions on Earth have varied considerably over the past million years or so, the Northern Hemisphere has experienced large-scale advances and retreats of continental ice sheets at intervals of between 80,000 and 120,000 years. (Currently, the planet is in an interglacial period—when ice sheets are smaller—that began about 10,000 years ago.) These planetary-scale freeze-thaw cycles have pronounced effects on nearly every aspect of the planet’s climate.

In a new study, Barker et al. present a record of North Atlantic sea surface conditions and argue that the relationship between the Atlantic Meridional Overturning Circulation (AMOC)—the pattern of mixing between surface and deep waters in the North Atlantic—and millennial-scale changes in atmospheric carbon dioxide (CO2) concentrations has remained consistent over the past 800,000 years.

To create the record, the scientists used samples from a sediment core from Ocean Drilling Program Site 983, which is located off the southwest coast of Iceland. They made counts of ice-rafted terrestrial material, such as quartz grains, as well as of Neogloboquadrina pachyderma, a planktic foraminifer that thrives in polar regions, in successive layers of the core.

The authors used changes in the presence of N. pachyderma and the ice-rafted material as proxies for surface ocean conditions and the movement of sea ice to infer patterns of ocean circulation. This information served as an indicator of whether conditions in past intervals were polar or subpolar at Site 983. Because the AMOC is believed to be the main driver of millennial-scale changes in surface conditions (polar versus subpolar), the authors argue that they can infer a picture of millennial-scale changes in the Atlantic’s large-scale overturning circulation.

In summary, they report that anomalously cold and icy surface conditions recorded in the core correspond to past periods of weakened AMOC, whereas warmer conditions correspond to stronger AMOC.

The team then compared this new record of Atlantic Ocean variability to atmospheric CO2 changes over the same period, which revealed that the two variables have varied in tandem for the past 800,000 years. Broadly speaking, CO2 levels rise when Atlantic circulation is weak and fall when it’s strong.

The authors also say their results show that it might take thousands of years for the planet’s climate to reequilibrate during deglaciation after each glacial period ends (and before the subsequent interglacial begins). If that’s the case, big spikes in atmospheric CO2 previously observed at the onset of past interglacial periods were, more likely, part of the deglacial process. This scenario would mean that researchers have potentially overestimated the trend of CO2 change during some past interglacial periods. (Paleoceanography and Paleoclimatology, https://doi.org/10.1029/2019PA003661, 2019)

—David Shultz, Freelance Writer

Did Bacterial Enzymes Cap the Oxygen in Early Earth’s Atmosphere?

Wed, 09/25/2019 - 12:31

Oxygen is essential for life on Earth, but this was not always the case. Before about 2.4 billion years ago, Earth was a virtually oxygen-free environment. The appearance of cyanobacteria, or blue-green algae, changed all that.

Cyanobacteria injected the atmosphere with oxygen, setting the scene for the development of complex life as we know it. But a funny thing happened: Although conditions were ripe for algae to pump more oxygen into the atmosphere, oxygen levels remained low. And they stayed low for the next 2 billion years.

Researchers have proposed many theories to explain the oxygen lag, including a lack of nutrients for the cyanobacteria or limited supplies of nitrogen. But the exact mechanism that kept oxygen production low remains a mystery.

In a new paper in Trends in Plant Science, scientists suggest that an enzyme contained only in cyanobacteria may have acted as a regulator of oxygen for billions of years. The enzyme may have essentially capped the amount of oxygen produced by the algae until the evolution of complex plants overrode its limits about 450 million years ago.

The Origins of Oxygen

Until about 2.4 billion years ago, bacteria lived in an environment with no oxygen. To photosynthesize, they used electrons from hydrogen sulfide, hydrogen, or iron.

But around that time, “the cyanobacteria really discovered something that changed everything,” said John Allen, a biochemist at University College London. They “invented a brilliant new way of doing photosynthesis, which was to take electrons from water,” he explained.

The by-product of photosynthesis is oxygen, and the new gas accumulated in the atmosphere. This event, called the Great Oxidation Event, marked the end of the Archean.

Although cyanobacteria cracked the photosynthesis code and introduced oxygen to the atmosphere, atmospheric oxygen levels stagnated, rising to only about 10% of present-day levels.

Those low levels persisted for almost 2 billion years, even though blue-green algae had everything they needed to thrive, said Allen. “They should be unstoppable, because water is everywhere,” he added.

What Held Oxygen Back?

Researchers have long been trying to uncover why oxygen levels stayed low. Some scientists think that the limited availability of trace metals on the early Earth constrained oxygen production by cyanobacteria.

But Allen was unconvinced.

“These trace metals act like lubricants,” he said. “They’re not being used for fuel.” He compares trace metals to the oil in a car—it just lubricates the engine, whereas gasoline actually powers the car. Allen and his colleagues were looking for a model that had a simpler solution with a feedback mechanism.

Coauthor Brenda Thake, of Queen Mary University of London, noted that when cyanobacteria are grown in laboratory conditions in which light and trace elements abound but combined nitrogen is limited, the cyanobacteria fix their own nitrogen by using an enzyme called nitrogenase.

The cyanobacteria photosynthesize and produce oxygen, explained Allen, but the oxygen feeds back into the system to inhibit the enzyme nitrogenase. He pointed out that the process is like a thermostat telling a heater to shut off instead of heating a room indefinitely.

The result is that lab-grown cyanobacteria will produce oxygen but to no more than 10% of our present levels—exactly the amount of oxygen produced in the Proterozoic.

Allen said, “This was too much of a coincidence to be a coincidence.” He noted that the team thought the very low oxygen levels “could simply be that the oxygen being produced inhibited nitrogenase, which prevented cell growth.”

Their hypothesis is an interesting one, said Christopher Junium, a stable isotope geochemist at Syracuse University in New York. He was not involved with the study.

“I think what’s key is that they present a broadly testable hypothesis” about why oxygen was limited after the Great Oxidation Event, said Junium. He said that this study focuses on one particular organism, but there’s room for scientists to test other cyanobacteria to see how they respond in a laboratory setting.

Junium also noted that the organic microfossil record is pretty limited. But expanded investigations might shed light on the evolutionary process even more. “Just because we’ve only found heterocysts from 408 million years ago (and younger) doesn’t mean that they don’t exist in deeper time,” said Junium.

After the Oxygen

After the first appearance of oxygen, there’s quite a stretch of time during which oxygen levels stayed the same—what scientists have dubbed the “Boring Billion.” But Allen takes issue with that term.

“I think the Boring Billion, under the surface, was not at all boring,” Allen said. “There were all sorts of very interesting things going on in this world of very low oxygen concentration.”

He explained that complex, multicellular organisms evolved, eventually dying and accumulating in organic-rich deposits. Allen said that plants becoming firmly rooted on land allowed a physical separation to evolve between oxygen in the air and nitrogen fixation in the soil.

“The bigger apparent increase in oxygen content really does coincide with the evolution of land plants,” Junium said. He added that there was a “pretty broad range of evidence” that atmospheric oxygen levels rose during this time.

The connection between plants and oxygen makes sense, said Junium. “The consequence of carbon burial, when it’s produced by oxygenic photosynthetic organisms like vascular plants (Devonian ferns, for example), is increasing oxygen.”

The new paper will generate a lot of scientific discussions, said Junium. A theory-focused paper is a rarity these days, but he added that “it was neat—I feel like there should be more papers written like this.”

—Sarah Derouin (@Sarah_Derouin), Freelance Journalist

Artificial Intelligence May Help Predict El Niño

Wed, 09/25/2019 - 12:29

The artificial intelligence technique deep learning is everywhere in our daily lives if you know where to look. Siri’s voice commands, online banking, and photo tagging on social media all use deep learning to uncover powerful structures hidden in data.

Yoo-Geun Ham can now add another item to the list: El Niño forecasting.

Ham and his collaborators created a model using deep learning that forecasts El Niño and La Niña events 18 months in advance, beating current models that forecast only 1 year ahead. Using simulated data from a climate model, they trained their data-driven model to sidestep barriers faced by many contemporary models, resulting in what Ham called “the world’s best” El Niño forecasting model.

Forecasters’ Dilemma

Forecasting El Niño and La Niña events could help regions better manage food prices, disease, and water shortages, but predicting when an event will occur is challenging. Researchers have poured decades of study into understanding the global climate phenomenon that drives El Niño and La Niña events, the El Niño–Southern Oscillation (ENSO). Despite major advancements, scientists cannot predict events more than 1 year away, even though the physical precursors to El Niño and La Niña, such as shifting ocean temperatures, may occur more than a year in advance.

Many state-of-the-art forecasting models use mathematical equations to power their predictions. These models elegantly simulate the physical relationships between the ocean and the atmosphere, but they contain slight errors that compound over time, rendering long-term forecasting unmanageable. On the other hand, models based solely on analyzing data, called statistical models, have traditionally lacked a sufficient number of measurements to make them robust.

The latest study led by Ham from Chonnam National University in South Korea built a statistical model that skirts the problem of data scarcity. Ham and his colleagues fed their model data from both sophisticated climate models and an ocean reanalysis model that gave them global snapshots of ocean temperatures since the late 19th century. Using these model outputs increased the number of available data from about 150 measurements to nearly 3,000 per month.

Deep Learning and ENSO

With a new trove of available data, the scientists used an artificial intelligence technique called deep learning to analyze it. Deep learning is often used in image recognition: The technique identifies noteworthy qualities in an image and systematically classifies it through a series of steps. For example, a deep learning model will discern a cluster of pixels denoting an “edge” in an image, such as the black and gray edge of a cat’s ear, and through a series of steps determine whether that edge belongs to a recognizable object, like the head of a British shorthair cat.

Ham and his colleagues used a similar technique: For each global snapshot of ocean temperatures from the climate model, they taught their deep learning model to identify telltale signs that an ENSO event was on its way, such as abnormally warm water in the Indian Ocean or tropical Pacific. By feeding the model snapshots from 1871 to 1973, as well as telling it the correct answer for when an ENSO event occurred, they trained the model to make future forecasts.
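Conceptually, the training setup resembles an ordinary convolutional regression on image-like inputs. The minimal sketch below is our illustration in Keras, with invented array shapes and placeholder random data standing in for the climate model snapshots; it is not the authors’ published architecture.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    # Illustrative dimensions: 24 latitudes x 72 longitudes, 6 channels
    # (e.g., 3 months of sea surface temperature anomalies plus 3 months of
    # ocean heat content anomalies). Random arrays stand in for real inputs.
    n_samples, n_lat, n_lon, n_chan = 2000, 24, 72, 6
    x_train = np.random.randn(n_samples, n_lat, n_lon, n_chan).astype("float32")
    y_train = np.random.randn(n_samples).astype("float32")  # ENSO index at the lead time

    model = tf.keras.Sequential([
        layers.Conv2D(30, (4, 8), activation="tanh", padding="same",
                      input_shape=(n_lat, n_lon, n_chan)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(30, (2, 4), activation="tanh", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(30, activation="tanh"),
        layers.Dense(1),  # regression output: predicted ENSO index months ahead
    ])

    model.compile(optimizer="adam", loss="mse")
    model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)

The convolutional layers play the role described above: They pick out spatial patterns in the ocean temperature maps, and the final dense layers turn those patterns into a forecast for a chosen lead time.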

To test their model, they fed it global snapshots from 1984 to 2017 but withheld the answers of when ENSO events occurred. Their model successfully forecasted events 1.5 years in advance and predicted events’ amplitudes well. The new method outperformed eight other current forecasting models and was even able to predict the specific subclasses of El Niño and La Niña. The researchers published their results in Nature on 18 September 2019.

Ham said that this technique works even though the climate model seeding their statistical model contains errors. The deep learning technique corrected the systematic errors, he said, for reasons the researchers do not fully understand.

“Breakthrough Work”

“This looks like breakthrough work,” Youmin Tang, a professor of environmental science and engineering at the University of Northern British Columbia, told Eos. The study “may excite a new round of application of machine learning on climate predictions.”

Michael Tippett, an associate professor of applied mathematics at Columbia University, agreed that it was an exciting development. “I hope we will see these forecasts used operationally to make predictions about the future so that their practical skill can be assessed.” Ham now posts future ENSO predictions using this method on his research website.

Ham said he and his colleagues are working with artificial intelligence experts to optimize their deep learning technique. When asked how far future models using deep learning may forecast, he said, “I’m not sure what is the upper limit.”

Ham said he plans to apply the same methodology to other climate signals, which would be “very easy to do in its current stage.” He said forecasting the Indian Ocean Dipole, which affects the monsoons in India, might be next.

—Jenessa Duncombe (@jrdscience), News Writing and Production Fellow
