GeoSpace: Earth & Space Science

By AGU staff and collaborators

Chemical tracers untangle natural gas from agricultural methane emissions

Thu, 03/21/2019 - 17:19

The COCCON (Collaborative Carbon Column Observing Network) array of column sensors used to measure excess columns of methane (CH4) during tests atop the NCAR Foothills Laboratory. Photo: Mahesh Kumar Sha/KIT/BIRA-IASB

By Katie Weeman

With natural gas booming across the Front Range, drilling rigs may operate within feet of cattle farms. That shared land use can confound attempts to understand trends in methane, a greenhouse gas and air pollutant: the gases emitted from these different sources blend together.

To untangle them, a CIRES-led team has developed a new, cost-effective technique to efficiently measure methane and a cocktail of associated chemicals in the atmosphere, and to create a kind of chemical identification tag for methane sources.

“Methane is an important greenhouse gas. But it has a high global concentration so it can be challenging to see its specific sources,” said Natalie Kille, CIRES PhD student and lead author on the study published today in the AGU journal Geophysical Research Letters. “This technique allows us to remove the background methane concentrations in our analysis to clearly see unique chemical tracers.”

“Tracers” are chemicals unique to a single source: ethane is a great tracer for oil and gas operations, for example; and ammonia is a tracer for cattle farms, responsible for that unmistakable cow smell. Measuring levels of those two tracers helped the team disentangle sources of methane produced locally by both agriculture and oil and gas operations.

In Colorado, oil and gas operations sit within feet of cattle farms. Photo: Frank Flocke/NCAR

Using instruments that sit on the ground and measure the air above, the researchers can instantly capture a snapshot of chemical concentrations for methane and its tracers in the column of air reaching from the surface all the way up to the top of the atmosphere. The team then uses this information to remove the methane background (a concept known as the “excess column”) so that the tracers can take center stage.
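In outline (a minimal sketch with invented values and names, not the team’s retrieval code), the excess-column calculation is a differencing of column abundances measured inside and outside the methane dome:

```python
# Sketch of the "excess column" concept: subtract a background column
# (measured outside the methane dome) from the in-dome column, leaving
# only locally emitted gas. All numbers below are illustrative.

def excess_column(total_column: float, background_column: float) -> float:
    """Locally added column abundance, e.g. in molecules per cm^2."""
    return total_column - background_column

ch4_eaton = 4.02e19       # hypothetical total CH4 column at Eaton
ch4_background = 3.95e19  # hypothetical background CH4 column (Boulder)

local_ch4 = excess_column(ch4_eaton, ch4_background)
print(f"Excess CH4 column from local sources: {local_ch4:.2e} molecules/cm^2")
```

The same subtraction applied to the ethane and ammonia columns is what lets the tracers, rather than the large methane background, carry the signal.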

“This was the first study to measure excess columns of all these molecules simultaneously,” said Rainer Volkamer, CIRES Fellow, CU associate professor of chemistry, and corresponding author on the study. “This gives us a better handle to separate and quantify methane sources on a regional scale.”

The team set up a network of these small instruments across Colorado’s Front Range. Frank Hase and Thomas Blumenstock with the German Karlsruhe Institute of Technology developed a novel, portable spectrometer capable of highly precise methane measurements, and CIRES/CU Boulder provided Volkamer’s “CU mobile Solar Occultation Flux” instrument, which measured the chemical tracers ethane and ammonia. Both devices harness sunlight to identify each molecule by its light absorption fingerprint.

“These two instruments were set up side-by-side in Eaton, Colorado, within what we call the ‘methane dome’ of the Denver-Julesburg Basin,” said Volkamer. “In the areas where natural gas and cattle farming sites are present, methane is emitted, and mixes together from both sources, forming a bubble inside the atmospheric boundary layer that expands and contracts as if it’s breathing.”

To measure the background concentrations of methane, the team set up two additional KIT instruments (one operated by the National Center for Atmospheric Research) outside the methane dome, in Boulder and Westminster, each about 60 miles from Eaton. These data helped Kille calculate—and then remove—the background concentration of methane to isolate locally produced methane and those two key chemical tracers.

In previous work to untangle sources of methane, scientists have often collected flask samples of air, either from the ground or by aircraft, for detailed analysis back in a laboratory. But some chemicals, including ammonia, can stick to the insides of some canisters, creating challenges.

In this work, the small and portable instruments could be deployed almost anywhere for real-time measurements of the open atmosphere. In Eaton, the team set up in the parking lot behind a bed and breakfast.

Based on five days’ worth of measurements in 2015, the team found oil and natural gas operations were responsible for most of the methane produced in the Denver-Julesburg Basin, with agriculture an important but minor contributor.

The study also uncovered some baffling observations that will require further exploration: for example, when methane concentrations are very low, the agricultural sources are relatively more significant.

These results could help natural gas operators, cattle farmers, and their regulators make more informed decisions about methane mitigation.

In the future, the researchers hope to generate a long-term time series over multiple seasons to see how methane sources in the region change over time—a feat that becomes possible with low-cost, autonomous sensor networks like this. Scientists could also work towards comparing these data with those gathered from satellites, to develop best practices to inform satellite observations, said Volkamer.

This story originally appeared on the CIRES website. Katie Weeman works for the CIRES Communications Office. 


Where do microplastics go in the oceans?

Wed, 03/20/2019 - 16:12

By Liza Lester

Where do tiny bits of plastic go when they are flushed out to sea?

Previous research finds most plastic ends up in the subtropical ocean gyres circling the mid-latitudes of the Atlantic and Pacific oceans. These rotating currents encircle large areas sometimes called “garbage patches” because they are the destination for so much persistent floating junk.

A new modeling study in AGU’s Journal of Geophysical Research: Oceans finds more microplastic may be reaching Arctic waters than previously thought.

The new study drew on what oceanographers know about ocean currents to ask which types of current most influence where microplastics drift.

Generally defined as plastic bits smaller than 5 millimeters, this durable, non-biodegradable flotsam ranges from the size of polystyrene beads down to microscopic nanoparticles small enough to squeeze through cell membranes. It can persist in surface waters for years.

Microplastics are unhealthy for animals to ingest, causing physical and metabolic damage to sea life, from tiny plankton to whales. Microplastics can also spread chemical pollutants and living organisms carried on their surfaces.

The new simulations of plastics from the millimeter to meter scale show wind-driven surface currents called Ekman currents mostly determine the fate of microplastics in the subtropical gyres.

But the new research also finds ocean waves push microplastics toward the poles. The new research shows that neglecting Stokes drift, an element of fluid dynamics theory describing the net transport caused by waves, may have led previous studies to underestimate microplastic pollution in the Arctic. Stokes drift is not always included in ocean models and is currently not observed from satellites.
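For context, the standard deep-water expression for the Stokes drift (a linear wave theory result, not a formula quoted from the study) shows that the wave-induced transport is strongest at the surface and decays rapidly with depth:

$$u_s(z) = a^2 \omega k \, e^{2kz},$$

where $a$ is the wave amplitude, $\omega$ the angular frequency, $k$ the wavenumber, and $z \le 0$ the depth. Floating microplastics ride near $z = 0$, so they feel nearly the full surface drift, which is directed along the waves’ direction of travel.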


Western droughts caused permanent loss to major California groundwater source

Tue, 03/19/2019 - 13:59

By Joshua Rapp Learn

California’s Central Valley aquifer, the major source of groundwater in the region, suffered permanent loss of capacity during the drought experienced in the area from 2012 to 2015.

California has been afflicted by a number of droughts in recent decades, including one between 2007 and 2009, and the millennium drought that plagued the state from 2012 to 2015. Due to lack of water resources, the state drew heavily on its underground aquifer reserves during these periods.

According to new research, the San Joaquin Valley aquifer in the Central Valley shrank permanently by up to 3 percent due to excess pumping during the sustained dry spell. Combined with the loss from the 2007 to 2009 drought, the aquifer may have lost up to 5 percent of its storage capacity during the first two decades of the 21st Century, according to Manoochehr Shirzaei, an assistant professor of earth sciences at Arizona State University in Tempe and one of the co-authors of a new study published in AGU’s Journal of Geophysical Research: Solid Earth.

Measures of land subsidence in the San Joaquin Valley. Credit: USGS

Groundwater exists in the pore spaces between grains of soil and rock. When fluids are extracted from aquifers, the pore spaces close. There is a range over which these spaces can shrink and expand elastically. But if the pore spaces close too much, they start to collapse, causing the land to sink irreversibly.
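Hydrogeologists often capture this elastic-versus-inelastic behavior with a simple relation from the land subsidence literature (a textbook formulation, not one quoted from the new paper): the compaction $\Delta b$ produced by a head decline $\Delta h$ scales with a skeletal storage coefficient that switches to a much larger, irreversible value once the effective stress $\sigma'$ on the sediment skeleton exceeds the preconsolidation stress $\sigma'_p$, the largest stress the system has previously experienced:

$$\Delta b = S_k \,\Delta h, \qquad S_k = \begin{cases} S_{ke} & \sigma' < \sigma'_p \quad \text{(elastic, recoverable)} \\ S_{kv} \gg S_{ke} & \sigma' \ge \sigma'_p \quad \text{(inelastic, permanent)} \end{cases}$$

Drought-era pumping pushes $\sigma'$ past $\sigma'_p$, so part of the compaction, and the storage capacity it represents, never comes back.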

Figuring out how much the aquifer shrank permanently could help water managers prepare for future droughts, according to the study’s authors. The San Joaquin Valley aquifer supplies freshwater to the Central Valley – a major hub that produces more than 250 different crops valued at $17 billion per year, according to the U.S. Geological Survey.

“If we have even one drought per decade, our aquifers could shrink a bit more each time and permanently lose more than a quarter of their storage capacity this century,” said Susanna Werth, a research assistant professor of earth sciences at Arizona State University, and a co-author of the new study.

The new study could also help scientists understand how other areas might be affected by drought.

“That was a curiosity for us to understand how much groundwater has been lost in those particular regions and will give us a picture of what we can expect for arid areas around the globe if groundwater practices are not sustainable,” said Chandrakanta Ojha, a post-doctoral researcher at Arizona State and the lead author of the new study.

Underground water from space
The researchers measured water volume changes due to groundwater variation in the aquifer using data from the Gravity Recovery and Climate Experiment (GRACE), a twin-satellite mission that measured Earth’s gravity field monthly from April 2002 until June 2017. The study’s authors compared the groundwater losses based on GRACE measurements with those calculated from vertical land motion measurements obtained by GPS. Land depressions were also measured by a radar technique called InSAR and by multiple extensometers, devices installed in boreholes of groundwater observation wells. They also examined groundwater level records.

The study’s authors found that from 2012 to 2015, the aquifer of the San Joaquin Valley lost a total volume of about 30 cubic kilometers (7.2 cubic miles) of groundwater. The aquifer also shrank permanently by 0.4 percent to 3.25 percent, according to the new study.

Previous research found the 2007 to 2009 drought caused the San Joaquin aquifer to permanently lose between 0.5 and 2 percent of its capacity. Cumulatively, the authors said, the two drought periods (2007 to 2009 and 2012 to 2015) caused the San Joaquin aquifer to shrink permanently by as much as 5.25 percent.

Surface deformation map over the San Joaquin Valley during 2015-2017, from satellite radar interferometry. Credit: Chandrakanta Ojha.

Forecasting future drought effects
Shirzaei said the information they have gathered is important for future planning—particularly since the permanent loss of storage capacity is unsustainable in the long run.

By using this type of calculation, Shirzaei said land and water resource managers can predict the effect of droughts on the aquifer system. This can help to make better regulations for groundwater conservation during those periods and prevent permanent loss of aquifer storage capacity.

Shirzaei said the compaction of the aquifer may also cause fissures and cracks on the surface as the land subsides. This could affect roads, power lines, railroads or other infrastructure, but more research is needed to understand the details of these effects.

Joshua Learn is a freelance science writer based in Washington, DC.


Arctic change has widespread impacts

Thu, 03/07/2019 - 15:00

By Audrey Payne

As the Arctic warms faster than the rest of the globe, permafrost, land ice and sea ice are disappearing at unprecedented rates. And these changes not only affect the infrastructure, economies and cultures of the Arctic; they have significant impacts elsewhere as well, according to a commentary in the AGU journal Earth’s Future led by research scientist Twila Moon of the National Snow and Ice Data Center (NSIDC) at the University of Colorado Boulder.

“To many, the Arctic seems like a distant universe – one that could never impact their lives,” said Moon. “But the reality is, changes in the Arctic are increasingly affecting the rest of the world, causing amplified climate change, sea level rise, coastal flooding and more devastating storms.”

As the Arctic warms faster than the rest of the globe, permafrost, land ice and sea ice are disappearing at unprecedented rates. Credit: NASA.

Sea Level Rise
The melting of land ice has contributed 60 percent of sea level rise since 1972. Arctic land ice covers more than two million square kilometers, and studies have confirmed this area is diminishing rapidly due to climate change. In addition, most land ice in this region is thinning. If current warming trajectories are maintained, Arctic land ice is expected to be a major contributor to projected global sea level rise, contributing up to one meter this century. Three of the four largest U.S. cities – New York, Los Angeles and Houston – are coastal, and 39 percent of the U.S. population lives in shoreline counties. As sea levels continue to rise, coastal cities around the U.S. and the world will increasingly be forced to deal with the impacts, including flooding, freshwater contamination, coastal erosion, higher storm surges and more.

Extreme Weather Events
In addition to the increased storm surges and flood events caused by sea level rise, a current hypothesis states that changes in the Arctic jet stream may be significantly affecting storms and extreme weather events, including snow storms and droughts, in the continental U.S. as well as Canada, Europe and Asia. For example, Arctic warming has been linked to a recent extreme drought in California.

Infrastructure Damage
Under the Intergovernmental Panel on Climate Change’s “business as usual” emissions scenario, RCP8.5, Alaska is estimated to face $5.5 billion in infrastructure damage between 2015 and 2099. Almost half of this will be directly due to permafrost thaw. In addition, this permafrost thaw will release significant amounts of carbon dioxide and methane into the atmosphere, contributing to further warming of the planet.

Coastal Erosion and Arctic Amplification
Sea ice extent and sea ice thickness have both declined in the past several decades. This sea ice loss has caused dramatic coastal erosion in Siberia and Alaska, and has serious global consequences as sea ice helps to regulate Earth’s climate by reflecting incoming solar radiation. As sea ice cover declines, Arctic warming is amplified due to these decreases in surface reflectivity.

Looking forward
“As the Arctic continues to warm faster than the rest of the globe, we’ll continue to see impacts worldwide, including in tropical and temperate countries with big cities, large economies, and lots of infrastructure,” Moon said. “If we want to safeguard our people and society, we need to act now to both reduce emissions to curb warming and to prepare for the inevitable changes already set in motion.”

Audrey Payne is a communications specialist at the National Snow and Ice Data Center (NSIDC), which is part of the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder.


Deep diving robots find warming accelerating in South Pacific Ocean waters

Wed, 03/06/2019 - 15:00

By Monica Allen

New research analyzing data from deep-diving ocean robots and research cruises shows that the coldest, near-bottom South Pacific waters originating from Antarctica are warming three times faster than they were in the 1990s. 

“Measuring the warming occurring in these deep ocean waters helps us understand one of the drivers of sea level rise and will help to improve predictions of future sea level,” said Gregory C. Johnson, a NOAA oceanographer and co-author of two recently published research papers appearing in the AGU journals Geophysical Research Letters and Journal of Geophysical Research: Oceans. As ocean waters warm, they expand, contributing to rising seas.

New autonomous ocean robots called Deep Argo floats are able to dive down to depths of nearly four miles to collect data. Operating year round, they are improving our ability to monitor how heat is taken up by the ocean. The warming ocean affects not only sea level rise, but also weather patterns and long-term climate.

Dr. Elizabeth Steffen (left) and Marine Tech Elizabeth Ricci (right) deploy a Deep SOLO float. Credit: NOAA

Deep Argo enhances data from ship surveys

The research combines temperature data from ship-based surveys, conducted at decadal intervals by U.S. researchers and international partners, with the continuous, near real-time data from an array of 31 Deep Argo floats, most of which were designed, built, and deployed by Scripps Institution of Oceanography scientists.

The ship-based data show that deep ocean temperatures rose at an average rate of 1-thousandth of a degree Celsius per year between the 1990s and the 2000s, and that the rate doubled to 2-thousandths of a degree per year between the 2000s and the 2010s. The Deep Argo floats reveal a tripling of the initial warming rate, to 3-thousandths of a degree per year, over the past four-plus years.

This warming rate of near-bottom temperatures is only a fraction of that of the surface ocean, but is striking for an area of the ocean long considered more stable.
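As a rough plausibility check on why deep warming matters for sea level (back-of-the-envelope numbers chosen for illustration, not values from either paper), the thermosteric rise from a warming layer scales as $\Delta h \approx \alpha \, \Delta T \, H$:

```python
# Back-of-the-envelope thermosteric sea-level rise: dh = alpha * dT * H.
# alpha and the layer thickness are illustrative assumptions for cold,
# near-bottom water; they are not taken from the studies discussed here.
alpha = 1.0e-4            # thermal expansion coefficient of seawater, 1/K
warming_rate = 0.003      # deep warming rate, K per year (Deep Argo era)
layer_thickness = 1000.0  # thickness of the warming near-bottom layer, m

rise = alpha * warming_rate * layer_thickness  # meters per year
print(f"~{rise * 1000:.2f} mm/yr of sea level rise from this layer alone")
```

Small per-year numbers like this accumulate, which is why sustained monitoring of the deep ocean matters for sea level projections.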

This new research underlines the importance of expanding Deep Argo to improve the timeliness and accuracy of observations.

Working with the Paul G. Allen Family Foundation, NOAA is poised to deploy Deep Argo floats in the Atlantic Ocean. With funding from the late visionary philanthropist, NOAA scientists will travel aboard R/V Petrel to deploy a large array of Deep Argo floats in international waters off Brazil next year.

Deep SOLO float starts its mission by sinking after deployment during HOT Cruise 302. Credit: NOAA

— Monica Allen is Director of Public Affairs for NOAA Research. This post was originally published on the NOAA website.


Seemingly dormant geologic fault damaged famous Roman buildings 1,500 years ago

Mon, 03/04/2019 - 17:13

New research finds a geologic fault system in central Italy that produced a deadly earthquake in 2016 is also responsible for a fifth-century earthquake that damaged many Roman monuments, including the Colosseum.
Credit: David Iliff, CC-BY-SA 3.0

By Lauren Lipuma

A geologic fault system in central Italy that produced a deadly earthquake in 2016 is also responsible for a fifth-century earthquake that damaged many Roman monuments, including the Colosseum, according to new research.

The Mount Vettore fault system, which winds through Italy’s Apennine Mountains, ruptured in the middle of the night on August 24, 2016. The magnitude 6.2 earthquake it generated killed nearly 300 people and destroyed several villages in the surrounding region. The fault ruptured again in October 2016, producing two more earthquakes with magnitudes greater than 6.

Scientists had thought the Mount Vettore fault system was dormant until it ruptured in 2016. They knew it could produce earthquakes, but as far as anyone knew, this was the first time the fault had ruptured in recorded history.

But a new study in the AGU journal Tectonics combining geologic data with historical records shows the fault produced a major earthquake in 443 A.D. that damaged or destroyed many well-known monuments from Roman civilization.

Among the damaged buildings were the Colosseum, made famous by the Roman Empire’s gladiator contests, as well as Rome’s first permanent theater and several important early Christian churches.  

The finding suggests dormant faults throughout the Apennines are a silent threat to Italians and the country’s numerous historical and cultural landmarks, according to the authors. Quiescent faults could be more destructive than active faults, because researchers don’t fully consider them when evaluating seismic hazards, said Paolo Galli, a geophysicist at Italy’s National Civil Protection Department in Rome and lead author of the new study.

The surface of the Mount Vettore fault system, where it ruptured in 2016.
Credit: Paolo Galli.

Reconstructing Italy’s geologic past

Italy lies on the southern end of the Eurasian tectonic plate, close to where it meets the Adriatic, African, and Ionian Sea plates. The movement of these plates relative to each other created the Apennine Mountains millions of years ago, and makes Italy seismically and volcanically active today.

Hundreds of kilometers of geologic faults snake through the Apennine Mountains. Seismologists consider some of these faults to be silent or dormant because they haven’t been linked to any known historical earthquakes.

Scientists thought Mount Vettore was one of these silent fault systems until it ruptured in 2016. After Galli and his colleagues mapped the fault’s rupture in 2016, they decided to look for evidence of it having ruptured in the past.

To do so, they dug deep trenches around parts of the fault system that ruptured in October 2016. The trenches allowed them to see the various sediment layers on either side of the fault and to determine whether the two sides of the fault had moved relative to each other at any other times in the past – in other words, if the fault had generated past earthquakes.

In the new study, Galli and his colleagues analyzed the sediment layers in the trenches and found the Mount Vettore system ruptured five other times in the past 9,000 years, in addition to 2016. One of those ruptures occurred in the middle of the fifth century, at the very end of the Roman period. Averaging the time between ruptures, they found the Mount Vettore system produces major earthquakes every 1,500 to 2,100 years.
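The back-of-the-envelope version of that estimate (my arithmetic, not a figure from the paper): six ruptures in roughly 9,000 years bound five inter-event intervals, giving a mean recurrence of

$$\bar{T} \approx \frac{9000\ \text{yr}}{5} = 1800\ \text{yr},$$

squarely within the 1,500 to 2,100 year range once dating uncertainties on the individual events are folded in.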

Combining science and history

Using data from past archaeological digs in Italy and historical records from the Roman Empire, Galli and his colleagues matched the fifth-century rupture of Mount Vettore to an earthquake that rocked central Italy in 443 A.D., just three decades before the final Roman emperor was deposed.

A recently reassembled epigraph, or stone inscription, of the prefect Rufius Caecina Felix Lampadius, describing restorations for the Colosseum after the 443 A.D. earthquake.
Credit: Paolo Galli.

The 443 earthquake destroyed many towns in the Italian countryside and damaged numerous landmarks in Rome, including the Colosseum and the Theater of Pompey, Rome’s first permanent theater. The earthquake also damaged several famous early Christian churches, such as Saint Paul’s Basilica and the Church of Saint Peter in Chains, currently home to Michelangelo’s statue of Moses. Inscriptions written in the fifth century by Pope Leo I and the emperors Valentinianus III and Theodosius II refer to restorations made to these structures, likely as a result of this earthquake.

The new study’s results suggest the 2016 earthquake was not as unexpected as scientists thought, and other Apennine faults considered dormant by scientists may in fact pose a seismic hazard to central Italy. Considering the immense historical and cultural value of Roman ruins in this region, Galli’s priority is to better understand the rest of the silent faults on the Italian peninsula.

Lauren Lipuma is a senior public information specialist at AGU.

A timeline of past earthquakes that have damaged Roman ruins during the common era (A.D.).


First evidence of planet-wide groundwater system on Mars

Fri, 03/01/2019 - 13:34

By Nicky Jenner and Emily Baldwin

Mars Express has revealed the first geological evidence of a system of ancient interconnected lakes that once lay deep beneath the Red Planet’s surface, five of which may contain minerals crucial to life.

Mars appears to be an arid world, but its surface shows compelling signs that large amounts of water once existed across the planet. We see features that would have needed water to form – branching flow channels and valleys, for example – and just last year Mars Express detected a pool of liquid water beneath the planet’s south pole.

A new study in AGU’s Journal of Geophysical Research: Planets now reveals the extent of underground water on ancient Mars that was previously only predicted by models.

“Early Mars was a watery world, but as the planet’s climate changed this water retreated below the surface to form pools and ‘groundwater’,” says lead author Francesco Salese of Utrecht University, the Netherlands. “We traced this water in our study, as its scale and role is a matter of debate, and we found the first geological evidence of a planet-wide groundwater system on Mars.”

This image shows the distribution of a number of deep craters (marked as dots) recently explored as part of a study into groundwater on Mars. The background image is shown in colors representing topography: reds and oranges are lower elevations, and blues and greens are higher ones. The study found that the floors of the basins, which sit over 4000 m deep, show signs of past water – the first geological evidence that the Red Planet once had a system of interconnected groundwater-fed lakes that spanned the entire planet.
Credit: Topography: NASA/MGS/MOLA; Crater distribution: F. Salese et al (2019)

Salese and colleagues explored 24 deep, enclosed craters in the northern hemisphere of Mars, with floors lying roughly 4000 meters below martian ‘sea level’ (a level that, given the planet’s lack of seas, is arbitrarily defined on Mars based on elevation and atmospheric pressure).

They found features on the floors of these craters that could only have formed in the presence of water. Many craters contain multiple features, all at depths of 4000 to 4500 meters – indicating that these craters once contained pools and flows of water that changed and receded over time.

Features include channels etched into crater walls, valleys carved out by sapping groundwater, dark, curved deltas thought to have formed as water levels rose and fell, ridged terraces within crater walls formed by standing water, and fan-shaped deposits of sediment associated with flowing water.

The water level aligns with the proposed shorelines of a putative martian ocean thought to have existed between three and four billion years ago.

“We think that this ocean may have connected to a system of underground lakes that spread across the entire planet,” adds co-author Gian Gabriele Ori, director of the Università D’Annunzio’s International Research School of Planetary Sciences, Italy. “These lakes would have existed around 3.5 billion years ago, so may have been contemporaries of a martian ocean.”

This diagram shows a model of how crater basins on Mars evolved over time and how they once held water. This model forms the basis of a new study into groundwater on Mars, which found that a number of deep basins – with floors sitting over 4000 m deep – show signs of having once contained pools of water. Images (from the context camera onboard NASA’s Mars Reconnaissance Orbiter) show examples of the different features observed in the basins. There are three main stages: in the first (top), the crater basin is flooded with water and water-related features – deltas, sapping valleys, channels, shorelines, and so on – form within. In the second stage (middle), the planet-wide water level drops and new landforms emerge as a result. In the final stage (bottom), the crater dries out and becomes eroded, and features formed over the previous few billions of years are revealed.
Credit: Images: NASA/JPL-Caltech/MSSS; Diagram adapted from F. Salese et al. (2019)

The history of water on Mars is a complex one, and is intricately linked to understanding whether or not life ever arose there – and, if so, where, when, and how it did so.

The team also spotted signs of minerals within five of the craters that are linked to the emergence of life on Earth: various clays, carbonates, and silicates. The finding adds weight to the idea that these basins on Mars may once have had the ingredients to host life. Moreover, they were the only basins deep enough to intersect with the water-saturated part of Mars’ crust for long periods of time, with evidence perhaps still buried in the sediments today.

Exploring sites like these may thus reveal the conditions suitable for past life, and are therefore highly relevant to astrobiological missions such as ExoMars – a joint ESA and Roscosmos endeavour. While the ExoMars Trace Gas Orbiter is already studying Mars from above, the next mission will launch next year. It comprises a rover – recently named after Rosalind Franklin – and a surface science platform, and will target and explore martian sites thought to be key in the hunt for signs of life on Mars.

“Findings like this are hugely important; they help us to identify the regions of Mars that are the most promising for finding signs of past life,” says Dmitri Titov, ESA’s Mars Express project scientist.

“It is especially exciting that a mission that has been so fruitful at the Red Planet, Mars Express, is now instrumental in helping future missions such as ExoMars explore the planet in a different way. It’s a great example of missions working together with great success.”

— Nicky Jenner is a freelance science writer and Emily Baldwin is a space science editor at ESA. This post originally appeared on the ESA website.


Old stone walls record history of Earth’s magnetic wanderings

Wed, 02/27/2019 - 20:07

By Liza Lester

An old stone wall marks a boundary of a long-abandoned farm near Grafton, New York, once part of the colonial Manor of Rensselaerwyck. A section of the 700,000-acre manor west of the Hudson River was surveyed for rental allotments in 1787.
Credit: John Delano

Under the forests of New York and New England, a hidden tracery of tumbledown stone walls marks the boundaries of early American farms, long abandoned for city jobs and less stony pastures in the West.

These monuments of an agrarian past also mark past locations of Earth’s itinerant magnetic north pole, a record leveraged by geochemist and local history buff John Delano to reconstruct a history of our planet’s magnetic field in eastern North America in a new study published in AGU’s Journal of Geophysical Research: Solid Earth.

“As a little kid growing up in the country in New Hampshire, I was fascinated with stone walls that were in the middle of the woods. Who made them? Why?” said Delano, an emeritus professor at the State University of New York at Albany. “Post-retirement, I had the time to pick that study up and wondered, now with a different skill set: What do they remember? What are they telling us?”

On a visit to his local historical society, Delano had a eureka moment while looking at a map from 1790. From the jumble of stone walls on his present-day map, a grid pattern emerged that looked much like the property boundaries of the many 100-acre farms in the late-18th-century township.

Delano got his hands on hundreds of original 18th- and 19th-century surveys from the New York State archives and overlaid modern aerial images of the stone walls to find the walls that marked the old property boundaries. Using GPS, he measured the walls’ present-day bearings with respect to True North and compared them to the compass bearings for the boundary lines recorded by the 18th- and 19th-century surveyors.

The discrepancy between Delano’s measurements and the historical compass readings is not error on the part of the early surveyors, but the magnetic declination at the time of the survey. The difference between True North and magnetic north shifts over time due to changes in Earth’s outer core.
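In essence (a minimal sketch of the geometry, with illustrative numbers; not Delano’s actual workflow), the historical declination falls out as the difference between a wall’s true bearing measured today and the magnetic bearing the original surveyor recorded:

```python
def historical_declination(true_bearing_deg: float,
                           compass_bearing_deg: float) -> float:
    """Estimate magnetic declination (degrees) at the time of survey.

    Positive = magnetic north lay east of True North. Assumes the wall
    still follows the line the surveyor drew.
    """
    decl = true_bearing_deg - compass_bearing_deg
    # Normalize to the interval [-180, 180) degrees.
    return (decl + 180.0) % 360.0 - 180.0

# A wall bearing 47.0 degrees true that an 18th-century surveyor recorded
# as N 42 E (42.0 degrees magnetic) implies ~5 degrees east declination.
print(historical_declination(47.0, 42.0))  # 5.0
```

Repeating this for many walls surveyed at known dates yields declination as a function of time.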

Modeling the wanderings of Earth’s magnetic field can provide clues to the enigmatic motions deep in the Earth that generate it.

“Some geophysicists who are trying to model these complex motions of fluids are helped in their analysis by having a very accurate record of how the declination has moved over time,” Delano said.

Wayward poles

Historical magnetic declination 1590-2020.
Credit: NOAA National Centers for Environmental Information

Geologists and navigators have long known that the north pole found by magnetic compass is not the same as True North, the point at 90 degrees N latitude that sits on the axis of Earth’s spin. The angle of deviation is called magnetic declination, and its degree varies depending on where the compass holder stands on the globe.

This difference matters not just for bushwhackers making their way without a GPS device, but for military and commercial aircraft, ships, submarines and even smartphones, which still use Earth’s magnetic field for orientation.

Declination information must be updated frequently because, unlike True North and South, Earth’s magnetic north and south poles are not fixed points. They move by tens of meters (yards) every day, at erratic rates and directions. Motions in the fluid, molten metal of Earth’s outer core are believed to generate the magnetic field, although the deep layers of the planet cannot be observed directly. The dynamics of the system are not well understood.

Scientists can model future positions of the magnetic north and south poles only a few years into the future. The magnetic north pole has moved so quickly in the last few years, at 50 kilometers (30 miles) per year, that the U.S. National Geophysical Data Center and the British Geological Survey released an early update to the joint World Magnetic Model, ahead of its usual 5-year cycle, earlier this month.

Finding north with old stone walls

Historical data on the movements of the magnetic north pole provide clues to the behavior of Earth’s dynamo. Previous research groups have modeled magnetic declination from 1590 to today, based primarily on extensive sailing ships’ log records of magnetic north and the position of the North Star.

“That’s magnificent work, enormously tedious, with great detail and commitment on their part to do it, but they had little land-based data. And that’s where my work came in,” Delano said. “Using land-based methods, an entirely different approach, how would it match up, or not, with the current geophysical model?”

Walking the walls to map them with a hand-held GPS unit proved too time consuming for systematic measurements—Delano spent more than 200 hours in the woods mapping stone walls in a single square mile—so he learned a new technology in his retirement. Although a new forest has long since grown up over the abandoned fields, the old walls are easily visible to the penetrating laser gaze of airborne light detection and ranging, or lidar. Public lidar scans revealed 1,200 kilometers (750 miles) of boundaries surveyed in 1685-1910 and marked by remnant stone walls.

The glaciers of the most recent ice age left plentiful rock in the soil of New York and New England to trouble farmers. Old World colonists and their descendants moved the stones to the field edges, which often lay on surveyed property boundaries. The rock piles were later built into walls. The walls persisted long after the farms were forgotten, allowing Delano to orient the survey maps of centuries past on the present-day landscape and recreate the surveyors’ compass measurements.

Delano calculated historical declinations for 22 regions in New York and New England by matching old stone walls to original survey maps.

He also used Google Earth Pro to take the bearings of Boundary Street and the original main street in Colonial Williamsburg, Virginia, for comparison with compass bearings recorded in 1699.

The results of the new study fit well with historical magnetic declination models, with the exception of a 35-year span from about 1775 to 1810, when the stone-wall evidence places magnetic declinations up to 1.5 degrees east of the existing model values. Magnetic declinations in 1750-1780 were also up to 2.0 degrees west of U.S. Historic Magnetic Declination estimated values.

“The stone wall study, with only a small difference during a small interval of time, matched up beautifully with the existing geophysical model,” Delano said.

The approach has the potential to be used in Europe, Southeast Asia, and other parts of the world that have old roads, walls and deep records, according to Delano.

“Whatever sample, whatever thing I’m studying, I ask myself, what memory does it contain?” Delano said. “There are stories in the walls.” 


New map reveals geology and history of Pluto’s moon Charon

Tue, 02/26/2019 - 15:22

By Larry O’Hanlon

What a difference a planetary flyby makes. Pluto’s moon Charon — once no more than a fuzzy blob of pixels beside a larger blob — now has its first geological map, published in AGU’s Journal of Geophysical Research: Planets.

The new map was made with data and images collected by the 2015 flyby of the New Horizons spacecraft, which managed to gather enough data to map about a third of Charon’s surface.

In that area, the scientists have identified 16 different kinds of geological units, or areas with similar landscapes, along with 10-kilometer-tall cliffs; more than a thousand grooves and other long, linear features; and a patchwork of light and dark ground.

To get the elevations of the cliffs, troughs, craters and other features, the team used multiple images of Charon taken as the spacecraft flew past to create stereo 3D images. These images are taken from different positions, so they can be processed using the same principle that our own brain uses to take images from two eyes and give us depth perception.
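The depth retrieval rests on the standard parallax relation of stereo photogrammetry (textbook geometry, not a formula quoted from the paper): for two images taken a baseline $B$ apart with effective focal length $f$, a feature displaced by disparity $d$ between the frames lies at range

$$Z \approx \frac{f\,B}{d},$$

so features that shift more between frames are closer to the camera, which is what allows multiple flyby images to be converted into terrain elevations.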

The new map shows possible evidence of a world that may have once split open like a chapped lip, or a rising cake, then released icy materials from its interior to flood over large areas – what are called cryoflows. In fact, the researchers have found that Charon has perhaps one of the most convincing examples of large cryoflows found in the solar system so far.

Albedo-based map of Charon’s encounter hemisphere. Albedo is a measure of the amount of light reflecting off a surface. Credit: Robbins, et al., 2019

Crater Enigma

The new map has revealed many puzzling features of Charon, including its craters.

“Surprisingly we see very, very few degraded craters,” said Stuart Robbins of the Southwest Research Institute and lead author of the new paper. “On Mars we see old (degraded) and new craters. On Charon pretty much every crater we see looks like it was created recently.” Either that or the craters they see have been around a long time without anything changing them, he added.

One explanation for the lack of aged-looking craters might be that some process erased the older craters. That process might be ancient icy flows – cryoflows – that welled up through cracks in the surface of Charon and buried the older craters.

If so, then perhaps sometime in Charon’s past its interior warmed up and underwent a chemical or physical change that caused it to expand slightly. That expansion cracked the surface – analogous to how the surface of a cake cracks as the cake rises while baking, Robbins explained. Then warmer materials from below oozed out over Charon’s surface. That material would have hidden a lot of Charon’s original surface, along with craters that were on that surface. This would also explain features that look like broken blocks of the moon’s crust caught and surrounded by a flood of fresher material.

Geomorphologic unit map of Charon’s encounter hemisphere in (A) cropped Mollweide projection and (B) polar stereographic projection. Credit: Robbins, et al., 2019.

Oz, Vulcan and Spock

To organize Charon’s features based on the cryoflows, the authors of the map described and named three major epochs in the history of Charon: Ozian, Vulcanian and Spockian.

The Ozian epoch was more than 4 billion years ago, when the informally named Oz Terra part of the crust of Charon was formed, shown in the upper part of the map.

The Vulcanian came next, perhaps starting more than 4 billion years ago as well, with cryoflows forming the Vulcan Planum in the lower part of the map, near Charon’s equator. The Vulcanian probably continued for quite some time as different parts of Charon cooled.

The final epoch, the Spockian, represents the time after the Vulcan Planum solidified. That’s the period of time when the same area got pockmarked with impact craters, up until the present day.

This is just one possible plot for Charon’s story, Robbins points out.

“We could be entirely wrong,” he said about the cryoflows.

It’s a matter planetary scientists can puzzle over while they await more data from Charon, which could be a very long time coming, since no follow-up missions to that very remote part of the solar system are currently in the works.

Formal and informal nomenclature for regions and features of Charon used in the new map. Oz includes the mid to high latitudes. Vulcan includes the equatorial and near-equatorial regions.

Larry O’Hanlon is a freelance science writer, editor and online producer. He manages the AGU Blogosphere. 


Earth’s atmosphere stretches out to the Moon – and beyond

Wed, 02/20/2019 - 13:10

By Nadjejda Vicente and Claudia Mignone

The gaseous layer that wraps around Earth reaches up to 630,000 kilometers away, or 50 times the diameter of our planet, according to a new study based on observations by the ESA/NASA Solar and Heliospheric Observatory, SOHO, published in AGU’s Journal of Geophysical Research: Space Physics.

“The Moon flies through Earth’s atmosphere,” says Igor Baliukin of Russia’s Space Research Institute, lead author of the paper presenting the results. “We were not aware of it until we dusted off observations made over two decades ago by the SOHO spacecraft.”

Where our atmosphere merges into outer space, there is a cloud of hydrogen atoms called the geocorona. One of the spacecraft’s instruments, SWAN, used its sensitive sensors to trace the hydrogen signature and detect precisely how far the outskirts of the geocorona extend. These observations could be done only at certain times of the year, when the Earth and its geocorona came into view for SWAN.

The extent of Earth’s geocorona. Where Earth’s atmosphere merges into outer space, there is a cloud of hydrogen atoms called the geocorona. Note: the illustration is not to scale. Credit: ESA

For planets with hydrogen in their exospheres, water vapor is often seen closer to their surface. That is the case for Earth, Mars and Venus.

“This is especially interesting when looking for planets with potential reservoirs of water beyond our Solar System,” explains Jean-Loup Bertaux, co-author and former principal investigator of SWAN.

The first telescope on the Moon, placed by Apollo 16 astronauts in 1972, captured an evocative image of the geocorona surrounding Earth and glowing brightly in ultraviolet light.

“At that time, the astronauts on the lunar surface did not know that they were actually embedded in the outskirts of the geocorona,” says Jean-Loup.

Earth’s geocorona from the Moon. The Earth and its hydrogen envelope, or geocorona, as seen from the Moon. This ultraviolet picture was taken in 1972 with a camera operated by Apollo 16 astronauts on the Moon. Credit: NASA

Cloud of hydrogen
The Sun interacts with hydrogen atoms through a particular wavelength of ultraviolet light called Lyman-alpha, which the atoms can both absorb and emit. Since this type of light is absorbed by Earth’s atmosphere, it can only be observed from space.

Thanks to its hydrogen absorption cell, the SWAN instrument could selectively measure the Lyman-alpha light from the geocorona and discard the light emitted by hydrogen atoms further out in interplanetary space.

The new study revealed that sunlight compresses hydrogen atoms in the geocorona on Earth’s dayside, and also produces a region of enhanced density on the night side. The denser dayside region of hydrogen is still rather sparse, with just 70 atoms per cubic centimeter at 60,000 kilometers above Earth’s surface, and about 0.2 atoms per cubic centimeter at the Moon’s distance.

“On Earth we would call it vacuum, so this extra source of hydrogen is not significant enough to facilitate space exploration,” says Igor. The good news is that these particles do not pose any threat for space travelers on future crewed missions orbiting the Moon.

“There is also ultraviolet radiation associated to the geocorona, as the hydrogen atoms scatter sunlight in all directions, but the impact on astronauts in lunar orbit would be negligible compared to the main source of radiation – the Sun,” says Jean-Loup Bertaux.

On the down side, the Earth’s geocorona could interfere with future astronomical observations performed in the vicinity of the Moon.

“Space telescopes observing the sky in ultraviolet wavelengths to study the chemical composition of stars and galaxies would need to take this into account,” adds Jean-Loup.

The power of archives
Launched in December 1995, the SOHO space observatory has been studying the Sun, from its deep core to the outer corona and the solar wind, for over two decades. The satellite orbits around the first Lagrange point (L1), some 1.5 million kilometers from Earth towards the Sun.

SOHO observation of the geocorona. The intensity of hydrogen atom emission in the outermost part of Earth’s atmosphere, the geocorona, as measured by the SWAN instrument on board the ESA/NASA Solar and Heliospheric Observatory, SOHO. Low intensity is indicated in blue, high intensity in red. The data revealed that the geocorona extends well beyond the orbit of the Moon, reaching up to 630,000 kilometers above Earth’s surface, or 50 times the diameter of our planet. Earth is located at the centre of the white area, too small to be visible at this scale; the extent of the Moon’s orbit around Earth is indicated as a dotted ellipse for reference. Credit: ESA/NASA/SOHO/SWAN; I. Baliukin et al (2019)

This location is a good vantage point to observe the geocorona from outside. SOHO’s SWAN instrument imaged Earth and its extended atmosphere on three occasions between 1996 and 1998.

Jean-Loup and Igor’s research team in Russia decided to retrieve this data set from the archives for further analysis. These unique views of the whole geocorona as seen from SOHO are now shedding new light on Earth’s atmosphere.

“Data archived many years ago can often be exploited for new science,” says Bernhard Fleck, ESA SOHO project scientist. “This discovery highlights the value of data collected over 20 years ago and the exceptional performance of SOHO.”

— Nadjejda Vicente is a Writer & Content Producer for ESA’s Human and Robotic Exploration Directorate. Claudia Mignone is a Communication Officer at the ESA Directorate of Science, ESTEC, Noordwijk, The Netherlands. This post originally appeared on the ESA website.


Mining climate models for seasonal forecasts

Wed, 02/13/2019 - 15:47

New study: Existing climate models useful in forecasting, model testing

By Karin Vergoth

A team of scientists has figured out a shortcut way to produce skillful seasonal climate forecasts with a fraction of the computing power normally needed. The technique involves searching within existing global climate models to learn what happened when the ocean, atmosphere and land conditions were similar to what they are today. These “model-analogs” to today end up producing a remarkably good forecast, the team found—and the finding could help researchers improve new climate models and forecasts of seasonal events such as El Niño.

Satellite image showing El Niño sea surface temperature departure from the norm for October 2015, where orange-red colors are above normal and indicative of El Niño. Credit: NOAA

“It’s a big data project. We found we can mine very useful information from existing climate models to mimic how they would make a forecast with current initial conditions,” said Matt Newman, a CIRES scientist working in NOAA’s Physical Sciences Division and co-author of the study published today in the AGU journal Geophysical Research Letters.

Scientists typically make seasonal forecasts by observing the current global conditions, plugging that estimate into a climate model, and then running the model’s equations forward in time several months using supercomputers. These computationally intensive calculations can only be done at a few national forecast centers and large research institutions.

However, scientists use similar computer models for long simulations of the Earth’s pre-industrial climate. Those model simulations—and there are many—already exist and are freely available to anyone doing climate change studies. Newman and his colleagues decided they would try developing seasonal forecasts from these existing climate model simulations, instead of making new model computations.

Hui Ding, the paper’s lead author and also a CIRES scientist working in NOAA’s Physical Sciences Division, wrote a computer program that searched the huge database of climate model simulations to find the best matches to the current observed ocean surface conditions in a given region of interest. To get the seasonal forecast, the researchers tracked how these model-analogs evolved within the simulation over the next several months.

They found that the model-analog technique was as skillful as more traditional forecasting methods. This means that long-existing climate model simulations are useful as an independent way to produce seasonal climate forecasts, including seasonal El Niño-related forecasts. “Instead of relying only on sophisticated forecast systems to forecast El Niño, we can mine these model runs and find good enough analogs to develop a current forecast,” he said.
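Conceptually (a minimal sketch with toy data, not the team’s code), the model-analog method is a nearest-neighbor search over a library of simulated ocean states, followed by reading off how the closest matches evolve:

```python
import numpy as np

def analog_forecast(library: np.ndarray, current_state: np.ndarray,
                    lead: int, n_analogs: int = 10) -> np.ndarray:
    """Forecast by averaging the evolution of the closest simulated states.

    library:       (n_times, n_gridpoints) archive of simulated anomaly maps
    current_state: (n_gridpoints,) observed anomaly map for today
    lead:          forecast lead time in library time steps (e.g., months)
    """
    usable = library[:-lead]                  # states with a known future
    dist = np.linalg.norm(usable - current_state, axis=1)
    best = np.argsort(dist)[:n_analogs]       # indices of the closest analogs
    return library[best + lead].mean(axis=0)  # ensemble-mean analog forecast

# Toy usage, with random numbers standing in for pre-industrial simulations:
rng = np.random.default_rng(0)
lib = rng.standard_normal((5000, 200))        # 5,000 monthly model states
obs = rng.standard_normal(200)                # today's observed state
forecast = analog_forecast(lib, obs, lead=6)  # six steps ahead
```

No new model integration is needed: the expensive simulations were already run for climate change studies, and the forecast is assembled entirely from their archived output.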

Researchers can also use this technique to test models during the development phase. “They can look at how well forecasts from these new models compare to forecasts from pre-existing climate models matched for current conditions. That’s a quick test to see if the new models are improved,” Newman said.

— Karin Vergoth is a CIRES-NOAA science writer. This post originally appeared as a press release on the CIRES website. 


Mediterranean hurricanes expected to increase in strength by end of century

Tue, 02/12/2019 - 23:18

By Liza Lester

Hurricane-strength storms in the Mediterranean could hit the region with increasing power by the end of the 21st century, growing to robust Category 1 strength, according to a new study in the AGU journal Geophysical Research Letters.

Although Mediterranean tropical-like cyclones, known as “medicanes,” are predicted to be less frequent in the future, they will develop a more robust hurricane-like structure and last longer, with higher winds and more rain, according to the new study. The new research predicts the change in storms will begin to emerge at the end of the century, from 2081 to 2100, with stronger storms appearing in the autumn.

Cyclone Numa, a rare medicane, curls over the Ionian Sea with a hurricane-like structure and sustained winds of 101 kilometers (63 miles) per hour, equivalent to tropical storm intensity. Flooding from the storm killed 21 people in Greece and caused damage estimated in the tens of millions.
Credit: NASA Worldview Snapshots

Medicanes arise in the Mediterranean when an extratropical cyclone blunders into the sea basin and transforms to a more tropical cyclone-like storm, with symmetric structure and convective clouds circling a warm core and an eye-like center.

Extratropical cyclones are driven by temperature contrasts across the storm front and blow strongest near the tropopause, about 12 kilometers (8 miles) above Earth’s surface. In contrast, tropical cyclones’ strongest winds are at the surface, where they can often be more damaging. Tropical cyclones are called hurricanes in the Atlantic and typhoons in the Pacific west of the International Date Line.

Currently, medicanes of tropical depression intensity or stronger occur a few times a year, but rarely reach the strength of a Category 1 hurricane. Tropical depressions have a maximum wind speed of 63 kilometers (39 miles) per hour.

“In their mature stage, medicanes are similar to hurricanes in the Caribbean,” said Juan J. González-Alemán, a researcher at the University of Castilla-La Mancha in Toledo, Spain, and the lead author of the new study. “Even under an intermediate climate scenario, we will likely see a lower frequency of these storms, but when they occur, they will have a higher chance of reaching Category 1.”

Other studies have predicted an increase in storm intensity in the Mediterranean with climate change, but the new study is the first to utilize a global coupled model, which combines atmospheric and ocean circulation models. Including ocean processes in the model is important for a realistic representation of medicanes, González said. The new model is more robust and addresses the precipitation intensity, tropicality and power dissipation indexes of future storms, according to González.

Tropicality is an indicator of how hurricane-like the storm’s structure is, and power dissipation describes its potential for destruction. The new study predicts both tropicality and power dissipation will increase in magnitude in the coming century. The model could not rule out the possibility that storms may reach Category 2 strength, although the main finding is for strong Category 1 storms, González said.
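For reference, the power dissipation index used in the tropical cyclone literature (Emanuel’s definition; the paper may compute a variant) accumulates the cube of the maximum sustained wind speed over a storm’s lifetime:

$$\mathrm{PDI} = \int_0^{\tau} V_{\max}^3 \, dt,$$

so even modest increases in peak winds translate into disproportionately larger destructive potential.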

A tropical storm becomes a Category 1 hurricane when sustained winds exceed 119 kilometers (74 miles) per hour and a Category 2 at 154 kilometers (96 miles) per hour.

“The Mediterranean Sea is over-populated, so though smaller than Caribbean hurricanes, the impact on society from medicanes may be worse. They have a high chance of impacting people and societal interests,” González said.

— Liza Lester is a public information specialist and writer at AGU. Follow her on twitter @lizalester.


Cracks herald the calving of a large iceberg from Petermann Glacier

Wed, 02/06/2019 - 15:13

The location of Petermann Glacier in Northwest Greenland.
Credit: NASA/Jesse Allen and Robert Simmon.

By Folke Mertens

Cracks in the floating ice tongue of Petermann Glacier in the far northwest reaches of Greenland indicate the pending loss of another large iceberg.

Glaciologists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) in Bremerhaven, Germany report in a new study that the glacier’s flow rate has increased by an average of 10 percent since the calving event in 2012, during which time new cracks have also formed – a quite natural process.

However, the experts’ model simulations also show that if these ice masses do break off, Petermann Glacier’s flow rate will likely accelerate further and transport more ice out to sea, with corresponding effects on global sea level. The new study was recently published in the Journal of Geophysical Research: Earth Surface, a journal of the American Geophysical Union.

Located in the outermost northwest corner of Greenland, Petermann Glacier is one of the most prominent glaciers in the region: partly because its catchment encompasses four percent of the Greenland Ice Sheet, and partly because it is one of only three glaciers in Greenland with a floating ice tongue. That tongue currently extends roughly 70 kilometers (43 miles) into Petermann Fjord. Cracks 12 kilometers (8 miles) upstream of the previous glacier edge indicate that, in the near future, another major iceberg could calve from Petermann Glacier.

Glaciologists at AWI in Bremerhaven came to this conclusion after analysing satellite imagery of the glacier from the past ten years. “The satellite data shows that Petermann Glacier had a flow speed of roughly 1,135 meters [3,720 feet] per year in the winter of 2016,” said AWI glaciologist Niklas Neckel, co-author of the new study. “That equates to an acceleration of about 10 percent in comparison to the winter of 2011, and we asked ourselves what was responsible for the increased speed.”

Experts consider the acceleration of Petermann Glacier to be an important climate change signal. Unlike the glaciers in southeast and southwest Greenland, those in the island’s northern reaches have remained largely stable; now that appears to have changed, according to the study’s authors.

Fjord sidewalls have a stabilizing effect on the glacier

The researchers subsequently simulated the glacier’s observed ice transport in a computer model and were able to confirm that the loss of a large iceberg in August 2012 is what triggered the acceleration. “On their way to the sea, the glacier’s ice masses rub along the rock walls that enclose the fjord to the left and right,” said AWI ice modeller and study first author Martin Rückamp. “If a major iceberg breaks away from the end of the glacier’s tongue, it will reduce the tongue’s overall length, and with it, the route along which the ice masses scrape against the stone. This in turn limits the walls’ braking effect, so that the glacier begins flowing faster.”

Left: ASTER satellite image of Petermann Glacier acquired shortly after the 2012 calving event. Right: Sentinel-2 image acquired on 31 July 2018, showing newly developing fractures in the terminus region.
Credit: NASA/JPL and European Space Agency.

The computer model predicts that a new calving event would produce a similar acceleration. “We can’t predict when Petermann Glacier will calve again, or whether the break would actually occur along the cracks we identified in the ice tongue,” Rückamp said. “But we can safely assume that, if it does come to a new calving event, the tongue will retreat considerably, and the rock’s stabilizing effect will further decline.”

An effect of climate change?

To what extent Petermann Glacier’s accelerated ice transport is due to various global warming impacts is a question that the experts haven’t yet investigated in depth.

“We now know that the loss of icebergs increases the glacier’s flow rate. In addition, we’ve observed that calving events on Petermann Glacier are happening more frequently. But the question of whether these changes are due to the warming atmosphere over Greenland, or to warmer seawater, isn’t an aspect that we could investigate using the satellite data,” Neckel said.

Since 2002, the Greenland Ice Sheet and the island’s glaciers have lost an average of 286 billion tonnes of ice per year. This loss of mass is above all due to intensified surface melting in the summer.

Iceberg calving has also increased: Greenland’s glaciers are now losing a quarter more ice in the form of calving events than in the comparison period (1960 to 1990). Potential causes include warmer ocean currents, which melt the glaciers’ floating tongues from below, and meltwater, which percolates into cracks and crevasses until it reaches the glacier bed, where it acts like a lubricant, causing ice flows to accelerate.

Folke Mertens is a press officer in AWI’s Communications and Media Relations department. Contact her at +49 (0)471 4831-2007 or medien@awi.de.

The post Cracks herald the calving of a large iceberg from Petermann Glacier appeared first on GeoSpace.

Climate change may push Santa Ana fire season into winter months

Fri, 02/01/2019 - 19:22

By Liza Lester

Dry Santa Ana winds blowing downslope from the Great Basin through the mountains to the coast have historically fanned fires on the Southern California coast in fall, during the driest part of the year. Although the winds peak in winter, winter rains typically wet vegetation and damp down the fire risk.

Santa Ana winds blew smoke from the fast-moving Thomas, Creek and Rye fires over the Pacific on 5 December 2017. Wind gusts topped 112 kilometers (70 miles) per hour that week. An extended span of dry weather left the ground parched well into Southern California’s “wet” season, which starts in October.
Credit: NASA Earth Observatory

Now, new climate modeling published in AGU’s journal Geophysical Research Letters predicts the warm, dry Santa Ana winds will be less frequent, and to some extent less intense, during the 21st century. The change is predicted to be greatest in spring and fall, with winds remaining strong during the peak season in December and January.

A pressure difference between the inland Great Basin and the Pacific Ocean drives the Santa Ana winds. This difference is predicted to diminish as the climate grows warmer. The Santa Ana season, currently from September through May, will effectively shrink, according to the new study.

Independent studies have predicted a parallel decrease in rain during spring and fall, leaving vegetation dry until winter. This combination of changes in the Santa Ana winds and rain may push Southern California’s coastal fire season from fall into the early winter months, according to the new study.

With less frequent autumn winds, fire risk is predicted to decrease during October and November under predicted climate warming, but dry fuels will remain on the ground when the worst of the winds arrive in December. Southern California’s fire future may look more like 2017, according to the authors of the new study, when the Thomas Fire and several other fires ignited in December. The Thomas Fire burned 281,893 acres of Ventura and Santa Barbara Counties.

— Liza Lester is a public information specialist and writer at AGU. Follow her on twitter @lizalester.

The post Climate change may push Santa Ana fire season into winter months appeared first on GeoSpace.

Climate change could make corals go it alone

Thu, 01/31/2019 - 22:31

By Monica Kortsha

A colorful coral reef in the Red Sea offshore of the Sinai Peninsula. According to new research, the coral communities of the future could be much more desolate, with the corals best suited to survive climate change living solitary lives.
Credit: Mal B/ Flickr.

Climate change is bad news for coral reefs around the world, with high ocean temperatures causing widespread bleaching events that weaken and kill corals. However, new research finds corals with a solitary streak – preferring to live alone instead of in reef communities – could fare better than their group-dwelling relatives.

The findings, which could give clues about where modern reef conservation efforts should be focused, are based on a survey of coral species that survived a period of warming in Earth’s past that resembles the climate change of today. And while the research suggests that corals may cope better with climate change than expected, the isolated lifestyles of the survivors could mean that the coral ecosystems of the future will be bleak.

“Although corals themselves might survive, if they’re not building reefs, that’s going to cause other problems within the ecosystem,” said Anna Weiss, a Ph.D. candidate at the University of Texas Jackson School of Geosciences, who led the research. “Reefs support really big, diverse communities.”

The environment isn’t the only thing facing a bleak future. The coral species with the best odds of survival are drab in comparison to colorful reef corals.

The research was published in the AGU journal Paleoceanography and Paleoclimatology on Jan. 21. Weiss co-authored the paper with her adviser Rowan Martindale, an assistant professor at the Jackson School. 

The study examined coral species that lived about 56 million years ago during the transition from the late Paleocene to the early Eocene, a time interval that lasted about 200,000 years and included spikes in temperature and atmospheric carbon dioxide. The spikes created global temperatures about 14 degrees Fahrenheit (8 degrees Celsius) warmer than today’s and made oceans more acidic. The researchers tracked coral over this period for insight into how corals living today might respond to contemporary climate change.

A modern day coral Flabellum pavoninum (right) and a fossilized relative that lived during a period of rapid climate change about 56 million years ago. Corals of this type are solitary and are thought to be better suited to survive warming and acidification caused by climate change than reef-forming corals.
Credit: opencage/Wikimedia, Anna Weiss/ University of Texas Jackson School of Geosciences.

They carried out the work using an international fossil database. The database includes information about when hundreds of coral species lived and their physical traits such as how a species ate, the type of environment where it lived, how it reproduced and whether it was able to form colonies. The research revealed that at the global level, solitary coral species increased in diversity during the warm period. They also found that certain traits that probably helped corals cope with the effects of climate change were associated with coral survival.

One of the traits is catching food independently, rather than getting nutrients from the heat-sensitive algae that live in some corals’ tissues but leave when the water gets too warm, causing coral bleaching. Another trait is preferring to live on stony seafloor bottoms, where the water is cooler, rather than on carbonate rock in warmer, shallower areas. Researchers said that understanding which traits are connected to coral survival in the past could be a useful lens for predicting how corals today might respond to ongoing warming, and could help focus conservation efforts.

“Conservationists want to know what traits might help different species survive global change. If we can find patterns to survival, we may be able to help our reefs do better today and in the coming years,” said Martindale.

Carl Simpson, a paleobiologist and assistant professor at the University of Colorado Boulder who was not involved with the research, said it was interesting to see how different coral traits were linked to different survival outcomes.

“It can be a little bit of a subtle thing, because you would think that they’re all susceptible to environmental change and warming and acidification,” he said. “But it turns out that there’s enough variety in the way that they live that they actually respond differently.”

The finding that corals at the global level were able to adapt to climate change in the past suggests they may be able to do so again in the future. However, Weiss notes that perspective is a “best-case scenario.” Warming during the Paleocene happened over thousands of years, whereas today’s warming is occurring over decades to centuries. It’s unknown whether corals will be able to cope with the rapid pace of change happening in the present. Weiss said more research exploring how specific communities of corals, rather than corals as a whole, responded to warming in the past could help improve scientists’ understanding of how corals in different environments around the world might respond to climate change today.

— Monica Kortsha is a science writer at the University of Texas Jackson School of Geosciences. This post originally appeared as a press release on the UT website. 

The post Climate change could make corals go it alone appeared first on GeoSpace.

Beach building is keeping the Atlantic Coast from going under

Thu, 01/31/2019 - 14:00

By Joshua Learn

The artificial build-up of beaches is buffering the U.S. Atlantic coastline against the effects of sea level rise, but that benefit may not last as sand gets harder to come by in the coming decades.

To analyze patterns of shoreline change along the Atlantic Coast, researchers used U.S. Geological Survey records from 1830 to 2007 along more than 2,500 kilometers (1,553 miles) of shoreline, from Massachusetts to South Florida.

The study’s authors were surprised to find a stark contrast between historical and recent rates of shoreline change. They expected to see a broad pattern of intensified erosion; instead, they saw the opposite. Their results indicate that after 1960, shoreline position tended to shift seaward at an average rate of 5 centimeters (2 inches) per year.
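As an illustration of how such a rate can be derived (the paper’s own method is not detailed in this post), a least-squares fit of surveyed shoreline positions against survey year yields an average rate of change, with positive values meaning seaward movement. The transect data below are invented.

```python
import numpy as np

# Hypothetical shoreline positions at one transect, in meters seaward
# of a fixed baseline, from surveys after 1960.
years = np.array([1962, 1975, 1988, 1997, 2007])
positions_m = np.array([10.0, 10.9, 11.4, 11.8, 12.3])

# Least-squares linear rate of shoreline change (m/yr), reported in cm/yr.
rate_m_per_yr, _ = np.polyfit(years, positions_m, 1)
print(f"{rate_m_per_yr * 100:.1f} cm/yr seaward")  # ~5 cm/yr, as in the study
```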

“The weird thing was that in these long-term records, we would have expected to see a lot more erosion than we’re seeing. The question was why,” said Eli Lazarus, a lecturer in geomorphology at the University of Southampton in the U.K. and one of the authors of a new study published in Earth’s Future, a journal of the American Geophysical Union.

Contractors sift through sand delivered via truck to Buckroe Beach, in Hampton, Va., on Dec. 21, 2011, to remove any unwanted material prior to final placement on the beach. (Credit: U.S. Army photo/Robert Huntoon, source: https://www.army.mil/article/71602/hampton_corps_renourish_buckroe)

The authors turned to a database of beach nourishment projects maintained by the Program for the Study of Developed Shorelines at Western Carolina University. Beach nourishment, or the importing of sand to buffer shorelines from storms and build up beaches for tourism, became the predominant form of coastal protection in the U.S. in the 1960s, according to reports from the U.S. National Research Council. Using the locations and volumes of nourishment projects in the database, the authors compared these records to rates of shoreline change and coastal population density.

The study’s authors found compelling evidence to suggest beach nourishment may be masking the true measure of erosion along the U.S. Atlantic Coast.

Beach nourishment does not occur everywhere. Natural and sparsely developed reaches of coast may still show signs of significant erosion, according to the study. Furthermore, “Sea level rise doesn’t automatically mean that you’re going to see shoreline erosion everywhere,” Lazarus said. Other processes factor in, such as the redistribution of sediment alongshore by waves.

But where beach nourishment does happen, waves may push the imported sand up or down the coast, spreading the effects of nourishment alongshore. That means even coastal areas where humans didn’t directly add sand could benefit from nourishment at a beach nearby.

U.S. Army Corps of Engineers’ contractors pump sand dredged from the bottom of the Chesapeake Bay up to Norfolk, Virginia’s Ocean View Beach. (U.S. Army photo/Patrick Bloodgood. Source: https://www.nao.usace.army.mil/Media/Images/igphoto/2001705368/)

Lazarus notes that while these beach nourishment projects along the U.S. Atlantic Coast were undertaken at a local level, they all add up to an incredible feat of humans reshaping the natural landscape across a very large spatial scale. He calls it a “geomorphic example of geo-engineering.”

“It’s an absolutely massive intervention if you look at it all together,” he said, adding that humans move around more earth materials than all geomorphic processes combined. “We are an agent of geomorphic change just the way you would think about rivers or glaciers or wind or waves.”

While beach nourishment may appear to keep sea level rise at bay, Lazarus believes some communities won’t be able to continue at the current pace.

“Sand is getting less and less easy to come by,” he said. “The cost of these beach nourishment projects keeps going up.”

Local communities that rely on tourism to support their economies may suffer in the future if they can’t afford to import sand, Lazarus said.

“As that sand availability dwindles, these coastal towns are going to be in real straits,” he said, predicting a gap will develop between wealthier beach communities and those with less means.

Joshua Learn is a freelance science writer based in Washington, DC.

The post Beach building is keeping the Atlantic Coast from going under appeared first on GeoSpace.

New study estimates amount of water in near-Earth space rocks

Wed, 01/30/2019 - 15:39

By Larry O’Hanlon

Scientists have come out with an estimate of how much water might be available in near-Earth asteroids.

A new study in the Journal of Geophysical Research: Planets, a publication of the American Geophysical Union, suggests there are between 26 and 80 hydrated near-Earth asteroids larger than a kilometer in diameter. Of those, 8 to 26 of the asteroids are easier to get to than the surface of the Moon. The new study also estimates there are between 350 and 1,050 smaller hydrated objects easier to reach than the Moon.

The study’s authors estimate there are between 400 billion and 1,200 billion kilograms (440 million to 1.3 billion U.S. tons) of water that could be extracted from the minerals in these asteroids. In liquid terms, that’s between 400 billion and 1,200 billion liters (roughly 100 billion to 320 billion U.S. gallons) of water – enough to fill between 160,000 and 480,000 Olympic-sized swimming pools.
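The swimming-pool comparison is easy to verify. Here is a quick sketch of the arithmetic, assuming water’s density of about 1 kilogram per liter and the conventional 2.5 million liters for an Olympic-sized pool (both our assumptions, not figures from the paper):

```python
# Estimated extractable water in hydrated near-Earth asteroids (from the study)
low_kg, high_kg = 400e9, 1200e9   # 400 to 1,200 billion kilograms

# At ~1 kg per liter, mass in kilograms equals volume in liters.
OLYMPIC_POOL_L = 2.5e6            # assumed volume of an Olympic-sized pool

print(low_kg / OLYMPIC_POOL_L, high_kg / OLYMPIC_POOL_L)
# -> 160000.0 480000.0, matching the article's range of pools
```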

The near-Earth asteroid Bennu is 500 meters (1,600 feet) wide and contains hydrated minerals, according to scientists working on the OSIRIS-REx spacecraft. It could one day be mined for water by future explorers. Credit: NASA/Goddard/University of Arizona

Asteroids are rocks that orbit the Sun, generally between Mars and Jupiter, while comets, made of rock and ice, originate in the outer solar system. Both are composed of material left over from the early formation of the solar system. Approximately 19,000 near-Earth asteroids of various sizes have been discovered.

Water in asteroids can provide hints about the nature of the early solar system, including clues about where Earth’s water and the Moon’s polar ice came from. It could also supply water and fuel to future interplanetary space missions, according to the authors of the new study.

“We know that there are minerals with water in them on asteroids. We know that from meteorites that have fallen to the ground,” said Andrew Rivkin of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland. Rivkin is the lead author of the new paper with F. E. DeMeo of MIT. “It’s also possible that Earth’s water came largely from impacts.”

Meteors are bits of asteroids and comets that enter and burn up in the Earth’s atmosphere, causing meteor showers. If these bits are large enough to make it to the ground, the surviving chunks of space rock are called meteorites.

For water, follow the iron

Piecing together estimates of how many near-Earth asteroids are rich in hydrated minerals – minerals containing water or hydroxide – is tricky, Rivkin said.

Using telescopes to detect the spectral signals of hydrated minerals in the light from asteroids should be straightforward. Water and hydroxides absorb very specific wavelengths of infrared light, which leaves dips in a hydrated asteroid’s spectrum – what are called absorption bands. Unfortunately, in this case, those absorption bands are the same parts of the spectrum that are either filtered out by most ground-based telescopes or are flooded with signals from water in Earth’s atmosphere.

“Ideally we’d do it in space, but don’t have that capability right now,” Rivkin said. “So instead, we look for absorption due to iron in hydrated minerals.”

Previous research found that asteroids showing the signal of a particular kind of oxidized iron also contained hydrated minerals, so that oxidized iron can serve as a proxy for these hydrated minerals. “Wherever we see the water band, we also see this specific oxidized iron band,” Rivkin said.
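A standard way to quantify an absorption feature like this is a “band depth”: compare the reflectance at the band center with a straight-line continuum drawn between the band’s shoulders. The sketch below is a generic illustration, not the study’s pipeline; the wavelengths (centered near 0.7 microns, where this oxidized-iron feature is usually discussed) and the example spectrum are our assumptions.

```python
import numpy as np

def band_depth(wl, refl, shoulders=(0.55, 0.85), center=0.70):
    """Depth of an absorption band relative to a linear continuum drawn
    between two shoulder wavelengths (all wavelengths in microns).
    Returns ~0 for a featureless spectrum, larger values for deeper bands."""
    r_left, r_center, r_right = np.interp([shoulders[0], center, shoulders[1]], wl, refl)
    # Continuum reflectance at the band center, interpolated between shoulders
    frac = (center - shoulders[0]) / (shoulders[1] - shoulders[0])
    continuum = r_left + frac * (r_right - r_left)
    return 1.0 - r_center / continuum

# Invented example: a dark asteroid spectrum with a shallow dip near 0.7 microns
wl = np.linspace(0.5, 0.9, 41)
refl = 0.05 * (1 + 0.2 * (wl - 0.5)) - 0.002 * np.exp(-((wl - 0.7) / 0.05) ** 2)
print(f"0.7-micron band depth: {band_depth(wl, refl):.3f}")
```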

The study’s authors combined these iron signals with information about known trajectories of small bodies near Earth to come up with their estimate of the amount of water in near-Earth asteroids.

To get a better estimate would probably require a space telescope, like the James Webb Space Telescope, which is scheduled to launch in 2021, said Rivkin.

Water is expected to be a hot commodity in space, as it is essential for human survival and can be used to propel spacecraft to other parts of the solar system, or to make propellant to refuel Earth-orbiting satellites.

“It’s been argued that it makes sense to mine water,” Rivkin said.

— Larry O’Hanlon is a freelance science writer, editor and online producer. He manages the AGU Blogosphere. 

The post New study estimates amount of water in near-Earth space rocks appeared first on GeoSpace.

Two- to three-fold increase in heatwave occurrence and severity seen directly in UK temperature records

Tue, 01/22/2019 - 13:46

By Peter Thorley

Lyme Regis Beach in the U.K. on an August afternoon.
Credit: Wikimedia Commons

A two- to three-fold increase in heatwave activity in the United Kingdom since the late 19th century has been identified in a new analysis of historical daily temperature data.

Scientists from the Department of Physics at the University of Warwick and from the London School of Economics examined data from the Central England Temperature (CET) record, the longest available instrumental record of temperature in the world.

Their results show that although heatwaves have occurred in the past, their frequency, duration, and severity have increased. The analysis, published in Geophysical Research Letters, a journal of the American Geophysical Union, is a new take on one of the few continuous, long-term temperature time series in existence.

The conclusions do not rely on identifying and counting heatwaves directly but instead use observations of daily temperatures to show how the likelihood of different temperatures has changed. By applying a method called crossing theory to these probabilities, the scientists have provided information on the changing relationship between frequency, duration and intensity of heatwaves. This allows for more robust statements about how climate change has affected the characteristics of the heatwaves we experience.

Heatwaves are by definition rare events, and estimating their likely severity and frequency based on the past is a challenge. However, as hotter days become more frequent, heatwaves become more likely and longer lasting on average. This work quantifies the link between more frequent observations of hotter days and increased heatwave occurrence rates, intensity and average duration.

As there are several definitions for heatwaves, this work defines a heatwave as a number of successive days on which the maximum daily temperature is above a threshold. The threshold of interest depends on what is societally important, so 28 degrees Celsius (82 degrees Fahrenheit), the UK guideline threshold for the overheating of buildings, is useful for analyzing the Central England Temperature record; in a hotter country a higher threshold would be more relevant.

By focusing on occurrences of higher temperatures, the researchers could identify changes in heatwave activity over time and in the proportion of time spent in a heatwave. For heatwaves at temperatures exceeding 28 degrees Celsius, they found a two- to three-fold decrease in the average return period (the average time between two successive occurrences) of a six-day-long heatwave and a two- to three-fold increase in the duration of a heatwave with an average five-year return period. The temperature threshold of a six-day-long heatwave with a five-year return period has increased, from typically below 28 degrees Celsius to typically above it.
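These definitions translate directly into code. The sketch below is our illustration, not the authors’ crossing-theory implementation: it scans a daily maximum-temperature series for runs above the 28 degrees Celsius threshold and estimates the average return period of heatwaves of at least six days as the record length divided by the number of such events. The synthetic data and the 92-day summer are assumptions.

```python
import numpy as np

def heatwave_lengths(tmax, threshold=28.0):
    """Lengths (in days) of all runs of consecutive days with
    maximum temperature above the threshold."""
    above = np.concatenate(([0], (tmax > threshold).astype(int), [0]))
    edges = np.flatnonzero(np.diff(above))
    return edges[1::2] - edges[::2]   # run ends minus run starts

def mean_return_period(tmax, min_length=6, threshold=28.0, days_per_year=92):
    """Average years between heatwaves of at least min_length days,
    treating the series as one summer (days_per_year days) per year."""
    n_events = np.sum(heatwave_lengths(tmax, threshold) >= min_length)
    n_years = len(tmax) / days_per_year
    return np.inf if n_events == 0 else n_years / n_events

# Toy example: 50 synthetic warm "summers" of daily maximum temperatures
rng = np.random.default_rng(0)
tmax = rng.normal(loc=26, scale=4, size=50 * 92)
print(f"mean return period: {mean_return_period(tmax):.1f} years")
```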

A two- to three-fold increase in heatwave activity since the late 1800s is found in the Central England Temperature record. Crossing theory provides estimates of average heatwave properties from the observed cumulative distribution function, plotted versus time in years, where a heatwave is defined as consecutive days with maximum summer daily temperature above a threshold. For heatwaves at a threshold of 28 degrees Celsius, the UK threshold for building overheating, there is a two- to three-fold decrease in the return period of a six-day-long heatwave (panel a) and a two- to three-fold increase in the duration of a heatwave with an average five-year return period (panel b). The temperature threshold of a six-day-long heatwave with a five-year return period has increased, from typically below 28 degrees Celsius to typically above it (panel c).
Credit: AGU

Climate scientists often use computer models to study recent heatwave activity. This study provides an additional method, based on observational data, that complements established approaches and gives scientists a baseline against which to compare recent heatwave activity.

Lead author Professor Sandra Chapman from the University of Warwick said: “Heatwaves are by definition rare events, so that putting numbers on their frequency, duration and severity is a challenge. However, as hotter days become more frequent, heatwaves will on average become more likely and longer lasting and if we have the data, this is something we can quantify.

“How these temperature extremes are changing may not simply follow changes in the average temperature. We have seen intense heatwaves in the UK several times before, but at the same time we see heatwaves becoming more intense and severe on average,” she added. 

— Peter Thorley is a media relations manager at the University of Warwick. This post originally appeared as a press release on the University of Warwick’s website. 

The post Two- to three-fold increase in heatwave occurrence and severity seen directly in UK temperature records appeared first on GeoSpace.

New study quantifies deep reaction behind “superdeep” diamonds

Wed, 01/16/2019 - 17:28

By Joshua Rapp Learn

The Cullinan Diamond, the largest gem-quality diamond ever found, was discovered in South Africa in 1905. Superdeep diamonds have been uncovered at the same mine. Credit: Wikimedia Commons.

Whether they are found in an engagement ring or an antique necklace, diamonds usually generate quick reactions from their recipients. Now, new research shows deep inside the Earth, fast reactions between subducted tectonic plates and the mantle at specific depths may be responsible for generating the most valuable diamonds.

The diamonds mined most often around the world form in the Earth’s mantle at depths of around 150-250 kilometers (93-155 miles). They are created under extreme pressure and at temperatures of at least 1,050 degrees Celsius (1,922 degrees Fahrenheit). Only a small share of these diamonds reaches mineable regions, since most are destroyed on the way to the Earth’s crust via deep-source volcanic eruptions.

But a tiny portion of mined diamonds, called sub-lithospheric or superdeep diamonds, form at much greater depths than the others, mostly in two rich zones at depths of 250-450 kilometers (155-279 miles) and 600-800 kilometers (372-497 miles). These diamonds stand out due to their compositions, which occasionally include materials from the deep Earth like majorite garnet, ferropericlase and bridgmanite.

“Although only composing 1 percent of the total mined diamonds, it seems lots of large and high-purity diamonds are superdeep diamonds, so they have good value as gems,” said Feng Zhu, the lead author of the new study in Geophysical Research Letters, a journal of the American Geophysical Union. Zhu was a post-doctoral geology researcher at the University of Michigan when he performed the research.

No previous theory has completely explained why very few diamonds found near the surface come from the region at depths of 450-600 kilometers (280-373 miles) – the gap between the two zones where most superdeep diamonds are formed.

The new study seeks to explain this phenomenon. Zhu, now a post-doctoral researcher at the University of Hawai’i, and his colleagues believe the two superdeep areas where diamonds are formed are rich in the gems due to high production rates. The new study explains what drives the diamond-producing reaction in some areas and what slows it down in other areas.

Diamond formation

According to the authors, diamonds can form anywhere in the mantle, which extends from about 35 to 2,890 kilometers (21-1,800 miles) below the Earth’s surface. However, humans rarely see most of the diamonds formed. Very few diamonds survive the volcanic trip to the Earth’s crust where we can sample them.

That means the chances of finding diamonds from deep regions in the mantle, which produce relatively few of the gems, is extremely small. Only 1 percent of mined diamonds come from superdeep regions.

“In our hypothesis, the production of diamonds at any depth in the mantle is possible, it’s just the production rate is different, so they have a different chance to be sampled in the crust,” Zhu said.

Creating diamonds

To mimic the extreme pressures deep inside planets, the study’s authors used diamond anvil cells and a 1,000-ton multi-anvil apparatus at the University of Michigan. Both devices allow researchers to compress sub-millimeter-sized samples to extreme pressures. The team compressed magnesium carbonate powder with iron foil at extreme temperatures and managed to create minuscule diamond grains visible under a scanning electron microscope.

They found that when conditions are right, diamond grains can form in as little as a couple of minutes, and never took longer than a few hours to form, although the growth of gem diamonds may take much longer in an actual melt-fluid environment.

In the shallower of the two zones rich in superdeep diamond formation, 250-450 kilometers (155-279 miles) down, a subducting tectonic plate pushes down into the Earth’s mantle. This supplies plenty of carbonate, which, combined with iron from the mantle, creates “factories on a conveyor belt” for diamonds, the authors said.

High temperatures promote the reactions that form diamonds, but pressure does the opposite. At depths roughly 475 kilometers (295 miles) below the surface, the pressure increases and the reactions slow down drastically, the authors said. That’s why few diamonds found near the Earth’s surface come from between 450 and 600 kilometers (280-373 miles) deep.

“When your pressure reaches the diamond stable region, it will form. But when you increase pressure it will form at lower rates. You have a trade off there,” Zhu said.

One exception to this rule is the deeper region, 600-800 kilometers (372-497 miles) beneath the surface. There, the accumulation of carbonate from stagnating tectonic slabs pushing downwards compensates for the added pressure. So while the reactions slow down, higher temperatures and an abundance of carbonate make for a diamond-rich region.

Zhu said the new study adds to scientists’ understanding of the Earth’s mantle, about which relatively little is known for sure.

“Superdeep diamond inclusions bring us the only mineral samples from the Earth’s deep mantle,” he said. “Seeing is believing, and these inclusions provide a solid ground for the studies on the inaccessible mantle.”

—Joshua Rapp Learn is a freelance science writer based in Washington, DC.

 

The post New study quantifies deep reaction behind “superdeep” diamonds appeared first on GeoSpace.

New research shows significant decline of glaciers in Western North America

Tue, 01/15/2019 - 23:27

By Andrea Johnson

Evidence of recent glacier retreat can be seen in this aerial photo of the terminus of the Saskatchewan Glacier, in Jasper National Park, Canadian Rockies.
Credit: J. Shea

Alpine glaciers have existed in North America for thousands of years. They represent important, frozen reservoirs for rivers – providing cool, plentiful water during hot, dry summers or during times of prolonged drought.

Glaciers are faithful indicators of climate change, since they shrink and grow in response to changes in precipitation and temperature. The first comprehensive assessment of glacier mass loss for all regions of western North America (excluding Alaskan glaciers) suggests that ice masses throughout the region are in significant decline: glaciers have been losing mass throughout the first two decades of the 21st century.

The study, entitled “Heterogeneous changes in western North America glaciers linked to decadal variability in zonal wind strength,” was published today in Geophysical Research Letters, a journal of the American Geophysical Union.

The research team included scientists at the University of Northern British Columbia (UNBC), the University of Washington, NASA’s Jet Propulsion Laboratory, Ohio State University and the Université de Toulouse in France.

The research team used archives of high-resolution satellite imagery to create over 15,000 digital elevation models covering glaciers from California to the Yukon. These elevation models were then used to estimate total glacier mass change over the period of study. Over the period 2000-2018, glaciers in western North America lost 117 gigatonnes of water, or about 120 cubic kilometers – enough, spread over the study period, to submerge an area the size of Toronto under 10 meters of water every year. Compared to the first decade of the 21st century, the rate of ice loss increased fourfold over the last 10 years.
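The Toronto comparison is a unit-conversion exercise: a gigatonne of water occupies about one cubic kilometer, so the total loss divided by the study period and by the city’s area gives a yearly depth. A quick check, assuming a land area for Toronto of roughly 630 square kilometers (our figure, not the article’s):

```python
total_gt = 117                  # total mass loss 2000-2018, from the study
volume_km3 = total_gt * 1.0     # 1 gigatonne of water occupies ~1 km^3
years = 18                      # length of the study period
toronto_area_km2 = 630          # assumed land area of Toronto

depth_m_per_year = volume_km3 / years / toronto_area_km2 * 1000  # km -> m
print(f"{depth_m_per_year:.0f} m of water over Toronto per year")  # ~10 m
```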

Satellite (Landsat) image showing the snout of Klinaklini Glacier in late summer 2000.

Satellite (Landsat) image showing the snout of Klinaklini Glacier in late summer 2018.

UNBC’s team involved in the study included Brian Menounos, a professor of Geography and Canada Research Chair in Glacier Change; Assistant Geography Professor Joseph Shea; and two PhD students, Ben Pelto and Christina Tennant.

“Our work provides a more detailed picture of the current health of glaciers and ice outside of Alaska than we’ve ever had before,” said Menounos, the lead author of the paper. “We determined that mass loss dramatically increased in the last 10 years in British Columbia’s southern and central Coast Mountains, due in part to the position of the jet stream being located south of the US-Canada border.”

The jet stream is an area of fast-flowing upper winds that can steer weather systems over mountains and nourish glaciers with precipitation, mostly in the form of snow that builds up over time and later becomes ice.

“Frequent visitors to America’s glacierized National Parks can attest to the ongoing glacier thinning and retreat in recent decades. We can now precisely measure that glacier loss, providing a better understanding of downstream impacts,” said co-author David Shean of the University of Washington. “It’s also fascinating to see how the glaciers responded to different amounts of precipitation from one decade to the next, on top of the long-term loss.”

— Andrea Johnson is a communications officer at the University of Northern British Columbia. This post originally appeared as a press release on the UNBC website. 

The post New research shows significant decline of glaciers in Western North America appeared first on GeoSpace.
