Earth & Space Science News

Hail Causes the Most Storm Damage Costs Across North America

Thu, 08/16/2018 - 12:20

On 18 June, Katy Human and her family watched as tiny chunks of ice fell from the sky.

At first, Human and her family, of Louisville, Colo., were delighted.

“Hail is cool and relatively rare, so we all went out on the front porch to watch it,” she said. However, “Over the next 10 minutes, it got bigger and bigger.” The family backed up, pressing against the wall of their house as golf- and tennis-ball-sized hailstones pummeled their yard.

Leaves and hailstones litter a car that was smashed in a hailstorm on 18 June in Louisville, Colo. Resident Katy Human said that her family’s two vehicles, which were parked outside during the storm, were totaled. Credit: Katy Human

“We have a garage, but both cars were out,” she said. “It went from ‘Oh, my God, isn’t this beautiful’ to ‘Holy crap, this is doing a lot of damage.’”

Human’s family was witness to something that people in the insurance industry have long known. Tornadoes and hurricanes may grab headlines, but when it comes to property damage, the biggest extreme weather culprit is an often overlooked weather phenomenon: hail.

Hailstorm destruction exceeds $10 billion each year across North America, accounting for almost 70% of the property damage in insurance claims from severe storms, said Ian Giammanco, lead research meteorologist at the Insurance Institute for Business and Home Safety’s research center in Richburg, S.C. But, he said, funding for hail studies is limited, and the phenomenon is often treated by the public as a curiosity.

“It’s the Rodney Dangerfield of perils,” he said of hail. “It just doesn’t get any respect.”

Giammanco spoke Tuesday on a panel of scientists at the North American Workshop on Hail & Hailstorms. The 3-day symposium, the first of its kind in the United States, was hosted by the National Center for Atmospheric Research (NCAR) in Boulder, Colo.

Making Hail

The workshop occurred just a week after a severe hailstorm pelted Colorado Springs, about an hour’s drive from Boulder, with hailstones reported to be as large as softballs. The storm caused millions of dollars in property damage, injured more than a dozen people, and killed five animals in the Cheyenne Mountain Zoo.

Colorado and the central United States are hot spots for hailstorms, explained panelist Andreas Prein, an NCAR project scientist and climate modeling expert with a focus on severe storms. Colloquially called “hail alley,” the area has the right conditions for severe thunderstorms that can produce large hailstones.

A severe hailstorm needs unstable air with a strong updraft, differing wind speeds and wind directions, air that’s humid close to the ground and dry at higher altitudes, and a freezing point that’s relatively close to the ground.

These conditions work together to form the largest hailstones, by drawing moisture up from the ground and quickly cooling it into hailstones; circulating those hailstones around and around within the storm, allowing them to pick up moisture and grow larger; and giving them a short distance to fall back to Earth, so they don’t melt on the way down, Prein said.

Climate Effects Unclear

Hail damage is expected to increase in coming years, largely driven by population growth and suburban sprawl. Sprawl means more buildings, and thus “bigger targets for hailstorms to hit,” Giammanco said.

But humans are also thought to be influencing hail itself, through climate change. The effect, however, isn’t straightforward, influencing hail-forming conditions in varied and sometimes contradictory ways, Prein explained.

For example, climate change is expected to increase air buoyancy for strong updrafts, but at the same time it appears to be causing the freezing level to move higher above the ground. “The question is, How are these things interacting, and how are they affecting hail frequency?” Prein said.

He noted that whereas most of the country has seen hail decrease over the past 100 years, certain areas, including the central United States and the mid-Atlantic states, have seen it become more frequent and severe.

“We need more research on that to really understand how climate change and climate variability [are] changing hail hazard,” Prein said.

Efforts to Study Hail

Hail expert and panelist Andrew Heymsfield, NCAR senior scientist and cochairman of the workshop, noted that a hailstorm-penetrating aircraft that had been used to gather important data since the 1970s was decommissioned in 2003, leaving scientists without an important tool for research for the past 15 years.

Another panelist, Kristen Rasmussen, an atmospheric scientist at Colorado State University in Fort Collins, hopes to help reset the scales. She travels to Argentina later this year to study one of the most hail-prone areas in the world.

Storms producing hail the size of oranges and grapefruits are a yearly occurrence in areas like Mendoza, and hail causes significant monetary damage to the region’s many vineyards. But the area lacked any storm forecasting ground radar or warning systems—such as those provided by the National Weather Service in the United States—until about a year ago, she explained.

For her field project, funded by the National Science Foundation and the U.S. Department of Energy, Rasmussen plans to bring a suite of ground-based radars, hydrological gauges, lightning mapping instruments, and other equipment to learn more about the science of severe storms and hail formation. “We’re trying to study the whole convective process from beginning to end,” she said.

Extensive Damage

For Human, who works with storm researchers as communications director for the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, the science of hail is now personal.

After the 18 June storm, she and her family stumbled out into the street; it was difficult to walk because they kept slipping on the smooth spheres of ice that littered the ground. There they joined their shocked neighbors in inspecting the damage. “Both cars were totaled,” she recalled. Her home’s solar panels were smashed and the roof was battered. Human would later learn that the whole roof had to be replaced. She estimates the total damages at around $50,000, which will be covered by insurance.

Still, Human didn’t lose her sense of wonder for hail. “They’re beautiful when they crack open,” she said. “There are these intricately layered concentric circles, like tree rings. They’re gorgeous.”

—Ilima Loomis (email: ilima@ilimaloomis.com; @iloomis), Freelance Journalist

The post Hail Causes the Most Storm Damage Costs Across North America appeared first on Eos.

Great American Eclipse Data May Fine-Tune Weather Forecasts

Thu, 08/16/2018 - 12:19

A year ago, the skies across the United States darkened as the Moon passed in front of the Sun. The 21 August 2017 celestial event dubbed the Great American Eclipse was the first total solar eclipse since 1918 to traverse the full width of the continental United States.

As millions of onlookers witnessed the extraordinary midday darkness and stillness brought on by the Moon’s shadow, a national meteorological observing network was doing what it always does. At 114 automated stations across all 50 U.S. states, the U.S. Climate Reference Network (USCRN) was taking accurate and precise readings every 5 minutes of surface temperature, air temperature, humidity, and other environmental conditions [Diamond et al., 2013].

Although taking those readings was just routine work for the network, our team foresaw that the coincidence of the USCRN’s ordinary data gathering with this remarkable eclipse could yield something extremely useful. That’s because many phenomena, from the daily setting of the Sun to fleeting events, such as dust storms and passing clouds, suddenly disconnect some piece of the land-atmosphere system from its main energy source, the Sun.

Many computer models developed to simulate and predict the land-atmosphere system’s behavior—in particular, weather forecasting and climate models—have difficulty accurately reproducing the system’s response to such disconnections and reconnections when they occur quickly or locally. In other words, problems arise when simulating such events on a scale of a few minutes to a few hours or when they occur in only small patches of a wider region.

Although eclipses might seem unrelated to weather and too rare to have implications for weather forecasting, they rapidly reduce the amount of incoming sunlight in the same way as other passing events that occur frequently and do affect the weather. Data on the Great American Eclipse’s meteorological effects thus allow us to identify deficiencies in weather forecasting models and make improvements to them. These improvements, in turn, help lead to better weather forecasts.

Coast-to-Coast Laboratory

For us, the Great American Eclipse was a grand, controlled experiment in a laboratory the size of a continent. Much like the results from any controlled laboratory experiment, USCRN’s measurements during the eclipse, from stations equipped with uniform suites of instruments, captured a telling set of responses to one type of change to the system. The eclipse applied that change across a wide range of geographic regions, climate types, and percentages of totality.

Across the entire USCRN, complete obscuration, or darkness, occurred at nine stations during the eclipse. Totality among those stations ranged from 0.52 minute at Lincoln, Neb., to 2.55 minutes at Crossville, Tenn. Fifty-four more stations had at least 75% obscuration, and all but one had at least 50% (Figure 1).

Fig. 1. USCRN stations in the path of the 21 August 2017 total solar eclipse (blue circles). Note that the sixth blue circle from the left (in southeastern Nebraska) depicts two stations that are near each other.

To gather the measurements relevant to the Great American Eclipse from USCRN’s database (all free and publicly available), we extracted the data collected every 5 minutes by each station from 2 hours before the moment at which the greatest obscuration of the Sun’s disk took place at that location until 2 hours after that moment. We then calculated the changes in select meteorological variables (i.e., air temperature, surface temperature, and relative humidity) during that period.
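The windowing and differencing described above can be sketched in a few lines of pandas. Note that the column names, the two-hour windows, and the synthetic station record below are illustrative assumptions for this sketch, not the actual USCRN data schema.

```python
import numpy as np
import pandas as pd

def eclipse_deltas(df, t_peak):
    """Changes in select variables around the moment of greatest obscuration.

    Follows the differencing described in the text: temperature drops are the
    pre-eclipse maximum (within 2 hours before the peak) minus the minimum
    during the eclipse window; the humidity rise is computed the opposite way.
    """
    t0 = pd.Timestamp(t_peak)
    pre = df[(df["time"] >= t0 - pd.Timedelta(hours=2)) & (df["time"] < t0)]
    event = df[(df["time"] >= t0) & (df["time"] <= t0 + pd.Timedelta(hours=2))]
    return {
        "air_temp_drop": pre["air_temp"].max() - event["air_temp"].min(),
        "surface_temp_drop": pre["surf_temp"].max() - event["surf_temp"].min(),
        "rh_rise": event["rh"].max() - pre["rh"].min(),
    }

# Synthetic 5-minute record for one hypothetical station, 10:00-14:00 local,
# with a temperature dip and humidity bump 30 minutes after a 12:00 "peak".
times = pd.date_range("2017-08-21 10:00", periods=49, freq="5min")
air = np.full(49, 28.0)
surf = np.full(49, 40.0)
rh = np.full(49, 50.0)
air[30], surf[30], rh[30] = 24.0, 30.0, 60.0  # index 30 corresponds to 12:30

station = pd.DataFrame({"time": times, "air_temp": air, "surf_temp": surf, "rh": rh})
deltas = eclipse_deltas(station, "2017-08-21 12:00")
print(deltas)  # air drops 4 degrees, surface 10 degrees, RH rises 10 points
```

In practice one such call would run per station, keyed to each station's local time of greatest obscuration.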

This unique data set is now helping meteorologists, climatologists, and environmental scientists to better understand and characterize feedbacks between the surface of the land and the overlying atmosphere during brief and/or localized interruptions in solar radiation. Those interruptions include a broad range of events such as when dense fog blankets an area, a dust storm arrives, prolonged wildfires break out, or a volcano’s ash cloud temporarily obscures the Sun.

Likewise, efforts to use the data to improve models of the land-atmosphere system are under way. For instance, some of our colleagues at the National Oceanic and Atmospheric Administration (NOAA) laboratories in Boulder, Colo., are using our eclipse data to help evaluate high-resolution weather forecasting models, such as one called the High-Resolution Rapid Refresh (HRRR) model. A next-generation weather forecast model, HRRR assimilates radar data every 15 minutes and generates an updated forecast every hour. In contrast, the Global Forecast System and other more traditional weather forecast models, used widely in the United States by the National Weather Service, military branches, TV meteorologists, and others, produce new forecasts typically every 6 hours.

Wide-Ranging Responses

What effects did this grand experiment actually reveal from the eclipse’s switching off and on of local sunlight? The findings themselves are unremarkable in that the observed effects, such as the air and surface temperature decreases (Figures 2a and 2b, respectively) and relative humidity increases (Figure 3) as the Moon darkened the Sun’s disk, were well known and expected. Rather, capturing at multiple and varied locations precisely how much change took place and at what rates is the key result.

Fig. 2. Decreases in (a) air temperature and (b) surface infrared temperature between the maximum temperature within the 2 hours prior to the closest approach to totality and the minimum temperature during the eclipse event at the USCRN stations in the conterminous United States.

Fig. 3. Increases in relative humidity between the minimum relative humidity within the 2 hours prior to closest approach to totality and the maximum relative humidity during the eclipse event at the USCRN stations in the conterminous United States.

As expected, the largest impacts from the eclipse were found along its centerline. Other factors such as cloudiness and vegetation cover also affected land-atmosphere responses. Fortunately for this attempt to gauge exactly the impacts of solar obscuration, the Moon blocked the Sun at a time of day when solar radiation was strong and most of the United States had little cloud cover.

The effects were also stronger in the eastern part of the country, where daytime heating had progressed further and air masses had higher moisture content than in western states.

Overall, maximum cooling at USCRN stations ranged from 2°C to 5°C near the centerline (Figure 2a). Surface temperatures, which often undergo greater variations than air temperatures because the ground does a better job radiating heat than the air, fell at 109 sites, with decreases ranging from 5°C to 15°C (Figure 2b).

To illustrate the changes in solar radiation and temperature and the rapid pace at which those variations occurred because of the eclipse, our team created an animated map. The map uses colors to depict solar radiation intensity and temperature differences recorded by USCRN stations every 5 minutes during the Great American Eclipse. Watch it here (to start or stop the animation, click on the Time Slider icon at the top left). In addition, available for free download at FTP sites online is the full set of 5-minute data from the eclipse, as well as the animated map mentioned above and other eclipse-related animations.

Eclipse totality at Ten Mile, Tenn. Credit: Michael Buban, NOAA/ARL/ATDD

A Different Perspective

To independently validate the findings from the USCRN experiment, our team also monitored the eclipse with another set of instruments deployed near the town of Ten Mile, Tenn., which lies 75 kilometers southwest of Knoxville and on the path of totality. This site was ideal because it was in totality for 2.63 minutes, and fair weather conditions allowed for eclipse effects to be maximized.

There, NOAA scientists from the Air Resources Laboratory (ARL) Atmospheric Turbulence and Diffusion Division (ATDD) in Oak Ridge, Tenn., installed instruments, including some mounted on a drone, to measure ground surface and air temperature (also measured at all USCRN sites) and incoming and outgoing shortwave radiation (from the Sun) and longwave radiation (from Earth). Horizontal and vertical winds were also measured to study land-atmosphere interactions during the eclipse. Changes in all of the above meteorological conditions at Ten Mile proved consistent with findings from USCRN (see Figure 4 for Ten Mile temperature and solar radiation data).

Fig. 4. We found that consistent with the USCRN sites in the path of totality, temperatures at Ten Mile, Tenn., during the day of 21 August 2017 rapidly decreased during and shortly after eclipse totality. Surface temperatures (red) decreased by nearly 12°C, and the air temperature (blue) decreased 5°C. Following these temperature minima, both surface and air temperatures returned to near preeclipse values. The black line depicts incoming solar radiation (in watts per square meter) at the site. Times are in local standard time (LST).

Other measurements provided additional insights into the rapid changes in near-surface energy during the eclipse. Sensible heat flux, or transfer of heat from Earth’s surface into the atmosphere, decreased to near 0 watts per square meter around totality but increased toward the end of the partial phase.

We also found similar patterns in turbulent kinetic energy, or how much the air motion varies, during the eclipse. These large-scale, eclipse-driven patterns suggest that small-scale changes that happen whenever the surface energy is rapidly removed—for instance, by thick clouds or heavy aerosol loads obscuring the Sun—might likewise decrease the amount of turbulence in the lower atmosphere. This decrease could lead to less energy exchanged between the surface and the atmosphere, which would further reduce the turbulence.

Moving Forward

Whereas the continental-scale data set we gathered from USCRN offers us one way to study feedbacks between the land surface and atmosphere, targeted, regional field studies provide another.

For example, several authors of this eclipse study are involved with the Land Atmosphere Feedback Experiment (LAFE) [Wulfmeyer et al., 2018], a monthlong experiment conducted last August in northern Oklahoma that used a dense network of sophisticated, near-surface meteorological observations to seek ways of better representing very complex interactions between the land surface and the atmosphere. Although the eclipse was not the focus of LAFE, the site did experience 89% obscuration, thereby providing another rich data set on the rapid, near-surface changes that occurred during the Great American Eclipse [Turner et al., 2018].

By combining such observations from fieldwork on relatively small scales with continental-scale observations like those from USCRN, we expect to gain new perspectives on the interactions and processes occurring within the lowest part of our atmosphere. Increasing our knowledge about these processes and learning how to better represent them ultimately will improve the weather forecasting models that we all rely on for our day-to-day activities.


We gratefully acknowledge the hard work of the USCRN technicians from NOAA/ARL/ATDD for installing and maintaining the network. We thank Mark Heuer from NOAA/ARL/ATDD for assembling the instruments used on the tower at Ten Mile and Kym, Tom, and Jerry Swanks, who allowed us to install the tower. We thank Devin Thomas of ERT, Inc., at NOAA National Centers for Environmental Information for assistance with graphics. We thank Michael Potter and Rick Saylor from NOAA/ARL/ATDD for developing the eclipse web pages. We acknowledge Rick Saylor and an anonymous reviewer, whose suggestions helped improve the quality of the manuscript. Finally, we note that the results and conclusions, as well as any views expressed herein, are those of the authors and do not necessarily reflect those of NOAA or the Department of Commerce.


How Much Land Surface Is Under Water at Any Given Time?

Wed, 08/15/2018 - 12:24

Measurement of inundation extent in rivers, lakes, reservoirs, and wetlands is of vital importance to addressing scientific and societal problems ranging from flood prediction to quantification of the global carbon cycle. Boundaries between dry land and open water extend for long distances, and they change over time, so ground-based measurement of inundation extent is difficult. Instead, remote sensing is a promising way to comprehensively monitor surface water extent at large spatial scales.

Recent deployment of satellite-based sensors capable of measuring inundation extent—such as the Landsat Operational Land Imager (OLI), the Phased Array type L-band Synthetic Aperture Radar (PALSAR), and Sentinel-1 and -2—and the anticipated launch of new missions, including NASA’s Surface Water and Ocean Topography (SWOT) mission and the joint NASA–Indian Space Research Organisation (ISRO) Synthetic Aperture Radar (NISAR) mission, present an opportunity to reimagine monitoring inundation extent from space.

At a workshop held at the University of Colorado, more than 30 U.S. and international scientists discussed the potential for global inundation extent data products, and they developed recommendations for relevant government agencies. Meeting participants identified key science questions requiring inundation extent measurements, assessed capabilities of current remote sensing products, and explored the potential for advances in inundation measurement. Attendees had expertise in remote sensing methods development, global hydrology, aquatic ecology, carbon cycle science, and flood modeling.

The meeting produced five key recommendations:

1. Participants strongly urge all space agencies to develop and maintain free and open data policies because global inundation extent measurement depends on making mosaics from many images acquired across space and time.

2. Space agencies from the United States, Europe, France, Japan, and Canada maintain or are developing satellite instruments capable of measuring inundation extent and other variables critical to understanding inland and coastal water bodies. To optimally leverage these tools, attendees recommend that surface water hydrologists and their sponsors create an Inland Waters Science Team to coordinate science and data product development internationally.

3. This science team should use data from synthetic aperture radar, passive microwave, and optical instruments to develop consistent, dynamic, global inundation extent products in open water and vegetated environments at high temporal resolution. An effort to develop consistent products will benefit from, and later use, measurements from future missions like SWOT, NISAR, and the Sentinel sensors, as well as other optical and radar missions scheduled for launch in the next decade.

4. Current inundation extent products are often not well validated. Participants recommended development of a system of inundation extent validation sites and data sets, including extensive airborne and ground-based measurements. This network should include sites for long-term monitoring and intensively instrumented campaigns focused on individual events, such as major floods. It could be coordinated through the Inland Waters Science Team.

5. Attendees recommend focused development of two static data sets critical to global inundation science: a high-resolution, global floodplain digital elevation model with submeter vertical accuracy and a very high spatial resolution (<5 meter) data set of maximum and minimum open water extent based on optical imagery.

The workshop was supported by the NASA Terrestrial Hydrology Program, with travel organized by ATA Aerospace LLC. It was generously hosted by the Cooperative Institute for Research in Environmental Sciences at the University of Colorado.

—Tamlin M. Pavelsky (email: pavelsky@unc.edu), Department of Geological Sciences, University of North Carolina at Chapel Hill; and J. Toby Minear, Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder


Brown Carbon from Increased Shipping Could Harm Arctic Ice

Wed, 08/15/2018 - 12:23

It’s nearly impossible to escape air pollution in today’s industrialized world. Even far out at sea, where there are almost no other sources of pollution, massive ships burn through vast quantities of oil and diesel fuels. A new analysis shows how brown carbon (a general term for complex mixtures of light-absorbing organic molecules) in pollution from marine vessel engines running on heavy fuel oil may contribute to climate change, particularly in regions such as the Arctic, where the use of these fuels is common and where shipping activity is expected to increase significantly in the near future.

Marine engine exhaust contains black carbon: sooty, black particles produced by the incomplete combustion of fossil fuels. This particulate exhaust absorbs sunlight both directly, as an aerosol, and after deposition on snow and ice, such as in the Arctic. Depending on the types of fuels used, such exhaust can also be rich in brown carbon, which has recently been recognized as an important contributor to the climate impacts of aerosols.

To better understand how air pollution from ships affects Earth’s atmosphere, Corbin et al. carefully analyzed emissions from a four-stroke, single-cylinder research ship engine at the University of Rostock in Germany. The engine was similar to those used on many smaller ships as a main power supply and on larger ships for extra power or as a backup.

The team ran the engine on several different types of fuel, including light fuels such as diesel and marine gasoline oil, and heavy fuel oil, a black, viscous by-product left over from distilling lighter, more transparent fuels. They measured the light absorption—and related properties—of pollutant particles in situ, as well as particles that were collected on a filter. They then combined a suite of instrumentation to retrieve the effective refractive index of brown carbon in the particles.

Whereas burning lighter distillate fuels produced no brown carbon, the team found that burning heavy fuel oil produced large quantities of particulates, especially brown carbon. The additional brown carbon increased the warming influence that heavy fuel oil exhaust would have over snow surfaces by 18% compared to that of the lighter fuels and by even larger amounts at the low engine loads used in icy waters—a major difference that needs to be considered in future climate models, the authors say. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2017JD027818, 2018)

—Emily Underwood, Freelance Writer


Radar Data Highlights Areas Damaged by Wildfire and Debris Flows

Tue, 08/14/2018 - 12:19

NASA’s Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) flies 12½ kilometers above the Earth and collects data at high spatial resolution. Its observations have been used to detect a variety of changes in the Earth’s surface, including crustal deformation and fault slip due to earthquakes and surface motion from groundwater withdrawal and recharge. Donnellan et al. [2018] demonstrate how UAVSAR imagery can be applied to natural hazard mapping and disaster response. Using the example of the 101 Freeway in California, affected in late 2017 by wildfire and in early 2018 by debris flows caused by a winter storm, the authors post-processed UAVSAR data to analyze changes in the landscape. Their results are presented in a before-and-after fashion. Data products using this method could be a useful tool for disaster response to fires and debris flows.

Citation: Donnellan, A., Parker, J., Milliner, C., Farr, T. G., Glasscoe, M., Lou, Y., et al. [2018]. UAVSAR and optical analysis of the Thomas fire scar and Montecito debris flows: Case study of methods for disaster response using remote sensing products. Earth and Space Science, 5. https://doi.org/10.1029/2018EA000398

—Benoît Pirenne, Editor, Earth and Space Science


High-Altitude “Wind Walls” Discovered near Magnetic Poles

Tue, 08/14/2018 - 12:18

In 2008, scientists reported a “superdisturbance” consisting of extreme east–west variations in wind speed in the upper atmosphere near Australia. It was not an anomaly; similar disturbances occurred in March of three different years. Building on that initial discovery, Shepherd and Shepherd now report the existence of extreme-speed wind features called “wind walls” near Earth’s magnetic poles.

The superdisturbances reported in 2008 surfaced from data captured by the Wind Imaging Interferometer (WINDII) instrument mounted on NASA’s Upper Atmosphere Research Satellite. From 1991 to 2003, WINDII measured wind flow in the lower thermosphere (at altitudes of 80–300 kilometers) by observing airglow, the luminescence of atmospheric molecules resulting from their excitation by solar radiation.

In 2017, analysis of airglow measurements from WINDII revealed perturbations of atomic oxygen levels in the same locations in the atmosphere as the superdisturbances, which prompted further investigation of the WINDII data, including moving the search nearer to the magnetic poles.

The new WINDII analysis reveals vertical wall-like channels where gentle, east-flowing winds reverse and give way to extreme westward winds racing at 200–600 meters per second at altitudes of 140–250 kilometers near the south magnetic pole. These high-latitude “wind walls” occur about 50% of the time during the local summer and autumn, and similar wind walls occur near the north magnetic pole in the Northern Hemisphere’s summer and fall.

Sharp wind gradients at the walls’ boundaries result in different airflow patterns on either side of each wall, influencing vertical transport of atmospheric molecules. This airflow accounts for the atomic oxygen perturbations reported in 2017. The wind walls also appear to affect vertical transport of helium, nitrogen, and argon.

Previous research has shown that the interplanetary magnetic field can affect high-altitude, high-latitude winds. The new WINDII analysis suggests that the interplanetary magnetic field may help to shape wind walls, but more research is needed to determine the extent of this influence. (Geophysical Research Letters, https://doi.org/10.1029/2018GL077722, 2018)

—Sarah Stanley, Freelance Writer


Why Trace Metals Cling to the Ocean’s Skin

Tue, 08/14/2018 - 12:17

What lies between the sea and the sky? The horizon, most people would probably say. But ocean scientists know there’s something else: a millimeter-thick layer of carbohydrates, proteins, and lipids called the sea surface microlayer (SML). This gel-like layer acts like the ocean’s skin, regulating the exchange of substances between water and atmosphere. It plays a key role in how oceans absorb carbon dioxide and whether trash and other pollutants sink or float. Now a new study reveals why certain trace metals become concentrated in the SML, a finding that could help scientists understand how changes in the ocean’s “skin” affect climate and ocean health.

The SML is a complex environment, full of dissolved and particulate organic matter from marine life and human activities. Past studies have shown that when this layer contains high levels of lipids—organic compounds made of fatty acids and their derivatives—the concentration of certain trace metals is also higher. This finding is important because both metals and lipids can alter the ocean’s surface tension, which, in turn, affects the production of sea spray aerosol. These tiny droplets of seawater, produced when wind buffets the ocean and waves break, can travel high into the atmosphere. They play a key role in the formation of clouds, as well as other vital atmospheric processes.

To determine which lipids have the highest affinity, or tendency to bind, to metal ions in the SML, Zhang et al. put several types of fatty molecules in water with different metals and observed which combinations caused the water’s surface tension to decrease, an indication that binding was occurring. The team also used a technique called Brewster angle microscopy to directly observe ultrathin organic films similar to the SML and study their molecular organization.

Phosphate esters—a type of lipid found in lubricating hydraulic fluids and flame retardants, as well as in natural systems—were the most effective at binding to trace metals, the team found. In particular, they tended to bind to positively charged ions of aluminum, iron, and zinc very strongly. Although other types of organic molecules also bind to trace metals, such as those containing sulfate and nitrate groups, the study shows that phosphate esters are an important player in the SML’s surface organization, a dynamic that could affect the exchange of gases, aerosol composition and reflectivity, and water vapor worldwide. Additionally, this research explains metal enrichment within the SML, a critical finding for understanding metal abundances at ocean surfaces and the cycling of trace metals in our environment. (Journal of Geophysical Research: Oceans, https://doi.org/10.1029/2018JC013926, 2018)

—Emily Underwood, Freelance Writer

The post Why Trace Metals Cling to the Ocean’s Skin appeared first on Eos.

Evidence of Regional Deposition in Mars’s South Polar Deposits

Mon, 08/13/2018 - 12:04

One of Mars’s largest water ice reservoirs, the south polar layered deposits, consists of a thick stack of alternating bands of dust and ice that encompasses an area nearly the size of Alaska. Previous studies have suggested that variations in the obliquity of the Red Planet’s axis, which can wobble up to 10° from its current 25° tilt, have controlled the accretion of these layers and that they therefore preserve a long-term record of Mars’s ancient climate.

Building upon earlier research, which identified four distinct periods of ice accumulation in the south polar deposits, Whitten and Campbell utilize Shallow Radar (SHARAD) data collected by the Mars Reconnaissance Orbiter to investigate the structure and continuity of the deposits’ subsurface layers.

Using a series of new processing techniques to improve the data’s vertical resolution, the team identified three discrete units within the south polar deposits and mapped their areal extents.

Despite a diffuse signal that blurred data below a depth of about 1.1 kilometers, the team was able to laterally trace some units with distinctive radar characteristics, suggesting they could be continuous across the entire region. The researchers found that the majority of the identified units are nearly horizontal, with no observed change in their chemical composition, and the location of unconformities supports a regional depositional pattern. These features led the researchers to conclude that the deposits’ interior is relatively homogeneous.

Collectively, these results suggest Mars’s south polar deposits were emplaced as a single, regional unit rather than as material from multiple centers of deposition that gradually coalesced. By identifying the various units in the south polar layered deposits and helping to clarify their areal relationships, this study contributes to our current understanding of this potential resource and provides a crucial step in helping to unravel the Red Planet’s climate history. (Journal of Geophysical Research: Planets, https://doi.org/10.1029/2018JE005578, 2018)

—Terri Cook, Freelance Writer

The post Evidence of Regional Deposition in Mars’s South Polar Deposits appeared first on Eos.

Kevin Charles Antony Burke (1929–2018)

Mon, 08/13/2018 - 12:01
Kevin Charles Antony Burke. Credit: Kevin Burke

Kevin Charles Antony Burke, one of the greatest geologists of our time, died in his sleep of a heart attack at age 89 on 21 March 2018 at Addison Gilbert Hospital in Gloucester, Mass. Kevin was what one might call a “compleat geologist” of the ilk of Charles Lyell, Alexander von Humboldt, Eduard Suess, or Arthur Holmes.

Kevin made immense contributions to our understanding of Earth’s behavior after the onset of plate tectonics. He based these contributions on his extensive previous experience in such diverse parts of the globe as the British Isles, various parts of Africa, South Korea, the Caribbean, and Canada. In the 1970s, he created, together with his lifelong friend John F. Dewey, an exemplary institution of research and education in the Department of Geological Sciences of the State University of New York at Albany. Many postgraduate students who obtained their degrees in that remarkable place went on to become significant researchers.

From England to Nigeria

Kevin was born on 13 November 1929 in London to a cultured family of Irish descent. He obtained his B.Sc. from University College London in 1951. He earned his Ph.D. from the University of London in 1953 on the basis of a mapping project in western Ireland. After serving as a lecturer in the University College of the Gold Coast (now the University of Ghana), he joined the British Geological Survey in 1956. As a senior geologist in the survey’s Atomic Energy Division, he worked in the east African rifts and in South Korea. That was also when he married his lifelong companion and great supporter, Angela Marion Burke (neé Phipps).

Kevin was the head of the Geology Department at the University of the West Indies in Kingston, Jamaica, between 1961 and 1965, followed by a position as the head of the Geology Department in the University of Ibadan in Nigeria from 1965 to 1972. In Nigeria, he initiated the Benue Valley project, which had far-reaching implications for our later understanding of continental breakup.

It was also then, in 1967, that he became aware of the great power of the then new theory of plate tectonics for assisting our understanding of the behavior and history of our planet. Despite the social unrest in Nigeria, caused by the Biafran War (1967–1970), he not only completed the Benue Valley project with his colleagues, but he also found time for a host of other studies: the formation of tropical soils (he showed the dominant role of earthworms in their formation), catastrophic erosion events in a tropical climate, and new sedimentological methods (a quick way to determine the sphericity of pebbles)—and he published a Bouguer map of gravity anomalies in Nigeria.

Plate Tectonics and Precambrian Geology

The need for better schools for their three children, Nicholas, Matthew, and Jane, forced the Burke family to relocate to Erindale College of the University of Toronto in 1972, where J. Tuzo Wilson was principal. It was there that Kevin began to explore the implications of plate tectonics in geology, starting with what he called “hot spots.” He showed how these hot spots led to continental breakup in Africa. Further, he showed that the hot spots moved about, albeit much more slowly than the plates themselves.

By this time, his old friend John F. Dewey had moved to Albany, and Kevin was called there as department head. In Toronto, John and Kevin had already collaborated on the origin of rifts, continental dispersion, and the effects of continental collision in creating vast areas of basement reactivation leading to continental differentiation.

It was also in Canada that Kevin rekindled his interest in Precambrian geology that had started in the Gold Coast. His experience in mapping Precambrian and Phanerozoic terrains showed him that greenstone belts were more deformed than previously believed and that they had no unconformity under them. Thus, Kevin believed that greenstone belts did not form over continents (and he was later proven right). He thought that the greenstone belts were simply remnants of Archaean and early Proterozoic oceans (i.e., suture zones).

After Kevin moved to Albany, he collaborated with Dewey and Bill Kidd on studying the Archaean eon. They showed that the Archaean most likely had a plate tectonic regime, but with smaller and faster-moving plates caused by higher heat loss of the planet. They obtained money from NASA, enabling a departmental project to map the rifts and sutures of the world, showing the pertinence of what Kevin and John had earlier called the Wilson cycle.

Kevin was always keen to point out that plate tectonics was introduced by John Tuzo Wilson’s seminal 1965 paper on transform faults. He also pointed out that Wilson had quickly recognized the profound implications of ocean opening and closing cycles in Earth’s history. Dewey wrote an important paper in 1975 showing the great uncertainties plate motions introduced into geological reconstructions, and Kevin never tired of pointing this out to people engaged in historical geology.

The Houston Years

In 1980, Dewey returned to England, and Kevin moved to Houston to become deputy (1982–1983), and then director, of the Lunar and Planetary Institute. From 1983 onward, he served as professor of geology at the University of Houston. It was in Houston that his interest spread to other planetary bodies, emphasizing the importance of comparative geology.

After leaving the Lunar and Planetary Institute, Kevin became a full-time professor at the University of Houston. Since 2004, he had been dividing his year between the Massachusetts Institute of Technology (as a Crosby Scholar) and Houston.

In the latter half of the first decade of the new century, Kevin pointed out that the edges of two permanent large, low shear-wave velocity provinces at the core-mantle boundary, which he named Tuzo and Jason, were the sources of the major mantle plumes. At the same time, he showed with his colleagues in South Africa that deformed alkalic (sodium- or potassium-rich) rocks and carbonatites (carbonate-rich igneous rocks) were a good guide in mapping former sutures, even where they are otherwise cryptic.

Insight, Wisdom, and Friendship

Kevin received many honors on both sides of the Atlantic. Among these are the Penrose Medal of the Geological Society of America and the Arthur Holmes Medal of the European Geosciences Union.

No one expressed Kevin’s role in the scientific community better than his longtime friend Sean Solomon: “Kevin was a source of seemingly limitless insight, experience, and wisdom, and a good friend.”

—A. M. Celâl Şengör (email: sengor@itu.edu.tr), Geology Department, Faculty of Mines, Istanbul Technical University, and Eurasia Institute of Earth Sciences, Ayazağa, Istanbul, Turkey

The post Kevin Charles Antony Burke (1929–2018) appeared first on Eos.

Bhutan Earthquake Opens Doors to Geophysical Studies

Mon, 08/13/2018 - 11:58

In 2015, a magnitude 7.8 earthquake shook the Gorkha District of Nepal, killing more than 9,000 people. The memory of this event is still vivid for the residents of this central Himalayan nation.

But farther east in the mountains, in Bhutan, many residents doubt that a similar event could happen to them. Bhutan has experienced several earthquakes of about magnitude 6 during the past century, but there has been no clear evidence that it ever saw an earthquake similar to the M7.8 Nepal event.

Findings from recent geophysical exploration suggest that this confidence may be overly optimistic. These results show that the eastern Himalayan region is extremely complex compared with the rest of the mountain belt.

The kingdom of Bhutan sets great store by its traditions and its principle of Gross National Happiness. Although its rugged terrain and remote location have allowed this kingdom to preserve its unique culture, these factors have also limited the development of international collaborations there, notably in the Earth sciences. This situation changed in 2009 after a damaging M6.1 earthquake that claimed 11 lives persuaded Bhutan to open its doors to exploration of the region’s geophysics.

Our team studied mountain-building processes in this region after the 2009 earthquake. After 7 years of multipronged field campaigns, we learned that Bhutan’s geodynamics are as unique as its culture. The region’s crustal structure, seismicity, and deformation pattern are all different from what scientists had speculated previously.

During our campaigns, we found evidence that at least one M8 earthquake had, in fact, occurred in Bhutan. This means that other earthquakes of this magnitude could occur in the region again [Hetényi et al., 2016b; Berthet et al., 2014; Le Roux-Mallouf et al., 2016].

A Different Plate?

Although the western and central Himalayan arc curves gently from Pakistan to Sikkim and has a low-lying foreland, the eastern third curves more sharply and has significant topographical relief south of the mountain belt, namely, the Shillong Plateau and neighboring hills (Figure 1). Previous studies proposed that these structures accommodate part of the India-Eurasia tectonic plate convergence. These earlier studies also proposed that the great 1897 Assam earthquake (M8.1) had relieved some of the strain between these converging tectonic plates, thereby lowering earthquake hazard in Bhutan.

Fig. 1. Topographic map of the 2,500-kilometer-long Himalayan arc and surrounding region, with formerly (yellow) and newly (pink) cataloged seismicity. The dextral fault zone (white arrows) between Sikkim and the Shillong Plateau marks the break of the India plate, east of which a zone of complex 3-D deformation begins. Red dates mark the three largest earthquakes mentioned in the text. Green lines mark the surface trace of the megathrust along which the India plate underthrusts the Himalayan orogen, as well as the thrust faults bounding the Shillong Plateau. Political boundaries are shown for reference. Abbreviations: Pl. = plateau; Pr. = Pradesh; Sik. = Sikkim.

We collected new gravity, geodetic, and seismology data, and we found that the lithosphere—the rigid top layer of Earth—beneath Bhutan and the Shillong Plateau is most likely not part of the Indian plate or, if it once was, that it is now detached from it. The demarcation between plates stretches in a NW–SE direction, without a surface trace, but it is evident in a middle to lower crustal zone of continuously active seismicity and dextral (right-lateral) motion [Diehl et al., 2017]. This fault zone most likely hosted an M7 earthquake in 1930.

Research team member Théo Berthet monitors data collection during a campaign to a less visited region in central Bhutan. The Black Mountains, which rise to 4,500–4,600 meters, are visible in the background. Credit: György Hetényi

Our GPS measurements confirm the relative motion of the newly defined microplate. These measurements also show that this microplate is rotating clockwise with respect to the Indian plate [Vernant et al., 2014]. The different behaviors of the two lithospheres are clearly expressed in their differences in flexural stiffness along the strike direction of the orogen (mountain belt). The flexural stiffness is homogeneous beneath Nepal [Berthet et al., 2013] but comparatively lower beneath Bhutan [Hammer et al., 2013].

A similar, but less well defined, deep seismicity zone, with distinct GPS vectors and flexural signatures, may mark another terrain boundary farther east along the Himalayas in Arunachal Pradesh [Hetényi et al., 2016a].

Not a Safe Haven

India’s 1897 Assam earthquake, which occurred farther south, is only a few human generations in the past and has not completely faded from memory. No event since then has reached magnitude 7 in Bhutan, and many local residents believe that big earthquakes cannot happen there.

However, the return period of large Himalayan events is longer than oral history: Western Nepal, for example, has not experienced a significant event since 1505. It is true that over the past decades, the seismicity rate in Bhutan has been low, but we have found evidence of several great earthquakes in the past on the local megathrust.

Geomorphological analysis of uplifted river terraces in central Bhutan revealed two major events over the past millennium [Berthet et al., 2014]. A newly excavated paleoseismological trench has documented surface rupture during a medieval event and a 17th–18th century event [Le Roux-Mallouf et al., 2016]. Calculations based on newly translated historical eyewitness reports, macroseismic information, and reassessed damage reports have constrained a M8 ± 0.5 earthquake on 4 May 1714 [Hetényi et al., 2016b].

Thus, the seismic gap proved to be an information gap: The entire length of the Himalayas can generate earthquakes with a magnitude greater than 7.5, and it has done so in the past 500 years.

Differences at Multiple Scales

The landscape in eastern Bhutan, south of Trashigang, typically features incised valleys, steep slopes, and terraces. The hut in the center is shown in the inset for scale. The view here is to the east, and the hut is located at 27.2784°N, 91.4478°E. Credit: György Hetényi

The major change along the Himalayas occurs between their western and central part (with a single convergence zone) and the eastern third (with distributed deformation including strike-slip motion), and the east–west extent of Bhutan exhibits even greater complexity. The crust appears to be smoothly descending in western Bhutan and is subhorizontal in the eastern part of the country [Singer et al., 2017a]. Our measurements of seismic wave speeds in the upper crust show important changes across the country, and they coincide well with the geological structure mapped at the surface [Singer et al., 2017b].

The most striking difference between western and eastern Bhutan is the crustal deformation pattern. In the west, the accommodation of present-day crustal shortening is very similar to the rest of the Himalayas: The plates in the megathrust region are fully locked [Vernant et al., 2014], and microseismicity (the occurrence of small events) is scattered across the crust [Diehl et al., 2017]. In the east, the locked segment of the megathrust is shorter, and it focuses most of the microseismic activity within a smaller region. Also, the fault appears to be creeping (sliding without producing significant seismicity) in both shallower and deeper segments [Marechal et al., 2016].

This variation of loading and background seismicity warrants further research along the entire Himalayan orogen because there is very little existing insight into variations of structures and processes at such short distance scales.

Gangkhar Puensum, a mountain in north central Bhutan, is clearly visible from the main road between Ura and Sengor, looking north-northwest. Gangkhar Puensum, at an altitude of 7,570 meters, is the highest unclimbed peak on Earth. For religious reasons, mountaineering above 6,000 meters is prohibited in Bhutan, so this record is very likely to remain. Credit: György Hetényi

Bhutan Is Moving Forward

Bhutan is an exotic place that long kept itself in self-imposed isolation, but the country’s technology is now catching up at a rate higher than in the rest of the Himalayan region. During our 2010 campaign, we used paper traveler’s checks, and we lacked individual cell phones. During our 2017 campaign, we had access to automated teller machines (ATMs) and 3G internet.

Likewise, our 7 years of field campaigns in this region have advanced our geophysical exploration and geodynamic understanding considerably. Still, there is a strong need to continue and build on the existing knowledge, which includes freely available seismological, gravity, and GPS data from our projects.

Focusing on three areas would help improve future development in Bhutan:

Broadening timescales. Acquiring long-term data needed to confirm or to adjust interpretations made on relatively short timescales is possible only with national observatories. We have launched seismology and GPS monitoring initiatives, and we hope for long-term funding and training of local manpower for all levels of operation.

Broadening investigations. Some fields of study have advanced dramatically, including work on glacial lake outburst floods and on landslides. Others, like seismic microzonation, have been limited so far and could benefit from more extensive efforts. There is also a strong need for up-to-date building codes that reflect the scientific knowledge coming from these investigations.

Increasing public awareness of natural hazards. The Bhutanese Ministry of Home and Cultural Affairs now has a full department devoted to disaster management that includes well-trained employees and comprehensive administration. However, education is the key to reaching the broadest population possible, which requires regular adaptation of school curricula and concise, practical information that local residents from any generation can understand.

We hope that recent efforts by our teams have promoted progress in the right direction. We also hope that large portions of the population will be sufficiently aware to deal with the next natural disaster. As our research shows, the next event may come sooner than previously thought.

The main Himalayan peaks in northwest Bhutan, on the border with southern Tibet, are, from left to right, Chomolhari, Jichu Drake, and Tserim Kang. Exact altitudes are debated, but Chomolhari is higher than 7,000 meters, and Tserim Kang towers above 6,500 meters. Credit: György Hetényi

Acknowledgments

The authors gratefully acknowledge all scientific, fieldwork, and logistical help provided by participants of the projects GANSSER and BHUTANEPAL, carried out in collaboration with the Department of Geology and Mines and the National Land Commission, Thimphu, Bhutan, and with support of Helvetas. Research highlighted in this article became possible thanks to the seed funding of the North-South Centre (ETH Zurich), followed by funding from the Swiss National Science Foundation (grants 200021_143467 and PP00P2_157627) and the French Agence Nationale de la Recherche (grant 13-BS06-0006-01).

The post Bhutan Earthquake Opens Doors to Geophysical Studies appeared first on Eos.

Explore Your Inner Child by Painting Science with Pixels

Fri, 08/10/2018 - 12:34

Need a break from slogging through revision requests on your latest paper? Are lab results making you want to bang your head against a wall? Going in circles trying to fund your research?

If so, you need the latest science art challenge making its way around Twitter: #MSPaintYourScience.

This Internet trend challenges scientists of all disciplines to sketch the topic of their current research project using whatever basic drawing software, like Microsoft Paint, is installed on their computer. There’s a catch, though: The picture must be drawn with the scientist’s nondominant hand.

The trend started with one aquatic geologist taking a lighthearted break from writing to draw a picture of a garfish:

To answer your question, no it’s not easy for a left-handed person to draw a gar in MS Paint (using a mouse). But it was a nice break from writing!  #sciart pic.twitter.com/mGHqnBdGUl

— Dr. Solomon David (@SolomonRDavid) July 31, 2018

That one fish sparked a wave of scientists procrastinating on their research to create more science art.

And there have been dozens of submissions so far, from the exceedingly beautiful (presumably made by artistic and ambidextrous people) to cute pictures reminiscent of when your kindergarten teacher looked at your drawings and said, “That’s very imaginative. Can you tell me about it?”

We’ll let you judge. Check out these creative Earth and space science computer drawings.


Excellent Attention to Microdetails. “Shear” Artistic Beauty!

Wait, I made it better! (well, more fully encompassing at least) Microstructure of glaciers AND shear zones. #MSPaintYourScience pic.twitter.com/420ZGZwJSZ

— Stephanie Mills (@MicroEarthSci) August 2, 2018

Minerals Sold Separately

A fun time down in the mines! (Wulfenite not included)#MSPaintYourScience pic.twitter.com/q94I0dg8Oe

— Daveedo (@daveedoburrito) August 3, 2018

Ah, Ah, Ah, Ah! Stayin’ Alive…in Space!

So #MSPaintYourScience is brilliant. Here’s my #exobiology contribution @esa #space #biology pic.twitter.com/dAyse5xLa7

— Nicol Caplin (@NCaplin_PhD) August 2, 2018

Healthy Wetlands Make for Happy Fish

#GreatLakes coastal wetlands provide a critically important habitat for fish like yellow perch – all that food and refuge really puts a smile on their face! #GreatLakesSci #MSPaintYourScience pic.twitter.com/6asHmoF4u2

— Katherine O’Reilly (@DrKatfish) August 1, 2018

“No, No, Arctic Melt,” Says Australia. “You Stay Up There.”

#MSPaintYourScience They say you either have a more science-minded brain or a more creative brain. I totally disagree… pic.twitter.com/KqHSKi7kl6

— Sophie Williams (@sophielwill_) August 2, 2018

Coming Soon to a Science Journal Near You

I tried. #MSPaintYourScience

Unofficial Fig. 7 for my recently accepted sediment reconstruction in the Okinawa Trough (https://t.co/HL538ZVkj0). Advancing river mouths in glacials transport lots of mud out to sea and make it difficult to look for #dust from the #monsoon! pic.twitter.com/DUHXAv5XAj

— Chloe Anderson (@chloerophyll_a) August 3, 2018

This Foram Tickles Our Fancy

This is my submission to #MSPaintYourScience showing the process of picking #foraminifera shells with a paintbrush for paleoclimate reconstruction #forams #paleoceanography #sciart

I title this masterpiece: “tickling the swirly boiz” feat. mad lad C. wuellerstorfi pic.twitter.com/YnpIMWYgpu

— Katie Harazin (@_katiezin) August 2, 2018

We See Shades of American Gothic Here

I’m studying water pollution from agriculture for my PhD! #MSpaintyourscience #phdlife pic.twitter.com/KsRw7V6Skr

— Charlotte Chivers (@cachivers) August 1, 2018

Of Course the Sun Wears a Hat

Well, why not? #MSPaintYourScience pic.twitter.com/ThSyJiJG8A

— Andy Emery (@AndyDoggerBank) August 3, 2018

We “C” What “U” Did There…

I study the effects of extraction on communities and how communities work with government agencies clean up the problem. #MSPaintYourScience (The true nerds will get the elemental joke.) #thisiswhatascientistlookslike #SocialSciences pic.twitter.com/8xMH2psbzL

— Kelley Christensen (@kjhchristensen) August 2, 2018

You Get a Line, I Get a Pole…I Mean a Water Quality–Monitoring Sonde

Scene from a recent trip to the marsh in #Lousiana to deploy instruments and survey. #MSPaintYourScience #fieldwork #geology pic.twitter.com/AaHsjb6tce

— Diana R. Di Leonardo (@SwirlingSands) August 3, 2018

Don’t Cross the (Solar Wind) Streams

I don’t know why I’m a science major when clearly I’m destined to be an art major #MSPaintYourScience pic.twitter.com/Dmy2E22t5q

— Tony Iampietro (@iamtony_97) August 3, 2018

Even Federal Agencies Are Getting Their Paint On

We couldn’t choose just one part of our work to highlight, so we chose them all! All 33 programs across the country are doing really great science and helping their local communities #MSPaintYourScience https://t.co/hIa6YYYtVd pic.twitter.com/LMuBgSmeRx

— NOAA Sea Grant (@SeaGrant) August 1, 2018

Do you have exciting science to share? Open up your computer’s paint program and sketch away! Don’t worry. Your research can wait.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer


The post Explore Your Inner Child by Painting Science with Pixels appeared first on Eos.

First Spacecraft to Touch the Sun Awaiting Launch

Fri, 08/10/2018 - 12:33

Humanity’s first mission to touch the Sun could be launched as early as tomorrow morning, on a course to uncover our nearest star’s biggest secrets.

The Parker Solar Probe, built and operated by NASA and the Johns Hopkins University’s Applied Physics Laboratory (APL) in Laurel, Md., will orbit closer to the Sun than any prior spacecraft.

Just how close? If the Sun and the Earth were scaled to be 1 meter apart, Parker would travel to within 4 centimeters of the Sun. That’s nearly 7 times closer than the previous record holder.

Once there, it will conduct the first in situ measurements of the Sun’s mysteriously hot corona. This outer layer of the Sun’s atmosphere is a diffuse plasma hundreds of times hotter than the Sun’s surface. Scientists, keen to find out how the corona forms and heats, designed Parker to help answer that question.

“We’ve been studying the Sun for decades, and now we’re finally going to go where the action is,” Alex Young said at a 20 July press conference at the Kennedy Space Center in Cape Canaveral, Fla. Young is a solar scientist and the associate director for science in the Heliophysics Science Division at NASA’s Goddard Space Flight Center in Greenbelt, Md.

Solving Solar Questions

Scientists designed Parker to answer three fundamental questions about our nearest star: What mechanism both heats the corona to 1–2 million degrees Celsius and also accelerates the solar wind to supersonic speeds? What are the structure and the dynamics of the Sun’s electric and magnetic fields? How are charged solar particles flung to speeds that can damage communications systems at Earth?

To answer these questions, Parker is equipped with four suites of scientific instruments. The Electromagnetic Fields Investigation (FIELDS) will map the structure and measure the strength of the Sun’s electric and magnetic fields. The Wide-Field Imager for Solar Probe Plus (WISPR) is a visible-light imager that will look at the large-scale structure of the corona and solar wind. The Solar Wind Electrons Alphas and Protons (SWEAP) investigation will capture solar wind particles and catalog their properties. Last, the Integrated Science Investigation of the Sun (IS⊙IS, pronounced ee-sis) will study energetic particles released by the Sun to determine what accelerates them to high speeds.

The mission “is a real voyage of discovery,” Nicola Fox, Parker’s project scientist and a space physics researcher at APL, told Eos last fall. Fox is also a member of the Eos Editorial Advisory Board. “We’ve been to every major planet, but we’ve never managed to go up into the corona,” she said.

Just past the corona, “the solar wind suddenly gets so energized that it can actually break away from the pull of the Sun and move out at millions of miles an hour to bathe all of the planets,” Fox said. “You really need to get into [the upper solar atmosphere] to be able to answer the fundamental questions.”

Facing the Heat

Parker Solar Probe in a clean room at Astrotech Space Operations in Titusville, Fla. Its all-important heat shield, seen at the top of the craft, was installed just before this photo was taken on 6 July. Credit: NASA/Johns Hopkins APL/Ed Whitman

To get so close to the Sun, Parker must withstand the heat. The spacecraft is protected by a heat shield made of advanced carbon composite materials, which dissipates incoming heat before it can reach the instruments behind it. The 2.4-meter-wide, 11.43-centimeter-thick shield also has a white coating to reflect as much of the Sun’s light as possible.

The heat shield can withstand solar radiation about 500 times more intense than that experienced on Earth. Although the probe will travel through regions where temperatures average 1,400°C, the shield will let Parker’s delicate instruments operate at a relatively cool 29°C, no hotter than a mild summer’s day.

The heat shield technology is revolutionary, Fox explained, and it’s what will allow this probe to survive prolonged exposure to the intense heat and radiation in the Sun’s vicinity. This video explains more about the shield and about other mechanisms in place to help Parker survive the extreme temperatures on approach to the Sun and in its corona:

Closest Approach to Sun in History

The probe will make 24 orbits of the Sun over the course of nearly 7 years. Each of Parker’s orbits will have a perihelion, when the probe will make its closest approach to the Sun for that orbit. Parker’s first perihelion, on 1 November, will occur at a distance of 35.7 solar radii, or about 25 million kilometers, from the Sun’s surface.

Orbital path of Parker Solar Probe (red). The locations of its first flyby of Venus, its first perihelion, and its minimum perihelion events are marked with yellow arrows. RS indicates distance in terms of solar radii. For comparison, the orbits of Earth, Venus, and Mercury are shown in blue. Credit: Johns Hopkins APL

It takes approximately 55 times as much energy to reach the Sun as it does to get to Mars and twice the energy needed to reach Pluto, according to mission design leader Yanping Guo of APL. To get around this, the spacecraft will use seven gravity assists from Venus to gradually tighten its orbit. It will make its minimum perihelion at just 8.86 solar radii, or about 6.2 million kilometers, from the Sun’s surface on 19 December 2024.
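Guo’s comparison can be checked with a back-of-envelope two-body calculation. The sketch below is a simplification that assumes circular, coplanar heliocentric orbits and ignores Earth’s own gravity and the Venus assists: it estimates the delta-v to drop from Earth’s orbit to Parker’s minimum perihelion versus a Hohmann-style transfer out to Mars, then compares the corresponding kinetic energies.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
R_EARTH_ORBIT = 1.496e11   # 1 astronomical unit, in meters
R_SUN = 6.957e8            # one solar radius, in meters

def delta_v_from_earth_orbit(r_target):
    """Delta-v to leave a circular 1-au heliocentric orbit onto a transfer
    ellipse touching r_target (two-body, coplanar, Hohmann-style sketch)."""
    v_circ = math.sqrt(MU_SUN / R_EARTH_ORBIT)
    a_transfer = 0.5 * (R_EARTH_ORBIT + r_target)  # semimajor axis of transfer
    # Vis-viva equation for speed at the 1-au end of the transfer ellipse
    v_transfer = math.sqrt(MU_SUN * (2.0 / R_EARTH_ORBIT - 1.0 / a_transfer))
    return abs(v_transfer - v_circ)

dv_sun = delta_v_from_earth_orbit(8.86 * R_SUN)  # Parker's minimum perihelion
dv_mars = delta_v_from_earth_orbit(2.279e11)     # Mars's orbital radius

# Kinetic energy scales with the square of delta-v, so compare the squares.
print(f"sun-skimming: {dv_sun/1000:.1f} km/s, Mars: {dv_mars/1000:.1f} km/s")
print(f"energy ratio: {(dv_sun / dv_mars) ** 2:.0f}")
```

Even this crude estimate lands in the same ballpark as Guo’s factor of 55, which is why the mission leans on repeated Venus flybys rather than raw launch energy.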

Parker will get that close to the Sun twice more during the mission. Then, after its final orbit in 2025, it will spiral into the Sun, ending the mission.

Parker Solar Probe will be launched no earlier than Saturday, 11 August, at 3:33 a.m. Eastern Daylight Time from Cape Canaveral Air Force Station in Florida. Its launch window will remain open until 23 August.

This article includes reporting contributions from Randy Showstack.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

The post First Spacecraft to Touch the Sun Awaiting Launch appeared first on Eos.

Kīlauea Eruption Abruptly Slows Down

Thu, 08/09/2018 - 12:26

Nearly 3 months into one of its most voluminous and destructive eruptions in recent history, Hawaii’s Kīlauea volcano abruptly quieted over the weekend: New lava from the volcano’s flank slowed to a trickle, and seismic activity at its summit ceased.

Although Tina Neal, scientist in charge at the Hawaiian Volcano Observatory (HVO), called it a “dramatic shift in activity,” she said it was too soon to say whether the changes were a pause in activity or a sign that the eruption was truly coming to an end.

“Volcanoes worldwide can go through pauses or lulls or periods of diminished activity during the course of an eruption,” she said. “A pause of days or weeks is not completely out of the question.”

A U.S. Geological Survey (USGS) overflight crew on Monday morning observed a weakly to moderately bubbling lava pond within the cone at fissure 8, the opening that has been the primary source of lava flows since May. They also saw a weak gas plume and a completely crusted lava channel.

Later in the morning, crews on the ground confirmed that the upper section of the lava channel was empty of new lava, although small amounts of molten material continued to ooze out of the ground nearer to the coast. Those observations “are all consistent with something turning off the spigot to the surface,” Neal said.

At the same time, the pattern of seismic activity and subsidence at Kīlauea’s summit that has continued since May and has caused the summit crater to grow dramatically wider and deeper abruptly came to a standstill, she said.

Caught by Surprise

The sudden slowdown appeared to have caught scientists by surprise. Less than 3 weeks earlier, HVO had issued a report to Hawaii County Civil Defense officials warning that the eruption could take months or even years to wind down.

A view of Kīlauea’s summit on 6 August. The crater was quiet over the weekend, after the last significant seismic event on 2 August. Credit: USGS

The current eruption began on 3 May, when lava began gushing out of fissures on Kīlauea’s lower east flank. At the same time, volcanic activity ceased at Pu‘u ‘Ō‘ō, a vent around 20 kilometers to the west that had been erupting continuously since 1983. Meanwhile, at the mountain’s summit, Halema‘uma‘u Crater began to subside, eventually sinking more than 450 meters over 2 months.

In total, around 500 million cubic meters of lava are estimated to have erupted to date, according to geologist Janet Babb of HVO. The eruption has wiped three communities off the map, destroying more than 650 homes.

Scientists have several hypotheses as to what might have caused the slowdown, Neal told reporters at a hastily organized telephone briefing on Monday. There might be some kind of blockage in the volcano’s system that is preventing further draining, or there could be a slowdown in magma supply to the volcano from deeper within Earth. “We’re continuing to search for explanations as to what’s going on,” she said.

Kīlauea, however, can be fickle. “Don’t let the volcano fool you,” USGS spokeswoman Leslie Gordon added. “As soon as you think one thing, something else will happen.”

Signs of a Slowdown

Although they were not obvious at the time, Neal says scientists now recognize that the volcano may have been giving signs since mid-July that the eruption was going to change.

At the summit, scientists observed a gradual increase in the periods of rest between collapse events: explosions or rockfalls that were causing the crater to grow wider and deeper. Meanwhile, in the eruption zone on the volcano’s eastern flank, levels within the lava channel would sometimes drop below levee rims, and fountain heights at fissure 8 would decrease.

Scientists also noticed another puzzling change in the volcano’s behavior: an increase in gas emissions at Pu‘u ‘Ō‘ō. Although the old vent had been inactive since early May, a white plume has been rising from it since mid-July. On an overflight last week, they measured an emission rate of more than 1,000 metric tons per day of sulfur dioxide, the highest rate observed at the vent in several years. Emissions were down to 200 metric tons per day as of Tuesday.

Neal called it a “long shot” to think that eruption might resume at Pu‘u ‘Ō‘ō but said it was too soon to say. “We’ll be keeping an eye on that,” she said.

—Ilima Loomis (email: ilima@ilimaloomis.com; @iloomis), Freelance Science Writer

The post Kīlauea Eruption Abruptly Slows Down appeared first on Eos.

2018 Class of AGU Fellows Announced

Thu, 08/09/2018 - 12:24

Each year since 1962, AGU has elected as Fellows members whose visionary leadership and scientific excellence have fundamentally advanced research in their respective fields. This year, 62 members will make up the 2018 class of Fellows.

AGU Fellows are recognized for their scientific eminence in the Earth and space sciences. Their breadth of interests and the scope of their contributions are remarkable and often groundbreaking. Only 0.1% of AGU membership receives this recognition in any given year.

On behalf of the AGU Honors and Recognition Committee, the Union Fellows Committee, the section Fellows committees, and AGU leaders and staff, we are immensely proud to present the 2018 class of AGU Fellows.

We appreciate the efforts of everyone who provided support and commitment to AGU’s Honors Program. Our dedicated volunteers gave valuable time and energy as members of selection committees to elect this year’s Fellows. We also thank all the nominators and supporters who made this possible through their dedicated efforts to nominate and recognize their colleagues.

Honor and Celebrate Eminence at Fall Meeting

At this year’s Honors Tribute, to be held Wednesday, 12 December, at Fall Meeting 2018 in Washington, D.C., we will celebrate and honor the exceptional achievements, visionary leadership, talents, and dedication of 62 new AGU Fellows.

Please join us in congratulating our 2018 class of AGU Fellows, listed below in alphabetical order.

—Eric A. Davidson, President, AGU; and Mary Anne Holmes (email: unionfellows@agu.org), Chair, Honors and Recognition Committee, AGU


Jess F. Adkins, California Institute of Technology

Donald F. Argus, Jet Propulsion Laboratory, California Institute of Technology

Paul A. Baker, Duke University

Cecilia M. Bitz, University of Washington

Nina C. Buchmann, ETH Zurich

Marc W. Caffee, Purdue University

Gregory R. Carmichael, University of Iowa

Andrew Cohen, University of Arizona

Patrick M. Crill, Bolin Centre for Climate Research, Stockholm University

Thomas L. Delworth, Geophysical Fluid Dynamics Laboratory, National Oceanic and Atmospheric Administration

Donna Eberhart-Phillips, GNS Science and University of California, Davis

Kerry Emanuel, Massachusetts Institute of Technology

Andrew T. Fisher, University of California, Santa Cruz

Marilyn L. Fogel, University of California, Riverside

Hayley J. Fowler, University of Newcastle

S. Peter Gary, Los Alamos National Laboratory (Retired)

Steven J. Ghan, Pacific Northwest National Laboratory

Joris Gieskes, Scripps Institution of Oceanography, University of California, San Diego

Karl-Heinz Glassmeier, Technische Universität Braunschweig

Dorothy K. Hall, University of Maryland and Cryospheric Sciences Laboratory, NASA Goddard Space Flight Center

Charles Franklin Harvey, Massachusetts Institute of Technology

Sidney R. Hemming, Columbia University in the City of New York

Benjamin P. Horton, Nanyang Technological University

Bruce F. Houghton, University of Hawai‘i at Mānoa

Catherine Jeandel, Centre National de la Recherche Scientifique, Université de Toulouse

Tomoo Katsura, University of Bayreuth

Kimitaka Kawamura, Chubu University

Simon L. Klemperer, Stanford University

Cin-Ty Lee, Rice University

Jos Lelieveld, Max Planck Institute for Chemistry and the Cyprus Institute

Philippe Lognonné, Institut de Physique du Globe de Paris, Université Paris Diderot

Timothy William Lyons, University of California, Riverside

Trevor McDougall, University of New South Wales

Bruno Merz, GFZ German Research Centre for Geosciences and University of Potsdam

Stephen A. Montzka, Earth System Research Laboratory, National Oceanic and Atmospheric Administration

Rumi Nakamura, Space Research Institute, Austrian Academy of Sciences

Heidi Nepf, Massachusetts Institute of Technology

Victor P. Pasko, Pennsylvania State University

Adina Paytan, University of California, Santa Cruz

Christa D. Peters-Lidard, NASA Goddard Space Flight Center

Balaji Rajagopalan, University of Colorado Boulder

Cesar R. Ranero, Catalan Institution for Research and Advanced Studies

Geoffrey D. Reeves, Los Alamos National Laboratory

Josh Roering, University of Oregon

David B. Rowley, University of Chicago

Vincent J. M. Salters, Florida State University

Gavin A. Schmidt, NASA Goddard Institute for Space Studies

Richard Seager, Lamont-Doherty Earth Observatory of Columbia University

Nikolai Shapiro, Institut de Physique du Globe de Paris and Centre National de la Recherche Scientifique

Eli A. Silver, University of California, Santa Cruz

Mark Simons, California Institute of Technology

Bradley S. Singer, University of Wisconsin–Madison

Lee Slater, Rutgers University–Newark

David G. Tarboton, Utah State University

Doerthe Tetzlaff, Leibniz Institute of Freshwater Ecology and Inland Fisheries, Humboldt University, and University of Aberdeen

Friedhelm von Blanckenburg, GFZ German Research Centre for Geosciences

Christopher R. Webster, Jet Propulsion Laboratory, California Institute of Technology

Naohiro Yoshida, Tokyo Institute of Technology

Vladimir E. Zakharov, University of Arizona

Fuqing Zhang, Pennsylvania State University

Pei-Zhen Zhang, Sun Yat-sen University

Francis W. Zwiers, Pacific Climate Impacts Consortium, University of Victoria

The post 2018 Class of AGU Fellows Announced appeared first on Eos.

Why Space Weather Needs Ensemble Forecasting

Thu, 08/09/2018 - 12:22

Ensemble forecasting has long been used in terrestrial weather forecasting, but Murray [2018] introduces this methodology to the space weather community. The author explains the importance of combining many predictions to create an ensemble forecast, a technique best known in hurricane prediction models. The history of these methods in the terrestrial weather community is reviewed, and the case is made that space weather is now ready to broadly apply ensemble modeling to improve results over existing deterministic forecasts. This paper is timely because of the explosion of forecast and validation methods in the space weather community, which will soon turn to ensemble forecasts to drive next-generation tools.
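The core idea can be shown with a toy illustration (not any operational model): perturb a single deterministic prediction into many “members,” then combine them into a probabilistic forecast, here the fraction of members exceeding an event threshold. All numbers below are invented for the example.

```python
import random

def make_member_forecast(base, spread, rng):
    # One perturbed model run: the deterministic value plus random noise
    # standing in for uncertainty in initial conditions and model physics.
    return base + rng.gauss(0.0, spread)

def ensemble_forecast(base, spread, n_members, threshold, seed=0):
    """Combine many perturbed runs into an ensemble mean and a probability:
    the fraction of members exceeding an event threshold."""
    rng = random.Random(seed)
    members = [make_member_forecast(base, spread, rng) for _ in range(n_members)]
    mean = sum(members) / n_members
    prob = sum(m > threshold for m in members) / n_members
    return mean, prob

# Hypothetical example: a deterministic run predicts an activity index of 6.5
# with member-to-member spread 1.0; how likely is the index to exceed 7.0?
mean, prob = ensemble_forecast(base=6.5, spread=1.0, n_members=100, threshold=7.0)
print(f"ensemble mean: {mean:.2f}, P(index > 7.0): {prob:.0%}")
```

The payoff over a single deterministic run is the probability itself: a forecaster can say how likely a storm-level event is, not just whether one run predicts it.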

Citation: Murray, S. A. [2018]. The importance of ensemble techniques for operational space weather forecasting. Space Weather, 16. https://doi.org/10.1029/2018SW001861

—Daniel T. Welling, Editor, Space Weather

The post Why Space Weather Needs Ensemble Forecasting appeared first on Eos.

Training Early-Career Polar Weather and Climate Researchers

Wed, 08/08/2018 - 11:43

Weather and climate are changing faster in the polar regions than anywhere else on Earth. These changes are opening up new opportunities for shipping, energy extraction, and tourism, but they also expose these sensitive regions to increasing environmental hazards and pose major challenges to local communities. Limitations in our ability to predict polar weather and climate changes on scales from days to decades hamper our ability to make effective decisions regarding responses to these changes. Furthermore, our understanding of how changes in the polar regions may affect the midlatitudes, including high-impact extreme events, is far from complete.

The polar prediction problem is inherently multidisciplinary and requires cooperation across a wide community. Thus, an international group of agencies specifically designed a 10-day training course to bring together a wide group of students and lecturers to cover important topics related to polar prediction. Topics included satellite and conventional observation techniques; numerical modeling of the polar atmosphere, sea ice, and ocean; and data assimilation and model evaluation. The course included an innovative combination of theory lectures, practical exercises, and fieldwork, as well as a dedicated science communication program, each of which forms a crucial pillar of the prediction problem.

Micrometeorological observations and daily radio soundings provided hands-on training opportunities, and these data were directly used in the practical exercises. This experience allowed the students to investigate the topics discussed in the theoretical lectures more thoroughly. The data were also used in the daily weather briefings: exercises that encouraged the students to interpret and evaluate forecast models specifically for the context of a mountainous polar area. During the science communication sessions, which complemented the scientific program, the students produced brief, informative videos aimed at the general public.

In contrast to single-discipline courses designed to address a narrow topic, a diverse course such as this is unusual. However, this approach is necessary to help build and maintain the community needed to address the inherently multidisciplinary polar prediction problem. Student feedback showed that the school was well appreciated, and we propose this model for other disciplines where cross-disciplinary links are crucial to progress.

The training school was organized under the auspices of the European Union Horizon 2020–funded Advanced Prediction in Polar Regions and Beyond (APPLICATE) project in cooperation with the Association of Polar Early Career Scientists (APECS) and the World Meteorological Organization’s Polar Prediction Project on the occasion of the Year of Polar Prediction. Further sponsorship was provided by the Climate and Cryosphere project, International Arctic Science Committee, and Scientific Committee on Antarctic Research. More information about the school can be found on the APECS website.

We thank all the school’s lecturers—Ian Brooks, Matthieu Chevallier, Anna Fitch, Martin Hagman, Anna Hogg, Thomas Jung, Erik Kolstad, Linus Magnusson, Donald Perovich, Jessica Rohde, and Doug Smith—as well as the excellent team at the Abisko Scientific Research Station.

—Fiona Tummon (email: fiona.s.tummon@uit.no), Arctic University of Norway, Tromsø; Jonathan Day (@jonny_day), European Centre for Medium-Range Weather Forecasts, Reading, U.K.; and Gunilla Svensson, Stockholm University, Sweden

The post Training Early-Career Polar Weather and Climate Researchers appeared first on Eos.

Hunting for Landslides from Cascadia’s Great Earthquakes

Wed, 08/08/2018 - 11:40

It’s been more than 300 years since an earthquake with a magnitude greater than 8 has shaken the U.S. Pacific Northwest, but that earthquake was an impressive one. On 26 January 1700, an earthquake with an estimated magnitude of 9 caused the coastline to drop by several feet and a tsunami to inundate the shore.

In the centuries since then, the Juan de Fuca plate has continued to push against the North American plate as it heads downward toward Earth’s mantle, building stress along the Cascadia subduction zone, which extends from northern Vancouver Island in Canada to northern California. As a result, some scientists estimate as much as a 22% chance of a megathrust earthquake of magnitude 9.0 or greater within the next 50 years [Goldfinger et al., 2017].

The specter of megathrust earthquakes along the Cascadia subduction zone has sparked public interest and prompted widespread hazard preparedness and resilience activities, but nailing down a specific time frame has proven notoriously difficult. One way to overcome such difficulties is to search for evidence of past landslides and compare them with the earthquake record in efforts to tease out the pattern and severity of ground shaking.

Although coastal and offshore geologic evidence provides a long record of regularly occurring earthquakes, data showing whether nearby landslides occurred during or after a given earthquake (or independently of the earthquake) are limited and incomplete. These uncertainties hinder predictions of how much damage future earthquakes are likely to cause.

The Cascadia Earthquake Landslide Working Group, an informal coalition of some 20 scientists with diverse disciplinary backgrounds, recently identified promising means—dating tree rings in forests drowned in landslide-dammed lakes—to establish a regional landslide chronology that we can compare with data sets from past earthquakes. The tree ring work represents one project within the working group, which focuses on a general understanding of landslides in the Cascadia region.

The Past as Uncertain Prologue

Although geoscientists have used seafloor and coastal sedimentary archives to produce increasingly refined versions of the Holocene chronology of Cascadia subduction earthquakes [Goldfinger et al., 2012; Witter et al., 2003], they have a poor understanding of what kinds of ground motion and landscape response to expect during future events. Evidence of how Cascadia landscapes fared during past great earthquakes is limited and difficult to acquire, and few data sets exist documenting landslides that have occurred during deep-sourced subduction earthquakes (coseismic response landslides).

Relatively shallow earthquakes along crustal faults produce landslides in a predictable fashion: The larger the event is, the farther its effects tend to radiate into the surrounding terrain. As a result, the cumulative volume of landslide material increases with increasing seismic moment [e.g., Keefer, 1994].

The more powerful 1700 earthquake originated at a much deeper level as the result of one tectonic plate plunging beneath another. Few data inventories exist to document landslides that occur during this type of earthquake (a subduction zone megathrust earthquake), and the few data sets that do exist indicate significant variability in hazard potential.

For example, the 2011 M9.1 Tohoku earthquake that occurred off Japan produced approximately 18 million cubic meters of landslide debris [Wartman et al., 2013]. However, Keefer’s [1994] power law relationship associates this volume of debris with a crustal M6.7 earthquake. The offshore source of this earthquake might explain this dampening of slide activity.

However, this dampening effect was not observed for the M9.5 1960 Chilean earthquake, which caused an estimated 250 square kilometers of ground to fail as landslides [Veblen and Ashton, 1978]. When scaled for volume using the area-volume relationship of Larsen et al. [2010], the Chilean earthquake possibly produced thousands of times more material than the 2011 Tohoku event.
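The kind of scaling comparison made above can be sketched as follows. The power-law coefficients below are illustrative placeholders, not the published fits of Keefer [1994] or Larsen et al. [2010]:

```python
def volume_from_magnitude(magnitude, a=1.45, b=-2.34):
    """Keefer-style scaling: total coseismic landslide volume (m^3) grows as
    a power law of magnitude, log10(V) = a*M + b. Coefficients here are
    illustrative placeholders, not the published regression."""
    return 10 ** (a * magnitude + b)

def volume_from_area(area_m2, alpha=0.05, gamma=1.3):
    """Larsen-style area-volume scaling, V = alpha * A**gamma (placeholder
    coefficients): mapped landslide area implies a debris volume."""
    return alpha * area_m2 ** gamma

# With these placeholder coefficients, the predicted volume for an M9.0
# event exceeds that for an M6.7 crustal event by 10**(1.45 * 2.3), i.e.,
# a factor of more than 2,000 -- which is why Tohoku's landslide response
# looking like that of an M6.7 crustal quake is such a striking anomaly.
v_m67 = volume_from_magnitude(6.7)
v_m90 = volume_from_magnitude(9.0)
```

Any single observed event, like Tohoku or the 1960 Chilean earthquake, can sit far off such a curve, which is exactly the variability the working group wants to pin down for Cascadia.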

At present, there are no rigorous scientific comparisons of these two events. However, observations of large valley-blocking landslides within glacial sedimentary units in southern Chile [Davis and Karzulovíc K, 1963] suggest that regional lithology (rock characteristics) is an important factor controlling the potential for landslides during an earthquake.

The high variability of documented landslide volumes for past M9 events highlights the need to better understand subduction zone coseismic landslide hazards in the Cascadia subduction zone.

The 1700 Earthquake

To understand how the Cascadia region might respond to quakes in the future, we look to the most recent great earthquake in the region, specifically, the M9 event from 26 January 1700. Although steep, landslide-prone terrain abounds in Cascadia and previously published studies propose tantalizing evidence for earthquake-coupled slope instability [e.g., Schulz et al., 2012], the 1700 earthquake event has not been definitively linked with landslide activity.

Another reason to collect land-based coseismic evidence is to test the hypothesis that megathrust events occur more frequently along the southern segment of the subduction zone [Goldfinger et al., 2012]. If the age of landslide deposits corresponds with Cascadia earthquakes, we can use the pattern and distribution of slope instability to test and calibrate models for megathrust ground motion.

Landslide History Snapshots in Drowned Forests

To re-create the landslide history of the Cascadia region, the Cascadia Earthquake Landslide Working Group used dendrochronology (tree ring dating) to show, down to the year, when these events occurred. In fact, the so-called ghost forests of drowned trees in estuaries along the Oregon and Washington coasts provided key evidence for characterizing the precise timing of the 1700 earthquake [Atwater et al., 1991].

The method works like this: Landslides along the steep slopes of the Oregon Coast Range can block stream channels and create small lakes that drown valley forests. The youngest rings on those trees correspond to the year the trees died, presumably from the drowning. Tree ring widths are strongly correlated with regional climate, and each tree’s unique pattern of ring widths can be compared to a regional chronology established from living trees older than 400 years. Thus, the tree ring signatures for each landslide-dammed lake can provide an estimate of when the landslide occurred.
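The matching step described above can be sketched as a sliding correlation: an undated (“floating”) ring-width series from a drowned tree is compared against the dated master chronology at every possible offset, and the best-correlating position fixes the calendar year of the outermost ring, the presumed death year. This is a minimal illustration with synthetic data, not the statistical toolkit dendrochronologists actually use.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cross_date(master, master_start_year, sample):
    """Slide an undated ring-width series along a dated master chronology;
    the offset with the highest correlation dates the sample's outermost
    ring (the presumed death year of the tree)."""
    best_r, best_offset = -2.0, 0
    for offset in range(len(master) - len(sample) + 1):
        r = pearson(master[offset:offset + len(sample)], sample)
        if r > best_r:
            best_r, best_offset = r, offset
    return master_start_year + best_offset + len(sample) - 1

# Synthetic demo: a 300-year master chronology starting in 1500, and a
# 60-ring sample cut from trees that "died" in 1700.
rng = random.Random(42)
master = [rng.uniform(0.2, 2.0) for _ in range(300)]
sample = master[141:201]  # ring widths for the years 1641-1700
death_year = cross_date(master, 1500, sample)
```

Real cross-dating works the same way in outline, but the regional chronology must extend back past 1700 and the match is judged statistically rather than by a single best correlation.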

Sediment layers in fore-arc lake basins also record smaller landslides and could preserve a longer record of events [e.g., Morey et al., 2013], albeit with greater age uncertainty. Thanks to their annual resolution, however, tree ring records from drowned trees date landsliding events to the year. So if tree ring evidence points to several trees drowning in 1700 near a particular landslide dam, this implies an earthquake origin for the slope failure.

In other words, using drowned trees to link specific landslides to the 1700 earthquake helps to paint a better picture of what exactly happened on the day Earth shook centuries ago. This picture can help scientists better understand the scope of hazards the Northwest faces from such great earthquakes.

Recently, to support one facet of our working group, the U.S. Geological Survey’s National Earthquake Hazards Reduction Program awarded collaborative grants to the University of Oregon, the University of Texas, and the Oregon Department of Geology and Mineral Industries (DOGAMI) to investigate landslide ages from the tree ring record of landslide-dammed lakes (Figure 1). Subsequent mapping and investigation by the team revealed the potential for dendrochronology to establish highly accurate landslide ages based on tree ring chronology [Black et al., 2015].

Fig. 1. (left) This slab sample from a drowned tree (visible in the background) in Klickitat Lake in Oregon provides a record of the years before a landslide dam formed this lake, inundating the forest. Slabs with a sufficient number of tree rings can be correlated to regional tree ring records. (right) This correlation is evident from the thinner rings beginning in 1739, shown in this close-up view of tree rings within the slab. Thinner rings document a profound growth slowdown, characteristic of Douglas fir trees throughout Cascadia. The outermost increment that formed immediately under the bark shows that this tree died in 1751. Similar records from other drowned trees could be linked to landslides triggered by the great (~M9) earthquake that shook the region in 1700. Credit: Will Struble

Preliminary results for two landslide-dammed lakes in western Oregon show lake formation dates of 1819 and 1751 [Struble et al., 2017] (Figure 2). Although these initial findings do not reveal a causal link with the 1700 event, the method holds great promise for establishing formation ages for the more than 200 landslide deposits that impound streams and entomb valley forests in western Cascadia.

Fig. 2. Lidar bare-earth data identify the origin of various landscape features in this map of the landslide-dammed Klickitat Lake in Lincoln County, Ore. The lake formed when a landslide deposit (pink) dammed a river. The image also shows the current landslide-dammed lake (blue); the sediment backfill from impounded rivers flowing to the southeast (brown); the lake high-water level, showing the extent of the lake immediately after the landslide (blue dashed line); and the head scarp of the landslide (yellow). Precise age dating of youthful landslides can help determine the landslide hazard posed by Cascadia megathrust earthquakes. Lidar base data are courtesy of DOGAMI.

Mapping Landslides Using Topography Data

The working group has identified light detection and ranging (lidar) topographic data as a key asset for landslide mapping in the heavily forested Cascadia terrain [Burns and Madin, 2009]. DOGAMI and the Puget Sound Lidar Consortium have been instrumental in facilitating and serving lidar data (Figure 2), and as a result, Oregon and Washington geoscientists have added thousands of landslides to publicly available landslide inventories.

Coupling quantitative analysis of landslide roughness with carbon dating of organic material from landslide deposits demonstrates that landslide ages can be estimated through morphologic analysis of surface roughness [Lahusen et al., 2016; Booth et al., 2017]. With these techniques, combined with dendrochronologic studies of drowned trees, we may be able to observe the link between landslide ages and Cascadia megathrust events.

Reading the Historical Record

Through these coordinated efforts, the working group hopes to shed light on the landslide hazards of great Cascadia earthquakes. To spur broader awareness and involvement of the Earth science community, the working group coordinated a technical session at the 2017 Geological Society of America Annual Meeting addressing subduction zone coseismic landslides.

Ultimately, merging regional landslide chronology data sets with new data and ground motion simulations for megathrust earthquakes, such as those produced by the U.S. National Science Foundation–funded M9 Project, should improve our understanding of the connection between subduction zone earthquakes and the response of the terrestrial landscape.


Funding for working group efforts was provided by the U.S. Geological Survey Cascadia Recurrence Project. Thoughtful reviews by Karl Wegmann, Sean Gallen, and Scott E. K. Bennett and an anonymous reviewer greatly improved this article.

The post Hunting for Landslides from Cascadia’s Great Earthquakes appeared first on Eos.

Dinosaur-Killing Asteroid Impact Made Huge Dead Zones in Oceans

Wed, 08/08/2018 - 11:38

About 66 million years ago, an asteroid roughly 10 kilometers wide hit Earth in what is today the Gulf of Mexico. It brought annihilation: All the dinosaurs except for the birds went extinct; forests around the planet vanished temporarily, killing off all bird species that lived in trees; dust and other aerosols blocked the Sun, and global temperatures took a nosedive. The world plunged into a state analogous to nuclear winter.

Another fallout effect of the impact, according to new work, was a depletion of oxygen in the oceans triggered by rapid global warming following the impact and nuclear winter. Such anoxia, the researchers behind the work report, devastated marine life. What’s more, this episode of anoxia may have parallels to the rapid global warming and resulting ocean anoxia being wrought by human-driven climate change today.

“The global warming following the impact is one of the most rapid warmings in Earth’s history,” said Johan Vellekoop, a geologist at KU Leuven in Belgium who led the new research. “It’s on a human timescale.” He noted that the postimpact warming happened over the course of only a few hundred to a few thousand years.

By comparison, humans have been injecting carbon dioxide (CO2) into the atmosphere—and driving global warming in the process—for about 200 years, since the start of the Industrial Revolution. Vellekoop and his team note in a paper published in Geology earlier this summer that as today’s warming continues unabated, the oceans appear poised to become anoxic once again.

An Unlucky Day

When the asteroid struck, it hit a platform of carbonate rock that was about 3 kilometers thick, creating a feature called the Chicxulub crater. Carbonate rock, when vaporized by something like a giant asteroid, releases CO2 into the atmosphere; previous work estimates that the amount of CO2 injected into the air after the impact equaled about 1,400 gigatons. By comparison, humans injected roughly 32.5 gigatons of CO2 into the atmosphere in 2017 alone.

After the impact, the event akin to nuclear winter descended, which saw global average temperatures fall by about 25°C, explained Timothy Bralower, a marine geologist at Pennsylvania State University who was not involved in the new research. The winter probably lasted for only a few years, and then as the dust cleared, global warming got going, and sea surface temperatures climbed by 1.5°C to 2°C.

And Then There Was Anoxia

For their study, Vellekoop and his team visited three Northern Hemisphere sites, in Texas, Denmark, and Spain, and took samples of marine rocks from strata directly above the asteroid impact horizon. They tested the rocks for concentrations of the element molybdenum. “We found enrichments of molybdenum, which are indicators of low oxygen conditions,” said Vellekoop.

When oxygen is plentiful, it binds to molybdenum and removes it from seawater, he explained. When oxygen is scant, molybdenum sticks around in the seawater, where it can then incorporate into rocks like the ones the team tested. In rocks from Denmark, for instance, molybdenum concentrations jump from 1 or 2 parts per million to “up to 100 parts per million in the layer directly above the impact,” Vellekoop said.
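The enrichment those numbers imply is easy to quantify (a quick illustrative calculation from the figures above, not part of the study’s methods):

```python
# Molybdenum enrichment factor in the Danish samples, from the figures above.
background_ppm_low, background_ppm_high = 1.0, 2.0  # preimpact concentrations (ppm)
peak_ppm = 100.0                                    # layer directly above the impact

min_enrichment = peak_ppm / background_ppm_high
max_enrichment = peak_ppm / background_ppm_low
print(f"Molybdenum enriched roughly {min_enrichment:.0f}x to {max_enrichment:.0f}x")
```

A 50- to 100-fold jump in a redox-sensitive element is a strong signal that oxygen largely vanished from those waters after the impact.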

This increase fits with ancient shallow oceans being relatively warm places, Bralower explained, because warmer ocean waters have a harder time retaining dissolved oxygen than colder waters. In such a world, oxygen “dead zones,” where oxygen is scant or completely absent, can spread.

And dead zones, Vellekoop explained, would have helped devastate marine ecosystems during the postimpact warming, especially shallow-water seafloors where creatures like corals and bivalves dwelled. Such coastal waters, he added, “are the places with the highest diversity.”

A Mirror in the Past

In the course of geologic events, many things tend to happen on relatively long timescales. But an asteroid impact is an exception. It is, almost by definition, instantaneous.

Such rapidity, explained Ellen Thomas, a paleoceanographer at Yale University who was not involved in the work, makes this ancient warming analogous to today’s human-driven global warming, which is nearly as quick as the asteroid-related warming. “Although the causes are slightly different, and the timescale may be different, the basic principles are the same,” she said.

Life can adapt in the face of such change, Thomas added, but how much it is able to do so depends on the rate of change. Basically, “the slower the change, the better it is for life,” she said.

Today, because some areas of the oceans are already showing signs of oxygen depletion, the threat is not as much from off planet as it is from within, Thomas explained. Thus, if temperatures continue to climb in the coming century, humanity seems poised to become its own kind of asteroid.

—Lucas Joel (email: lucasvjoel@gmail.com), Freelance Writer

The post Dinosaur-Killing Asteroid Impact Made Huge Dead Zones in Oceans appeared first on Eos.

Are Diamonds Ubiquitous Beneath Old Stable Continents?

Tue, 08/07/2018 - 11:55

High seismic velocities beneath cratons, the ancient cores of continents, indicate an anomalous, depth-dependent composition of their lithosphere. What this composition is has been the subject of a long-standing controversy. Garber et al. [2018] compare profiles of seismic velocities from recent tomographic models with those computed for different lithospheric compositions and infer that the data can be explained by a ubiquitous presence of diamond and eclogite beneath cratons. They show that compositions with up to 2 percent diamond and up to 20 percent eclogite can reproduce the high seismic velocities observed at depths of 120 to 150 kilometers. Such compositions are also consistent with various other geophysical observables, as well as data from natural samples and carbon mass balance constraints. Valued for their rarity at the Earth’s surface, diamonds may thus be plentiful at depth—surprisingly common building blocks of the deep lithosphere of cratons.

Citation: Garber, J. M., Maurya, S., Hernandez, J.‐A., Duncan, M. S., Zeng, L., Zhang, H. L., et al. [2018]. Multidisciplinary constraints on the abundance of diamond and eclogite in the cratonic lithosphere. Geochemistry, Geophysics, Geosystems, 19. https://doi.org/10.1029/2018GC007534

—Sergei Lebedev, Associate Editor, Geochemistry, Geophysics, Geosystems

The post Are Diamonds Ubiquitous Beneath Old Stable Continents? appeared first on Eos.

Improving Air Quality Could Prevent Thousands of Deaths in India

Tue, 08/07/2018 - 11:53

More than 6.1 million people worldwide die each year as a result of exposure to air pollution, which increases the risk of cardiovascular disease, lung disease, and cancer. In India, which contains many of the world’s most polluted cities, the annual death toll from air pollution exceeds 1.6 million. Now, a new study shows how implementing stricter emissions standards in India could save hundreds of thousands of lives each year.

One of the most dangerous components of air pollution is fine particulate matter (PM2.5), microscopic particles and droplets produced by burning fuels, which travel deep into the lungs and bloodstream and damage the lungs and heart. On average, Indian citizens are exposed to PM2.5 concentrations between 15 and 32 times the air quality guidelines set forth by the World Health Organization (WHO), and scientists project that India’s PM2.5 levels will double by 2050 relative to 2015. In New Delhi, one of the world’s most polluted megacities, PM2.5 concentrations have reached more than 1,200 micrograms per cubic meter, 48 times the WHO guideline.
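The figures above imply the guideline value itself; a quick illustrative calculation (not from the study, and assuming the same guideline value underlies both reported multiples) recovers it:

```python
# Recovering the implied WHO PM2.5 guideline from the figures quoted above.
new_delhi_peak = 1200.0       # micrograms per cubic meter, peak reported
multiple_of_guideline = 48.0  # reported multiple of the WHO guideline

implied_guideline = new_delhi_peak / multiple_of_guideline
print(f"Implied guideline: {implied_guideline:.0f} micrograms per cubic meter")

# Typical exposure range for Indian citizens, 15 to 32 times the guideline:
low, high = 15 * implied_guideline, 32 * implied_guideline
print(f"Typical exposure: {low:.0f} to {high:.0f} micrograms per cubic meter")
```

That works out to a guideline of 25 micrograms per cubic meter, putting typical exposures at roughly 375 to 800 micrograms per cubic meter.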

Haze over the Indo-Gangetic Plain in January 2016. Credit: Jeff Schmaltz, LANCE/EOSDIS Rapid Response

The Indian government has policies in place to reduce the rapid rise of pollution, such as curbing emissions from buses and trucks and expanding the household use of liquefied petroleum gas to replace solid fuels. In their new study, Conibear et al. decided to compare India’s existing and planned policies to a more aggressive plan to reduce emissions. The team used a high-resolution computer model to estimate the pollution levels people breathe at ground level throughout India and test how different emissions policies would affect their exposure and health.

Under India’s existing and planned policies, dubbed the New Policy Scenario, the rate of growth in Indian citizens’ exposure to pollution decreases by 9%, the team found. Compared to the present day, that plan of action would avert about 61,000 premature deaths in 2040, they calculated. A more aggressive plan, called the Clean Air Scenario, would decrease the rate of growth in air pollution by about 65% and avert around 610,000 deaths, they found.
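Put side by side, using the study’s figures as reported above, the more aggressive scenario’s benefit scales by an order of magnitude:

```python
# Comparing the two modeled policy scenarios' projected benefits for 2040,
# using the figures reported in the article.
scenarios = {
    "New Policy Scenario": {"growth_cut": 0.09, "deaths_averted": 61_000},
    "Clean Air Scenario": {"growth_cut": 0.65, "deaths_averted": 610_000},
}

ratio = (scenarios["Clean Air Scenario"]["deaths_averted"]
         / scenarios["New Policy Scenario"]["deaths_averted"])
print(f"The Clean Air Scenario averts {ratio:.0f}x as many premature deaths")
```

Roughly a sevenfold tightening of the emissions trajectory yields a tenfold gain in lives saved.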

Even with zero emissions growth, India’s rapidly growing and aging population means that the rates of disease and premature mortality caused by air pollution will increase by 75% from 2015 to 2040. Despite that grim statistic, the team argues, hundreds of thousands of deaths could be avoided through tighter emissions standards—like cleaner iron and steel manufacturing—and universal access to clean household energy. (GeoHealth, https://doi.org/10.1029/2018GH000139, 2018)

—Emily Underwood, Freelance Writer

The post Improving Air Quality Could Prevent Thousands of Deaths in India appeared first on Eos.
