Feed aggregator

Ashley Lindalía Walker: Leading a Celebration of Black Scientists

EOS - Tue, 08/24/2021 - 14:57

Ashley Lindalía Walker’s academic interests started close to home and have traveled millions of kilometers. “I’m a nontraditional student,” said the Chicago native. “I started off in community college” in 2015, she said, studying forensic chemistry. “I really wanted to help understand what was happening and solve some of Chicago’s crime and things of that nature. But [then] I received a scholarship at my now alma mater, Chicago State University (CSU), the only Black 4-year college in Illinois.”

Once at CSU, Walker took an opportunity to participate in an astronomy research project, which quickly became her primary research focus. While still studying forensic chemistry, she interned at Harvard University researching planet-forming disks and at Johns Hopkins University studying aerosol hazes in the atmosphere of Saturn’s moon Titan.

Many community college students earn their associate’s degree within 2 years and then complete their bachelor’s degree in another 2 years, she explained. “However, it took me a little bit longer.” After graduating from CSU in 2020 as the first astrochemist in the university’s history, she became a postbaccalaureate researcher at NASA Goddard Space Flight Center investigating Titan’s atmospheric chemistry.

For the past few years, Walker has also focused on science communication, especially highlighting the experiences and amplifying the voices of Black scientists. As calls for racial justice increased during 2020, Walker founded and organized the first Black In Astro week on social media and co-organized Black in X events in other disciplines.

“I wanted to show some of the issues that we face as Black astronomers, aerospace engineers, space policy people, and so on,” Walker said. “I really wanted to show last year: This is what’s happened to us. There’s not many of us in the field or within these spaces. How can you all make us feel better? How can you all make us feel comfortable? This is what we see through our eyes and through our lens. This year we’re focusing on a celebration versus trauma. Now that we told our story and people know our story, we want to focus on celebrating us and how can we retain us and continuously recruit more of us.”

She plans to continue her work as a science communicator as she pursues a doctoral degree in atmospheric science at Howard University starting this fall. Walker (@That_Astro_Chic) encourages everyone to join in celebrating and amplifying Black experiences in space-related fields with Black In Astro (@BlackInAstro) during the annual #BlackInAstro events on social media and all year round.

This profile is part of a special series in our September 2021 issue on science careers.

—Kimberly M. S. Cartier (@AstroKimCartier), Staff Writer

Navakanesh M Batmanathan: Customizing Hazard Outreach

EOS - Tue, 08/24/2021 - 14:56

In June 2015, a magnitude 6.0 earthquake struck Sabah, a state of Malaysia in the northern part of the island of Borneo.

“It was a big surprise in Malaysia because, actually, we never experienced a magnitude 6 [earthquake] in that region,” said Navakanesh M Batmanathan. The seismic event was located away from active plate boundaries. M Batmanathan was in the perfect position to investigate what happened. He’d been fascinated with rocks ever since he was a child and, at the time, was pursuing a master’s degree in geophysics and seismology at Curtin University in Malaysia. His adviser encouraged him to focus on the Sabah earthquake, given its surprising nature.

M Batmanathan mapped faults that contributed to the quake by using a combination of satellite data and on-the-ground field measurements. These methods also allowed him the opportunity to engage with residents in Sabah. He learned not only about how the event affected people living in the area but also that there was a lack of awareness about earthquake hazards in the region.

As the recipient of a National Geographic Young Explorer grant, M Batmanathan helped to produce a short documentary that combined locals’ stories with educational information about the Sabah earthquake. He and his colleagues also taught schoolchildren in Sabah.

M Batmanathan is now a Ph.D. student at the National University of Malaysia and continues to study earthquakes—but from a slightly different perspective. He’s exploring potential connections between tectonics and sea level rise, not only in East Malaysia, where Borneo is located, but in peninsular Malaysia as well. He’s also a research assistant at the Southeast Asia Disaster Prevention Research Initiative.

In addition to fieldwork, Navakanesh M Batmanathan and his colleagues educated children in Sabah. Here M Batmanathan (center) is pictured at a school visit. Credit: Eric Chiang Hinn Yuen

M Batmanathan has done outreach on a range of science topics and attributes at least some of his success to tailoring content to different audiences, depending on their immediate concerns: He spoke about earthquakes with kids from Sabah, for instance, but focused on climate change and sea level rise with children from coastal communities in peninsular Malaysia. Determining the emphasis of community-focused outreach “depends on the region,” he said.

In the future, M Batmanathan hopes to continue educating people in Malaysia and, someday, Southeast Asia more broadly. The region is one of the most geologically active in the world—in addition to earthquakes, it is home to volcanoes, tsunamis, and landslides. “Southeast Asia is huge—we definitely need more groups to work on this,” he said.

M Batmanathan recently presented his work on the earthquake geology of Borneo at a webinar from the U-INSPIRE Alliance, an alliance of youth, young scientists, and young professionals working in science, engineering, technology, and innovation to support disaster risk reduction and resilience building, in line with the U.N. Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction. He also regularly posts about earthquakes on Instagram (@navakanesh).

This profile is part of a special series in our September 2021 issue on science careers.

—Jack Lee, Science Writer

Robin George Andrews: “The New York Times Volcano Guy”

EOS - Tue, 08/24/2021 - 14:56

Science journalist Robin George Andrews remembers first seeing a volcano—albeit an imaginary one—in The Legend of Zelda: Ocarina of Time. The ominous Death Mountain had sentient lava and monsters prowling its hollowed-out insides.

“It was all very fantastical,” Andrews said, and as a 10-year-old growing up in the United Kingdom, Death Mountain set him on a quest to visit real-life volcanoes around the world as a scientist. Now Andrews writes about volcanoes for National Geographic, the New York Times, the Atlantic, and other publications.

Andrews started out on a typical academic track. After earning strong grades in high school and encouragement from geography teachers, he attended a special program at Imperial College London that allows students entering from secondary school to earn a master's degree in geology in just 4 years, bypassing a separate bachelor's degree.

His next stop, after a year of rest, took Andrews to the volcano-ridden islands of New Zealand, where he studied at the University of Otago for his Ph.D. and helped create laboratory experiments that modeled volcanic eruptions. Although his work took him around the world, living in a quiet university town left the highly extroverted Andrews feeling isolated. Academia was losing its sheen too: Chasing funding frustrated him, and he didn’t like the prospect of leaving friends and family every few years for new posts.

Andrews hikes on a trip funded by the European Space Agency to take writers to see the northern lights in Norway. The photo “belies the fact that I recently fell right into a snowbank.” Credit: Robin George Andrews

Andrews started writing blog posts for Nature’s Scitable, the Earth Touch News Network, Discover, and Forbes—and found he actually preferred telling stories of science to doing it.

Andrews didn’t know any journalists, let alone science journalists, but after he graduated, he emailed an editor he admired at Gizmodo and scored a gig writing articles for her. From there, he cold-pitched National Geographic, Scientific American, and the New York Times, landing articles in publications he thought would take years to break into. “I put so much work into getting to that point,” Andrews said, bypassing sleep for almost a year and a half to build his reputation as “the New York Times volcano guy.” His own stubbornness and the emotional support from his partner and parents helped him make the jump from science to journalism.

Now Andrews freelances full-time and writes for a dozen publications. “Most science journalists have a beat of some sort, and my beat is generally things that explode in space or on Earth.” His first book, Super Volcanoes: What They Reveal About Earth and the Worlds Beyond, comes out this November. In it, readers can learn about strange volcanoes across the cosmos, like Tharsis on Mars, which tipped the entire planet 20 degrees. “To me, it sounds like magic sorcery”—or perhaps something out of a video game, he said.

Andrews welcomes messages from those interested in science writing through his website (robingeorgeandrews.com) or on Twitter (@SquigglyVolcano).

This profile is part of a special series in our September 2021 issue on science careers.

—Jenessa Duncombe (@jrdscience), Staff Writer

Darcy L. Peter: Harnessing Alaska’s Native Knowledge

EOS - Tue, 08/24/2021 - 14:54

A Gwich’in scientist from Beaver, Alaska, Darcy L. Peter spent her childhood on the land around her Alaska Native village hunting, fishing, and trapping. Now she studies the Arctic tundra with the Woodwell Climate Research Center, where she investigates climate change while building bridges between Indigenous communities and research scientists.

Peter majored in environmental biology at Fort Lewis College in Durango, Colo. After graduating, she returned to Alaska for a 1-month Arctic research program with the Polaris Project.

That summer shaped Peter’s career in two pivotal ways: One, she fell in love with field research, and two, she found a workplace that valued her ideas. She pointed out an opportunity for Polaris to connect with the local people near their field site, so the project leaders invited her back the next two summers. It was “really cool as an Indigenous young career person to have my voice be valued and heard.”

For the next 2 years, Peter took jobs at several Alaska Native nonprofits working with Alaska Native communities on issues such as permafrost erosion and contamination cleanup. She began volunteering on half a dozen boards that control the state's fishing and river regulations to increase Indigenous participation.

Peter started graduate studies in wildlife biology at the University of Alaska Fairbanks 2 years after graduating college but quickly felt stifled. She had to pass up job offers, like an executive director position at an Alaska Native organization, and was already familiar with the communities she was now reading about in papers. She left after 3 months and soon landed her “dream job” back where she’d started her career: the Polaris Project at Woodwell Climate.

Darcy L. Peter, left, crouches on the tundra to study the effects of Arctic climate change in southwestern Alaska. Peter credits her boss at the Polaris Project, Sue Natali, right, for being a steadfast ally to her. “She has the same values as me in terms of family, diversity, inclusion, communication, [and] ethical research.” Credit: Chris Linder

“I am trained to be a scientist, but the cutting-edge science that is being done is not what I’m most proud of. It’s definitely the relationship building, making sure that the science is communicated, making sure the science is ethical, making sure that we’re incorporating Traditional Knowledge into our science…that is the most rewarding work.”

White-dominated spaces, including academia and workplaces, can be taxing on the mental health of scientists of color, particularly women of color, Peter said. But at Polaris, Peter is encouraged to think creatively, and when she presents an idea, her supervisor often tells her to “run with it.” Case in point: Peter wrote a guide for equitable research in the Arctic. “That’s something I’m pretty proud of because it’s gotten a lot of traction in the science world,” she said.

Peter encourages people to follow her on Twitter (@darcypeter1) and familiarize themselves with the Woodwell Climate Research Center’s guiding principles for working in local northern communities.

This profile is part of a special series in our September 2021 issue on science careers.

—Jenessa Duncombe (@jrdscience), Staff Writer

Fushcia-Ann Hoover: The Business of Environmental Justice

EOS - Tue, 08/24/2021 - 14:38

By her bedside, Fushcia-Ann Hoover still has the iron table lamp she welded as a 13-year-old in her junior high industrial technology class. “I like working with my hands,” Hoover said.

Hoover, now an assistant professor of environmental planning at the University of North Carolina at Charlotte, says her passion as a “maker” drew her not only to engineering but also to business. In 2020, she founded EcoGreenQueen LLC to help people better integrate environmental justice into their work.

Hoover began her bachelor’s degree in mechanical engineering at the University of St. Thomas in Minnesota in 2005. During a summer abroad course in Germany studying renewable energies and her research on ethynyl as a McNair Scholar, she thought she’d found her calling. But when she graduated from college at the height of the Great Recession, the environmental engineering jobs she wanted required either a master’s degree or substantial work experience.

Instead, she took a job as a tutor in her hometown at Saint Paul Public Schools while researching graduate schools. “I used that time to really try and focus on where and what to study,” she said. Working as a tutor had a lasting impact on her career: Hoover noticed that her students, many of whom were people of color, faced challenges both inside and outside the classroom that made it difficult for them to excel. She had a realization that would serve as a guiding principle in her master’s and doctoral work in ecological science and engineering at Purdue University 2 years later. “Whatever it was that I was going to do, if it wasn’t going to somehow make [the students’] lives better, then I wasn’t interested in doing it.”

Hoover embarked on transdisciplinary research at Purdue, work that would pave the way to consulting in her specialty of urban green infrastructure. In her dissertation, she not only conducted an analysis of watersheds but also interviewed Black residents in Chicago’s South Side and city planners about Chicago’s green infrastructure practices. She sees a dire need for geoscientists to include communities in projects and critically examine power structures.

During her second postdoc, Hoover founded her consulting company to assist professionals, researchers, and government officials looking to incorporate environmental justice or interdisciplinary methods into their business, scholarship, or city plans.

Looking back at her career, she’s proudest of how she’s upheld her values. “I don’t have to leave out race or leave out water,” she said. “I get to bring everything in and say, ‘No, this is all important…and we’re going to talk about it.’”

This profile is part of a special series in our September 2021 issue on science careers.

—Jenessa Duncombe (@jrdscience), Staff Writer

Aisha Morris: Opening the Door to Science

EOS - Tue, 08/24/2021 - 14:38

Aisha Morris’s interest in geology runs deep. Growing up, she collected and cleaned rocks, and then sold them to the kids around her suburban Minnesota neighborhood. But despite this early financial success, it wasn’t until college that she seriously considered geology as a career path.

Morris was hooked as an undergraduate at Duke University, after Jeff Karson (now at Syracuse University) offered her a research opportunity studying mid-ocean rifts. “It was a challenge, because I was learning how to do research and ask questions,” Morris said, “but it was also very empowering to create knowledge as an undergrad.”

The project culminated in a senior thesis and a chance to dive in the Alvin submersible. With Alvin, she was able to see directly the rift she had been studying, through a porthole instead of on a screen.

Later, as a grad student at the University of Hawai‘i, Morris was wary of work–life imbalances and the lack of diversity in academia. She found a postdoc position with Karson that allowed her to work on both research and broadening participation activities among underrepresented groups.

She quickly learned that the work she was doing to create and support a more diverse scientific community was more fulfilling than the research. So Morris left academia in 2013 to further pursue that community building, first at UNAVCO and then at the National Science Foundation (NSF), where she is now a program director for Education and Human Resources.

Not everyone understood Morris’s decision to leave the academic path. “Sometimes you have to create your own pathway,” she said. “Who knows what kind of path you’re opening up for others in your forging ahead?”

At NSF, Morris is working to bring previously excluded groups into the geosciences through programs like the Improving Undergraduate STEM Education (IUSE) initiative. IUSE funds projects for precollege, undergraduate, and graduate students, with a focus on service learning and on outreach to historically excluded groups and to students from nongeoscience degree programs.

Success in the sciences doesn’t look like a tenure-track professorship at an R1 institution for everyone, she said. For Morris, the ultimate success would be to work herself out of a job by creating a community so welcoming it no longer needed the kind of broadening participation initiatives to which she’s dedicated her career.

To keep up with Morris’s work, follow her on Twitter (@volcanogirl17) or the Education and Human Resources portal at NSF’s Division of Earth Sciences.

This profile is part of a special series in our September 2021 issue on science careers.

—Kate Wheeling

Karen Layou: A Wider 2-Year Track

EOS - Tue, 08/24/2021 - 14:38

Karen Layou started her college career as a chemical engineering major. But after a year and a half, she realized she hated it. One day she stumbled upon a small museum on the Penn State campus, tucked away in the geology building.

“It was like being reunited with old friends,” Layou said. She’d collected rocks as a child and even categorized them for a third-grade science fair project. The mineral samples and fossils in the museum’s collection reawakened her childhood interests, including a passion for paleontology sparked by a family road trip to the Grand Canyon when she was in high school.

From the museum, Layou went upstairs to the geology office, ended up speaking with the dean, and changed her major that day. She graduated and went on to complete a master’s program at the University of Cincinnati before working for what is now the Texas Commission on Environmental Quality. While she learned a lot, she wasn’t satisfied.

As a high-schooler, Layou visited the Grand Canyon with her family—and was inspired to pursue a geoscience career. She is pictured here with her mother. Credit: Karen Layou

“I decided that, no, I need to go back and get [a] Ph.D. to fulfill that promise I made to myself, to get myself back out to landscapes I love,” Layou said. She earned a Ph.D. in geology at the University of Georgia and then obtained a position at the College of William and Mary in Williamsburg, Va., as a sabbatical replacement. She found that she enjoyed teaching and “cobbled together” a career by working as an adjunct professor at several schools. In 2013, she became a professor of geology at Reynolds Community College in Richmond, Va.

Since then, Layou has been involved in Supporting and Advancing Geoscience Education at Two-Year Colleges, or SAGE 2YC. The project aims to broaden participation in the geosciences, clarify transfer and workforce pathways for students at 2-year colleges, and emphasize best teaching practices for faculty. Layou especially loves sharing science with nonscience majors and “spreading the love—the geo love.”

SAGE 2YC has also been able to provide mentoring for students and send them to the annual Virginia Geological Field Conference. “We were able to bring 2-year college students to mix with professional geologists and just talk about a day in the life of their jobs,” Layou said. She will also continue to further geoscience education as the incoming president of the Geo2YC division of the National Association of Geoscience Teachers.

Layou encourages audiences to learn more about SAGE 2YC and also to check out the free online textbook she’s been working on.

This profile is part of a special series in our September 2021 issue on science careers.

—Jack Lee, Science Writer

Munazza Alam: Searching for New Worlds

EOS - Tue, 08/24/2021 - 14:38

For Munazza Alam, pondering the stars is part of being human. “Everyone, at some point, has looked up at the sky and contemplated the cosmos,” she said.

Yet as a New Yorker, Alam grew up without truly seeing the stars, because of light pollution. It took a couple of pivotal mentors and a trip to Arizona to bend her trajectory toward the Ph.D. in astronomy that she recently earned from Harvard University.

“I was always a curious child,” Alam said. In high school, she channeled this curiosity into physics class, where she was captivated by the way her teacher, Betty Jensen, unveiled the inner workings of the universe. Jensen was also the first woman Alam met who’d earned a Ph.D. in physics. Because of her, Alam started wondering whether academics could be her path too.

Alam wanted to stay in New York City after high school, so she joined the Macaulay Honors College at Hunter College, City University of New York. This selective program covers its students’ tuition, which allowed Alam to attend college without going into debt.

Alam knew she wanted to major in physics, but her scheduling adviser told her she needed to choose a specific research direction. Try astronomy, her adviser suggested.

As a graduate student, Alam conducted research at Las Campanas Observatory. Above, she calibrates the observatory’s Magellan Clay Telescope in preparation for her first solo observing run. Credit: Munazza Alam

At age 19, a year into her career as an astronomy researcher, Alam traveled to Kitt Peak National Observatory in Arizona. For the first time in her life, the Milky Way stretched before her, without interference from streetlights and apartment buildings. Alam was starstruck. “It’s just etched in my brain,” she said. The trip solidified her decision to become an astronomer.

Alam chose Harvard University for her graduate work because she found the astronomy department exciting and innovative. Her work focused on exoplanets, and she hopes it will one day reveal whether life exists outside our solar system.

Although Alam loved her work, she initially found graduate school exhausting. One of the most valuable lessons she learned is that it’s OK to take breaks.

As Alam begins a postdoctoral position at the Carnegie Science Earth and Planets Laboratory, she says she’s most proud of the network of mentors she’s built. She described her Ph.D. thesis adviser, Mercedes López-Morales, as “an absolute phenom,” and she’s also formed lasting connections with other faculty, staff, and students. To round out the crew, Alam still keeps in touch with her high school physics teacher, who attended Alam’s Ph.D. defense this past spring.

Alam welcomes messages through her website.

This profile is part of a special series in our September 2021 issue on science careers.

—Saima Sidik

Rick Jones: Finding the High School Spirit

EOS - Tue, 08/24/2021 - 14:38

In the early 1980s, when Rick Jones was studying geology as an undergraduate at the University of Wyoming, the U.S. oil business was booming. He anticipated a well-paid position following graduation and a stable career as a geologist. Then the oil industry went bust.

“[The economy] changed my trajectory,” said Jones. He was able to find work at companies that did groundwater monitoring, and in the mid-1980s, Jones found himself overseas, working with a nongovernmental organization installing water utilities in refugee camps. He became involved in sanitation education and quickly saw how “a little bit of education can go a long way.” When he came back to the United States, Jones decided to go back to school, getting a second bachelor’s degree in science education as well as a master’s in natural sciences at the University of Wyoming.

His first job after graduation was as a middle school science teacher in Lihue, Hawaii. The experience was eye-opening: “I got to realize really quickly that I had it pretty easy,” Jones said. “I realized that I really need to make sure that I give an opportunity to my students, so that they can definitely go wherever they want to go.”

A family circumstance led Jones and his wife to return to the mainland, to Billings, Mont. There, Jones was a middle and high school teacher for nearly 2 decades, teaching everything from Earth science to biology to physics.

A field mapping class at the University of Wyoming in 1982 inspired Rick Jones to earn a geology degree. Credit: Rick Jones

“The thing that you’re most proud about when you’re teaching,” Jones said, “is when somebody that you really didn’t think that you connected with comes back and says, ‘It’s because of you that I am a success.’”

Jones has continued to pursue his passion for teaching and learning, both inside and outside academia. He twice participated in the NOAA Teacher at Sea program and obtained a doctorate in education from Montana State University.

Jones has since moved back to Hawaii and is now a geoscience educator at the University of Hawai‘i–West O‘ahu. He is also president of the National Earth Science Teachers Association, where he aims to instill a love of learning in the next generation of science teachers.

Find out more about Jones on Twitter (@mtzennmaster), where you can follow his science advocacy as well as see his quilt designs and updates about his 55-year-old Volkswagen convertible, or at his website.

This profile is part of a special series in our September 2021 issue on science careers.

—Jack Lee, Science Writer

Zdenka Willis: Sailing into a High-Tech Future

EOS - Tue, 08/24/2021 - 14:38

Zdenka Willis always loved the ocean. Every summer, her family trekked from Indiana to South Carolina, where Willis and her sisters combed the beaches for sharks’ teeth, watched the dolphins swim, and wondered what other marvels were out there, just out of view.

When it came time for college, Willis returned to the Eastern Seaboard to study marine science at the University of South Carolina. Encouraged by her father, who had emigrated from Czechoslovakia and joined the Army Reserves after settling in the United States, Willis applied for a Reserve Officers’ Training Corps scholarship. When she graduated in 1981, she became a U.S. Navy oceanographer.

Willis’s first assignment at sea was aboard a 122-meter hydrographic research vessel, where she led a small boat crew mapping the ocean floor off the coast of Haiti. In those days, Navy ships were still using paper charts, but as technology evolved and Willis’s responsibilities expanded, she helped the Navy transition to digital charts and adopt new weapons systems. Willis loved both the challenges and the opportunities presented by a military career. “You’re always being challenged with new ideas and new tasks, and that makes it exciting,” she said. “You certainly don’t get bored in your job.”

Zdenka Willis’s parents, Vladimir and Margaret Saba, attended her retirement ceremony as she left the U.S. Navy. Credit: Zdenka Willis

Willis retired from the Navy as a captain in 2006 and went to work for NOAA. There, she became the founding director of the agency’s U.S. Integrated Ocean Observing System office, which aims to connect data, tools, and people all along the nation’s coasts. It was her broadest mission yet. Willis had spent her Navy career thinking about how what she was learning about the ocean might affect naval operations and warfare, whereas “at NOAA, I had to care about everything from microbes to whales and everything in between,” she said.

Willis currently serves as president of the Marine Technology Society, an international organization that “brings together businesses, institutions, professionals, academics, and students who are ocean engineers, technologists, policymakers, and educators” in the advancement and application of marine technologies.

Willis is working to make both the organization and the field of marine science in general more equitable and inclusive, adding programs and positions to support young professionals and promote women leaders. She is committed to providing young professionals the same kind of opportunities and support she received in the Navy, where shipmates take care of each other. “The Navy gives you responsibility, expects you to perform, but is a very supporting environment.”

To learn more about the Marine Technology Society and its programs, Willis encourages people to follow the society on Twitter (@MTSociety) or through its website (mtsociety.org/).

This profile is part of a special series in our September 2021 issue on science careers.

—Kate Wheeling, Science Writer

Cooper Elsworth: Cycling-Inspired Science

EOS - Tue, 08/24/2021 - 14:37

Cooper Elsworth can trace many of his career decisions back to his long-standing obsession with cycling. From a young age, he was amazed by the mechanics of the bikes that carried him along the roads and trails of rural Pennsylvania, where he grew up.

He was always interested in understanding how things worked. That interest fueled his undergraduate and master’s degrees in engineering, during which he worked on numerical methods to study fluid dynamics. When it came time to pursue a Ph.D., it was the hours he had spent biking, hiking, and kayaking with his family that inspired him to turn to the geosciences.

As Elsworth thought about how to apply his theoretical skills in fluid dynamics to applied science, studying ice sheets seemed a natural transition. “The ice sheets are really just very, very slow moving fluids,” he said. “Even more than that, I was excited about working on something climate related. The response of the ice sheets to climate change is one of the biggest unknowns in our projections of sea level rise, so it seemed like a really impactful area of research to go into.”

Initially, Elsworth felt empowered by basic science research and the opportunity to help answer outstanding questions about the climate system. Like most grad students, he planned to stay in academia. That began to change with the 2016 U.S. presidential election. He watched the results come in from Antarctica, where he was studying how subglacial meltwater influences the large-scale ice flow, and afterward he grew increasingly troubled by the environmental deregulation and climate inaction of the Trump administration.

“We’ve had climate science saying this is something we need to act on for decades,” Elsworth said. He became increasingly interested in how to take that basic climate science and turn it into climate action. Now a Ph.D. candidate in geophysics at Stanford University, he has turned his professional interest to the private sector.

Elsworth became an applied scientist and, more recently, a program manager at the sustainability start-up Descartes Labs, where he leads the production of sustainability tools that use remote sensing to track things like carbon emissions from agricultural and consumer goods supply chains.

Now he tells students stressing about life decisions after grad school that academia, private industry, and the public sector aren’t as siloed as they seem, nor should they be: “It’s really valuable, especially in sustainability research, to break down those silos and to realize that we’re all moving toward a common goal and we’re trying to solve a common problem.”

Elsworth encourages people to reach him through LinkedIn (linkedin.com/in/coopere/) or his personal website (cooperelsworth.com).

This profile is part of a special series in our September 2021 issue on science careers.

—Kate Wheeling, Science Writer

Earth’s Continents Share an Ancient Crustal Ancestor

EOS - Mon, 08/23/2021 - 13:34

The jigsaw fit of Earth’s continents, which long intrigued map readers and inspired many theories, was explained about 60 years ago when the foundational processes of plate tectonics came to light. Topographic and magnetic maps of the ocean floor revealed that the crust—the thin, rigid top layer of the solid Earth—is split into plates. These plates were found to shift gradually around the surface atop a ductile upper mantle layer called the asthenosphere. Where dense oceanic crust abuts thicker, buoyant continents, the denser crust plunges back into the mantle beneath. Above these subduction zones, upwelling mantle melt generates volcanoes, spewing lava and creating new continental crust.

From these revelations, geologists had a plausible theory for how the continents formed and perhaps how Earth’s earliest continents grew—above subduction zones. Unfortunately, the process is not that simple, and plate tectonics has not always functioned this way. Research since the advent of plate tectonic theory has shown that subduction and associated mantle melting provide only a partial explanation for the formation and growth of today’s continents. To better understand the production and recycling of crust, some scientists, including our team, have shifted from studying the massive moving plates to detailing the makeup of tiny mineral crystals that have stood the test of time.

Starting in the 1970s, geologists from the Greenland Geological Survey collected stream sediments from all over Greenland, sieving them to sand size and chemically analyzing them to map the continent-scale geochemistry and to aid in finding mineral occurrences. Unbeknownst to them at the time, tiny grains of the mineral zircon contained in the samples held clues about the evolution of Earth’s early crust. After decades in storage in a warehouse in Denmark, the zircon grains in those carefully archived bottles of sand—and the technology to analyze them—were ready to reveal their secrets.

This cathodoluminescence image shows the internal structure of magnified zircons analyzed by laser ablation. Credit: Chris Kirkland

Zircon occurs in many rock types in continental crust, and importantly, it is geologically durable. These tiny mineral time capsules preserve records of the distant past—as far back as 4.4 billion years—which are otherwise almost entirely erased. More than just recording the time at which a crystal grew, zircon chemistry records information about the magma from which it grew, including whether the magma originated from a melted piece of older crust, from the mantle, or from some combination of these sources. Through the isotopic signatures in a zircon grain, we can track its progression, from the movement of the magma up from the mantle, to its crystallization, to the grain’s uplift to the surface and its later erosion and redeposition.

The Past Is Not Always Prologue

New continental crust is formed above subduction zones, but it is also destroyed at subduction zones [e.g., Scholl and von Huene, 2007]. Formation and destruction occur at approximately equal rates in a planetary-scale yin and yang [Stern and Scholl, 2010; Hawkesworth et al., 2019]. So crust formation above subduction zones cannot satisfactorily account for growth of the continents.

What’s more, plate tectonic movements like those we see on Earth today did not operate the same way during Earth’s early history. Although there are indications that subduction may have occurred in Earth’s early history (at least locally), many geochemical, isotopic, petrological, and thermal modeling studies of crust formation processes suggest that plate tectonics started gradually and didn’t begin operating as it does today until about 3 billion years ago, after more than a quarter of Earth’s history had already passed [e.g., McLennan and Taylor, 1983; Dhuime et al., 2015; Hartnady and Kirkland, 2019]. Because the mantle was much hotter at that time, more of it melted than it does now, producing large amounts of oceanic crust that was both too thick and too viscous to subduct.

Nonetheless, although subduction was apparently not possible on a global scale before about 3 billion years ago, geochemical and isotopic evidence shows that a large volume of continental crust had already formed by that time [e.g., Hawkesworth et al., 2019; Condie, 2014; Taylor and McLennan, 1995].

If subduction didn’t generate the volume of continental crust we see today, what did?

How Did Earth’s Early Crust Form?

The nature of early Earth dynamics and how and when the earliest continental crust formed have remained topics of intense debate, largely because so little remains of Earth’s ancient crust for direct study. Various mechanisms have been proposed.

Perhaps plumes of hot material rising from the mantle melted the oceanic crustal rock above [Smithies et al., 2005]. If dense portions of this melted rock “dripped” back into the mantle, they could have stirred convection cells in the upper mantle. These drips might have also added water to the mantle, lowering its melting point and producing new melts that ascended into the crust [Johnson et al., 2014].

Or maybe meteorite impacts punched through the existing crust into the mantle, generating new melts that, again, ascended toward the surface and added new crust [Hansen, 2015]. Another possibility is that enough heat built up at the base of the thick oceanic crust on early Earth that parts of the crust remelted, with the less dense, buoyant melt portions then rising and forming pockets of continental crust [Smithies et al., 2003].

By whichever processes Earth’s first continental crust formed, how did the large volume of continental crust we have now build up? Our research helps resolve this question [Kirkland et al., 2021].

Answers Hidden in Greenland Zircons

We followed the histories of zircon crystals through the eons by probing the isotopes preserved in grains from the archived stream sediment samples from an area of west Greenland. These isotopes were once dissolved within molten mantle before being injected into the crust by rising magmas that crystallized zircons and lifted them up to the surface. Eventually, wind and rain erosion released the tiny crystals from their rock hosts, and rivulets of water tumbled them down to quiet corners in sandy stream bends. There they rested until geologists gathered the sand, billions of years after the zircons formed inside Earth.

In the laboratory, we extracted thousands of zircon grains from the sand samples. These grains—mounted inside epoxy resin and polished—were then imaged with a scanning electron microscope, revealing pictures of how each zircon grew, layer upon layer, so long ago.

Researchers used the laser ablation mass spectrometer at Curtin University to study isotopic ratios in zircon crystals. Credit: Chris Kirkland

In a mass spectrometer, the zircons were blasted with a laser beam, and a powerful magnetic field separated the resulting vapor into isotopes of different masses. We determined when each crystal formed using the measured amounts of radioactive parent uranium and daughter lead isotopes. We also compared the hafnium isotopic signature in each zircon with the signatures we would expect in the crust and in the mantle on the basis of the geochemical and isotopic fractionation of Earth through time. Using these methods, we determined the origins of the magma from which the crystals grew and thus built a history of the planet from grains of sand.
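The age determination described above follows from the standard radioactive decay law for the uranium–lead system. Below is a minimal sketch in Python, assuming a measured radiogenic 206Pb/238U ratio; the function name and example ratio are illustrative, and a real laboratory workflow also involves corrections for common lead, instrument drift, and concordance between the two uranium decay chains.

```python
import math

# Decay constant of 238U (per year), the commonly used Jaffey et al. value
LAMBDA_238U = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years implied by a measured radiogenic 206Pb/238U ratio.

    Solves D/P = exp(lambda * t) - 1 for t, where D is radiogenic
    daughter (206Pb) and P is remaining parent (238U).
    """
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U

# A ratio of ~0.593 corresponds to a zircon that crystallized ~3 billion years ago
age = u_pb_age(0.5926)
print(f"{age / 1e9:.2f} Gyr")  # ~3.00 Gyr
```

The same decay-law logic, with different decay constants, underlies the hafnium model ages used to trace whether a zircon’s magma came from the mantle or from remelted older crust.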

Our analysis revealed that the zircon crystals varied widely in age, from 1.8 billion to 3.9 billion years old—a much broader range than what’s typically observed in Earth’s ancient crust. Because of both this broad age range and the high geographic density of the samples in our data set, patterns emerged in the data.

In particular, some zircons of all ages had hafnium isotope signatures that showed that these grains originated from rocks that formed as a result of the melting of a common 4-billion-year-old parent continental crust. This common source implied that early continental crust did not form anew and discretely on repeated occasions. Instead, the oldest continental crust might have survived to serve as scaffolding for successive additions of younger continental crust.

In addition to revealing this subtle, but ubiquitous, signature of Earth’s ancient crust in the Greenland samples, our data also showed something very significant about the evolution of Earth’s continental crust around 3 billion years ago. The hafnium signature of most of the zircons from that time that we analyzed showed a distinct isotopic signal linked to the input of mantle material into the magma from which these crystals grew. This strong mantle signal in the hafnium signature showed us that massive amounts of new continental crust formed in multiple episodes around this time by a process in which mantle magmas were injected into and melted older continental crust.

Geologists work atop a rock outcrop in the Maniitsoq region of western Greenland. Credit: Julie Hollis

The idea that ancient crust formed the scaffolding for later growth of continents was intriguing, but was it true? And was this massive crust-forming event related to some geological process restricted to what is now Greenland, or did this event have wider significance in Earth’s evolution?

A Global Crust Formation Event

To test our hypotheses, we looked at data sets of isotopes in zircons from other parts of the world where ancient continental crust is preserved. As with our Greenland data, these large data sets all showed evidence of repeated injection of mantle melts into much more ancient crust. Ancient crust seemed to be a prerequisite for growing new crust.

Moreover, the data again showed that these large volumes of mantle melts were injected into older crust everywhere at about the same time, between 3.2 billion and 3.0 billion years ago, timing that coincides with the estimated peak in Earth’s mantle temperatures. This “hot flash” in the deep Earth may have enabled huge volumes of melt to rise from the mantle and be injected into existing older crust, driving a planetary continent growth spurt.

The picture that emerges from our work is one in which buoyant pieces of the oldest continental crust melted during the accrual and trapping of new mantle melts in a massive crust-forming event about 3 billion years ago. This global event effectively, and rapidly, built the continents. With the onset of the widespread subduction that we see today, these continents have since been destroyed, remade, and shifted around the surface like so many jigsaw pieces through the eons.

Index Suggests That Half of Nitrogen Applied to Crops Is Lost

EOS - Mon, 08/23/2021 - 13:33

Nitrogen use efficiency, an indicator that describes how much fertilizer reaches a harvested crop, has decreased by 22% since 1961, according to new findings by an international group of researchers who compared and averaged global data sets.

Excess nitrogen from fertilizer and manure pollutes water and air, eats away ozone in the atmosphere, and harms plants and animals. Excess nitrogen can also react to become nitrous oxide, a greenhouse gas that is 300 times more potent than carbon dioxide.

Significant disagreements remain about the exact value of nitrogen use efficiency, but current estimates are used by governments and in international negotiations to regulate agricultural pollution.

“If we don’t deal with our nitrogen challenge, then dealing with pretty much any other environmental or human health challenge becomes significantly harder,” David Kanter, an environmental scientist at New York University and vice-chair of the International Nitrogen Initiative, told New Scientist in May. Sri Lanka and the United Nations Environment Programme called for countries to halve nitrogen waste by 2030 in the Colombo Declaration.

Although the global average shows a decline, nitrogen fertilizing has become more efficient in developed economies thanks to technologies and regulations. New results out last month from the University of Minnesota, as well as field trials by the International Fertilizer Development Center, are just two examples of ongoing research to limit nitrogen pollution without jeopardizing yield.

Too Much of a Good Thing

Nitrogen is an essential nutrient for plant growth: It is a vital aspect of amino acids for proteins, chlorophyll for photosynthesis, DNA, and adenosine triphosphate, a compound that releases energy.

Chemist Fritz Haber invented an industrial process to create nitrogen fertilizer in 1918, and the practice spread. Since the 1960s, nitrogen inputs on crops have quadrupled. In 2008, food production from nitrogen fed half the world’s population.

Yet nitrogen applied to crops often ends up elsewhere. Fertilizer placed away from a plant’s roots means that some nitrogen gets washed away or converts into a gas before the plant can use it. Fertilizer applied at an inopportune moment in a plant’s growth cycle goes to waste. At a certain point, adding more fertilizer won’t boost yield: There’s a limit to how much a plant can produce based on nitrogen alone.

“One of the things that is evident in nitrogen management, generally, is that there seems to be a tendency to avoid the risk of too low an application rate,” said Tony Vyn, an agronomist at Purdue University.

In many parts of the world, cheap subsidized fertilizer is critical for producing enough food. But left unchecked, subsidies incentivize farmers to apply more than they need. And according to plant scientist Rajiv Khosla at Colorado State University, who studies precision agriculture, farmers struggle to apply just the right amount of fertilizer probably 90% of the time.

The 90% Efficiency Goal

According to an average of 13 global databases from 10 data sources, in 2010, 161 teragrams of nitrogen were applied to agricultural crops, but only 73 teragrams of nitrogen made it to the harvested crop. A total of 86 teragrams of nitrogen was wasted, perhaps ending up in the water, air, or soil. The new research was published in the journal Nature Food in July.

Globally, nitrogen use efficiency is 46%, but the ratio should be much closer to 100%, said environmental scientist Xin Zhang at the University of Maryland, who led the latest study. The crops with the lowest nitrogen efficiency are fruits and vegetables, at around 14%, said Zhang. In contrast, soybeans, which are natural nitrogen fixers, have a high efficiency of 80%.
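The headline efficiency figure is simply the ratio of nitrogen in the harvested crop to nitrogen applied. A quick check with the 2010 figures above (the small difference from the published 46% presumably reflects rounding in the underlying estimates):

```python
def nitrogen_use_efficiency(n_harvested_tg, n_applied_tg):
    """Fraction of applied nitrogen that ends up in the harvested crop."""
    return n_harvested_tg / n_applied_tg

# 2010 global figures from the study: 161 Tg N applied, 73 Tg N harvested
nue = nitrogen_use_efficiency(73, 161)
print(f"{nue:.0%}")  # ~45%, in line with the reported global value of 46%
```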

The European Union Nitrogen Expert Panel recommended a nitrogen use efficiency of around 90% as an upper limit. The EU has reduced nitrogen waste over the past several decades, though progress has stagnated.

The United States has similarly cut losses by improving management and technology. For instance, even though the amount of nitrogen fertilizer per acre applied to cornfields was stable from 1980 to 2010 in the United States, the average crop grain yields increased by 60% in that period, said Vyn. Those gains can be hidden in broad-stroke indices like global nitrogen use efficiency.

“The most urgent places will be in China and India because they are two of the top five fertilizer users around the world,” Zhang said. China set a target for a zero increase in fertilizer use in 2015, which showed promising early results.

Cultivating Solutions

New research from the University of Minnesota using machine learning–based metamodels suggested that fertilizer amount can be decreased without hurting the bottom line.

Just a 10% decrease in nitrogen fertilizer led to only a 0.6% yield reduction and cut nitrous oxide emissions and nitrogen leaching. “Our analysis revealed hot spots where excessive nitrogen fertilizer can be cut without yield penalty,” said bioproducts and biosystems engineer Zhenong Jin at the University of Minnesota.

Applying fertilizer right at the source could help too: A technique developed by the International Fertilizer Development Center achieved an efficiency as high as 80% in field studies around the world using urea deep placement. The method buries cheap nitrogen fertilizer into the soil, which feeds nitrogen directly into a plant and reduces losses.

Many other initiatives and technologies, from giving farmers better data to nitrogen-fixing bacteria, also show promise. Even practices as simple as installing roadside ditches can help.

Meanwhile, Vyn said researchers must focus on sharpening scientific tools to measure nitrogen capture. The differences in nitrogen inputs in the databases analyzed by the latest study were as high as 33% between the median values and the outliers.

“The nitrogen surplus story is sometimes too easily captured in a simple argument of nitrogen in and nitrogen off the field,” Vyn said. “It’s more complex.” His research aims to improve nitrogen recovery efficiency by understanding plant genotypes and management factors.

One of Zhang’s next research steps is to refine the quantification of nitrogen levels in a crop, which is currently based on simplistic measurements. “There has been some scattered evidence that as we’re increasing the yield, the nitrogen content is actually declining. And that also has a lot of implications in terms of our calculated efficiency,” Zhang said.

—Jenessa Duncombe (@jrdscience), Staff Writer

Lightning Tames Typhoon Intensity Forecasting

EOS - Fri, 08/20/2021 - 13:58

Each year, dozens of typhoons initiate in the tropics of the Pacific Ocean and churn westward toward Asia. These whirling tempests pose a problem for communities along the entire northwestern Pacific, with the Philippines bearing the brunt of the battering winds, waves, and rain associated with typhoons. In 2013, for instance, deadly supertyphoon Haiyan directly struck the Philippines, killing more than 6,000 people.

“On average,” said Mitsuteru Sato, a professor at Hokkaido University, “the Philippines will be hit [by] about 20 typhoons a year.”

Throughout the Pacific, the intensity of typhoons and torrential rainfall has been increasing, said Yukihiro Takahashi of Hokkaido University. “We need to predict that.”

Though scientists have improved typhoon tracking over the past 20 years, errors for intensity-related metrics, like pressure and wind speed, have counterintuitively increased, Sato said. In other words, scientists today forecast intensity with less certainty than in decades past, in spite of sophisticated models and expensive meteorological satellites.

Studies suggest that by measuring the lightning occurrence number (the number of strokes from cloud to ground or cloud to ocean), scientists may be able to forecast just how intense a typhoon might be about 30 hours before a storm reaches its peak intensity, said Sato. Today Sato and colleagues are using an inexpensive, innovative network of automated weather stations that more accurately measure a typhoon’s lightning occurrence number to convert that information into a prediction of how intense an incoming storm might be.

Philippine-Focused Forecasting

As a typhoon’s thunderclouds rise high into the atmosphere, its water-rich updrafts force ice and water particles to collide and become either positively or negatively charged, explained Kristen Corbosiero, an atmospheric scientist at the University at Albany. Lightning is nature’s attempt to neutralize the charge differences.

Lightning is controlled by a typhoon’s track, seawater temperature, and other variables, said Sato. Determining how these factors affect the complex interplay between lightning and storm intensity is the goal of the new Pacific-centric lightning monitoring system designed to detect and geolocate much weaker lightning than detected by existing global lightning monitoring networks.

This system includes six stations residing on Philippine islands and five stations distributed throughout the northwestern Pacific region. Each station, which monitors an area about 2,000 kilometers across, comes equipped with several sensors that measure rain, very low frequency signals produced by lightning, and other weather-related phenomena. The off-grid stations use solar power, storing energy in batteries for overcast days. An internal computer sends data over 3G cellular networks. The cost for each station totals about $10,000, substantially less expensive than meteorological satellites.

Because this system should more accurately measure the number of lightning flashes, Corbosiero said, “it does certainly have potential to improve forecasts.”

“If We Can Make the Precise Predictions, We Can Save Lives”

Sato, Takahashi, and their colleagues in the Philippines hope to refine and begin applying the lightning detection and forecasting system within the next 1–2 years, as data arrive from the nascent network of stations.

The network’s focus on the Philippines is key to its value: The Philippines’ furious rainy season means more data. More data contribute to more precise forecasts about a storm’s strength at landfall. More accurate forecasts will give emergency managers the information they need to inform the public about the risks of rain, storm surges, and wind. Combined, rainfall and storm surges can cause more damage than winds alone, said Corbosiero.

Perhaps more important, improving the accuracy of forecasts will help people believe that a storm is coming, said Takahashi. “In many, many cases, the people don’t believe” forecasts, he said. “They don’t want to evacuate.”

An additional consideration, said Takahashi, is integrating alert systems with lightning monitoring and forecasting. In developed and developing countries alike, nearly everyone has a smartphone. With smartphones, “we can distribute this precise information directly to the people,” he said, “and precise information is the necessary condition to make [the people] believe.”

“If we can make the precise predictions,” Takahashi said, “we can save [lives].”

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Swipe Left on the “Big One”: Better Dates for Cascadia Quakes

EOS - Fri, 08/20/2021 - 13:58

The popular media occasionally warns of an impending earthquake—the “big one”—that could devastate the U.S. Pacific Northwest and coastal British Columbia, Canada. Although ranging into hyperbole at times, such shocking coverage provides wake-up calls that the region is indeed vulnerable to major earthquake hazards courtesy of the Cascadia Subduction Zone (CSZ).

The CSZ is a tectonic boundary off the coast that has unleashed massive earthquakes and tsunamis as the Juan de Fuca Plate is thrust beneath the North American Plate. And it will do so again. But when? And how big will the earthquake—or earthquakes—be?

The last behemoth earthquake on the CSZ, estimated at magnitude 9, struck on 26 January 1700. We know this age with such precision—unique in paleoseismology—because of several lines of geologic proxy evidence that coalesce around that date, in addition to Japanese historical records describing an “orphan tsunami” (a tsunami with no corresponding local earthquake) on that particular date [Atwater et al., 2015]. Indigenous North American oral histories also describe the event. Geoscientists have robust evidence for other large earthquakes in Cascadia’s past; however, deciphering and precisely dating the geologic record become more difficult the farther back in time you go.

Precision dating of and magnitude constraints on past earthquakes are critically important for assessing modern CSZ earthquake hazards. Such estimates require knowledge of the area over which the fault has broken in the past; the amount of displacement, or slip, on the fault; the speed at which slip occurred; and the timing of events and their potential to occur in rapid succession (called clustering). The paucity of recent seismicity on the CSZ means our understanding of earthquake recurrence there primarily comes from geologic earthquake proxies, including evidence of coseismic land level changes, tsunami inundations, and strong shaking found in onshore and marine environments (Figure 1). Barring modern earthquakes, increasing the accuracy and precision of paleoseismological records is the only way to better constrain the size and frequency of megathrust ruptures and to improve our understanding of natural variability in CSZ earthquake hazards.

Fig. 1. Age ranges obtained from different geochronologic methods used for estimating Cascadia Subduction Zone megathrust events are shown in this diagram of preservation environments. At top is a dendrochronological analysis comparing a tree killed by a megathrust event with a living specimen. Here 14C refers to radiocarbon (or carbon-14), and “wiggle-match 14C” refers to an age model based on multiple, relatively dated (exact number of years known between samples) annual tree ring samples. Schematic sedimentary core observations and sample locations are shown for marsh and deep-sea marine environments. Gray probability distributions for examples of each 14C method are shown to the right of the schematic cores, with 95% confidence ranges in brackets. Optically stimulated luminescence (OSL)-based estimates are shown as a gray dot with error bars.

To discuss ideas, frontiers, and the latest research at the intersection of subduction zone science and geochronology, a variety of specialists attended a virtual workshop about earthquake recurrence on the CSZ hosted by the U.S. Geological Survey (USGS) in February 2021. The workshop, which we discuss below, was part of a series that USGS is holding as the agency works on the next update of the National Seismic Hazard Model, due out in 2023.

Paleoseismology Proxies

Cascadia has one of the longest and most spatially complete geologic records of subduction zone earthquakes, stretching back more than 10,000 years along much of the 1,300-kilometer-long margin, yet debate persists over the size and recurrence of great earthquakes [Goldfinger et al., 2012; Atwater et al., 2014]. The uncertainty arises in part because we lack firsthand observations of Cascadia earthquakes. Thus, integrating onshore and offshore proxy records and understanding how different geologic environments record past megathrust ruptures remain important lines of inquiry, as well as major hurdles, in CSZ science. These hurdles are exacerbated by geochronologic data sets that differ in their precision and usefulness in revealing past rupture patches.

One of the most important things to determine is whether proxy evidence records the CSZ rupturing in individual great events (approximately magnitude 9) or in several smaller, clustered earthquakes (approximately magnitude 8) that occur in succession. A magnitude 9 earthquake releases about 30 times the energy of a magnitude 8 event, so misinterpreting the available data can result in a substantial misunderstanding of the seismic hazard.
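The factor of roughly 30 comes from the standard energy–magnitude scaling relation, in which radiated seismic energy grows by a factor of 10^1.5 per unit of moment magnitude. A short check:

```python
def energy_ratio(m_large, m_small):
    """Ratio of seismic energy radiated by earthquakes of two moment magnitudes.

    Uses the standard Gutenberg-Richter scaling: E proportional to 10**(1.5 * M).
    """
    return 10 ** (1.5 * (m_large - m_small))

print(f"{energy_ratio(9, 8):.1f}")   # ~31.6, i.e., roughly 30x per magnitude unit
print(f"{energy_ratio(9, 7):.0f}")   # ~1000x for two magnitude units
```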

Geologic proxies of megathrust earthquakes are generated by different aspects of the rupture process and can therefore inform us about specific rupture characteristics and hazards. Some of the best proxy records for CSZ earthquakes lie onshore in coastal environments. Coastal wetlands, for example, record sudden and lasting land-level changes in their stratigraphy and paleoecology when earthquakes cause the wetlands’ surfaces to drop into the tidal range (Figure 1) [Atwater et al., 2015]. The amount of elevation change that occurs during a quake, called “coseismic deformation,” can vary along the coast during a single event because of changes in the magnitude, extent, and style of slip along the fault [e.g., Wirth and Frankel, 2019]. Thus, such records can reveal consistency or heterogeneity in slip during past earthquakes.

Tsunami deposits onshore are also important proxies for understanding coseismic slip distribution. Tsunamis are generated by sudden seafloor deformation and are typically indicative of shallow slip, near the subduction zone trench (Figure 1) [Melgar et al., 2016]. The inland extent of tsunami deposits, and their distribution north and south along the subduction zone, can be used to identify places where an earthquake caused a lot of seafloor deformation and can tell generally how much displacement was required to create the tsunami wave.

Offshore, seafloor sediment cores show coarse layers of debris flows called turbidites that can also serve as great proxies for earthquake timing and ground motion characteristics. Coseismic turbidites result when earthquake shaking causes unstable, steep, submarine canyon walls to fail, creating coarse, turbulent sediment flows. These flows eventually settle on the ocean floor and are dated using radiocarbon measurements of detrital organic-rich material.
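The radiocarbon measurements underlying these turbidite ages rest on the exponential decay of 14C. Here is a minimal sketch of the conventional age calculation, assuming a measured fraction of modern carbon; in practice, conventional ages are then calibrated against tree-ring records, which contributes much of the uncertainty in these event chronologies.

```python
import math

# Libby mean life in years: conventional 14C ages use the 5,568-year half-life
LIBBY_MEAN_LIFE = 8033.0

def conventional_radiocarbon_age(fraction_modern):
    """Conventional 14C age (years BP) from the measured 14C activity
    relative to the modern standard: t = -8033 * ln(F)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Half the original 14C remaining corresponds to one Libby half-life
print(f"{conventional_radiocarbon_age(0.5):.0f} yr BP")  # ~5568 yr BP
```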

Geochronologic Investigations

Fig. 2. These graphs show the age range over which different geochronometers are useful (top), the average record length in Cascadia for different environments (middle), and the average uncertainty for different methods (bottom). Marine sediment cores have the capacity for the longest records, but age controls from detrital material in turbidites have the largest age uncertainties. Radiocarbon (14C) ages from bracketing in-growth position plants and trees (wiggle matching) have much smaller uncertainties (tens of years) but are not preserved in coastal environments for as long. To optimize the potential range of dendrochronological geochronometers, the reference chronology of coastal tree species must be extended further back in time. The range limit (black arrow) of these geochronometers could thus be extended with improved reference chronologies.

To be useful, proxies must be datable. Scientists primarily use radiocarbon dating to put past earthquakes into temporal context. Correlations in onshore and offshore data sets have been used to infer the occurrence of up to 20 approximately magnitude 9 earthquakes on the CSZ over the past 11,000 years [Goldfinger et al., 2012], although uncertainty in the ages of these events ranges from tens to hundreds of years (Figure 2). These large age uncertainties allow for varying interpretations of the geologic record: Multiple magnitude 8 or magnitude 7 earthquakes that occur over a short period of time (years to decades) could be misidentified as a single huge earthquake. It’s even possible that the most thoroughly examined CSZ earthquake, in 1700, might have comprised a series of smaller earthquakes, not one magnitude 9 event, because the geologic evidence providing precise ages of this event comes from a relatively short stretch of the Cascadia margin [Melgar, 2021].

By far, the best geochronologic age constraints for CSZ earthquakes come from tree ring, or dendrochronological, analyses of well-preserved wood samples [e.g., Yamaguchi et al., 1997], which can provide annual and even seasonal precision (Figure 2). Part of how scientists arrived at the 26 January date for the 1700 quake was by using dendrochronological dating of coastal forests in southwestern Washington that were killed rapidly by coseismic saltwater submergence. Some of the dead western red cedar trees in these “ghost forests” are preserved with their bark intact; thus, they record the last year of their growth. By cross dating the dead trees’ annual growth rings with those in a multicentennial reference chronology derived from nearby living trees, it is evident that the trees died after the 1699 growing season.
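At its core, cross dating is a correlation exercise: slide the dead tree's ring-width series along the reference chronology and take the offset where the two series agree best, which fixes the calendar year of the outermost preserved ring. A toy sketch of that matching step, with entirely synthetic ring widths and invented dates:

```python
import numpy as np

def cross_date(sample, reference, ref_start_year):
    """Find the calendar year of the sample's outermost ring by maximizing
    correlation between the sample series and the reference chronology."""
    n = len(sample)
    best_corr, best_end = -2.0, None
    for i in range(len(reference) - n + 1):
        window = reference[i:i + n]
        corr = np.corrcoef(sample, window)[0, 1]
        if corr > best_corr:
            best_corr, best_end = corr, ref_start_year + i + n - 1
    return best_end, best_corr

# Synthetic reference chronology spanning years 1400-1699 (invented values)
rng = np.random.default_rng(0)
reference = rng.normal(1.0, 0.3, 300)

# A "dead tree" whose 80 preserved rings overlap the years 1620-1699,
# with a little measurement noise added
sample = reference[220:300] + rng.normal(0.0, 0.05, 80)

year, corr = cross_date(sample, reference, 1400)
print(year)  # 1699: the tree's last complete growing season
```

Real cross dating standardizes ring widths to remove age-related growth trends before correlating, but the sliding-correlation logic is the same.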

The ghost forest, however, confirms only that coseismic submergence in 1700 occurred along the 90 kilometers of the roughly 1,300-kilometer-long Cascadia margin where these western red cedars are found. The trees alone do not confirm that the entire CSZ fault ruptured in a single big one.

Meanwhile, older CSZ events have not been dated with such high accuracy, in part because coseismically killed trees are not ubiquitously distributed and well preserved along the coastline and because there are no millennial-length, species-specific reference chronologies with which to cross date older preserved trees (Figure 2).

Advances in Dating

At the Cascadia Recurrence Workshop earlier this year, researchers presented recent advances and discussed future directions in paleoseismic dating methods. For example, by taking annual radiocarbon measurements from trees killed during coseismic coastal deformation, we can detect dated global atmospheric radiocarbon excursions in these trees, such as the substantial jump in atmospheric radiocarbon between the years 774 and 775 [Miyake et al., 2012]. This method allows us to correlate precise dates from other ghost forests along the Cascadian coast from the time of the 1700 event and to date past megathrust earthquakes older than the 1700 quake without needing millennial-scale reference chronologies [e.g., Pearl et al., 2020]. Such reference chronologies, which were previously required for annual age precision, are time- and labor-intensive to develop. With this method, new data collections from coastal forests that perished in, or survived through, CSZ earthquakes can now give near-annual dates for both inundations and ecosystem transitions.
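Anchoring a tree to a radiocarbon excursion amounts to locating the known spike in a series of annual measurements; once one ring is pinned to 775, counting rings dates every other ring in the sample. A minimal sketch with invented Δ14C values:

```python
# Locate a known atmospheric radiocarbon excursion (such as the 774-775
# jump) in annual tree ring measurements. All values are invented for
# illustration; real Delta-14C series are noisier.
def find_excursion(years, delta14c):
    """Return the year showing the largest single-year rise in Delta-14C."""
    jumps = [(delta14c[i + 1] - delta14c[i], years[i + 1])
             for i in range(len(years) - 1)]
    return max(jumps)[1]

years = list(range(770, 781))
delta14c = [-18.0, -17.5, -18.2, -17.9, -18.1,
            -6.0, -7.2, -8.0, -9.1, -9.8, -10.2]

# Once this ring is tied to the excursion year, every other ring in the
# sample gets an absolute calendar date by ring counting.
print(find_excursion(years, delta14c))  # 775
```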

Numerous tree rings are evident in this cross section from a subfossil western red cedar from coastal Washington. Patterns in ring widths give clues about when the tree died. Credit: Jessie K. Pearl

Although there are many opportunities to pursue with dendrochronology, such as dating trees at previously unstudied sites and trees killed by older events, we must supplement this approach with other novel geochronological methods to fill critical data gaps where trees are not preserved. Careful sampling and interpretation of age results from radiocarbon-dated material other than trees can also provide tight age constraints for tsunami and coastal submergence events.

For example, researchers collected soil horizons below (predating) and overlying (postdating) a tsunami deposit in Discovery Bay, Wash., and then radiocarbon dated leaf bases of Triglochin maritima, a type of arrowgrass that grows in brackish and freshwater marsh environments. The tsunami deposits, bracketed by well-dated pretsunami and posttsunami soil horizons, revealed a tsunamigenic CSZ rupture that occurred about 600 years ago on the northern CSZ, perhaps offshore Washington State and Vancouver Island [Garrison-Laney and Miller, 2017].

Multiple bracketing ages can dramatically reduce uncertainty that plagues most other dated horizons, especially those whose ages are based on single dates from detrital organic material (Figure 2). Although the age uncertainty of the 600-year-old earthquake from horizons at Discovery Bay is still on the order of several decades, the improved precision is enough to conclusively distinguish the event from other earthquakes dated along the margin.
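The logic of bracketing reduces to interval arithmetic: the event must postdate the soil beneath the deposit and predate the soil capping it, so the permitted window is pinched between the two calibrated ranges. A minimal sketch with hypothetical calibrated ages (not the Discovery Bay values):

```python
def bracketed_window(below, above):
    """Conservative event window from two bracketing horizons.

    below, above: (youngest, oldest) calibrated ages in years BP for the
    soil beneath (predating) and capping (postdating) the event deposit.
    The event can be no older than the oldest possible age of the soil
    beneath it, and no younger than the youngest possible age of the
    soil above it.
    """
    return (above[0], below[1])

below = (620, 680)  # hypothetical pre-tsunami soil, cal yr BP
above = (560, 610)  # hypothetical post-tsunami soil, cal yr BP
window = bracketed_window(below, above)
print(window)  # (560, 680): a roughly 120-year window, versus several
               # centuries for a single detrital age
```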

Further advancements in radiocarbon dating continue to provide important updates for dating coseismic evidence from offshore records. Marine turbidites do not often contain materials that provide accurate age estimates, but they are a critically important paleoseismic proxy [Howarth et al., 2021]. Turbidite radiocarbon ages rely on correcting for both global and local marine reservoir ages, which are caused by the radiocarbon “memory” of seawater. Global marine reservoir age corrections are systematically updated by experts as we learn more about past climates and their influences on the global marine radiocarbon reservoir [Heaton et al., 2020]. However, samples used to calibrate the local marine reservoir corrections in the Pacific Northwest, which apply only to nearby sites, are unfortunately not well distributed along the CSZ coastline, and little is known about temporal variations in the local correction, leading to larger uncertainty in event ages.
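In simplified form, the reservoir correction is arithmetic: subtract the global marine reservoir age and the local offset (ΔR) from the measured radiocarbon age. (In practice the correction is folded into calibration against a marine curve such as Marine20 rather than subtracted directly.) All numbers below are illustrative:

```python
def reservoir_corrected_age(measured_14c_age, global_reservoir, delta_r):
    """Simplified reservoir correction: remove the global marine reservoir
    age and the local offset (Delta-R) from a measured radiocarbon age.
    All values are in radiocarbon years and are illustrative only."""
    return measured_14c_age - global_reservoir - delta_r

measured = 1850          # 14C age of marine material in a turbidite
global_reservoir = 500   # illustrative global marine reservoir age
local_delta_r = 250      # hypothetical local offset for a CSZ site

print(reservoir_corrected_age(measured, global_reservoir, local_delta_r))  # 1100
```

The large event-age uncertainties described above enter through `local_delta_r`: it is poorly constrained along much of the CSZ coastline and may have varied through time.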

Jessie Pearl samples a subfossil tree in a tidal mudflat in coastal Washington State in summer 2020. This and other nearby trees are hypothesized to have died in a massive coseismic subsidence event about 1,000 years ago. Researchers are using the precise ages of the trees to determine if past land level changes can be attributed to earthquakes on the Cascadia Subduction Zone or on shallower, more localized faults. Credit: Wes Johns

These local corrections could be improved by collecting more sampled material that fills spatial gaps and goes back further in time. At the workshop, researchers presented the exciting development that they were in the process of collecting annual radiocarbon measurements from Pacific geoduck clam shells off the Cascadian coastline to improve local marine reservoir knowledge. Geoducks can live more than 100 years and have annual growth rings that are sensitive to local climate and can therefore be cross dated to the exact year. Thus, a chronology of local climatic variation and marine radiocarbon abundance can be constructed using living and deceased specimens. Annual measurements of radiocarbon derived from marine bivalves, like the geoduck, offer new avenues to generate local marine reservoir corrections and improve age estimates for coseismic turbidity flows.

Putting It All Together

An imminent magnitude 9 megathrust earthquake on the CSZ poses one of the greatest natural hazards in North America and motivates diverse research across the Earth sciences. Continued development of multiple geochronologic approaches will help us to better constrain the timing of past CSZ earthquakes. And integrating earthquake age estimates with the understanding of rupture characteristics inferred from geologic evidence will help us to identify natural variability in past earthquakes and a range of possible future earthquake hazard scenarios.

Useful geochronologic approaches include using optically stimulated luminescence to date tsunami sand deposits (Figure 1) and determining landslide age estimates on the basis of remotely sensed land roughness [e.g., LaHusen et al., 2020]. Of particular value will be focuses on improving high-precision radiocarbon and dendrochronological dating of CSZ earthquakes, paired with precise estimates of subsidence magnitude, tsunami inundation from hydrologic modeling, inferred ground motion characteristics from sedimentological variations in turbidity deposits, and evidence of ground failure in subaerial, lake, and marine settings. Together, such lines of evidence will lead to better correlation of geologic records with specific earthquake rupture characteristics.

Ultimately, characterizing the recurrence of major earthquakes on the CSZ megathrust—which have the potential to drastically affect millions of lives across the region—hinges on the advancement and the integration of diverse geochronologic and geologic records.

Acknowledgments

We give many thanks to all participants in the USGS Cascadia Recurrence Workshop, specifically J. Gomberg, S. LaHusen, J. Padgett, B. Black, N. Nieminski, and J. Hill for their contributions.

Ten Years on from the Quake That Shook the Nation’s Capital

EOS - Fri, 08/20/2021 - 13:58

Ten years ago, early in the afternoon of 23 August 2011, millions of people throughout eastern North America—from Florida to southern Canada and as far west as the Mississippi River Valley—felt shaking from a magnitude 5.8 earthquake near the central Virginia town of Mineral [Horton et al., 2015a]. This is one of the largest recorded earthquakes in eastern North America, and it was the most damaging earthquake to strike the eastern United States since a magnitude 7 earthquake near Charleston, S.C., in 1886. Considering the population density along the East Coast, more people may have felt the 2011 earthquake than any other earthquake in North America, with perhaps one third of the U.S. population having felt the shaking.

The earthquake caused an estimated $200 million to $300 million in damages, which included the loss of two school buildings near the epicenter in Louisa County, substantial damage 130 kilometers away at the National Cathedral and the Washington Monument, and minor damage to other buildings in Washington, D.C., such as the Smithsonian Castle. Significant damage to many lesser-known buildings in the region was also documented. Shaking led to falling parapets and chimneys, although, fortunately, there were no serious injuries or fatalities. Rockfalls were recorded as far as 245 kilometers away, and water level changes in wells were recorded up to 550 kilometers away.

Fig. 1. Red dots denote epicenters of earthquakes of magnitude 3.5 or greater recorded since 1971 and indicate that earthquakes occur across large areas of eastern North America. The epicenter of the 2011 Mineral, Va., event is shown by the yellow star. Credit: USGS

The intraplate Mineral earthquake (meaning it occurred within a tectonic plate far from plate boundaries) is the largest eastern U.S. earthquake recorded on modern seismometers, which allowed for accurate characterization of the rupture process and measurements of ground shaking. Technologies developed in the past few decades, together with an evolving understanding of earthquake sources, created opportunities for comprehensive geological and geophysical studies of the earthquake and its seismic and tectonic context. These research efforts are providing valuable new understanding of such relatively infrequent, but damaging, eastern North American earthquakes (Figure 1), as well as intraplate earthquakes generally, but they have also highlighted perplexing questions about these events’ causes and behaviors.

Revealing New Faults

The Central Virginia Seismic Zone has a long history of occasional earthquakes [Chapman, 2013; Tuttle et al., 2021]. The largest recorded event before 2011 was an earthquake that damaged buildings in and near Petersburg, Va., in 1774, but larger events are evident in the geologic record from studies of ground liquefaction. These are natural earthquakes, with no evidence that they are caused by human activity such as injection or withdrawal of fluids in wells.

Postearthquake studies of the area around the Mineral earthquake’s epicenter involved geologic mapping, high-resolution topographic mapping with lidar, airborne surveys of Earth’s magnetic field, seismic methods to examine subsurface structure, and detailed examination of faults using trenching. The earthquake began about 8 kilometers underground on a northeast–southwest trending fault that dips to the southeast. The rupture progressed upward and to the northeast across three major areas of slip [Chapman, 2013], thrusting rocks southeast of the fault upward relative to rocks to the northwest, although the fault rupture did not break the surface. This previously unknown fault has been named the Quail fault [Horton et al., 2015b].

U.S. Geological Survey (USGS) scientists explore a trench dug near Mineral, Va., for signs of deformation related to the 2011 earthquake. Credit: Stephen L. Snyder, USGS

The relationship between the Quail fault and ancient faults in the region is debated. The simple hypothesis is that the many faults along which modern earthquakes concentrate represent long-term zones of weakness. However, over their long geologic history, some of these old faults have gone through metamorphic events, in which exposure to high pressures and temperatures may have hardened or healed the faulted rocks. Magnetic data collected after the 2011 earthquake are consistent with rocks having different magnetic properties on each side of the Quail fault, suggesting the earthquake ruptured an older fault juxtaposing different rock types [Shah et al., 2015]. Yet a corresponding fault is not apparent in seismic reflection data from several kilometers south of the earthquake, indicating either that the fault terminates not far south of the 2011 epicenter or that there is a different explanation for the magnetic anomalies [Pratt et al., 2015].

As with faults associated with many earthquakes in eastern North America, the Quail fault does not appear to extend upward to any previously mapped faults at the surface, making past ruptures on such faults difficult to study. Clearly, there is still much to learn about the complicated relationships between modern seismicity and the locations and orientations of older intraplate faults, raising questions about the common assumption that future earthquakes will reactivate older faults and creating uncertainties in regional hazard assessments.

Shaking Eastern North America

The crust of the eastern North American plate is older, thicker, colder, and stronger than younger crust near the plate’s active margins, allowing efficient energy transmission that results in higher levels of shaking reaching much greater distances than is typical for earthquakes in the western part of the continent. Such intraplate settings also cause earthquakes to be relatively energetic (with high “stress drop”) for their size [McNamara et al., 2014a; Wu and Chapman, 2017], which results in relatively high frequency shaking that can cause strong ground accelerations and damage to built structures. The strong accelerations from the Mineral earthquake, for example, caused a temporary shutdown of the reactors at the North Anna Nuclear Generating Station in Louisa County, although damage was minimal.

Within minutes of the Mineral earthquake, it was evident that the event had the largest felt area of any eastern U.S. earthquake in more than 100 years. The U.S. Geological Survey (USGS) Did You Feel It? (DYFI?) website, where people can report and describe earthquake shaking, received entries from throughout the eastern United States and southeast Canada (Figure 2). Seismometers indeed showed ground shaking extending to far greater distances than for similarly sized earthquakes in the western United States, as had been observed in smaller earthquakes. These seismic readings provide an important data set for accurately determining the attenuation of seismic energy with distance across eastern North America, which is valuable information for estimating potential extents of damage in future earthquakes.

Fig. 2. Comparison of felt reports in the USGS Did You Feel It? (DYFI?) system from the 2011 Mineral earthquake and the similarly sized 2014 Napa earthquake in California. Shaking during the Mineral earthquake was reported over a much larger area. Considering the modern population density in the eastern United States, the Mineral earthquake was probably felt by more people than any other earthquake in U.S. history. Credit: USGS

Ground shaking during the Mineral earthquake was decidedly stronger to the northeast of the epicenter [McNamara et al., 2014b]. This variation with azimuth was found to be mostly due to the more efficient transmission of energy parallel to the Appalachians and the edge of the continent, indicating the strong influence that crustal-scale geology has on ground shaking. A similar pattern was recently seen in the magnitude 5.1 Sparta, N.C., earthquake in 2020.

Also notable during the Mineral earthquake was the enhanced strength of shaking in Washington, D.C., which was quickly recognized in the DYFI? reports [Hough, 2012]. This localized area of stronger shaking was primarily caused by amplification of seismic energy in the Atlantic Coastal Plain sediments that underlie much of the city. Seismometer recordings and modeling of ground shaking using soil profiles have shown that this effect can be severe in the eastern part of the continent where the transition from extremely hard bedrock to soft overlying sediments amplifies shaking and efficiently traps seismic energy through internal reflections in the sediments [Pratt et al., 2017]. Similar amplification effects by sediments have concentrated damage in other earthquakes, for example, in the Marina District of San Francisco during the 1989 Loma Prieta earthquake and in Mexico City during the 1985 Michoacán earthquake.

The large amplification during the Mineral earthquake by the Atlantic Coastal Plain sediments, which cover coastal areas from New York City to Texas, has given impetus to recent studies characterizing how this effect is influenced by sediment layers of different thicknesses and different frequencies of shaking. Personnel from the USGS National Seismic Hazard Model project are evaluating results from these studies for production of more accurate national-level hazard maps on which many building codes are based.

Renewed Interest in Intraplate Earthquakes

Earthquakes within plate interiors generally receive less attention than the more frequent events at active plate boundaries, including those of western North America. With its extensive affected area, the Mineral earthquake led to increased interest in the causes of intraplate earthquakes. Earthquakes at plate boundaries occur largely because of differential motion between adjacent plates. Earthquakes within relatively stable tectonic plates are more difficult to understand. Hypotheses to explain their occurrence include plate flexing due to long-term glacial unloading (melting) or erosion of large areas of rock and sediment, drag caused by mantle convection below a plate, residual tectonic stress from earlier times that has not been released, gravitational forces caused by heavy crustal features such as old volcanic bodies, and stress transmitted from plate edges. The question of what causes these earthquakes remains unresolved, and the answer may differ for different earthquakes.

Studies of the Mineral earthquake have offered new understanding and insights into intraplate earthquakes, such as the behavior and duration of aftershock sequences following eastern North American earthquakes. Portable seismometers deployed following the earthquake were used to identify nearly 4,000 aftershocks in the ensuing months [McNamara et al., 2014a; Horton et al., 2015b; Wu et al., 2015]. Of about 1,700 well-located aftershocks, the majority occurred in a southeast dipping zone forming a donut-like ring around the main shock rupture area. The aftershocks show a variety of fault orientations over a relatively wide area, indicating rupture of small secondary faults. Aftershocks from the Mineral earthquake continue today, providing information to better forecast aftershock probabilities and durations in future eastern North American earthquakes.

Detailed geologic studies of the earthquake’s epicentral region also led to the recognition of fractures, or joints, in the bedrock that trend northwest–southeast. These joints have orientations similar to some of the fault planes determined from aftershocks but significantly different from the main shock fault plane. This observation indicates that some aftershocks are occurring on small faults trending at sometimes large angles to the Quail fault, with some being parallel or nearly parallel to Jurassic dikes mapped in the region. These aftershocks and joints are indicative of the influence of older deformation of multiple ages on modern seismicity.

Although seismometers throughout the region provided important recordings of ground motions during the Mineral earthquake, the earthquake was also a “missed opportunity” for studying infrequent intraplate earthquakes because of the relatively small number of seismometers operating in the eastern United States at the time. Far more information could have been obtained had there been more instruments, especially near-source instruments to record strong shaking. To increase the density of seismometers in the central and eastern United States and prepare for studies of future earthquakes in the region, in 2019 the USGS assumed operations of the Central and Eastern United States N4 network, comprising stations retained from the EarthScope Transportable Array.

Reminder of Risk

Scaffolding was erected around the Washington Monument, seen here from inside the Lincoln Memorial, to help repair damage caused by shaking from the 2011 earthquake. Credit: Thomas L. Pratt

The Mineral earthquake offered a startling reminder for many people that eastern North America is not as seismically quiet as it might seem. Damaging earthquakes off Cape Ann, Mass., in 1755 and near Petersburg, Va., in 1774 demonstrated this fact more than 2 centuries ago, and the 1886 Charleston earthquake showed just how damaging such events can be. Recent events, including the recent magnitude 4.1 earthquake in Delaware in 2017 and the 2020 Sparta, N.C., earthquake, have shown that earthquakes can still strike unexpectedly across much of the eastern United States (Figure 1).

Geologic evidence of large earthquakes in the ancient past in eastern North America is clear. A magnitude 6 to 7 earthquake, likely in the past 400,000 years, ruptured a shallow fault beneath present-day Washington, D.C., for example, with the fault now visible in a road cut near the National Zoo.

In any given area of the eastern United States, however, these damaging earthquakes are infrequent on human timescales and are commonly followed by decades of quiescence, so their impacts tend to fade from memory. Nonetheless, such earthquakes can cause severe and widespread damage exceeding that from more frequent severe storms and floods. Also, unlike many natural hazards, earthquakes provide virtually no advance warning. Even if an earthquake early-warning system like that operating along the West Coast is eventually installed in the central and eastern United States, there would be, at most, seconds to tens of seconds of notice before strong shaking from a large earthquake.

The rarity of earthquakes in central and eastern North America presents challenges for studying their causes and effects and for planning mitigation efforts to reduce damage and loss of life in future earthquakes. Yet the potential consequences of not taking some mitigation measures can be extreme. Many older buildings across this vast region were constructed without regard for earthquakes, with unreinforced masonry buildings being particularly vulnerable. These older construction practices, combined with the high efficiency of energy transmission and potential local amplification of ground shaking by sediments, create potential risk for eastern North American cities to sustain extensive damage during an earthquake.

For example, damage seen in Washington, D.C., from the Mineral earthquake shows that a repeat of the ancient magnitude 6 to 7 earthquake on a fault directly beneath the city would be devastating to the city and could severely affect federal government operations. The last earthquake on the fault beneath D.C. is thought to have occurred tens to hundreds of thousands of years ago—that is, in the geologically recent past—suggesting the fault could still be active. Its next rupture may not occur for thousands of years or more, yet there is also the remote chance that it could happen much sooner.

The Mineral earthquake showed clearly that seismic risks in eastern North America cannot be ignored, as there will inevitably be more earthquakes that cause damage in this part of the country. It was only in the second half of the past century that probabilistic seismic hazard assessments, like the U.S. National Seismic Hazard Model, were developed to quantify the ground shaking that buildings may experience in a given time frame. These shaking forecasts provide guidelines for constructing buildings and other infrastructure to suitable levels for seismic safety.

Retrofitting vulnerable structures and raising awareness of the earthquake risks—and of simple, inexpensive mitigation actions like keeping an emergency preparedness kit on hand and making contingency plans (e.g., for family separations)—are important societal steps to help safeguard the population. Meanwhile, continued scientific research that builds on the work done since the Mineral earthquake and explores past and present earthquakes elsewhere in eastern North America will improve seismic hazard assessments to better estimate and mitigate ground shaking expected during earthquakes to come.

First Report of Seismicity That Initiated in the Lower Mantle

EOS - Thu, 08/19/2021 - 13:40

On 30 May 2015, a magnitude 7.9 earthquake took place beneath Japan’s remote Ogasawara (Bonin) Islands, located about 1,000 kilometers south of Tokyo. The seismic activity occurred over 660 kilometers below Earth’s surface, near the transition between the upper and lower mantle. The mechanism of deep-focus earthquakes, like the 2015 quake, has long been mysterious—the extremely high pressure and temperature at these depths should result in rocks deforming, rather than fracturing as in shallower earthquakes.

By using a 4D back-projection method, Kiser et al. traced the path of the 2015 earthquake and identified, for the first time, seismic activity that initiated in the lower mantle. They relied on measurements by the High Sensitivity Seismograph Network, or Hi-net, a network of seismic stations distributed across Japan. The data captured by these instruments are analogous to ripples in a pond produced by a dropped pebble: By calculating how seismic waves spread, the researchers were able to pinpoint the path of the deep-focus quake.
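The "ripples in a pond" analogy can be sketched as delay-and-stack imaging, the idea underlying back-projection: shift each station's record by the travel time predicted for a candidate source and sum; the stack peaks where the predicted delays match the true ones. A one-dimensional toy with invented geometry and wave speed (a sketch of the principle, not the authors' 4D method):

```python
import numpy as np

# Stations record a pulse delayed by travel time from the source;
# stacking traces aligned on each candidate location's predicted
# arrivals peaks at the true source. All values are invented.
v = 8.0                                           # wave speed, km/s
stations = np.array([0.0, 120.0, 300.0, 450.0])   # station positions, km
true_source = 200.0                               # km
dt = 0.1                                          # sample interval, s
t = np.arange(0.0, 100.0, dt)

# Synthetic records: a unit pulse at each station's arrival time
traces = np.zeros((len(stations), len(t)))
for i, x in enumerate(stations):
    arrival = abs(x - true_source) / v
    traces[i, int(round(arrival / dt))] = 1.0

# Back-project: for each candidate source, sample every trace at its
# predicted arrival time and stack the amplitudes
candidates = np.arange(0.0, 460.0, 10.0)
stack = np.zeros(len(candidates))
for j, xs in enumerate(candidates):
    for i, x in enumerate(stations):
        idx = int(round((abs(x - xs) / v) / dt))
        stack[j] += traces[i, idx]

print(candidates[np.argmax(stack)])  # 200.0: the stack peaks at the source
```

The real method extends this idea to a 3D grid of candidate locations evolving in time (hence "4D"), using travel times through an Earth velocity model and the hundreds of Hi-net stations.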

The team found that the main shock initiated at a depth of 660 kilometers, then propagated to the west-northwest for at least 8 seconds while decreasing in depth. Analyses of the 2 hours following the main shock identified aftershocks between depths of 624 and 751 kilometers. A common model for deep-focus earthquakes is transformational faulting; in other words, instability causes the transition of olivine in a subducting slab into a denser form, spinel. The aftershocks below 700 kilometers, however, occurred outside the zone where this transition occurs. The authors propose that the deep seismicity may have resulted from stress changes caused by settling of a segment of subducting slab in response to the main shock, although the hypothesis requires future investigation. (Geophysical Research Letters, https://doi.org/10.1029/2021GL093111, 2021)

—Jack Lee, Science Writer

The First Angstrom-Scale View of Weathering

EOS - Thu, 08/19/2021 - 13:40

This is an authorized translation of an Eos article.

Sedimentary rocks and water are both abundant at Earth's surface, and over long stretches of time their interactions turn mountains into sediment. Researchers have long known that water weathers sedimentary rock both physically, by facilitating rock abrasion and migration, and chemically, through dissolution and recrystallization. But these interactions had never before been seen in situ at the angstrom scale.

In a new study, Barsotti et al. used environmental transmission electron microscopy to capture dynamic images of water vapor and droplets interacting with samples of dolomite, limestone, and sandstone. Using a custom fluid injection system, the team exposed the samples to distilled water and monitored the water's effects on pore size over the course of 3 hours. Physical weathering was readily observable in the water vapor experiments, whereas the chemical processes of dissolution and recrystallization were more pronounced in experiments with liquid-phase water.

The researchers observed an adsorbed water layer that formed on the micropore walls of all three rock types. They found that as water vapor was added, pore size contracted by up to 62.5%. After 2 hours, when the water was removed, pore sizes increased. Overall, relative to initial size, the final pore size of the dolomite decreased by 33.9%, whereas pore size increased by 3.4% and 17.3% in the limestone and sandstone, respectively. The team suggests that these pore size changes were driven by adsorption-induced strain. The liquid-phase experiments revealed that dissolution rates were highest in the limestone, followed by the dolomite and the sandstone.

The study supports previous work suggesting that dissolution and recrystallization can alter the size and shape of pores in sedimentary rocks. It also provides the first direct evidence from an in situ experiment that adsorption-induced strain is a source of weathering. Ultimately, these changes in pore geometry could lead to changes in rock properties, such as permeability, that influence water flow, erosion, and elemental cycling at larger scales. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB021043, 2021)

—Kate Wheeling, Science Writer

This translation by Daniela Navarro-Pérez (@DanJoNavarro) of @GeoLatinas and @Anthnyy was made possible by a partnership with Planeteando.

Cosmological Tool Helps Archaeologists Map Earthly Tombs

EOS - Wed, 08/18/2021 - 13:13

Thousands of ancient tombs dot the semiarid landscape of eastern Sudan.

Compared to their more famous cousins in Egypt, these tombs are neither well-known nor well studied because they lie off the beaten path. Just a single road leads to the city of Kassala, and an additional 5 hours of off-roading are required to get to the funerary sites, said Stefano Costanzo, a Ph.D. student in archaeology at the University of Naples “L’Orientale” in Italy.

But the journey is worth it.

“I didn’t know I was interested before seeing them. I didn’t even know they were there. But because I went and saw them, you know, [my] interest just shot up,” Costanzo said. Viewed from the field, “the place was stunning,” he said.

As described in a new study published in the open-access journal PLoS ONE, satellite images of these tombs revealed an even more intriguing observation: Not only were the funerary sites numerous; they were also clustered in groups of up to thousands of structures.

Clusters of Qubbas

The study is the first to apply a statistical method created for cosmology to the more grounded field of archaeology to quantitatively describe the immense number of tombs and how their locations were scattered across the landscape.

Using satellite imagery and information gathered in field surveys he had conducted over the previous 3 years, Costanzo was able to map the locations of the funerary structures. It took 6 months to draw the map at a resolution high enough to permit statistical analysis.

“I literally drew single boulders sometimes,” said Costanzo, the lead author of the study.

The funerary structures in eastern Sudan came in two flavors: tumuli, which are simpler raised structures made of earth or stone, and qubbas, which are square shrines or tombs constructed with flat slabs of foliated metamorphic rock, standing about 2 meters tall and 5 meters wide. Most of the site's tumuli date to the first millennium CE, whereas the qubbas are associated with medieval Islam and were built in the area from the 16th century up to the 20th.

In all, the data set contained 783 tumuli and 10,274 qubbas in the roughly 6,475-square-kilometer (2,500-square-mile) region.

Viewed from the sky, the qubbas were clustered along foothills or atop ridges. However, topography was not enough to completely explain where the qubbas were located—there seemed to be another force that drew them close to one another.

Neyman-Scott Cluster Process

Study coauthor Filippo Brandolini, an archaeologist and environmental scientist at Newcastle University in the United Kingdom, extensively reviewed different methods for analyzing spatial statistics before coming across the Neyman-Scott cluster process. First used to study the spatial pattern of galaxies, the process has since been used in ecology and biology research but never before in the field of archaeology.

“It’s actually refreshing to see people use some of these methods, even though they might not identify themselves as statisticians, in a pretty sound way,” said Tilman Davies, a statistician at the University of Otago in New Zealand.

In fact, many statistical tools like the Neyman-Scott process were not originally created by statisticians, said Davies, who was not involved in the study. “They were sort of formulations that were derived by practitioners or applied scientists working in a particular field and thinking ‘This particular mathematical construction might be useful for my particular application.’”

In Sudan, the researchers found that the tombs clustered not only around environmental features but also toward one another. Just as there is a natural tendency for galaxies or stars to group together, there seems to be “a sort of gravitational attraction, which is actually social-cultural” in the patterns among the qubbas, possibly involving tribes or families, Costanzo said.
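The mechanism behind such patterns is easy to demonstrate with a toy simulation. The sketch below (Python with NumPy) generates a Thomas process, the common variant of the Neyman-Scott cluster process in which offspring points scatter around parent points with Gaussian dispersal; the parameter values are purely illustrative and this is not the study's fitted model, which also accounted for environmental predictors such as topography.

```python
import numpy as np

rng = np.random.default_rng(42)

def neyman_scott(kappa, mu, sigma, window=(0.0, 1.0, 0.0, 1.0)):
    """Simulate a Neyman-Scott cluster process (Thomas variant:
    Gaussian offspring dispersal) in a rectangular window.

    kappa : intensity of the Poisson process of parent (cluster center) points
    mu    : mean number of offspring per parent
    sigma : standard deviation of the Gaussian offspring displacement
    """
    xmin, xmax, ymin, ymax = window
    area = (xmax - xmin) * (ymax - ymin)

    # Parents: a homogeneous Poisson process with intensity kappa.
    n_parents = rng.poisson(kappa * area)
    parents = rng.uniform([xmin, ymin], [xmax, ymax], size=(n_parents, 2))

    # Offspring: Poisson(mu) points per parent, Gaussian-scattered around it.
    points = []
    for p in parents:
        n_off = rng.poisson(mu)
        points.append(p + rng.normal(0.0, sigma, size=(n_off, 2)))
    pts = np.vstack(points) if points else np.empty((0, 2))

    # Keep only the offspring that fall inside the observation window;
    # the parents themselves are latent and not observed.
    inside = (pts[:, 0] >= xmin) & (pts[:, 0] <= xmax) & \
             (pts[:, 1] >= ymin) & (pts[:, 1] <= ymax)
    return pts[inside]

pts = neyman_scott(kappa=20, mu=30, sigma=0.02)
print(len(pts))  # roughly kappa * mu points, minus edge losses
```

Plotted, the result looks like tight clumps of points around invisible centers, the same qualitative signature the team quantified for the qubbas.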

Green dots mark the sites of 1,195 clustered qubbas around and on top of a rocky outcrop in eastern Sudan. Credit: Stefano Costanzo

“They are attempting to describe the appearance of these points, both in terms of environmental predictors and in terms of some random mechanism that could explain this aggregation,” Davies said. “So they’re attempting to combine two things, which is often a very, very difficult thing to do.”

Similar statistical tools and models may be a boon for archaeology as a whole.

Many locations are remote and hard to get to on the ground. Using satellite and remote sensing—in addition to methods that permit quantification—could allow for rigorous archaeology from the desk.

“We discovered the applicability of this tool; we didn’t invent it. But we found out that it is useful in archaeological terms,” Costanzo said. “It has the potential to help many other research expeditions in very remote lands.”

—Richard J. Sima (@richardsima), Science Writer

Undertaking Adventure to Make Sense of Subglacial Plumes

EOS - Wed, 08/18/2021 - 13:13

“This is impossible” was our first reaction 5 years ago when Shin Sugiyama and I first heard the idea from Naoya Kanna, a postdoctoral scholar at Hokkaido University’s Arctic Research Center in Japan at the time [Kanna et al., 2018, 2020]. What was his “impossible” proposal? Kanna, now a postdoctoral scholar at the University of Tokyo, wanted to deploy oceanographic equipment into the water to study a turbulent glacial plume issuing from the calving front of a Greenland glacier.

If we could make that idea work, our instruments would record the first extended look at the chaotic region where fresh glacial meltwater streams out into the salt water of the ocean. The data would span a few weeks, in contrast to the snapshots from previous expeditions.

Collecting data near a crevasse-riven glacial terminus is challenging for several reasons. Instruments can disappear or become impossible to retrieve when chunks of ice break from the edge of a glacier. This type of event, called calving, can generate tsunami-like splashes as high as about 20 meters. Ice mélanges, disorderly mixes of sea ice and icebergs commonly seen where glaciers meet the sea, can block access to fjords or tug on cables attached to oceanographic instruments. Adding to the difficulties, anything that is deployed on a glacier’s surface melts out during the summer.

However, as ice loss and discharges of meltwater and sediment from coastal glaciers around the world continue to increase, it is important to understand exchanges of heat, meltwater, and nutrients between marine-terminating glaciers and their surroundings. These factors affect sea level, ocean circulation and biogeochemistry, marine ecosystems, and the communities that rely on marine ecosystems [Straneo et al., 2019].

Unfortunately, the data needed to fill key gaps in our understanding of ice-ocean interactions have been in short supply [Straneo et al., 2019]. This situation is starting to change, however, with the emergence of pioneering new research efforts, including an expedition at the calving front of Greenland’s Bowdoin Glacier in July 2017 in which colleagues and I participated.

The Challenges and Serendipity of Installing Instruments

Our expedition faced its first challenge early on. Because of a 7-day flight delay getting to northwest Greenland, we had only one night to prepare our expedition supplies in Qaanaaq Village (77.5°N) before our chartered helicopter’s scheduled departure. After rushing to collect the necessary food and gear for our week-and-a-half trip, we made our flight the next morning, which took us about 30 kilometers northwest to Bowdoin Glacier on 6 July. We had a lot of work ahead of us in the limited time available, not the least of which involved carrying our geophysical instruments (see sidebar) on our backs to their deployment sites at various locations on and around the glacier.

The discharge site studied (greenish area at center of left image), where meltwater traveling beneath Bowdoin Glacier reaches the ocean and pushes aside the floating ice mélange, is seen in this 7 July 2017 photo taken by an uncrewed aerial vehicle. On 8 July, the glacier calved an iceberg along the crevasse running from the top center of the photo down and to the right, creating a new glacier front where scientists then deployed instruments into the water. At right, the author stands beside a crevasse that blocked access to the calving front of Bowdoin Glacier on 6 July 2017. Credit: left, Eef van Dongen/Podolskiy et al. [2021a], CC BY 4.0; right, Lukas E. Preiswerk/van Dongen et al., 2021, CC BY 4.0

A major crevasse, some 2 meters wide, initially blocked our direct access to the calving front, but the path was cleared after the glacier serendipitously calved a kilometer-scale iceberg on 8 July. We now had access to the fresh 30-meter-high ice cliff, but the evidence, or expression, of the plume on the water’s surface was gone.

Surface expressions of glacial discharge plumes typically appear as semicircular areas of either turbid water discolored relative to the surrounding seawater or, as was the case upon our arrival at Bowdoin in 2017, mostly ice free water surrounded by sea ice and iceberg-laden water (or sometimes both). These waters provide clear indications of the locations and timing of discharge plumes. Without the surface expression, we could make only an educated guess that the plume should still be there under the water.



Finally, on 13 July, we deployed our oceanographic instrumentation, hanging sensors from the calving front to collect continuous pressure, temperature, and salinity measurements at roughly 5- and 100-meter depths in the salty water of the fjord. Such a feat is trickier than it might sound. As we well knew—and had been reminded of a few days earlier with the large calving event—glacial termini are mobile, treacherous environments. We had to deploy a few hundred meters of cables, instrumentation, and protective pipes at the crevassed calving front, all while trying not to damage the equipment with our crampons and securing each other with ropes. Meanwhile, our Swiss colleagues were remotely operating uncrewed aerial vehicles (UAVs) over the calving front, providing near-real-time safety support and guidance on crevasse development.

Our expedition required more than just down parkas and warm socks. If you take along the following items, you will be all set to study a subglacial discharge plume:

hundreds of meters of cables connecting a data logger to oceanographic sensors
four time-lapse cameras
a seismometer for on-ice deployment
pressure and temperature sensors
a conductivity-temperature-depth profiler
a helicopter
a boat
two uncrewed aerial vehicles
an ice drill, ice screws, and other mountaineering gear
supporting tidal and air temperature data sets (we got ours from Thule and Qaanaaq, respectively)
an incredible team
an unbelievable amount of luck
a chocolate bar in your pocket
a bottle of Champagne, chilled in a water-filled crevasse, to celebrate a successful deployment

The same evening of the deployment, a strange chain of events started to unfold near our tent camp, 2 kilometers up glacier from the plume. A huge chunk of dead ice (a stationary part of the glacier) collapsed into a large ice-dammed lake and triggered a small tsunami, which displaced the pressure and temperature sensors we had deployed in the lake. The sensors weren’t the only things that moved—the sound of this collapse was so loud that everyone in our camp ran out of their tents to see what was happening.

The next evening, while we were enjoying dinner, we realized that the ice-dammed lake was draining in front of our eyes! The bed of this lake, which had so recently held enough water to fill about 120 Olympic swimming pools, was now exposed. Unlike the simple, idealized glacier lake bed structures used in some models, what we saw in this chasm resembled limestone cave structures: a complex jumble of spongelike features and ponds. We were very fortunate to be present for this event, with all our instruments deployed and collecting data.

Soon after, we got a radio call from Martin Funk, a glaciologist from ETH Zürich (who has since retired) who was working in front of the glacier terminus. Funk, who was encamped with his colleagues on top of a protruding mountain ridge (called a nunataq) in front of the glacier, had a front-row seat for the event. Through binoculars, he saw that the discharge plume had resurfaced right where our oceanographic instruments were set up. The water in the lake had drained under the glacier via the plume we were monitoring. Funk’s team used radar, time-lapse photography, and UAVs to capture as many remote observations as possible. We have since confirmed these observations using analysis of our oceanographic data, time-lapse photographs, UAV images, and a nearby seismometer that recorded the drainage event.

The ice-dammed marginal lake (top) is seen here from a helicopter on 6 July 2017, before it drained abruptly on 14 July. The chasm left behind by the drained lake (bottom) was photographed on 16 July. Credit: Evgeny A. Podolskiy

The helicopter retrieved us from the glacier, and we returned to Qaanaaq on 17 July. While we flew along the calving front, I was amazed to see that the cable connected to our deep sensor and its ballast were inclined away from the ice cliff, likely because of the strong current generated by the plume.

Some team members returned to the field area by boat on 1 August, climbing onto and traversing the glacier to collect the data and instrumentation at the calving front. They found the unsupervised cables dangling from a bent aluminum stake. Originally, the stake had been drilled 2 meters into the ice to secure the cables, but by 1 August, it had melted halfway out of the ice. The retrieval was timely: Later that same day, a several-hundred-meters-long section of the terminus near the equipment setup and data logger calved into the water.

Dealing with Difficult Data

Bringing home our hard-earned data set seemed like a major achievement; however, it proved to be just the beginning of our work on this project. The complexity of the retrieved data was a nightmare. Colleagues of ours were eager to plug the oceanographic observations into their models of plumes and subaqueous melt, but we were hesitant to share this unique data set until we could begin to understand it ourselves.

We eventually recognized that our data captured the physical turbulence of water near the calving front of Bowdoin Glacier—this chaotic behavior in fluids has fascinated scientists, including Leonardo da Vinci, for half a millennium. Dealing with turbulence in our data was already a daunting task, but other complications added to the challenge of making sense of it.

For example, the instrumentally observed pressure variations, which represent water depth, indicated that the plume occasionally “spit out” our sensor that had been anchored 100 meters below the surface, pushing it outward from where it had been deployed, and icebergs then pulled it up to the surface. This highly unconventional observation eventually yielded remarkable results. We realized that our sensor was traveling with the turbulent water rather than taking data at a single location as expected, thus producing Lagrangian time series data.

This 17 July 2017 helicopter photo shows how the subglacial discharge plume’s current pushed an underwater sensor away from the face of the ice cliff, causing the sensor’s cable to angle outward from the cliff. The horizontal undercut strip near the water surface is a tidal melt notch, caused by melting below the water’s surface. Credit: Shin Sugiyama

In contrast to previous studies, which obtained infrequent, single profiles of the water column, this time-lapse profiling of the water column documented a dramatic shift in fjord stratification over a span of a few days. For example, on 16 July, a cold layer of water began a 3-day descent, and the water column profile we obtained on 24 July bore no resemblance to conditions just 1 week earlier.

It took a few years of demanding detective work to understand every wiggle in the data recorded during those 12 days of observations and their causal connections with ice-ocean dynamics at the calving front. We learned a lot simply by analyzing the nearly 1 terabyte of time-lapse photography we and Funk’s team had shot. But our main target, the plume, was mostly hiding underwater, obscured by the low-salinity, relatively warm water at the surface near the calving front. Nevertheless, various lines of evidence provided fingerprints of the plume and details of its activity. For example, the seismic records collected near where the plume exited the glacial front revealed a low-frequency seismic tremor signal that lasted 8.5 hours during the drainage of the lake.

After exhausting the classical signal-processing methods common in oceanographic, statistical, and seismic analyses, I realized we needed a radically different approach to comprehend the subsurface water pressure, salinity, and temperature data from within the plume.

The time series of these oceanographic data started to remind me of the nonlinear cardiograms I used to see at home when I was younger, courtesy of my mother, a cardiologist. The heart and many other dynamical systems can generate irregular, chaotic patterns even though they are fully deterministic, without requiring any random, or stochastic, fluctuations in the system. In our case, the irregularities in the data came from the occasional turbulence caused by the plume and by icebergs, which repeatedly pushed the sensors through different water masses.

There are many linear methods that can be used to understand the evolution of time series. For example, the well-known Fourier transform mathematically converts time domain data to the frequency domain. But such methods neither statistically characterize the observed dynamics of a system nor detect the existence of so-called attractors (states to which a system tends to converge or return, like a pendulum that eventually comes to rest at the center of its arcing swings)—both capabilities we needed to make sense of our complex data. Mapping different states (i.e., water masses) visited by moving sensors collecting scalar, or point, measurements required us to move beyond linear analytical methods.
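To make the point concrete, a Fourier transform readily pulls a periodic component, such as a tidal signal, out of a noisy record, yet it says nothing about the state-space structure of the series. Below is a minimal sketch in Python with NumPy on synthetic data (not the expedition's records); the sampling rate and noise level are illustrative assumptions.

```python
import numpy as np

# A 12-day record sampled once per minute, with a 12.42-hour (M2 tidal)
# oscillation buried in noise; values are illustrative only.
fs_per_day = 24 * 60                              # samples per day
t = np.arange(12 * fs_per_day) / fs_per_day       # time in days
m2_per_day = 24 / 12.42                           # M2 tide, cycles per day
x = (np.sin(2 * np.pi * m2_per_day * t)
     + 0.5 * np.random.default_rng(0).normal(size=t.size))

# Fourier transform: time domain -> frequency domain.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1 / fs_per_day)  # cycles per day

peak = freqs[np.argmax(spec[1:]) + 1]  # skip the zero-frequency bin
print(f"dominant frequency: {peak:.3f} cycles/day")  # near the M2 tide, ~1.93
```

The spectrum identifies the tidal frequency, but it cannot tell us which water mass a drifting sensor was sampling at a given moment, which is exactly the question the plume data posed.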

In the arsenal of nonlinear methods, there is a technique called time delay embedding, which projects data from the time domain to the state space domain [Takens, 1981]. This technique reveals structure in a system graphically by using delay coordinates, which combine sequential scalar measurements in a time series, with each measurement separated by a designated time interval (the delay time), into multidimensional vectors. This technique helps reveal attractors and identify transitions in a time series (e.g., distinguishing between signals from the lake water and the fjord water). We applied this (almost) magical approach in combination with other, more conventional methods to decipher our oceanographic records and comprehend the observed dynamics of the plume [Podolskiy et al., 2021a].
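For readers unfamiliar with the technique, delay embedding can be sketched in a few lines of Python; the function, the sine-wave input, and the embedding parameters here are illustrative assumptions, not the choices made in the study.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time delay embedding (Takens): map a scalar series x into
    dim-dimensional delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A noiseless sine wave: in the time domain it is a 1-D wiggle, but its
# 2-D delay embedding traces a closed loop -- the limit-cycle attractor.
t = np.linspace(0, 20 * np.pi, 2000)
emb = delay_embed(np.sin(t), dim=2, tau=50)
print(emb.shape)  # (1950, 2)
```

Plotting the two embedding coordinates against each other draws that loop directly, and a sensor jumping between distinct water masses shows up as jumps between distinct regions of the reconstructed state space.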

New Discoveries and Problems Still to Solve

Our analyses revealed previously undocumented high-frequency dynamics in the glacier-fjord environment. These dynamics extended well beyond anything we had imagined, and they could be important for understanding submarine melting, water mixing, and circulation, as well as biogeochemistry near glacial termini. For example, sudden stratification shifts may imply the descent of a cold water layer and the corresponding thickening of a warmer layer near the surface as well as the occurrence of enhanced melting that undercuts ice cliffs and leads to calving.

Observed ripples in the water surface and detected seismicity may offer innovative ways to monitor abrupt, large discharge events even when there is no surface expression of the plume. These events may occur more often in the future as surface waters increasingly freshen (and become less dense), thereby blocking plumes from reaching the surface [De Andrés et al., 2020]. Also, the tidal modulation of water properties, such as temperature, that we found highlights that there are still processes not accounted for in models estimating submarine glacial melting in Greenland.

Previous efforts to study this melting were limited to providing episodic views of the ice-ocean interface and have not monitored the ocean and glacier simultaneously. Our first-of-their-kind observations could thus be informative for constraining predictive ice-ocean models. Abrupt stratification changes in a fjord, the intermittent nature of glacial plumes, tidal forcing, and sudden drainages of marginal lakes are all very complex processes to model. It is possible these processes could be parameterized to make modeling them easier or even ignored if they’re found to be insignificant in long-term predictions of ice-ocean interactions. Nevertheless, our work shows that we may need to rethink how we model and monitor discharge plumes to obtain clarity on these processes, particularly on short timescales.

Our analyses may be helpful for interpreting future records of glacial discharge and other phenomena. The methods developed and applied here are not necessarily limited to oceanography because deciphering time series is an ever-present task across the geosciences. On the other hand, our novel and customized observational approach was challenging and is suited for only short-term campaigns. We hope that ongoing developments in sea bottom lander technology and remotely operated deployments at calving fronts [e.g., Nash et al., 2020; Podolskiy et al., 2021b] will pave the way for continuous, year-round observations in these critical environments, providing further insights to help scientists understand the effects of increasing glacial melting on ocean dynamics and marine ecosystems.

Acknowledgments

This work was part of Arctic Challenge for Sustainability research projects (ArCS and ArCS II; grants JPMXD1300000000 and JPMXD1420318865, respectively) funded by the Ministry of Education, Culture, Sports, Science and Technology of Japan.
