Give Homo naledi credit for originality. The fossils of this humanlike species previously revealed an unexpectedly peculiar body plan. Now its pockmarked teeth speak to an unusually hard-edged diet.
H. naledi displays a much higher rate of chipped teeth than other members of the human evolutionary family that once occupied the same region of South Africa, say biological anthropologist Ian Towle and colleagues. Dental damage of this kind results from frequent biting and chewing on hard or gritty objects, such as raw tubers dug out of the ground, the scientists report in the September American Journal of Physical Anthropology. “A diet containing hard and resistant foods like nuts and seeds, or contaminants such as grit, is most likely for H. naledi,” says Towle, of Liverpool John Moores University in England.
Extensive tooth chipping shows that “something unusual is going on” with H. naledi’s diet, says paleoanthropologist Peter Ungar of the University of Arkansas in Fayetteville. He directs ongoing microscopic studies of H. naledi’s teeth that may provide clues to what this novel species ate. Grit from surrounding soil can coat nutrient-rich, underground plant parts, including tubers and roots. Regularly eating those things can cause the type of chipping found on H. naledi teeth, says paleobiologist Paul Constantino of Saint Michael’s College in Colchester, Vt. “Many animals cannot access these underground plants, but primates can, especially if they use digging sticks.” H. naledi fossils, first found in South Africa’s subterranean Dinaledi Chamber and later a second nearby cave (SN: 6/10/17, p. 6), came from a species that lived between 236,000 and 335,000 years ago. It had a largely humanlike lower body, a relatively small brain and curved fingers suited for climbing trees.
Towle’s group studied 126 of 156 permanent H. naledi teeth found in Dinaledi Chamber. Those finds come from a minimum of 12 individuals, nine of whom had at least one chipped chopper. Two of the remaining three individuals were represented by only one tooth. Teeth excluded from the study were damaged, had not erupted above the gum surface or showed signs of having rarely been used for chewing food.
Chips appear on 56, or about 44 percent, of H. naledi teeth from Dinaledi Chamber, Towle’s team says. Half of those specimens sustained two or more chips. About 54 percent of molars and 44 percent of premolars, both found toward the back of the mouth, display at least one chip. For teeth at the front of the mouth, those figures fell to 25 percent for canines and 33 percent for incisors.
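The headline rate can be checked against the counts reported above. A quick sketch (the per-tooth-type tallies aren't given in the text, so only the overall figure is rederived here):

```python
# Sanity check: overall chipping rate among the studied H. naledi teeth.
teeth_studied = 126      # permanent teeth from Dinaledi Chamber included in the study
chipped = 56             # teeth showing at least one chip
rate = 100 * chipped / teeth_studied
print(f"{rate:.1f}%")    # prints "44.4%" -- the "about 44 percent" reported
```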
Chewing on small, hard objects must have caused all those chips, Towle says. Using teeth as tools, say to grasp animal hides, mainly damages front teeth, not cheek teeth as in H. naledi. Homemade toothpicks produce marks between teeth unlike those on the H. naledi finds.
Two South African hominids from between roughly 1 million and 3 million years ago, Australopithecus africanus and Paranthropus robustus, show lower rates of tooth chipping than H. naledi, at about 21 percent and 13 percent, respectively, the investigators find. Researchers have suspected for decades that those species ate hard or gritty foods, although ancient menus are difficult to reconstruct (SN: 6/4/11, p. 8). Little evidence exists on the extent of tooth chipping in ancient Homo species. But if H. naledi consumed underground plants, Stone Age Homo sapiens in Africa likely did as well, Constantino says.
In further tooth comparisons with living primates, baboons — consumers of underground plants and hard-shelled fruits — showed the greatest similarity to H. naledi, with fractures on 25 percent of their teeth. That figure reached only about 11 percent in gorillas and 5 percent in chimpanzees.
Human teeth found at sites in Italy, Morocco and the United States show rates and patterns of tooth fractures similar to H. naledi, he adds. Two of those sites date to between 1,000 and 1,700 years ago. The third site, in Morocco, dates to between 11,000 and 12,000 years ago. People at all three sites are suspected to have had diets unusually heavy on gritty or hard-shelled foods, the scientists say.
Chips mar 50 percent of H. naledi’s right teeth, versus 38 percent of its left teeth. That right-side tilt might signify that the Dinaledi crowd were mostly right-handers who typically placed food on the right side of their mouths. But more fossil teeth are needed to evaluate that possibility, Towle cautions.
Some stars erupt like clockwork. Astronomers have tracked down a star that Korean astronomers saw explode nearly 600 years ago and confirmed that it has had more outbursts since. The finding suggests that what were thought to be three different stellar objects actually came from the same object at different times, offering new clues to the life cycles of stars.
On March 11, 1437, Korean royal astronomers saw a new “guest star” in the tail of the constellation Scorpius. The star glowed for 14 days, then faded. The event was what’s known as a classical nova explosion, which occurs when a dense stellar corpse called a white dwarf steals enough material from an ordinary companion star for its gas to spontaneously ignite. The resulting explosion can be up to a million times as bright as the sun, but unlike supernovas, classical novas don’t destroy the star.

Astronomer Michael Shara of the American Museum of Natural History in New York City and colleagues used digitized photographic plates dating from as early as 1923 to trace a modern star back to the nova. The team tracked a single star as it moved away from the center of a shell of hot gas, the remnants of an old explosion, thus showing that the star was responsible for the nova. The researchers also saw the star, which they named Nova Scorpii AD 1437, produce smaller outbursts called dwarf novas in the 1930s and 1940s. The findings were reported in the Aug. 31 Nature.
The discovery fits with a proposal Shara and colleagues made in the 1980s. They suggested that three different stellar observations — bright classical nova explosions, dwarf nova outbursts and an intermediate stage where a white dwarf is not stealing enough material to erupt — are all different views of the same system.
“In biology, we might say that an egg, a larva, a pupa and a butterfly are all the same system seen at different stages of development,” Shara says.
Peer inside the brain of someone learning. You might be lucky enough to spy a synapse pop into existence. That physical bridge between two nerve cells seals new knowledge into the brain. As new information arrives, synapses form and strengthen, while others weaken, making way for new connections.
You might see more subtle changes, too, like fluctuations in the levels of signaling molecules, or even slight boosts in nerve cell activity. Over the last few decades, scientists have zoomed in on these microscopic changes that happen as the brain learns. And while that detailed scrutiny has revealed a lot about the synapses that wire our brains, it isn’t enough. Neuroscientists still lack a complete picture of how the brain learns.
They may have been looking too closely. When it comes to the neuroscience of learning, zeroing in on synapse action misses the forest for the trees.
A new, zoomed-out approach attempts to make sense of the large-scale changes that enable learning. By studying the shifting interactions between many different brain regions over time, scientists are beginning to grasp how the brain takes in new information and holds onto it. These kinds of studies rely on powerful math. Brain scientists are co-opting approaches developed in other network-based sciences, borrowing tools that reveal in precise, numerical terms the shape and function of the neural pathways that shift as human brains learn.
“When you’re learning, it doesn’t just require a change in activity in a single region,” says Danielle Bassett, a network neuroscientist at the University of Pennsylvania. “It really requires many different regions to be involved.” Her holistic approach asks, “what’s actually happening in your brain while you’re learning?” Bassett is charging ahead to both define this new field of “network neuroscience” and push its boundaries.
“This line of work is very promising,” says neuroscientist Olaf Sporns of Indiana University Bloomington. Bassett’s research, he says, has great potential to bridge gaps between brain-imaging studies and scientists’ understanding of how learning happens. “I think she’s very much on the right track.”

Already, Bassett and others have found tantalizing hints that the brains that learn best have networks that are flexible, able to rejigger connections on the fly to allow new knowledge in. Some brain regions always communicate with the same neural partners, rarely switching to others. But brain regions that exhibit the most flexibility quickly swap who they’re talking with, like a parent who sends a birthday party invite to the preschool e-mail list, then, moments later, shoots off a work memo to colleagues.
In a few studies, researchers have witnessed this flexibility in action, watching networks reconfigure as people learn something while inside a brain scanner. Network flexibility may help several types of learning, though too much flexibility may be linked to disorders such as schizophrenia, studies suggest.
Not surprisingly, some researchers are rushing to apply this new information, testing ways to boost brain flexibility for those of us who may be too rigid in our neural connections.
“These are pretty new ideas,” says cognitive neuroscientist Raphael Gerraty of Columbia University. The mathematical and computational tools required for this type of research didn’t exist until recently, he says. So people just weren’t thinking about learning from a large-scale network perspective. “In some ways, it was a pretty boring mathematical, computational roadblock,” Gerraty says. But now the road is clear, opening “this conceptual avenue … that people can now explore.”

It takes a neural village
That conceptual avenue is more of a map, made of countless neural roads. Even when a person learns something very simple, large swaths of the brain jump in to help. Learning an easy sequence of movements, like tapping out a brief tune on a keyboard, prompts activity in the part of the brain that directs finger movements. The action also calls in brain areas involved in vision, decision making, memory and planning. And finger taps are a pretty basic type of learning. In many situations, learning calls up even more brain areas, integrating information from multiple sources, Gerraty says.
He and colleagues caught glimpses of some of these interactions by scanning the brains of people who had learned associations between two faces. Only one of the faces was then paired with a reward. In later experiments, the researchers tested whether people could figure out that the halo of good fortune associated with the one face also extended to the face it had been partnered with earlier. This process, called “transfer of learning,” is something that people do all the time in daily life, such as when you’re wary of the salad at a restaurant that recently served tainted cheese.
Study participants who were good at applying knowledge about one thing — in this case, a face — to a separate thing showed particular brain signatures, Gerraty and colleagues reported in 2014 in the Journal of Neuroscience. Connections between the hippocampus, a brain structure important for memory, and the ventromedial prefrontal cortex, involved in self-control and decision making, were weaker in good learners than in people who struggled to learn. The scans, performed several days after the learning task, revealed inherent differences between brains, the researchers say. The experiment also turned up other neural network differences among these regions and larger-scale networks that span the brain.
Children who have difficulty learning math, when scanned, also show unexpected brain connectivity, according to research by neuroscientist Vinod Menon of Stanford University and colleagues. Compared with kids without disabilities, children with developmental dyscalculia who were scanned while doing math problems had more connections, particularly among regions involved in solving math problems. That overconnectivity, described in 2015 in Developmental Science, was a surprise, Menon says, since earlier work had suggested that these math-related networks were too weak. But it may be that too many links create a system that can’t accommodate new information. “The idea is that if you have a hyperconnected system, it’s not going to be as responsive,” he says. There’s a balance to be struck, Menon says. Neural pathways that are too weak can’t carry necessary information, and pathways that are too connected won’t allow new information to move in. But the problem isn’t as simple as that. “It’s not that everything is changing everywhere,” he says. “There is a specificity to it.” Some connections are more important than others, depending on the task.
Neural networks need to shuttle information around quickly and fluidly. To really get a sense of this movement as opposed to snapshots frozen in time, scientists need to watch the brain as it learns. “The next stage is to figure out how the networks actually shift,” Menon says. “That’s where the studies from Dani Bassett and others will be very useful.”
Flexing in real time
Bassett and colleagues have captured these changing networks as people learn. Volunteers were given simple sequences to tap out on a keyboard while undergoing a functional MRI scan. During six weeks of scanning as people learned the task, neural networks in their brains shifted around. Some connections grew stronger and some grew weaker, Bassett and her team reported in Nature Neuroscience in 2015.
People who quickly learned to tap the correct sequence of keys showed an interesting neural trait: As they learned, they shed certain connections between their frontal cortex, the outermost layer of the brain toward the front of the head, and the cingulate, which sits toward the middle of the brain. This connection has been implicated in directing attention, setting goals and making plans, skills that may be important for the early stages of learning but not for later stages, Bassett and colleagues suspect. Compared with slow learners, fast learners were more likely to have shed these connections, a process that may have made their brains more efficient.
Flexibility seems to be important for other kinds of learning too. Reinforcement learning, in which right answers get a thumbs up and wrong answers are called out, also taps into brain flexibility, Gerraty, Bassett and others reported online May 30 at bioRxiv.org. This network comprises many points on the cortex, the brain’s outer layer, and a deeper structure known as the striatum. Other work on language comprehension, published by Bassett and colleagues last year in Cerebral Cortex, found some brain regions that were able to quickly form and break connections.
These studies captured brains in the process of learning, revealing “a much more interesting network structure than what we previously thought when we were only looking at static snapshots,” Gerraty says. The learning brain is incredibly dynamic, he says, with modules breaking off from partners and finding new ones.
While the details of those dynamics differ from study to study, there is an underlying commonality: “It seems that part of learning about the world is having parts of your brain become more flexible, and more able to communicate with different areas,” Gerraty says. In other words, the act of learning takes flexibility.
But too much of a good thing may be bad. While performing a recall task in a scanner, people with schizophrenia had higher flexibility among neural networks across the brain than did healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. “That suggests to me that while flexibility is good for healthy people, there is perhaps such a thing as too much flexibility,” Bassett says.

Just how this flexibility arises, and what controls it, is unknown. Andrea Stocco, a cognitive neuroscientist at the University of Washington in Seattle, suspects that a group of brain structures called the basal ganglia, deep within the brain, has an important role in controlling flexibility. He compares this region, which includes the striatum, to an air traffic controller who shunts information to where it’s most needed. One of the basal ganglia’s jobs seems to be shutting things down. “Most of the time, the basal ganglia is blocking something,” he says. Other researchers have found evidence that crucial “hubs” in the cortex help control flexibility.
Push for more
Researchers don’t yet know how measures of flexibility in brain regions relate to the microscopic changes that accompany learning. For now, the macro and the micro views of learning are separate worlds. Despite that missing middle ground, researchers are charging ahead, looking for signs that neural flexibility might offer a way to boost learning aptitude.
It’s possible that external brain stimulation may enhance flexibility. After receiving brain stimulation carefully aimed at a known memory circuit, people were better able to recall lists of words, scientists reported May 8 in Current Biology. If stimulation can boost memory, some argue, the technique could enhance flexibility and perhaps learning too. Certain drugs show promise. DXM, found in some cough medicines, blocks proteins that help regulate nerve cell chatter. Compared with a placebo, the compound made some brain regions more flexible and able to rapidly switch partners in healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. She is also studying whether neurofeedback — a process in which people try to change their brain patterns to become more flexible with real-time monitoring — can help.
Something even simpler might work for boosting flexibility. On March 31 in Scientific Reports, Bassett and colleagues described their network analyses of an unusual subject. For a project called MyConnectome, neuroscientist Russ Poldrack, then at the University of Texas at Austin, had three brain scans a week for a year while assiduously tracking measures that included mood. Bassett and her team applied their mathematical tools to Poldrack’s data to get measurements of his neural flexibility on any given scan day. The team then looked for associations with mood. The standout result: When Poldrack was happiest, his brain was most flexible, for reasons that aren’t yet clear. (Flexibility was lowest when he was surprised.)
Those results are from a single person, so it’s unknown how well they would generalize to others. What’s more, the study identifies only a link, not that happiness causes more flexibility or vice versa. But the idea is intriguing, if not obvious, Bassett says. “Of course, no teacher is really going to say we’re doing rocket science if we tell them we should make the kids happier and then they’ll learn better.” But finding out exactly how happiness relates to learning is important, she says.
The research is just getting started. But already, insights on learning are coming quickly from the small group of researchers viewing the brain as a matrix of nodes and links that deftly shift, swap and rearrange themselves. Zoomed out, network science brings to the brain “a whole new set of hypotheses and new ways of testing them,” Bassett says.
What do you get when you flip a fossilized “jellyfish” upside down? The answer, it turns out, might be an anemone.
Fossil blobs once thought to be ancient jellyfish were actually a type of burrowing sea anemone, scientists propose March 8 in Papers in Palaeontology.
From a certain angle, the fossils’ features include what appears to be a smooth bell shape, perhaps with tentacles hanging beneath — like a jellyfish. And for more than 50 years, that’s what many scientists thought the animals were. But for paleontologist Roy Plotnick, something about the fossils’ supposed identity seemed fishy. “It’s always kind of bothered me,” says Plotnick, of the University of Illinois Chicago. Previous scientists had interpreted one fossil feature as a curtain that hung around the jellies’ tentacles. But that didn’t make much sense, Plotnick says. “No jellyfish has that,” he says. “How would it swim?”
One day, as Plotnick looked over specimens at the Field Museum in Chicago, something clicked. What if the bell belonged on the bottom, not the top? He turned to a colleague and said, “I think this is an anemone.”
Rotated 180 degrees, Plotnick realized, the fossils’ shape — which looks kind of like an elongated pineapple with a stumpy crown — resembles some modern anemones. “It was one of those aha moments,” he says. The “jellyfish” bell might be the anemone’s lower body. And the purported tentacles? Perhaps the anemone’s upper section, a tough, textured barrel protruding from the seafloor.
Plotnick and his colleagues examined thousands of the fossilized animals, dubbed Essexella asherae, unearthing more clues. Bands running through the fossils match the shape of some modern anemones’ musculature. And some specimens’ pointy protrusions resemble an anemone’s contracted tentacles. “It’s totally possible that these are anemones,” says Estefanía Rodríguez, an anemone expert at the American Museum of Natural History in New York City who was not involved with the work. The shape of the fossils, the comparison with modern-day anemones — it all lines up, she says, though it’s not easy to know for sure.
Paleontologist Thomas Clements agrees. Specimens like Essexella “are some of the most notoriously difficult fossils to identify,” he says. “Jellyfish and anemones are like bags of water. There’s hardly any tissue to them,” meaning there’s little left to fossilize. Still, it’s plausible that the blobs are indeed fossilized anemones, says Clements, of Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany. He was not part of the new study but has spent several field seasons at Mazon Creek, the Illinois site where Essexella lived some 310 million years ago. Back then, the area was near the shoreline, Clements says, with nearby rivers dumping sediment into the environment — just the kind of place ancient burrowing anemones may have once called home.
Weird materials called Weyl metals might reveal the secrets of how Earth gets its magnetic field.
The substances could generate a dynamo effect, the process by which a swirling, electrically conductive material creates a magnetic field, a team of scientists reports in the Oct. 26 Physical Review Letters.
Dynamos are common in the universe, producing the magnetic fields of the Earth, the sun and other stars and galaxies. But scientists still don’t fully understand the details of how dynamos create magnetic fields. And, unfortunately, making a dynamo in the lab is no easy task, requiring researchers to rapidly spin giant tanks of a liquefied metal, such as sodium (SN: 5/18/13, p. 26).

First discovered in 2015, Weyl metals are topological materials, meaning that their behavior is governed by a branch of mathematics called topology, the study of shapes like doughnuts and knots (SN: 8/22/15, p. 11). Electrons in Weyl metals move around in bizarre ways, behaving as if they are massless.
Within these materials, the researchers discovered, electrons are subject to the same set of equations that describes the behavior of liquids known to form dynamos, such as molten iron in the Earth’s outer core. The team’s calculations suggest that, under the right conditions, it should be possible to make a dynamo from solid Weyl metals.
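In standard magnetohydrodynamics, those shared dynamo equations are built around the induction equation; a sketch of its textbook form (not quoted from the paper itself):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right)
  + \eta \nabla^{2} \mathbf{B}
```

Here B is the magnetic field, v the velocity of the conducting fluid (or, in a Weyl metal, its electron fluid) and η the magnetic diffusivity. A dynamo arises when the stretching-and-twisting first term amplifies the field faster than the diffusive second term smooths it out.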
It might be easier to create such dynamos in the lab, as they don’t require large quantities of swirling liquid metals. Instead, the electrons in a small chunk of Weyl metal could flow like a fluid, taking the place of the liquid metal. The result is still theoretical. But if the idea works, scientists may be able to use Weyl metals to reproduce the conditions that exist within the Earth, and better understand how its magnetic field forms.
Oceans may be shrinking — Science News, March 10, 1973
The oceans of the world may be gradually shrinking, leaking slowly away into the Earth’s mantle…. Although the oceans are constantly being slowly augmented by water carried up from Earth’s interior by volcanic activity … some process such as sea-floor spreading seems to be letting the water seep away more rapidly than it is replaced.
Update
Scientists traced the ocean’s leak to subduction zones, areas where tectonic plates collide and the heavier of the two sinks into the mantle. It’s still unclear how much water has cycled between the deep ocean and mantle through the ages. A 2019 analysis suggests that sea levels have dropped by an average of up to 130 meters over the last 230 million years, in part due to Pangea’s breakup creating new subduction zones. Meanwhile, molten rock that bubbles up from the mantle as continents drift apart may “rain” water back into the ocean, scientists reported in 2022. But since Earth’s mantle can hold more water as it cools (SN: 6/13/14), the oceans’ mass might shrink by 20 percent every billion years.
A new species of hulking ancient herbivore would have overshadowed its relatives.
Fossils found in Poland belong to a new species that roamed during the Late Triassic, a period some 237 million to 201 million years ago, researchers report November 22 in Science. But unlike most of the enormous animals that lived during that time period, this new creature isn’t a dinosaur — it’s a dicynodont.
Dicynodonts are a group of ancient four-legged animals that are related to mammals’ ancestors. They’re a diverse group, but the new species is far larger than any other dicynodont found to date. The elephant-sized creature was more than 4.5 meters long and probably weighed about nine tons, the researchers estimate. Related animals didn’t become that big again until the Eocene, 150 million years later. “We think it’s one of the most unexpected fossil discoveries from the Triassic of Europe,” says study coauthor Grzegorz Niedzwiedzki, a paleontologist at Uppsala University in Sweden. “Who would have ever thought that there is a fossil record of such a giant, elephant-sized mammal cousin in this part of the world?” He and his team first described some of the bones in 2008; now they’ve made the new species — Lisowicia bojani — official.
The creature had upright forelimbs like today’s rhinoceroses and hippos, instead of the splayed front limbs seen on other Triassic dicynodonts, which were similar to the forelimbs of present-day lizards. That posture would have helped it support its massive body weight.
WASHINGTON — After a stunningly explosive summer, Kilauea, the world’s longest continuously erupting volcano, finally seems to have taken a break. But the scientists studying it haven’t. Reams of new data collected during an unprecedented opportunity to monitor an ongoing, accessible eruption are changing what’s known about how some volcanoes behave.
“It was hugely significant,” says Jessica Larsen, a petrologist at the University of Alaska Fairbanks, and “a departure from what Kilauea had been doing for more than 35 years.” The latest eruption started in May. By the time it had ended three months later, over 825 million cubic meters of earth had collapsed at the summit. That’s the equivalent of 300,000 Olympic-sized swimming pools, Kyle Anderson, a geophysicist with the U.S. Geological Survey in Menlo Park, Calif., said December 11 in a news conference at the annual meeting of the American Geophysical Union.
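The swimming-pool comparison is easy to verify, assuming the nominal Olympic pool dimensions of 50 by 25 by 2 meters (the exact pool volume Anderson used isn't stated):

```python
# Convert the collapsed summit volume into Olympic-sized swimming pools.
collapsed_m3 = 825e6          # cubic meters of earth lost at Kilauea's summit
pool_m3 = 50 * 25 * 2         # nominal Olympic pool: 2,500 cubic meters
pools = collapsed_m3 / pool_m3
print(f"{pools:,.0f} pools")  # prints "330,000 pools" -- consistent with the roughly 300,000 cited
```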
As the summit crater deflated, magma gushed through underground tunnels, draining out through fissures along an area called the lower eastern rift zone at a rate of roughly 50 meters per day. That lava eventually covered 35.5 square kilometers of land, Anderson and his colleagues reported in a study published December 11 in Science.
The volcano also taught scientists a thing or two. Researchers previously believed that groundwater plays a big role in how a caldera collapses. When craters were drained of their magma, “cooling and cracking depressurized the caldera, allowing groundwater to seep in and create a series of explosive eruptions,” Anderson said. “But groundwater did not play a significant role in driving the explosions this summer.”
Instead, the destruction of Kilauea’s crater is what’s called a piston-style caldera collapse, he said. Sixty-two small collapse events rattled the volcano from mid-May to late August, with each collapse causing the crater to sink and pushing the surrounding land out and up. By the end, the center of the volcano sank by as much as 500 meters — more than the height of the Empire State Building.
That activity didn’t just destroy the crater. “We could see surges in the eruption rate 40 kilometers away whenever there was a collapse,” Anderson said.

Life finds a way
Under the sea, life moved in around the brand-new land surprisingly quickly. Using a remotely operated vehicle to explore the seafloor, researchers in September found evidence of hydrothermal activity along newly deposited lava flows about 650 meters deep. More surprising, bright yellow, potentially iron-oxidizing microbes had already moved in.
“There’s no reason why we should have expected there would be hydrothermal activity that would be alive within the first 100 days,” Chris German, a geologist at Woods Hole Oceanographic Institution in Falmouth, Mass., said at the news conference. “This is actually life here!”
The discovery suggests “how volcanism can give rise to the chemical energy that can drive primitive microbial organisms and flower a whole ecosystem,” he said.
Studying these ecosystems can provide insight into how life may form in places like Enceladus, an icy moon of Saturn. Hydrothermal activity is common where Earth’s tectonic plates meet. But alien worlds don’t show evidence of plate tectonics, though they can be volcanically active, German says. Studying how hydrothermal life forms near volcanoes that aren’t along tectonic boundaries on Earth could reveal a lot about other celestial bodies.
“This is a better analog of what we expect them to be like,” says German, but “it is what’s least studied.”
What comes next
As of December 5, Kilauea had not erupted for three months, suggesting it’s in what’s called a pause — still active but not spewing lava. Observations from previous eruptions suggest that the next phase of Kilauea’s volcanic cycle may be a quieter one. But the volcano likely won’t stay quiet forever, says Christina Neal, the head scientist at the USGS Hawaiian Volcano Observatory and a coauthor of the Science paper. “We’re in this lull and we just don’t know what is going to happen next,” she says.
Scientists are tracking ground swelling near the Puu Oo vent, where much of Kilauea’s lava has flowed during the volcano’s 35-year eruption history. That inflation is an indication that magma may still be on the move deep below.
The terrain surrounding this remote region is dense with vegetation, making it a difficult area to study. But new methods tested during the 2018 eruption, such as the use of uncrewed aerial vehicles, could aid in tracking the recent deformation.
Scientists are also watching the volcano next door: Mauna Loa. History has shown that Mauna Loa can act up during periods when Kilauea sleeps. For the past several years, volcanologists have kept an eye on Kilauea’s larger sister volcano, which went silent last fall, after a period with few earthquakes and intermittent deformation. “We’re seeing a little bit of inflation at Mauna Loa and some earthquake swarms where it had been active,” Neal says. “So that’s another issue of concern for us going into the future.”
When an outbreak of a viral hemorrhagic fever hit Nigeria in 2018, scientists were ready: They were already in the country testing new disease-tracking technology, and within weeks managed to steer health workers toward the most appropriate response.
Lassa fever, which is transmitted from rodents to humans, pops up every year in West Africa. But 2018 was the worst season on record for Nigeria. By mid-March, there were 376 confirmed cases — more than three times as many as by that point in 2017 — and another 1,495 suspected. Health officials weren’t sure if the bad year was being caused by the strains that usually circulate, or by a new strain that might be more transmissible between humans and warrant a stronger response. New technology for analyzing DNA in the field helped answer that question mid-outbreak, confirming the outbreak was being caused by pretty much the same strains transmitted from rodents to humans in past years. That rapid finding helped Nigeria shape its response, allowing health officials to focus efforts on rodent control and safe food storage, rather than sinking time and money into measures aimed at stopping unlikely human-to-human transmission, researchers report in the Jan. 4 Science.
While the scientists were reporting their results to the Nigeria Centre for Disease Control, they were also discussing the data with other virologists and epidemiologists in online forums. This kind of real-time collaboration can help scientists and public health workers “see the bigger picture about pathogen spread,” says Nicholas Loman, a microbial genomicist at the University of Birmingham in England who was not involved in the research.
Portable DNA sequencers, some as small as a cell phone, have allowed scientists to read the genetic information of viruses emerging in places without extensive lab infrastructure. Looking for genetic differences between patient samples can give clues to how a virus is being transmitted and how quickly it’s changing over time — key information for getting outbreaks under control. If viral DNA from several patients is very similar, that suggests the virus may be transmitted between people; if the DNA is more distinct, people might be picking up the virus independently from other animals.
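The decision rule described above can be sketched with toy data. The sequences, the pairwise-difference measure and the cutoff below are all invented for illustration; real analyses use far longer genomes and phylogenetic methods, not a simple threshold.

```python
# Toy sketch: compare aligned viral sequences from several patients.
# Very similar sequences hint at human-to-human spread; divergent
# sequences suggest each patient picked up the virus independently
# from an animal reservoir. All values here are illustrative.

def fraction_differing(seq_a, seq_b):
    """Fraction of positions that differ between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

def likely_transmission(patient_seqs, threshold=0.15):
    """Crude call: 'human-to-human' only if every pair is nearly identical."""
    pairs = [(i, j) for i in range(len(patient_seqs))
                    for j in range(i + 1, len(patient_seqs))]
    max_diff = max(fraction_differing(patient_seqs[i], patient_seqs[j])
                   for i, j in pairs)
    return "human-to-human" if max_diff < threshold else "independent spillover"

similar   = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]  # nearly identical
divergent = ["ACGTACGTAC", "TCGAACGTGG", "ACTTTCGTCC"]  # very different
print(likely_transmission(similar))    # human-to-human
print(likely_transmission(divergent))  # independent spillover
```

In the Nigerian outbreak, the data looked like the second case: the patient samples were about as diverse as the known rodent-borne strains, pointing away from human-to-human spread.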
The technology has also been used amid recent Ebola and Zika outbreaks. But the Lassa virus presents a unique challenge, says study coauthor Stephan Günther, a virologist at the Bernhard-Nocht-Institute for Tropical Medicine in Hamburg, Germany. Unlike Ebola or Zika, Lassa has a lot of genetic variation between strains. So while the same small regions of DNA from various strains of Ebola or Zika can be identified for analysis, it’s hard to accurately target similar regions for comparison among Lassa strains. Instead, Günther and his team used a tactic called metagenomics: They collected breast milk, plasma and cerebrospinal fluid from patients and sequenced all the DNA within — human, viral and anything else lurking. Then, the team picked out the Lassa virus DNA from that dataset.
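The "pick out the Lassa DNA" step can be sketched as a simple filter. Everything below (the reads, the reference fragment, exact substring matching) is a simplified stand-in; real metagenomic pipelines align reads against full reference genomes with tools that tolerate mismatches.

```python
# Toy metagenomic filter: from a mixed pool of sequencing reads
# (host DNA, viral DNA, anything else in the sample), keep only
# the reads that match a known Lassa virus reference fragment.

LASSA_REFERENCE = "GGATCCTAGGCATT"  # invented stand-in for a reference genome

def is_lassa_read(read, reference=LASSA_REFERENCE, k=8):
    """True if any length-k chunk of the read appears in the reference."""
    return any(read[i:i + k] in reference
               for i in range(len(read) - k + 1))

reads = [
    "TTTTCCCCAAAAGGGG",   # background (e.g. host) DNA
    "ATCCTAGGCATTAGCA",   # overlaps the Lassa reference fragment
    "CGCGCGCGATATATAT",   # background DNA
]
lassa_reads = [r for r in reads if is_lassa_read(r)]
print(lassa_reads)  # ['ATCCTAGGCATTAGCA']
```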
All told, the scientists analyzed Lassa virus DNA from 120 patients, far more than initially intended. “We went to the field to do a pilot study,” Günther says. “Then the outbreak came. And we quickly scaled up.” Preexisting relationships in Nigeria helped make that happen: The team had been collaborating for a decade with researchers at the Irrua Specialist Teaching Hospital and working alongside the World Health Organization and the Nigeria Centre for Disease Control.
Analyzing and interpreting the massive amounts of data generated by the metagenomics approach was a challenge, especially with limited internet connection, Günther says. Researchers analyzed 36 samples during the outbreak — less than a third of their total dataset, but still enough to guide the response. The full analysis, completed after the outbreak, confirmed the initial findings.
A metagenomics approach could be useful in disease surveillance more broadly. Currently, “we look for things that we know about and expect to find. Yet evidence from Ebola in West Africa and Zika in the Americas is that emerging pathogens can pop up in unexpected places, and take too long to be recognized,” Loman says. Sequencing all DNA in a sample, he says, could allow scientists to detect problem pathogens before they cause outbreaks.
Approval of the first and only treatment in the United States specifically targeting postpartum depression offers hope for millions of women each year who suffer from the debilitating mental health disorder after giving birth.
The new drug brexanolone — marketed under the name Zulresso and approved on March 19 by the U.S. Food and Drug Administration — is expected to become available to the public in late June. Developed by Sage Therapeutics, based in Cambridge, Mass., the drug is costly and treatment is intensive: It’s administered in the hospital as a 60-hour intravenous drip, and a treatment runs between $20,000 and $35,000. But researchers say that it could help many of the estimated 11.5 percent of U.S. new moms each year who experience postpartum depression, which can interfere with normal bonding between mothers and infants and lead to feeling hopeless, sad or overly anxious. Here’s a closer look at the drug, its benefits and some potential drawbacks.
How does the new drug work?
How exactly brexanolone works is not known. But because the drug’s chemical structure is nearly identical to the natural hormone allopregnanolone, it’s thought that brexanolone operates in a similar way.
Allopregnanolone enhances the effects of a neurochemical called gamma-aminobutyric acid, or GABA, which stops nerve cells in the brain from firing. Ultimately this action helps quell a person’s anxiety or stress. During pregnancy, the concentration of allopregnanolone in a woman’s brain spikes. This leads some neurons to compensate by temporarily tuning out GABA so that the nerve cells don’t become too inactive. Levels of the steroid typically return to normal quickly after a baby is born, and the neurons once again respond to GABA shortly thereafter. But for some women, this process can take longer, possibly resulting in postpartum depression.
Brexanolone temporarily elevates the brain’s allopregnanolone levels again, which results in a patient’s mood improving. But it’s still not clear exactly why the drug has this effect, says Samantha Meltzer-Brody, a reproductive psychiatrist at the University of North Carolina School of Medicine in Chapel Hill and the lead scientist of the drug’s clinical trials. Nor is it clear whether allopregnanolone’s, and thus possibly brexanolone’s, influence on GABA affects only postpartum depression. But the drug clearly “has this incredibly robust response,” she says, “unlike anything currently available.”
How effective was the drug in clinical trials?
Brexanolone went through three separate clinical trials in which patients were randomly given either the drug or a placebo: one Phase II trial, which tested the drug’s effectiveness and proper dosage, and two Phase III trials, which tested the drug’s effects on moderate or severe postpartum depression and were necessary to gain approval for the drug’s commercial use in people.
Of 234 people who completed the trials, 94 received the suggested dosage of brexanolone. About 70 of those patients, or 75 percent, had what Meltzer-Brody described as a “robust response” to just one course of treatment. And of those patients with positive responses, 94 percent continued to feel well 30 days after the treatment. The results suggest that the drug may be most effective for those with severe postpartum depression; among those with moderate symptoms, the drug and the placebo had a fairly similar impact.
Can people take the drug again?
“There’s nothing prohibiting” a second course of brexanolone, but the effects of a repeat course have not been studied, Meltzer-Brody says. The drug was designed to be taken in tandem with the start of antidepressants, which take effect after about two to four weeks. So by the time the brexanolone wears off, the antidepressants would have kicked in.
It’s not clear yet if some patients could need a second dose. The clinical trials compared a group of women taking both antidepressants and brexanolone with another group taking only brexanolone and found no difference in the two groups’ responses 30 days after tests ended, Meltzer-Brody says. Because the study ended at 30 days, it’s unclear if the effects of brexanolone on its own last longer.
Can women breastfeed while taking brexanolone?
As a precaution, treated women did not breastfeed until six days after taking the drug. But in tests of breastmilk from 12 treated, lactating women, concentrations of brexanolone were negligible — less than 10 nanograms per milliliter — in most of the women 36 hours after they received the infusion, according to Sage’s briefing document for the FDA. The FDA has yet to issue guidance on breastfeeding.
Are there side effects?
About a third of the trial patients experienced sleepiness, sedation or headaches. The possibility of drowsiness led to the FDA’s requirement that the drug be administered by IV drip in a supervised setting. “If someone isn’t supervised, then there would be the risk that someone could get sleepy and pass out,” Meltzer-Brody says.
Are there plans for different versions of the drug?
Sage Therapeutics is developing a pill version of a drug called SAGE-217. It’s chemically similar to brexanolone and has similar antidepressant effects. Early results from a Phase III trial reported by the company in January show that, of 78 women treated with the pill, 72 percent responded favorably within two weeks, and 53 percent had not experienced a recurrence of symptoms four weeks later.
Is it worth the price and time?
Setting aside 60 hours to be hospitalized for an expensive drug could be discouraging for some. “It’s going to be very important for insurance to cover it in order for it to be accessible,” Meltzer-Brody says. “I’m hoping that will be the case.” But based on the reaction of women with severe postpartum depression who participated in the trials, “two-and-a-half days seems like nothing if your debilitating, depressive symptoms will be gone.”