Gene editing of human embryos yields early results

Scientists have long sought a strategy for curing genetic diseases, but — with just a few notable exceptions — have succeeded only in their dreams. Now, though, researchers in China and Texas have taken a step toward making the fantasies a reality for all inherited diseases.

Using the gene-editing tool known as CRISPR/Cas9, the researchers have successfully edited disease-causing mutations out of viable human embryos. Other Chinese groups had previously reported editing human embryos that could not develop into a baby because they carried extra chromosomes, but this is the first report involving viable embryos (SN Online: 4/8/16; SN Online: 4/23/15).
In the new work, reported March 1 in Molecular Genetics and Genomics, Jianqiao Liu of Guangzhou Medical University in China and colleagues used embryos with a normal number of chromosomes. The embryos were created using eggs and sperm left over from in vitro fertilization treatments. In theory, the embryos could develop into a baby if implanted into a woman’s uterus.

Researchers in Sweden and England are also conducting gene-editing experiments on viable human embryos (SN: 10/29/16, p. 15), but those groups have not yet reported results.

Human germline editing wasn’t realistic until CRISPR/Cas9 and other new gene editors came along, says R. Alta Charo, a bioethicist at the University of Wisconsin Law School in Madison. “We’ve now gotten to the point where it’s possible to imagine a day when it would be safe enough” to be feasible. Charo was among the experts on a National Academies of Sciences, Engineering, and Medicine panel that in February issued an assessment of human gene editing. Altering human embryos, eggs, sperm or the cells that produce eggs and sperm would be permissible, provided there were no other alternatives and the experiments met other strict criteria, the panel concluded (SN: 3/18/17, p. 7).
Still, technical hurdles remain before CRISPR/Cas9 can cross into widespread use in treating patients.

CRISPR/Cas9 comes in two parts: a DNA-cutting enzyme called Cas9, and a “guide RNA” that directs Cas9 to cut at a specified location in DNA. Guide RNAs work a little like a GPS system, says David Edgell, a molecular biologist at Western University in London, Ontario. Given precise coordinates or a truly unique address, a good GPS should take you to the right place every time.

Scientists design guide RNAs so that they will carry Cas9 to only one stretch of about 20 bases (the information-carrying subunits of DNA) out of the entire 6 billion base pairs that make up the human genetic instruction book, or genome. But most 20-base locations in the human genome aren’t particularly distinctive. They are like Starbucks coffee shops: There are a lot of them and they are often similar enough that a GPS might get confused about which one you want to go to, says Edgell. Similarly, guide RNAs sometimes direct Cas9 to cut alternative, or “off-target,” sites that are a base or two different from the intended destination. Off-target cutting is a problem because such edits might damage or change genes in unexpected ways.
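The matching problem can be made concrete with a minimal sketch, in Python, of how a 20-base guide sequence can hit both its intended site and a near-duplicate site a base or two off. The guide and genome fragment below are invented for illustration; real off-target scanners also account for the PAM sequence, bulges and other factors.

```python
# Hypothetical sketch of off-target matching: slide a 20-base guide
# along a genome string and flag every window within two mismatches.
# The sequences are made up, not real genomic data.

def off_target_sites(genome, guide, max_mismatches=2):
    """Return (position, mismatch_count) for every near-match."""
    k = len(guide)
    hits = []
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        mismatches = sum(a != b for a, b in zip(window, guide))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

guide = "GACGTTAACCGGTTAACGTA"  # invented 20-base guide sequence
# A genome fragment with one exact site and one single-mismatch site:
genome = "TT" + guide + "AAAA" + guide[:10] + "A" + guide[11:]
print(off_target_sites(genome, guide))
```

In a real genome, with billions of bases, the list of one- and two-mismatch "Starbucks" sites can be long, which is why off-target cutting is hard to rule out.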

“It’s a major issue for sure,” says Bruce Korf, a geneticist at the University of Alabama at Birmingham and president of the American College of Medical Genetics and Genomics Foundation. Doctors trying to correct one genetic defect in a patient want to be sure they aren’t accidentally introducing another.

But CRISPR/Cas9’s propensity to cut undesired sites may be exaggerated, says Alasdair MacKenzie, a molecular biologist at the University of Aberdeen in Scotland. In experiments with mice, MacKenzie and colleagues limited how much Cas9 was produced in cells and made sure the enzyme didn’t stick around after it made an edit. No off-target cuts were detected in any of the mice resulting from successfully edited embryos, MacKenzie and colleagues reported in November in Neuropeptides.

Other researchers have experimented with assembling the Cas9 and guide RNAs outside of the cell and then putting the preassembled protein-RNA complex into cells. That’s the strategy the Chinese researchers took in the new human embryo–editing study. No off-target cuts were detected in that study either, although only one edited embryo was closely examined.

Other researchers have been tinkering with the genetic scissors to produce high-fidelity versions of Cas9 that are far less likely to cut at off-target sites in the first place.

When a guide RNA leads Cas9 to a site that isn’t a perfect match, the enzyme can latch onto DNA’s phosphate backbone and stabilize itself enough to make a cut, says Benjamin Kleinstiver, a biochemist in J. Keith Joung’s lab at Harvard Medical School. By tweaking Cas9, Kleinstiver and colleagues essentially eliminated the enzyme’s ability to hold on at off-target sites, without greatly harming its on-target cutting ability.

Regular versions of Cas9 cut between two and 25 off-target sites for seven guide RNAs the researchers tested. But the high-fidelity Cas9 worked nearly flawlessly for those guides. For instance, high-fidelity Cas9 reduced off-target cutting from 25 sites to just one for one of the guide RNAs, the researchers reported in January 2016 in Nature. That single stray snip, however, could be a problem if the technology were to be used in patients.
A group led by CRISPR/Cas9 pioneer Feng Zhang of the Broad Institute of MIT and Harvard tinkered with different parts of the Cas9 enzyme. That team, too, produced a cutter that rarely cleaved DNA at off-target sites, the researchers reported last year in Science.

Another problem for gene editing has been that it is good at disabling, or “knocking out,” genes that are causing a problem but not at replacing genes that have gone bad. Knocking out a gene is easy because all Cas9 has to do is cut the DNA. Cells generally respond by gluing the cut ends back together. But, like pieces of a broken vase, they rarely fit perfectly again. Small flaws introduced in the regluing can cause the problem gene to produce nonfunctional proteins. Knocking out genes may help fight Huntington’s disease and other genetic disorders caused by single, rogue versions of genes.

Many genetic diseases, such as cystic fibrosis or Tay-Sachs, are caused when people inherit two mutated, nonfunctional copies of the same gene. Knocking those genes out won’t help. Instead, researchers need to insert undamaged versions of the genes to restore health. Inserting a gene starts with cutting the DNA, but instead of gluing the cut ends together, cells use a matching piece of DNA as a template to repair the damage.

In the new human embryo work, Liu and colleagues, including Wei-Hua Wang of the Houston Fertility Institute in Texas, first tested this type of repair on embryos with an extra set of chromosomes. Efficiency was low; about 10 to 20 percent of embryos contained the desired edits. Researchers had previously argued that extra chromosomes could interfere with the editing process, so Liu’s group also made embryos with the normal two copies of each chromosome (one from the father and one from the mother). Sperm from men who have genetic diseases common in China was used to fertilize eggs. In one experiment, Liu’s group made 10 embryos, two of which carried a mutation in the G6PD gene. Mutations in that gene can lead to a type of anemia.

Then the team injected Cas9 protein already leashed to its guide RNA, along with a separate piece of DNA that embryos could use as a template for repairing the mutant gene. G6PD mutations were repaired in both embryos. Since both embryos carried the repair, the researchers say they achieved 100 percent efficiency. But one embryo was a mosaic: It carried the fix in some but not all of its cells. Another experiment to repair mutations in the HBB gene, linked to blood disorders, worked with 50 percent efficiency, but with some other technical glitches.

Scientists don’t know whether editing just some cells in an embryo will be enough to cure genetic diseases. For that reason, some researchers think it may be necessary to step back from embryos to edit the precursor cells that produce eggs and sperm, says Harvard University geneticist George Church. Precursor cells can produce many copies of themselves, so some could be tested to ensure that proper edits have been made with no off-target mutations. Properly edited cells would then be coaxed into forming sperm or eggs in lab dishes. Researchers have already succeeded in making viable sperm and eggs from reprogrammed mouse stem cells (SN: 11/12/16, p. 6). Precursors of human sperm and eggs have also been grown in lab dishes (SN Online: 12/24/14), but researchers have yet to report making viable human embryos from such cells.

The technology to reliably and safely edit human germline cells will probably require several more years of development, researchers say.

Germline editing — as altering embryos, eggs and sperm or their precursors is known — probably won’t be the first way CRISPR/Cas9 is used to tackle genetic diseases. Doctors are already planning experiments to edit genes in body cells of patients. Those experiments come with fewer ethical questions but have their own hurdles, researchers say.

“We still have a few years to go,” says MacKenzie, “but I’ve never been so hopeful as I am now of the capacity of this technology to change people’s lives.”

Drowned wildebeests can feed a river ecosystem for years

More than a million wildebeests migrate each year from Tanzania to Kenya and back again, following the rains and abundant grass that springs up afterward. Their path takes them across the Mara River, and some of the crossings are so dangerous that hundreds or thousands of wildebeests drown as they try to traverse the waterway.

Those animals provide a brief, free buffet for crocodiles and vultures. And, a new study finds, they’re feeding an aquatic ecosystem for years.

Ecologist Amanda Subalusky of the Cary Institute of Ecosystem Studies in Millbrook, N.Y., had been studying water quality in the Mara River when she and her colleagues noticed something odd. Commonly used indicators of water quality, such as dissolved oxygen and turbidity, were sometimes poorest where the river flowed through a protected area. They quickly realized that it was because of the animals that flourished there. Hippos, which eat grass at night and defecate in the water during the day, were one contributor. And dead wildebeests were another.

“Wildebeest are especially good at following the rains, and they’re willing to cross barriers to follow it,” says Subalusky. The animals tend to cross at the same spots year after year, and some are more dangerous than others. “Once they’ve started using a site, they continue, even if it’s bad,” she notes. And on average, more than 6,000 wildebeests drown each year. (That may sound like a lot, but it’s only about 0.5 percent of the herd.) Their carcasses add the equivalent of the mass of 10 blue whales into the river annually.
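The figures above can be sanity-checked with a little arithmetic. In the sketch below, the per-animal and blue-whale masses are round-number assumptions chosen for illustration, not values taken from the study.

```python
# Back-of-envelope check of the drowning figures. The masses are
# assumed round numbers, not data from Subalusky's study.
drowned_per_year = 6_000
fraction_of_herd = 0.005                  # about 0.5 percent
herd_size = drowned_per_year / fraction_of_herd
print(f"implied herd size: {herd_size:,.0f} animals")

wildebeest_mass_kg = 180                  # assumed average adult mass
blue_whale_mass_kg = 110_000              # assumed average blue whale mass
total_mass_kg = drowned_per_year * wildebeest_mass_kg
print(f"carcass mass: about {total_mass_kg / blue_whale_mass_kg:.0f} blue whales")
```

With those assumptions, 6,000 drownings imply a herd of roughly 1.2 million and a carcass input on the order of ten blue whales' worth of mass, consistent with the comparisons in the text.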

Subalusky and her colleagues set out to see how all that meat and bone affected the river ecosystem. When they heard about drownings, they would go to the river to count carcasses. They retrieved dead wildebeests from the water to test what happened to the various parts over time. And they measured nutrients up and downstream from river crossings to see what the wildebeest carcasses added to the water.

“There are some interesting challenges working in this system,” Subalusky says. For instance, in one experiment, she and her colleagues put pieces of wildebeest carcass into mesh bags that went into the river. The plan was that they would retrieve the bags over time and see how quickly or slowly the pieces decomposed. “We spent a couple of days putting the whole thing together and we came back the next day to collect our first set of samples,” she recalls. “At least half the bags with wildebeest meat were just gone. Crocodiles and Nile monitors had plucked them off the chain.”
The researchers determined that the wildebeests’ soft tissue decomposes in about two to 10 weeks. This provides a pulse of nutrients — carbon, nitrogen and phosphorus — to the aquatic food web as well as the nearby terrestrial system. Subalusky and her colleagues are still working out the succession of scavengers that feast on the wildebeests, but vultures, marabou storks, egg-laying bugs and things that eat bugs are all on the list.
Once the soft tissue is gone, the bones remain, sometimes piling up in bends in the river or other spots downstream. “They take years to decompose,” Subalusky says, slowly leaching out most of the phosphorus that had been in the animal. The bones can also become covered in a biofilm of algae, fungi and bacteria that provides food for fish.

What initially looks like a short-lived event actually provides resources for seven years or more, Subalusky and her colleagues report June 19 in the Proceedings of the National Academy of Sciences.

The wildebeest migration is the largest terrestrial migration on the planet, and others of its kind have largely disappeared as humans have killed off animals or cut off their migration routes.

Only a few hundred years ago, for instance, millions of bison roamed the western United States. There are accounts of thousands of bison drowning in rivers, much as happens with wildebeests today. Those rivers may have fundamentally changed after bison were nearly wiped out, Subalusky and her colleagues contend.

We’ll never know if that was the case, but there are still some places where scientists may be able to study the effects of mass drownings on rivers. A large herd of caribou reportedly drowned in Canada in the 1980s, and there are still some huge migrations of animals, such as reindeer. Like the wildebeests, these animals might be feeding an underwater food web that no one has ever noticed.

How earthquake scientists eavesdrop on North Korea’s nuclear blasts

On September 9 of last year, in the middle of the morning, seismometers began lighting up around East Asia. From South Korea to Russia to Japan, geophysical instruments recorded squiggles as seismic waves passed through and shook the ground. It looked as if an earthquake with a magnitude of 5.2 had just happened. But the ground shaking had originated at North Korea’s nuclear weapons test site.

It was the fifth confirmed nuclear test in North Korea, and it opened the latest chapter in a long-running geologic detective story. Like a police examiner scrutinizing skid marks to figure out who was at fault in a car crash, researchers analyze seismic waves to determine if they come from a natural earthquake or an artificial explosion. If the latter, then scientists can also tease out details such as whether the blast was nuclear and how big it was. Test after test, seismologists are improving their understanding of North Korea’s nuclear weapons program.
The work feeds into international efforts to monitor the Comprehensive Nuclear-Test-Ban Treaty, which was opened for signature in 1996 and bans nuclear weapons testing. More than 180 countries have signed the treaty. But 44 countries that hold nuclear technology must both sign and ratify the treaty for it to have the force of law. Eight, including the United States and North Korea, have not.

To track potential violations, the treaty calls for a four-pronged international monitoring system, which is currently about 90 percent complete. Hydroacoustic stations can detect sound waves from underwater explosions. Infrasound stations listen for low-frequency sound waves rumbling through the atmosphere. Radionuclide stations sniff the air for the radioactive by-products of an atmospheric test. And seismic stations pick up the ground shaking, which is usually the fastest and most reliable method for confirming an underground explosion.

Seismic waves offer extra information about an explosion, new studies show. One research group is exploring how local topography, like the rugged mountain where the North Korean government conducts its tests, puts its imprint on the seismic signals. Knowing that, scientists can better pinpoint where the explosions are happening within the mountain — thus improving understanding of how deep and powerful the blasts are. A deep explosion is more likely to mask the power of the bomb.
Separately, physicists have conducted an unprecedented set of six explosions at the U.S. nuclear test site in Nevada. The aim was to mimic the physics of a nuclear explosion by detonating chemical explosives and watching how the seismic waves radiate outward. It’s like a miniature, nonnuclear version of a nuclear weapons test. Already, the scientists have made some key discoveries, such as understanding how a deeply buried blast shows up in the seismic detectors.
The more researchers can learn about the seismic calling card of each blast, the more they can understand international developments. That’s particularly true for North Korea, where leaders have been ramping up the pace of military testing since the first nuclear detonation in 2006. On July 4, the country launched its first confirmed ballistic missile — with no nuclear payload — that could reach as far as Alaska.

“There’s this building of knowledge that helps you understand the capabilities of a country like North Korea,” says Delaine Reiter, a geophysicist with Weston Geophysical Corp. in Lexington, Mass. “They’re not shy about broadcasting their testing, but they claim things Western scientists aren’t sure about. Was it as big as they claimed? We’re really interested in understanding that.”

Natural or not
Seismometers detect ground shaking from all sorts of events. In a typical year, anywhere from 1,200 to 2,200 earthquakes of magnitude 5 and greater set off the machines worldwide. On top of that is the unnatural shaking: from quarry blasts, mine collapses and other causes. The art of using seismic waves to tell one type of event from the others is known as forensic seismology.

Forensic seismologists work to distinguish a natural earthquake from what could be a clandestine nuclear test. In March 2003, for instance, seismometers detected a disturbance coming from near Lop Nor, a dried-up lake in western China that the Chinese government, which signed but hasn’t ratified the test ban treaty, has used for nuclear tests. Seismologists needed to figure out immediately what had happened.

One test for telling the difference between an earthquake and an explosion is how deep it is. Anything deeper than about 10 kilometers is almost certain to be natural. In the case of Lop Nor, the source of the waves seemed to be located about six kilometers down — difficult to tunnel to, but not impossible. Researchers also used a second test, which compares the amplitudes of two different kinds of seismic waves.

Earthquakes and explosions generate several types of seismic waves, starting with P, or primary, waves. These waves are the first to arrive at a distant station. Next come S, or secondary, waves, which travel through the ground in a shearing motion, taking longer to arrive. Finally come waves that ripple across the surface, including those called Rayleigh waves.
In an explosion as compared with an earthquake, the amplitudes of Rayleigh waves are smaller than those of the P waves. By looking at those two types of waves, scientists determined the Lop Nor incident was a natural earthquake, not a secretive explosion. (Seismology cannot reveal the entire picture. Had the Lop Nor event actually been an explosion, researchers would have needed data from the radionuclide monitoring network to confirm the blast came from nuclear and not chemical explosives.)
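In practice, that amplitude comparison is often boiled down to a simple screen on two magnitude estimates: a body-wave magnitude mb from the P waves and a surface-wave magnitude Ms from the Rayleigh waves. The sketch below uses an assumed, purely illustrative threshold, not an operational screening criterion.

```python
# Minimal sketch of the explosion/earthquake screen described above:
# explosions excite Rayleigh (surface) waves weakly relative to P
# waves, so Ms runs low for a given mb. The offset of 1.0 magnitude
# unit is an assumed value for illustration only.

def looks_like_explosion(mb, ms, offset=1.0):
    """Flag events whose surface-wave magnitude falls well below mb."""
    return (mb - ms) > offset

print(looks_like_explosion(mb=5.2, ms=3.5))  # strong P, weak Rayleigh
print(looks_like_explosion(mb=5.2, ms=5.0))  # more earthquake-like
```

Real screening also weighs depth estimates, regional wave propagation and other evidence before an event is flagged.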

For North Korea, the question is not so much whether the government is setting off nuclear tests, but how powerful and destructive those blasts might be. In 2003, the country withdrew from the Treaty on the Nonproliferation of Nuclear Weapons, an international agreement distinct from the testing ban that aims to prevent the spread of nuclear weapons and related technology. Three years later, North Korea announced it had conducted an underground nuclear test in Mount Mantap at a site called Punggye-ri, in the northeastern part of the country. It was the first nuclear weapons test since India and Pakistan each set one off in 1998.

By analyzing seismic wave data from monitoring stations around the region, seismologists concluded the North Korean blast had come from shallow depths, no more than a few kilometers within the mountain. That supported the North Korean government’s claim of an intentional test. Two weeks later, a radionuclide monitoring station in Yellowknife, Canada, detected increases in radioactive xenon, which presumably had leaked out of the underground test site and drifted eastward. The blast was nuclear.

But the 2006 test raised fresh questions for seismologists. The ratio of amplitudes of the Rayleigh and P waves was not as distinctive as it usually is for an explosion. And other aspects of the seismic signature were also not as clear-cut as scientists had expected.

Researchers got some answers as North Korea’s testing continued. In 2009, 2013 and twice in 2016, the government set off more underground nuclear explosions at Punggye-ri. Each time, researchers outside the country compared the seismic data with the record of past nuclear blasts. Automated computer programs “compare the wiggles you see on the screen ripple for ripple,” says Steven Gibbons, a seismologist with the NORSAR monitoring organization in Kjeller, Norway. When the patterns match, scientists know it is another test. “A seismic signal generated by an explosion is like a fingerprint for that particular region,” he says.
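The "ripple for ripple" comparison Gibbons describes is typically done with normalized cross-correlation between a new waveform and a template from an earlier test. The sketch below uses synthetic signals in place of real seismograms; the waveform shapes and noise levels are invented for illustration.

```python
# Sketch of waveform "fingerprint" matching via normalized
# cross-correlation. Synthetic wiggles stand in for real seismograms.
import numpy as np

def max_norm_xcorr(template, signal):
    """Peak normalized cross-correlation, in [-1, 1]."""
    t = (template - template.mean()) / (template.std() * len(template))
    s = (signal - signal.mean()) / signal.std()
    return float(np.correlate(s, t, mode="valid").max())

rng = np.random.default_rng(0)
t_axis = np.linspace(0, 1, 500)
template = np.sin(40 * t_axis) * np.exp(-3 * t_axis)       # template "wiggle"
repeat = 0.7 * template + 0.05 * rng.standard_normal(500)  # same source, new test
unrelated = rng.standard_normal(500)                       # different event

print(max_norm_xcorr(template, repeat))     # near 1: a repeating source
print(max_norm_xcorr(template, unrelated))  # near 0: no match
```

A correlation near 1 says the new event's waves took essentially the same path through the same source region, which is why a repeat test at Punggye-ri is recognized almost immediately.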

With each test, researchers learned more about North Korea’s capabilities. By analyzing the magnitude of the ground shaking, experts could roughly calculate the power of each test. The 2006 explosion was relatively small, releasing energy equivalent to about 1,000 tons of TNT — a fraction of the 15-kiloton bomb dropped by the United States on Hiroshima, Japan, in 1945. But the yield of North Korea’s nuclear tests crept up each time, and the most recent test, in September 2016, may have exceeded the size of the Hiroshima bomb.
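Converting a magnitude into an explosive yield relies on empirical relations of the form mb = a + b·log10(yield). The coefficients in the sketch below are illustrative placeholders; real values are calibrated to the geology and depth of each test site and are a major source of the uncertainty in yield estimates.

```python
import math

# Hedged sketch of magnitude-to-yield conversion. The coefficients
# a and b are assumed placeholders, not calibrated values for the
# North Korean test site.

def yield_kilotons(mb, a=4.45, b=0.75):
    """Invert mb = a + b*log10(yield_kt) for the yield."""
    return 10 ** ((mb - a) / b)

for mb in (4.3, 5.2):
    print(f"mb {mb}: roughly {yield_kilotons(mb):.1f} kt (with assumed coefficients)")
```

Because the same magnitude can correspond to very different yields depending on burial depth and rock type, pinning down where in the mountain a blast occurred matters as much as measuring how hard the ground shook.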

Digging deep
For an event of a particular seismic magnitude, the deeper the explosion, the more energetic the blast. A shallow, less energetic test can look a lot like a deeply buried, powerful blast. Scientists need to figure out precisely where each explosion occurred.

Mount Mantap is a rugged granite mountain with geology that complicates the physics of how seismic waves spread. Western experts do not know exactly how the nuclear bombs are placed inside the mountain before being detonated. But satellite imagery shows activity that looks like tunnels being dug into the mountainside. The tunnels could be dug two ways: straight into the granite or spiraled around in a fishhook pattern to collapse and seal the site after a test, Frank Pabian, a nonproliferation expert at Los Alamos National Laboratory in New Mexico, said in April in Denver at a meeting of the Seismological Society of America.

Researchers have been trying to figure out the relative locations of each of the five tests. By comparing the amplitudes of the P, S and Rayleigh waves, and calculating how long each would have taken to travel through the ground, researchers can plot the likely sites of the five blasts. That allows them to better tie the explosions to the infrastructure on the surface, like the tunnels spotted in satellite imagery.

One big puzzle arose after the 2009 test. Analyzing the times that seismic waves arrived at various measuring stations, one group calculated that the test occurred 2.2 kilometers west of the first blast. Another scientist found it only 1.8 kilometers away. The difference may not sound like a lot, Gibbons says, but it “is huge if you’re trying to place these relative locations within the terrain.” Move a couple of hundred meters to the east or west, and the explosion could have happened beneath a valley as opposed to a ridge — radically changing the depth estimates, along with estimates of the blast’s power.

Gibbons and colleagues think they may be able to reconcile these different location estimates. The answer lies in which station the seismic data come from. Studies that rely on data from stations within about 1,500 kilometers of Punggye-ri — as in eastern China — tend to estimate bigger distances between the locations of the five tests when compared with studies that use data from more distant seismic stations in Europe and elsewhere. Seismic waves must be leaving the test site in a more complicated way than scientists had thought, or else all the measurements would agree.
When Gibbons’ team corrected for the varying distances of the seismic data, the scientists came up with a distance of 1.9 kilometers between the 2006 and 2009 blasts. The team pinpointed the other explosions as well. The September 2016 test turned out to be almost directly beneath the 2,205-meter summit of Mount Mantap, the group reported in January in Geophysical Journal International. That means the blast was indeed deeply buried, and hence probably at least as powerful as the Hiroshima bomb to register as a magnitude 5.2 earthquake.

Other seismologists have been squeezing information out of the seismic data in a different way — not in how far the signals are from the test blast, but what they traveled through before being detected. Reiter and Seung-Hoon Yoo, also of Weston Geophysical, recently analyzed data from two seismic stations, one 370 kilometers to the north in China and the other 306 kilometers to the south in South Korea.

The scientists scrutinized the moments when the seismic waves arrived at the stations, in the first second of the initial P waves, and found slight differences between the wiggles recorded in China and South Korea, Reiter reported at the Denver conference. Those in the north showed a more energetic pulse rising from the wiggles in the first second; the southern seismic records did not. Reiter and Yoo think this pattern represents an imprint of the topography at Mount Mantap.

“One side of the mountain is much steeper,” Reiter explains. “The station in China was sampling the signal coming through the steep side of the mountain, while the southern station was seeing the more shallowly dipping face.” This difference may also help explain why data from seismic stations spanning the breadth of Japan show a slight difference from north to south. Those differences may reflect the changing topography as the seismic waves exited Mount Mantap during the test.

Learning from simulations
But there is only so much scientists can do to understand explosions they can’t get near. That’s where the test blasts in Nevada come in.

The tests were part of phase one of the Source Physics Experiment, a $40-million project run by the U.S. Department of Energy’s National Nuclear Security Administration. The goal was to set off a series of chemical explosions of different sizes and at different depths in the same borehole and then record the seismic signals on a battery of instruments. The detonations took place at the nuclear test site in southern Nevada, where between 1951 and 1992 the U.S. government set off 828 underground nuclear tests and 100 atmospheric ones, whose mushroom clouds were seen from Las Vegas, 100 kilometers away.

For the Source Physics Experiment, six chemical explosions were set off between 2011 and 2016, with yields of up to 5,000 kilograms of TNT equivalent and burial depths of up to 87 meters. The biggest required high-energy–density explosives packed into a cylinder nearly a meter across and 6.7 meters long, says Beth Dzenitis, an engineer at Lawrence Livermore National Laboratory in California who oversaw part of the field campaign. Yet for all that firepower, the detonation barely registered on anything other than the instruments peppering the ground. “I wish I could tell you all these cool fireworks go off, but you don’t even know it’s happening,” she says.

The explosives were set inside granite rock, a material very similar to the granite at Mount Mantap. So the seismic waves racing outward behaved very much as they might at the North Korean nuclear test site, says William Walter, head of geophysical monitoring at Livermore. The underlying physics, describing how seismic energy travels through the ground, is virtually the same for both chemical and nuclear blasts.
The results revealed flaws in the models that researchers have been using for decades to describe how seismic waves travel outward from explosions. These models were developed to describe how the P waves compress rock as they propagate from large nuclear blasts like those set off starting in the 1950s by the United States and the Soviet Union. “That worked very well in the days when the tests were large,” Walter says. But for much smaller blasts, like those North Korea has been detonating, “the models didn’t work that well at all.”
Walter and Livermore colleague Sean Ford have started to develop new models that better capture the physics involved in small explosions. Those models should be able to describe the depth and energy release of North Korea’s tests more accurately, Walter reported at the Denver meeting.

A second phase of the Source Physics Experiment is set to begin next year at the test site, in a much more rubbly type of rock called alluvium. Scientists will use that series of tests to see how seismic waves are affected when they travel through fragmented rock as opposed to more coherent granite. That information could be useful if North Korea begins testing in another location, or if another country detonates an atomic bomb in fragmented rock.

For now, the world’s seismologists continue to watch and wait, to see what the North Korean government might do next. Some experts think the next nuclear test will come at a different location within Mount Mantap, to the south of the most recent tests. If so, that will provide a fresh challenge to the researchers waiting to unravel the story the seismic waves will tell.

“It’s a little creepy what we do,” Reiter admits. “We wait for these explosions to happen, and then we race each other to find the location, see how big it was, that kind of thing. But it has really given us a good look as to how [North Korea’s] nuclear program is progressing.” Useful information as the world’s nations decide what to do about North Korea’s rogue testing.

Confusion lingers over health-related pros and cons of marijuana

No one knows whether chronic marijuana smoking causes emotional troubles or is a symptom of them…. This dearth of evidence has a number of explanations: serious lingering reactions, if they exist, occur after prolonged use, rarely after a single dose; marijuana has no known medical use, unlike LSD, so scientists have had little reason to study the drug…. Also, marijuana has been under strict legal sanctions … for more than 30 years. – Science News, October 7, 1967

In 29 states and in Washington, D.C., marijuana is now commonly prescribed for post-traumatic stress disorder and chronic pain. But the drug’s pros and cons remain hazy. Regular pot use has been linked to psychotic disorders and to alcohol and drug addiction (SN Online: 1/12/17). And two recent research reviews conclude that very little high-quality data exist on whether marijuana effectively treats PTSD or pain. Several large-scale trials are under way to assess how well cannabis treats these conditions.

The Arecibo Observatory will remain open, NSF says

The iconic Arecibo Observatory has survived a hurricane and dodged deep budget cuts. On November 16, the National Science Foundation, which funds the bulk of the observatory’s operating costs, announced that it would continue funding the radio telescope at a reduced level.

It’s not clear yet who will manage the observatory in the long run, or where the rest of the funding will come from. But scientists are celebrating.

Arecibo, a 305-meter-wide radio telescope located about 95 kilometers west of San Juan, is the second largest radio telescope in the world. It has been instrumental in tasks as diverse as monitoring near-Earth asteroids, watching for bright blasts of energy called fast radio bursts and searching for extraterrestrial intelligence.

But the NSF, which covers $8.3 million of the observatory’s nearly $12 million annual budget, has been trying to back away from that responsibility for several years. After Hurricane Maria hit Puerto Rico on September 20, damaging the telescope’s main antenna, the observatory’s future seemed unclear (SN: 9/29/17).

On November 16, the NSF released a statement announcing it would continue science operations at Arecibo “with reduced agency funding,” and would search for new collaborators to cover the rest.

“This plan will allow important research to continue while accommodating the agency’s budgetary constraints and its core mission to support cutting-edge science and education,” the statement says.

A new map exhibit documents evolving views of Earth’s interior

Much of what happens on the Earth’s surface is connected to activity far below. “Beneath Our Feet,” a temporary exhibit at the Norman B. Leventhal Map Center in the Boston Public Library, explores the ways people have envisioned, explored and exploited what lies underground.

“We’re trying to visualize those places that humans don’t naturally go to,” says associate curator Stephanie Cyr. “Everybody gets to see what’s in the sky, but not everyone gets to see what’s underneath.”

“Beneath Our Feet” displays 70 maps, drawings and archaeological artifacts in a bright, narrow exhibit space. (In total, the library holds a collection of 200,000 maps and 5,000 atlases.) Many objects have two sets of labels: one for adults and one for kids, who are guided by a cartoon rat mascot called Digger Burrows.

The layout puts the planet’s long history front and center. Visitors enter by walking over a U.S. Geological Survey map of North America that is color-coded to show how topography has changed over geologic time.

Beyond that, the exhibit is split into two main themes, Cyr says: the natural world, and how people have put their fingerprints on it. Historical and modern maps hang side by side, illustrating how ways of thinking about the Earth developed as the tools for exploring it improved.

For instance, a 1665 illustration drawn by Jesuit scholar Athanasius Kircher depicts Earth’s water systems as an underground network that churned with guidance from a large ball of fire in the planet’s center, Cyr says. “He wasn’t that far off.” Under Kircher’s drawing is an early sonar map of the seafloor in the Pacific Ocean, made by geologists Marie Tharp and Bruce Heezen in 1969 (SN: 10/6/12, p. 30). The pair’s seafloor maps revealed the Mid-Atlantic Ridge. Finding that rift helped confirm the theory of plate tectonics, which holds that Earth’s surface is shaped by the motion of vast plates driven by forces deep below.

On another wall, a 1794 topographic relief drawing of Mount Vesuvius — which erupted and destroyed the Roman city of Pompeii in A.D. 79 — is embellished by a cartouche of Greek mythological characters, including one representing death. The drawing hangs above a NASA satellite image of the same region, showing how the cities around Mount Vesuvius have grown since the eruption that buried Pompeii, and how volcano monitoring has improved.

The tone turns serious in the latter half of the exhibit. Maps of coal deposits in 1880s Pennsylvania sit near modern schematics explaining how fracking works (SN: 9/8/12, p. 20). Reproductions of maps of the Dakotas from 1886 may remind visitors of ongoing controversies over the Dakota Access Pipeline, proposed to run near the Standing Rock Sioux Reservation, and maps from the U.S. Environmental Protection Agency mark sites in Flint, Mich., with lead-tainted water.

Maps in the exhibit are presented dispassionately and without overt political commentary. Cyr hopes the zoomed-out perspectives that maps provide will allow people to approach controversial topics with cool heads.

“The library is a safe place to have civil discourse,” she says. “It’s also a place where you have access to factual materials and factual resources.”

A key virus fighter is implicated in pregnancy woes

An immune system mainstay in the fight against viruses may harm rather than help a pregnancy. In Zika-infected mice, this betrayal appears to contribute to fetal abnormalities linked to the virus, researchers report online January 5 in Science Immunology. And it could explain pregnancy complications that arise from infections with other pathogens and from autoimmune disorders.

In pregnant mice infected with Zika virus, fetuses with a docking station, or receptor, for immune system proteins called type I interferons either died or grew more poorly than fetuses lacking the receptor. “The type I interferon system is one of the key mechanisms for stopping viral infections,” says Helen Lazear, a virologist at the University of North Carolina at Chapel Hill, who coauthored an editorial accompanying the study. “That same [immune] process is actually causing fetal damage, and that’s unexpected.”

Cells infected by viruses begin the fight against the intruder by producing type I interferons. These proteins latch onto their receptor on the surfaces of neighboring cells and kick-start the production of hundreds of other antiviral proteins.

Akiko Iwasaki, a Howard Hughes Medical Institute investigator and immunologist at Yale School of Medicine, and her colleagues were interested in studying what happens to fetuses when moms are sexually infected with Zika virus. The researchers mated female mice unable to make the receptor for type I interferons to males with one copy of the gene needed to make the receptor. This meant that moms would carry some pups with the receptor and some without in the same pregnancy.

Pregnant mice were infected vaginally with Zika at one of two times — one corresponding to mid-first trimester in humans, the other to late first trimester. Of the fetuses exposed to infection earlier, those that had the interferon receptor died, while those without the receptor continued to develop. For fetuses exposed to infection a bit later in the pregnancy, those with the receptor were much smaller than their receptor-lacking counterparts.

The fetuses without the receptor still grew poorly due to the Zika infection, which is expected given their inability to fight the infection. What was striking, Iwasaki says, is that the fetuses able to fight the infection were more damaged, and were the only fetuses that died.

It’s unclear how this antiviral immune response causes fetal damage. But the placentas — which, like their fetuses, had the receptor — didn’t appear to provide those fetuses with enough oxygen, Iwasaki says.

The researchers also infected pregnant mice that had the receptor for type I interferons with a viral mimic — a bit of genetic material that goads the body to begin its antiviral immune response — to see if the damage happened only during a Zika infection. These fetuses also died early in the pregnancy, an indication that perhaps the immune system could cause fetal damage during other viral infections, Iwasaki notes.

Iwasaki and colleagues next added type I interferon to samples of human placental tissue in dishes. After 16 to 20 hours, the placental tissues developed structures that resembled syncytial knots. These knots are widespread in the placentas of pregnancies with such complications as preeclampsia and restricted fetal growth.

The next step, Iwasaki says, is figuring out which of the hundreds of antiviral proteins made when type I interferon ignites the immune system can trigger placental and fetal damage. That could provide more understanding of miscarriage generally; other infections that cause congenital diseases, like toxoplasmosis and rubella; and autoimmune disorders that feature excessive type I interferon production, such as lupus.

Hormone replacement makes sense for some menopausal women

Internist Gail Povar has many female patients making their way through menopause, some having a tougher time than others. Several women with similar stories stand out in her mind. Each came to Povar’s Silver Spring, Md., office within a year or two of stopping her period, complaining of frequent hot flashes and poor sleep at night. “They just felt exhausted all the time,” Povar says. “The joy had kind of gone out.”

And all of them “were just absolutely certain that they were not going to take hormone replacement,” she says. But the women had no risk factors that would rule out treating their symptoms with hormones. So Povar suggested the women try hormone therapy for a few months. “If you feel really better and it makes a big difference in your life, then you and I can decide how long we continue it,” Povar told them. “And if it doesn’t make any difference to you, stop it.”

At the follow-up appointments, all of these women reacted the same way, Povar recalls. “They walked in beaming, absolutely beaming, saying, ‘I can’t believe I didn’t do this a year ago. My life! I’ve got my life back.’ ”

That doesn’t mean, Povar says, that she’s pushing hormone replacement on patients. “But it should be on the table,” she says. “It should be part of the discussion.”

Hormone replacement therapy toppled off the table for many menopausal women and their doctors in 2002. That’s when a women’s health study, stopped early after a data review, published results linking a common hormone therapy to an increased risk of breast cancer, heart disease, stroke and blood clots. The trial, part of a multifaceted project called the Women’s Health Initiative, or WHI, was meant to examine hormone therapy’s effectiveness in lowering the risk of heart disease and other conditions in women ages 50 to 79. It wasn’t a study of hormone therapy for treating menopausal symptoms.

But that nuance got lost in the coverage of the study’s results, described at the time as a “bombshell,” a call to get off of hormone therapy right away. Women and doctors in the United States heeded the call. A 2012 study in Obstetrics & Gynecology showed that use plummeted: Oral hormone therapy, taken by an estimated 22 percent of U.S. women 40 and older in 1999–2000, was taken by fewer than 12 percent of women in 2003–2004. Six years later, the number of women using oral hormone therapy had sunk below 5 percent.

Specialists in women’s health say it’s time for the public and the medical profession to reconsider their views on hormone therapy. Research in the last five years, including a long-term follow-up of women in the WHI, has clarified the risks, benefits and ideal ages for hormone therapy. Medical organizations, including the Endocrine Society in 2015 and the North American Menopause Society in 2017, have released updated recommendations. The overall message is that hormone therapy offers more benefits than risks for the relief of menopausal symptoms in mostly healthy women of a specific age range: those who are under age 60 or within 10 years of stopping menstruation.

“A generation of women has missed out on effective treatment because of misinformation,” says JoAnn Pinkerton, executive director of the North American Menopause Society and a gynecologist who specializes in menopause at the University of Virginia Health System in Charlottesville. It’s time to move beyond 2002, she says, and have a conversation based on “what we know now.”

End of an era
Menopause, the final menstrual period, signals the end of fertility and is confirmed after a woman has gone 12 months without having a period. From then on she is postmenopausal. Women reach menopause around age 51, on average. In the four to eight years before, called perimenopause, the amount of estrogen in the body declines as ovarian function winds down. Women may have symptoms related to the lack of estrogen beginning in perimenopause and continuing after the final period.

Probably the best-known symptom is the hot flash, a sudden blast of heat, sweating and flushing in the face and upper chest. These temperature tantrums can occur at all hours. At night, hot flashes can produce drenching sweats and disrupt sleep.

Hot flashes arise because the temperature range in which the body normally feels comfortable narrows during the menopause transition, partly in response to the drop in estrogen. Normally, the body takes small changes in core body temperature in stride. But for menopausal women, the slightest rise in core temperature can trigger blood vessels to dilate, which increases blood flow and sweating.

About 75 to 80 percent of menopausal women experience hot flashes and night sweats, on and off, for anywhere from a couple of years to more than a decade. In a study in JAMA Internal Medicine in 2015, more than half of almost 1,500 women enrolled at ages 42 to 52 reported frequent hot flashes — occurring at least six days in the previous two weeks — with symptoms lasting more than seven years.

A sizable number of women have moderate or severe hot flashes, which spread throughout the body and can include profuse sweating, heart palpitations or anxiety. In a study of 255 menopausal women, moderate to severe hot flashes were most common, occurring in 46 percent of women, during the two years after participants’ last menstrual period. A third of all the women still experienced heightened hot flashes 10 years after menopause, researchers reported in 2014 in Menopause.

Besides hot flashes and night sweats, roughly 40 percent of menopausal women experience irritation and dryness of the vulva and vagina, which can make sexual intercourse painful. These symptoms tend to arise after the final period.

Alarm bells
In the 1980s and ’90s, researchers observed that women using hormone therapy for menopausal symptoms had a lower risk of heart disease, bone fractures and overall death. Some doctors began recommending the medication not just for symptom relief, but also for disease prevention.

Observational studies of the apparent health benefits of hormone therapy spurred a more stringent study, a randomized controlled trial, which tested the treatment’s impact by randomly assigning hormones to some volunteers and not others. The WHI hormone therapy trials assessed heart disease, breast cancer, stroke, blood clots, colorectal cancer, hip fractures and deaths from other causes in women who used the hormones versus those who took a placebo. Two commonly prescribed formulations were tested: a combined hormone therapy — estrogen sourced from horses plus synthetic progesterone — and estrogen alone. (Today, additional U.S. Food and Drug Administration–approved formulations are available.)

The 2002 WHI report in JAMA, which described early results of the combined hormone therapy, shocked the medical community. The study was halted prematurely because after about five years, women taking the hormones had a slightly higher risk of breast cancer and an overall poor risk-to-benefit ratio compared with women taking the placebo. While the women taking hormones had fewer hip fractures and colorectal cancers, they had more breast cancers, heart disease, blood clots and strokes. The findings were reported in terms of the relative risk, the ratio of how often a disease happened in one group versus another. News of a 26 percent increase in breast cancers and a 41 percent increase in strokes caused confusion and alarm.

Women dropped the hormones in droves. From 2001 to 2009, the use of all hormone therapy among menopausal women, as reported by physicians based on U.S. office visits, fell 52 percent, according to a 2011 study in Menopause.

But, researchers say, the message that hormone therapy was bad for all was unwarranted. “The goal of the WHI was to evaluate the balance of benefits and risks of menopausal hormone therapy when used for prevention of chronic disease,” says JoAnn Manson, a physician epidemiologist at Harvard-affiliated Brigham and Women’s Hospital in Boston and one of the lead investigators of the WHI. “It was not intended to evaluate its role in managing menopausal symptoms.”

Along with the focus on prevention, the WHI hormone therapy trials were largely studies of older women — in their 60s and 70s. Only around one-third of participants started the trial between ages 50 and 59, the age group more likely to be in need of symptom relief. Hormone therapy “was always primarily a product to use in women entering menopause,” says Howard Hodis, a physician scientist who focuses on preventive medicine at the University of Southern California’s Keck School of Medicine in Los Angeles. “The observational studies were based on these women.”

Also lost in the coverage of the 2002 study results was the absolute risk, the actual difference in the number of cases of disease between two groups. The group on combined hormone therapy had eight more cases of breast cancer per 10,000 women per year than the group taking a placebo. Hodis notes that that absolute risk translates to less than one extra case for every 1,000 women, which is classified as a rare risk by the Council for International Organizations of Medical Sciences, a World Health Organization group. There was also less than one additional case for every 1,000 women per year for heart disease and for stroke in the hormone-treated women compared with those on placebo.
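The gap between the two ways of stating risk comes down to a few lines of arithmetic. A minimal sketch in Python, assuming a baseline of about 31 breast cancer cases per 10,000 untreated women per year (a figure inferred here for illustration; the article states only the excess of eight cases per 10,000):

```python
# Relative vs. absolute risk, using the WHI combined-therapy figures quoted above.
# The placebo group's baseline rate is an assumption for illustration; only the
# excess of eight cases per 10,000 women per year comes from the article.
placebo_rate = 31 / 10_000            # assumed baseline incidence
treated_rate = (31 + 8) / 10_000      # eight more cases per 10,000 women per year

relative_increase = (treated_rate / placebo_rate - 1) * 100
absolute_increase_per_1000 = (treated_rate - placebo_rate) * 1_000

print(f"relative increase: {relative_increase:.0f}%")                             # 26% with this baseline
print(f"extra cases per 1,000 women per year: {absolute_increase_per_1000:.1f}")  # 0.8
```

The same eight-case excess reads either as an alarming 26 percent jump or as less than one extra case per 1,000 women per year, depending on which framing a news report leads with.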

In 2004, researchers published results of the WHI study of estrogen-only therapy, taken for about seven years by women who had had their uteruses surgically removed. (Progesterone is added to hormone therapy to protect the uterus lining from a risk of cancer seen with estrogen alone.) The trial, also stopped early, reported a decreased risk of hip fractures and breast cancer, but an increased risk of stroke. The study didn’t change the narrative that hormone therapy wasn’t safe.

Timing is everything
Since the turn away from hormone therapy, follow-up studies have brought nuance not initially captured by the first two reports. Researchers were finally able to tease out the results that applied to “the young women — and I love saying this — young women 50 to 59 who are most apt to present with symptoms of menopause,” says Cynthia Stuenkel, an internist and endocrinologist at the University of California, San Diego School of Medicine in La Jolla.

In 2013, Manson and colleagues reported data from the WHI grouped by age. It turned out that absolute risks were smaller for 50- to 59-year-olds than they were for older women, especially those 70 to 79 years old, for both combined therapy and estrogen alone. For example, in the combined hormone therapy trial, treated 50- to 59-year-olds had five additional cases of heart disease and five more strokes per 10,000 women annually compared with the same-aged group on placebo. But the treated 70- to 79-year-olds had 19 more heart disease cases and 13 more strokes per 10,000 women annually than women of the same age taking a placebo. “So a lot more of these events that were of concern were in the older women,” Stuenkel says.

A Danish study reported in 2012 of about 1,000 recently postmenopausal women, ages 45 to 58, also supported the idea that the timing of hormone treatment matters. The randomized controlled trial examined formulations of estrogen (17β-estradiol) and progesterone different from those used in the WHI. The researchers reported in BMJ that after 10 years, women taking hormone therapy — combined or estrogen alone — had a reduced risk of mortality, heart failure or heart attacks, and no added risk of cancer, stroke or blood clots compared with those not treated.

These findings provide evidence for the timing hypothesis, also supported by animal studies, as an explanation for the results seen in younger women, especially in terms of heart disease and stroke. In healthy blood vessels, more common in younger women, estrogen can slow the development of artery-clogging plaques. But in vessels that already have plaque buildup, more likely in older women, estrogen may cause the plaques to rupture and block an artery, Manson explains.

Recently, Manson and colleagues published a long-term study of the risk of death in women in the two WHI hormone therapy trials — combined therapy and estrogen alone — from the time of trial enrollment in the mid-1990s until the end of 2014. Use of either hormone therapy was not associated with an added risk of death during the study or follow-up periods due to any cause or, specifically, death from heart disease or cancer, the researchers reported in JAMA in September 2017. The study provides reassurance that taking hormone therapy, at least for five to seven years, “does not show any mortality concern,” Stuenkel says.

Both the Endocrine Society and the North American Menopause Society state that, for symptom relief, the benefits of FDA-approved hormone therapy outweigh the risks in women younger than 60 or within 10 years of their last period, absent health issues such as a high risk of breast cancer or heart disease. The menopause society position statement adds that there are also benefits for women at high risk of bone loss or fracture.

Today, the message about hormone therapy is “not everybody needs it, but if you’re a candidate, let’s talk about the pros and cons, and let’s do it in a science-based way,” Pinkerton says.

Hormone therapy is the most effective treatment for hot flashes, night sweats and genital symptoms, she says. A review of randomized controlled trials, published in 2004, reported that hormone therapy decreased the frequency of hot flashes by 75 percent and reduced their severity as well.

More than 50 million U.S. women will be older than 51 by 2020, Manson says. Yet today, many women have a hard time finding a physician who is comfortable prescribing hormone therapy or even just managing a patient’s menopausal symptoms, she says.

Stuenkel, who says many younger doctors stopped learning about hormone therapy after 2002, is trying to play catch-up. When she teaches medical students and doctors about treating menopausal symptoms, she brings up three questions to ask patients. First, how bothersome are the symptoms? Some women say “fix it, get me through the day and the night, put me back in order,” Stuenkel says. Other women’s symptoms are not as disruptive. Second, what does the patient want? Third, what is safe for this particular woman, based on her health? If a woman’s health history doesn’t support the use of hormone therapy, or she just isn’t interested, there are nonhormonal options, such as certain antidepressants, as well as nondrug lifestyle approaches.

Menopause looms large for many women, Povar says, and weighing a patient’s expectations, as well as whether hormone therapy is the right approach, becomes a unique conversation with each patient. “This is one of the most individual decisions a woman makes.”

When it’s playtime, many kids prefer reality over fantasy

Young children travel to fantasy worlds every day, packing just imaginations and a toy or two.

Some preschoolers scurry across ocean floors carrying toy versions of cartoon character SpongeBob SquarePants. Other kids trek to distant universes with miniature replicas of Star Wars robots R2-D2 and C-3PO. Throngs of youngsters fly on broomsticks and cast magic spells with Harry Potter and his Hogwarts buddies. The list of improbable adventures goes on and on.

Parents today take for granted that kids need toys to fuel what comes naturally — outlandish bursts of make-believe. Kids’ flights of fantasy are presumed to soar before school and life’s other demands yank the youngsters down to Earth.

Yet some researchers call childhood fantasy play — which revolves around invented characters and settings with no or little relationship to kids’ daily lives — highly overrated. From at least the age when they start talking, little ones crave opportunities to assist parents at practical tasks and otherwise learn how to be productive members of their cultures, these investigators argue.

New findings support the view that children are geared more toward helping than fantasizing. Preschoolers would rather perform real activities, such as cutting vegetables or feeding a baby, than pretend to do those same things, scientists say. Even in the fantastical realm of children’s fiction books, reality may have an important place. Young U.S. readers show signs of learning better from human characters than from those ever-present talking pigs and bears.

Studies of children in traditional societies illustrate the dominance of reality-based play outside modern Western cultures. Kids raised in hunter-gatherer communities, farming villages and herding groups rarely play fantasy games. Children typically play with real tools, or small replicas of tools, in what amounts to practice for adult work. Playgroups supervised by older children enact make-believe versions of what adults do, such as sharing hunting spoils.

These activities come much closer to the nature of play in ancient human groups than do childhood fantasies fueled by mass-produced toys, videos and movies, researchers think.

Handing over household implements to toddlers and preschoolers and letting them play at working, or allowing them to lend a hand on daily tasks, generates little traction among Western parents, says psychologist Angeline Lillard of the University of Virginia in Charlottesville. Many adults, leaning heavily on adult-supervised playdates, assume preschoolers and younger kids need to be protected from themselves. Lillard suspects that preschoolers, whose early helping impulses get rebuffed by anxious parents, often rebel when told to start doing household chores a few years later.

“Kids like to do real things because they want a role in the real world,” Lillard says. “Our society has gone overboard in stressing the importance of pretense and fantasy for young children.”

Keep it real
Lillard suspects most preschoolers agree with her.

More than 40 years of research fails to support the widespread view that playing pretend games generates special social or mental benefits for young children, Lillard and colleagues wrote in a 2013 review in Psychological Bulletin. Studies that track children into their teens and beyond are sorely needed to establish any beneficial effects of pretending to be other people or acting out imaginary situations, the researchers concluded.

Even the assumption that kids naturally gravitate toward make-believe worlds may be unrealistic. When given a choice, 3- to 6-year-olds growing up in the United States — one of many countries saturated with superhero movies, video games and otherworldly action figures — preferred performing real activities over pretending to do them, Lillard and colleagues reported online June 20 in Developmental Science.

One hundred youngsters, most of them white and middle class, were tested in a children’s museum, a preschool or a university laboratory. An experimenter showed each child nine pairs of photographs. Each photo in a pair featured a boy or a girl, to match the sex of the youngster being tested. One photo showed a child in action. Depicted behaviors included cutting vegetables with a knife, talking on a telephone and bottle-feeding a baby. In the second photo, a different child pretended to do what the first child did for real.

When asked by the experimenter whether they would rather, say, cut real vegetables with a knife like the first child or pretend to do so like the second child, preschoolers chose the real activity almost two-thirds of the time. Among the preschoolers, hard-core realists outnumbered fans of make-believe, the researchers found. Whereas 16 kids always chose real activities, only three wanted to pretend on every trial. Just as strikingly, 48 children (including seven of the 26 3-year-olds) chose at least seven real activities of the nine depicted. Only 14 kids (mostly the younger ones) selected at least seven pretend activities.

Kids often said they liked real activities for practical reasons, such as wanting to learn how to feed babies to help mom. Hands-on activities also got endorsed for being especially fun or novel. “I’ve never talked on the real phone,” one child explained. Reasons for choosing pretend activities centered on being afraid of the real activity or liking to pretend.

In a preliminary follow-up study directed by Lillard, 16 girls and boys, ages 3 to 6, chose between playing with 10 real objects, such as a microscope, or toy versions of the same objects. During 10-minute play periods, kids spent an average of about twice as much time with real items. That preference for real things increased with age. Three-year-olds spent nearly equal time playing with genuine and pretend items, but the older children strongly preferred the real deal.

Lillard’s findings illustrate that kids want and need real experiences, says psychologist Thalia Goldstein of George Mason University in Fairfax, Va. “Modern definitions of childhood have swung too far toward thinking that young children should live in a world of fantasy and magic,” she maintains.

But pretend play, including fantasy games, still has value in fostering youngsters’ social and emotional growth, Goldstein and Matthew Lerner of Stony Brook University in New York reported online September 15 in Developmental Science. After participating in 24 play sessions, 4- and 5-year-olds from poor families were tested on empathy and other social skills. Those who played dramatic pretend games (being a superhero, animal or chef, for instance) were less likely than kids who played with blocks or read stories to become visibly upset upon seeing an experimenter who the kids believed had hurt a knee or finger, the researchers found. Playing pretend games enabled kids to rein in distress at seeing the experimenter in pain, the researchers proposed.

It’s not known whether fantasy- and reality-based games shape kids’ social skills in different ways over the long haul, Goldstein says.

True fiction
Even on the printed page, where youngsters gawk at Maurice Sendak’s goggle-eyed Wild Things and Dr. Seuss’ mustachioed Lorax, the real world exerts a special pull.

Consider 4- to 6-year-olds who were read either a storybook about a little raccoon that learns to share with other animals or the same storybook with illustrations of human characters learning to share. Both versions told of how characters felt better after giving some of what they had to others. A third set of kids heard an illustrated storybook about seeds that had nothing to do with sharing. Each group consisted of 32 children.

Only kids who heard the realistic story displayed a general willingness to act on its message, reported a team led by psychologist Patricia Ganea of the University of Toronto in a paper published online August 2 in Developmental Science. On a test of children’s willingness to share any of 10 stickers with a child described as unable to participate in the experiment, listeners to the tale with human characters forked over an average of nearly three stickers, about one more than the kids had donated before the experiment.

Children who heard stories with animal characters became less giving, sharing an average of 1.7 stickers after having originally donated an average of 2.3 stickers. Sticker sharing declined similarly among kids who heard the seed story. These results fit with several previous studies showing that preschoolers more easily apply knowledge learned from realistic stories to the real world, as opposed to information encountered in fantasy stories.

Even when fiction stories are highly unrealistic, youngsters generally favor realistic endings, say Boston University psychologist Melissa Kibbe and colleagues. In a study published online June 15 in Psychology of Aesthetics, Creativity and the Arts, an experimenter read one of three illustrated versions of a story to 90 children, ages 4 to 6. In the tale, a child gets lost on the way to a school bus. A realistic version was set in a present-day city. A futuristic science fiction version was set on the moon. A fantasy version occurred in medieval times and included magical characters. Stories ended with descriptions and illustrations of a child finally locating either a typical school bus, a futuristic school bus with rockets on its sides or a magical coach with dragon wings.

When given the chance, 40 percent of kids inserted a typical school bus into the ending for the science fiction story, and nearly 70 percent did so for the fantasy tale. “Children have a bias toward reality when completing stories,” Kibbe says.

Hands on
Outside Western cultures, children’s bias toward reality takes an extreme turn, especially during play.

Nothing keeps it real like a child merrily swinging around a sharp knife as adults go about their business. That’s cause for alarm in Western households. But in many foraging communities, children play with knives and even machetes with their parents’ blessing, says anthropologist David Lancy of Utah State University in Logan.

Lancy describes reported instances of youngsters from hunter-gatherer groups playing with knives in his 2017 book Raising Children. Among Maniq foragers inhabiting southern Thailand’s forests, for instance, one researcher observed a father looking on approvingly as his baby crawled along holding a knife about as long as a dollar bill. The same investigator observed a 4-year-old Maniq girl sitting by herself cutting pieces of vegetation with a machete.

In East Africa, a Hadza infant can grab a knife and suck on it undisturbed, at least until an adult needs to use the tool. On Vanatinai Island in the South Pacific, children freely experiment with knives and pieces of burning wood from campfires.

Yes, accidents happen. That doesn’t mean hunter-gatherer parents are uncaring or indifferent toward their children, Lancy says. In these egalitarian societies, where sharing food and other resources is the norm, parents believe it’s wrong to impose one’s will on anyone, including children. Hunter-gatherer adults assume that a child learns best through hands-on, sometimes risky, exploration on his or her own and in groups with other kids. In that way, the adults’ thinking goes, youngsters develop resourcefulness, creativity and determination. Self-inflicted cuts and burns represent learning opportunities.

In many societies, adults make miniature tools for children to play with or give kids cast-off tools to use as toys. For instance, Inuit boys have been observed mimicking seal hunts with items supplied by parents, such as pieces of sealskin and miniature harpoons. Girls in Ecuador’s Conambo tribe mold clay balls provided by their mothers into various shapes as a first step toward becoming potters.

Childhood games and toys in foraging groups and farming villages, as in Western nations, reflect cultural values. Hunter-gatherer kids rarely engage in rough-and-tumble or competitive games. In fact, competition is discouraged. These kids concoct games with no winners, such as throwing a weighted feather in the air and flicking the feather back up as it descends. Children in many farming villages and herding societies play basic forms of marbles, in which each player shoots a hard object at similar objects to knock the targets out of a defined area. The rules change constantly as players decide among themselves what counts and what doesn’t.

Children in traditional societies don’t invent fantasy characters to play with, Lancy says. Consider imaginative play among children of Aka foragers in the Central African Republic. These kids may pretend to be forest animals, but the animals are creatures from the children’s surroundings, such as antelope. The children aim to take the animals’ perspective to determine what route to follow while exploring, says anthropologist Adam Boyette of Duke University. Aka youngsters sometimes pretend to be spirits that adults have told the kids about. In this way, kids become familiar with community beliefs and rituals.

Aka childhood activities are geared toward adult work, Boyette says. Girls start foraging for food within the first few years of life. Boys take many years to master dangerous tasks, such as climbing trees to raid honey from bees’ nests (SN: 8/20/16, p. 10). By around age 7, boys start to play hunting games and graduate to real hunts as teenagers.

In 33 hunter-gatherer societies around the world, parents typically take 1- to 2-year-olds on foraging expeditions and give the youngsters toy versions of tools to manipulate, reported psychologist Sheina Lew-Levy of the University of Cambridge and her colleagues in the December Human Nature. Groups of children at a range of ages play make-believe versions of what adults do and get in some actual practice at tasks such as toolmaking. Youngsters generally become proficient food collectors and novice toolmakers between ages 8 and 12, the researchers conclude. Adults, but not necessarily parents, begin teaching hunting and complex toolmaking skills to teens. For the report, Lew-Levy’s group reviewed 58 papers on childhood learning among hunter-gatherers, most published since 2000.

“There’s a blurred line between work and play in foraging societies because children are constantly rehearsing for adult roles by playing,” Boyette says.

Children in Western societies can profitably mix fantasy with playful rehearsals for adult tasks, observes George Mason’s Goldstein, who was a professional stage actor before opting for steadier academic work. “My 5-year-old son is never happier than when he’s helping to check us out at the grocery store,” she says. “But he also likes to pretend to be a robot, and sometimes a robot who checks us out at the grocery store.”

Not too far in the future, preschoolers pretending to be robots may encounter real robots running grocery-store checkouts. Playtime will never be the same.

By 2100, damaged corals may let waves twice as tall as today’s reach coasts

A complex coral reef full of nooks and crannies is a coastline’s best defense against large ocean waves. But coral die-offs over the next century could allow taller waves to penetrate the corals’ defenses, simulations suggest. A new study finds that at some Pacific Island sites, waves reaching the shore could be more than twice as high as today’s by 2100.

The rough, complex structures of coral reefs dissipate wave energy through friction, calming waves before they reach the shore. As corals die due to warming oceans (SN: 2/3/18, p. 16), the overall complexity of the reef also diminishes, leaving a coast potentially more exposed. At the same time, rising sea levels due to climate change increasingly threaten low-lying coastal communities with inundation and beach erosion — and stressed corals may not be able to grow vertically fast enough to match the pace of sea level rise. That could also make them a less effective barrier.

Researchers compared simulations of current and future sea level and reef conditions at four sites with differing wave energy near the French Polynesian islands of Moorea and Tahiti. The team then simulated the height of a wave after it has passed the reef, known as the back-reef wave height, under several scenarios. The most likely scenario studied was based on the Intergovernmental Panel on Climate Change’s projections of sea level height by 2100 and corresponding changes in reef structure.

Under those conditions, the average back-reef wave heights at the four sites would be 2.4 times as high in 2100 as today, the team reports February 28 in Science Advances. That change would be largely due to the decrease in coral reef complexity rather than to rising sea levels, the simulations suggest. Coastal communities around the world will likely see similar wave height increases, depending on local reef structures and the extent of sea level rise. The finding, the researchers say, shows that conserving reefs is crucial to protecting coastal communities in a changing climate.
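The basic mechanism can be illustrated with a toy calculation: if wave height decays roughly exponentially as a wave crosses the reef flat, with a decay rate that grows with reef roughness and shrinks with water depth, then a smoother (degraded) reef under deeper water lets through a markedly taller wave. The formula, parameter values and function name below are all illustrative assumptions for this sketch, not the study’s actual hydrodynamic model.

```python
import math

def back_reef_height(offshore_height, roughness, reef_width, depth):
    """Toy model of wave attenuation over a reef flat.

    Wave height decays exponentially across the reef, with a decay
    rate proportional to reef roughness and reef width and inversely
    proportional to water depth. Illustrative only: the study used
    full numerical wave simulations, not this formula.
    """
    decay = roughness * reef_width / depth
    return offshore_height * math.exp(-decay)

# Today: a rough, complex reef in shallow water (assumed values)
today = back_reef_height(offshore_height=2.0, roughness=0.06,
                         reef_width=40, depth=2.0)

# 2100: a degraded, smoother reef plus a higher sea level
future = back_reef_height(offshore_height=2.0, roughness=0.02,
                          reef_width=40, depth=2.2)

print(f"today: {today:.2f} m, 2100: {future:.2f} m, "
      f"ratio: {future / today:.1f}x")
```

With these made-up numbers the degraded reef transmits a wave more than twice as tall, in the same ballpark as the 2.4-fold increase the study projects; the point is only that most of the change comes from losing roughness, not from the modest depth increase.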