Neural Noise Shows the Uncertainty of Our Memories

The electrical chatter of our working memories reflects our lack of confidence about their contents.
Neuroscientific studies suggest that when we call up a memory to use it, our uncertainty about its accuracy is part of the recollection. Illustration: Myriam Wares/Quanta Magazine

In the moment between reading a phone number and punching it into your phone, you may find that the digits have mysteriously gone astray—even if you’ve seared the first ones into your memory, the last ones may still blur unaccountably. Was the 6 before the 8 or after it? Are you sure?

Maintaining such scraps of information long enough to act on them draws on an ability called visual working memory. For years, scientists have debated whether working memory has space for only a few items at a time or if it just has limited room for detail: Perhaps our mind’s capacity is spread across either a few crystal-clear recollections or a multitude of more dubious fragments.

The uncertainty in working memory may be linked to a surprising way that the brain monitors and uses ambiguity, according to a recent paper in Neuron from neuroscience researchers at New York University. Using machine learning to analyze brain scans of people engaged in a memory task, they found that signals encoded an estimate of what people thought they saw—and the statistical distribution of the noise in the signals encoded the uncertainty of the memory. The uncertainty of your perceptions may be part of what your brain is representing in its recollections. And this sense of the uncertainties may help the brain make better decisions about how to use its memories.

The findings suggest that “the brain is using that noise,” said Clayton Curtis, a professor of psychology and neuroscience at NYU and an author of the new paper.

The work adds to a growing body of evidence that, even if humans don’t seem adept at understanding statistics in their everyday lives, the brain routinely interprets its sensory impressions of the world, both current and recalled, in terms of probabilities. The insight offers a new way of understanding how much value we assign to our perceptions of an uncertain world.

Predictions Based on the Past

Neurons in the visual system fire in response to specific sights, like an angled line, a particular pattern, or even cars or faces, sending off a flare to the rest of the nervous system. But by themselves, the individual neurons are noisy sources of information, so “it’s unlikely that single neurons are the currency the brain is using to infer what it is it sees,” Curtis said.

To Clayton Curtis, a professor of psychology and neuroscience at New York University, recent analyses suggest that the brain uses the noise in its neuroelectric signals to represent uncertainty about the encoded perceptions and memories. Courtesy of Clayton Curtis

More likely, the brain is combining information from populations of neurons. It’s important, then, to understand how it does so. It might, for instance, be averaging information from the cells: If some neurons fire most strongly at the sight of a 45-degree angle and others at 90 degrees, then the brain might weight and average their inputs to represent a 60-degree angle in the eyes’ field of view. Or perhaps the brain has a winner-take-all approach, with the most strongly firing neurons taken as the indicators of what’s perceived.
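To make those two readout schemes concrete, here is a minimal Python sketch with invented preferred angles and firing rates (none of these numbers come from the study), contrasting a firing-rate-weighted average with a winner-take-all readout:

```python
import numpy as np

# Hypothetical orientation-tuned neurons: preferred angles (degrees)
# and their firing rates for some unknown stimulus. Invented numbers.
preferred = np.array([45.0, 60.0, 75.0, 90.0])
rates = np.array([20.0, 14.0, 8.0, 5.0])   # spikes per second

# Readout 1: population average of preferred angles,
# with each neuron weighted by how strongly it fires.
weighted_avg = np.sum(preferred * rates) / np.sum(rates)

# Readout 2: winner-take-all -- trust only the most active neuron.
winner = preferred[np.argmax(rates)]

print(f"weighted-average estimate: {weighted_avg:.1f} degrees")
print(f"winner-take-all estimate:  {winner:.1f} degrees")
```

With these made-up rates, the winner-take-all readout reports 45 degrees, while the weighted average lands near 60 degrees, as in the example above.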

“But there is a new way of thinking about it, influenced by Bayesian theory,” Curtis said.

Bayesian theory—named for its developer, the 18th-century mathematician Thomas Bayes, but independently discovered and popularized later by Pierre-Simon Laplace—incorporates uncertainty into its approach to probability. Bayesian inference addresses how confidently one can expect an outcome to occur given what is known of the circumstances. As applied to vision, that approach could mean the brain makes sense of neural signals by constructing a likelihood function: Based on data from previous experiences, what are the most likely sights to have generated a given firing pattern?
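As a toy illustration of that idea (not the model used in the paper), the sketch below simulates noisy spike counts from a bank of hypothetical orientation-tuned neurons and then asks, for each candidate angle, how likely that angle is to have produced the observed pattern. The tuning curves, Poisson noise model, and flat prior are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

angles = np.arange(0.0, 180.0, 1.0)                        # candidate orientations (degrees)
preferred = np.linspace(0.0, 180.0, 24, endpoint=False)    # neurons' preferred angles

def mean_rate(stim, pref, peak=20.0, width=20.0):
    """Toy bell-shaped tuning curve, circular in orientation."""
    d = np.abs(stim - pref)
    d = np.minimum(d, 180.0 - d)
    return peak * np.exp(-0.5 * (d / width) ** 2) + 1.0

# Simulate noisy spike counts for a true 45-degree stimulus.
true_angle = 45.0
counts = rng.poisson(mean_rate(true_angle, preferred))

# Likelihood function: for each candidate angle, how probable are
# these spike counts under Poisson noise? (Log space for stability.)
log_like = np.array([
    np.sum(counts * np.log(mean_rate(a, preferred)) - mean_rate(a, preferred))
    for a in angles
])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()                               # flat prior over angles

best = angles[np.argmax(posterior)]
print(f"most likely orientation given the spikes: {best:.0f} degrees")
```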

Wei Ji Ma, a professor of neuroscience and psychology at NYU, provided some of the first concrete evidence that populations of neurons can perform optimal Bayesian inference calculations. Courtesy of Wei Ji Ma

Laplace recognized that conditional probabilities are the most accurate way to talk about any observation, and in 1867 the physician and physicist Hermann von Helmholtz connected them to the calculations that our brains might make during perception. Yet few neuroscientists gave much attention to these ideas until the 1990s and early 2000s, when researchers began finding that people did something like probabilistic inference in behavioral experiments, and Bayesian methods started to prove useful in some models of perception and motor control.

“People started talking about the brain as being Bayesian,” said Wei Ji Ma, a professor of neuroscience and psychology at NYU and another of the new Neuron paper’s authors.

In a 2004 review, Alexandre Pouget (now a professor of neuroscience at the University of Geneva) and David Knill of the University of Rochester argued the case for a “Bayesian coding hypothesis,” which posits that the brain uses probability distributions to represent sensory information.

Scanning for Memories

At the time there was almost no evidence of this from neuron studies. But in 2006, Ma, Pouget and their colleagues at the University of Rochester presented strong evidence that populations of simulated neurons could perform optimal Bayesian inference calculations. Over the past dozen years, further work by Ma and other researchers has added confirmation from electrophysiology and neuroimaging that the theory applies to vision, using machine learning programs called Bayesian decoders to analyze actual neural activity.

Neuroscientists have used decoders to predict what people are looking at from fMRI (functional magnetic resonance imaging) scans of their brains. The programs can be trained to find the links between a presented image and the pattern of blood flow and neural activity in the brain that results when people see it. Instead of making a single guess—that the subject is looking at an 85-degree angle, for instance—Bayesian decoders produce a probability distribution. The mean of the distribution represents the likeliest prediction of what the subject is looking at. The standard deviation, which describes the width of the distribution, is thought to reflect the subject’s uncertainty about the sight (is it 85 degrees or could it be 84 or 86?).
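The sketch below shows, in schematic form only, how a decoded probability distribution can be boiled down to those two numbers. The distribution itself is made up, and the circular-statistics summary is a generic choice rather than the decoder the researchers actually used:

```python
import numpy as np

def decoded_summary(angles_deg, probs):
    """Summarize a decoded probability distribution over a circular feature:
    its circular mean is the estimate, its circular std the uncertainty."""
    theta = np.deg2rad(angles_deg)
    c = np.sum(probs * np.cos(theta))            # resultant vector components
    s = np.sum(probs * np.sin(theta))
    mean = np.rad2deg(np.arctan2(s, c)) % 360.0
    r = np.hypot(c, s)                           # 1 = very peaked, 0 = flat
    std = np.rad2deg(np.sqrt(-2.0 * np.log(r)))  # circular standard deviation
    return mean, std

# Made-up decoder output: a distribution over angles,
# peaked near 85 degrees with a spread of about 6 degrees.
angles = np.arange(0.0, 360.0, 1.0)
probs = np.exp(-0.5 * ((angles - 85.0) / 6.0) ** 2)
probs /= probs.sum()

mean, std = decoded_summary(angles, probs)
print(f"decoded estimate: {mean:.1f} degrees, uncertainty: {std:.1f} degrees")
```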

In the recent study, Curtis, Ma and their colleagues applied this idea to working memory. First, to test whether the Bayesian decoder could track people’s memories rather than their perceptions, they had subjects in an fMRI machine stare at the center of a circle with a dot on its perimeter. After the dot disappeared, the volunteers were asked to shift their gaze to where they remembered the dot being.

The researchers gave the decoder fMRI images of 10 brain areas involved in vision and working memory taken during the memory task. The team looked at whether the means of the neural activity distributions aligned with the reported memory—where the subjects thought the dot was—or if they reflected where the dot had actually been. In six of the areas, the means did hew more closely to the memory, which made a second experiment possible.

The Bayesian coding hypothesis suggested that the width of the distributions from at least some of these brain areas should reflect people’s confidence in what they remembered. “If it’s very flat, and you’re equally likely to draw from the extremes as you are towards the middle, your memory should be more uncertain,” said Curtis.

To assess people’s uncertainty, the researchers asked them to make a bet about the remembered location of the dot. The subjects had an incentive to be accurate and precise—they got more points if they guessed a smaller range of locations, and no points if they missed the real location. The bets were in effect a self-reported measure of their uncertainty, so the researchers could look for correlations between the bets and the standard deviation of the decoder’s distribution. In two areas of the visual cortex, V3AB and IPS1, the standard deviation of the distribution was consistently linked to the magnitude of individuals’ uncertainty.
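In spirit, that comparison reduces to a trial-by-trial correlation between two numbers, as in this sketch with entirely invented data (the study's actual analysis was more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial numbers (not the study's data):
# the decoder's standard deviation on each trial, and the width of the
# wager the subject placed on that trial's remembered location.
n_trials = 120
decoder_std = rng.uniform(5.0, 25.0, n_trials)                   # degrees
bet_width = 0.8 * decoder_std + rng.normal(0.0, 6.0, n_trials)   # degrees

# If the decoded width tracks subjective uncertainty, the two should
# be positively correlated across trials.
r = np.corrcoef(decoder_std, bet_width)[0, 1]
print(f"trial-by-trial correlation: r = {r:.2f}")
```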

Noisy Measurements

The observed patterns of activity could mean that the brain uses the same neural populations encoding the memory of an angle to encode confidence in that memory, rather than storing the uncertainty information in a separate part of the brain. “It is an efficient mechanism,” Curtis said. “This is what’s really remarkable, because it’s jointly encoded into the same thing.”

Still, “one thing to realize is that the actual correlations are very low,” said Paul Bays, a neuroscientist at the University of Cambridge who also studies visual working memory. Compared with the fine scale of the neural circuitry in the visual cortex, fMRI scans are very coarse-grained: Each data point in a scan represents the activity of thousands, perhaps even millions of neurons. Given the limitations of the technology, it’s notable that the researchers were able to make observations of this kind at all.

Hsin-Hung Li, a postdoctoral researcher in Curtis’ laboratory at NYU, used a brain scanner to measure the neural activity associated with a working memory, then assessed the research subject’s uncertainty about the memory. Courtesy of Hsin-Hung Li

“We are using a very noisy measurement to tease apart a very tiny thing,” said Hsin-Hung Li, a postdoctoral researcher at NYU and first author of the new paper. Future studies, he said, might clarify the correlations by causing a wider range of uncertainty during the task, with some images that subjects can be quite sure about and others that make them quite unsure.

Intriguing as the findings are, they can only be a preliminary and partial answer to the question of how uncertainty is encoded. “This paper is arguing for one particular account of that, which is effectively that the uncertainty is encoded in the level of activity [in groups of neurons],” said Bays. “But there’s only so much you can do with fMRI to demonstrate that that’s what is going on.”

Other interpretations may also be possible. Maybe a memory and its uncertainty are not stored by the same neurons—the uncertainty neurons might just be nearby. Or perhaps something other than the firing of individual neurons correlates more strongly with the uncertainty, but it can’t be resolved by current techniques. Ideally, a variety of evidence types—behavioral, computational and neuronal—should line up and point to the same conclusion.

But the idea that we are walking around with probability distributions in our heads all the time has a certain beauty to it. And it is probably not just vision and working memory that are structured like this, according to Pouget. “This Bayesian theory is extremely general,” he said. “There’s a general computational factor that’s at work here,” whether the brain is making a decision, assessing whether you’re hungry or navigating a route.

Yet if computing probabilities is such an integral part of how we perceive and think about the world, why have humans gained a reputation for being bad at probability? Well-known findings, most notably from economics and behavioral science, have shown that people make myriad mistakes of estimation, leading them to overestimate the likelihood of some dangerous things happening and discount others. “When you ask people to estimate explicitly and verbally probability, they suck. There’s no other word,” said Pouget.

But that kind of estimation, which can be couched in word problems and diagrams, depends on a cognitive system in the brain that evolved far more recently than the system used for tasks like the one in this study, said Ma. Perception, memory and motor behaviors have been honed by a much longer process of natural selection in which failing to spot a predator or misjudging danger meant death. For eons, the ability to make a snap judgment of a remembered perception, perhaps including an estimate of its uncertainty, kept our ancestors alive.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

