This is a technology problem. It will remain one for a long time, and it is certainly on the scale of creating the Holodeck. Recall that what we observe in the brain is a derivative of a vastly complex neural network we are unable to map or grasp, let alone understand.
Now consider my conjecture, which says that memory relies on addressing a physical point in space and time outside our present. The brain thus operates not only in this page of time but across many past pages of time as well, and any such investigation must be four-dimensional at the least.
In the meantime we are watching a lot of ducks on the pond while speculating on the nature of the pond's surface.
“Hyperscans” Show How Brains Sync as People Interact
Social neuroscientists ask what happens at the level of neurons when you tell someone a story or a group watches movies
https://www.scientificamerican.com/article/hyperscans-show-how-brains-sync-as-people-interact/
The vast majority of neuroscientific studies contain three elements: a person, a cognitive task and a high-tech machine capable of seeing inside the brain. That simple recipe can produce powerful science. Such studies now routinely yield images that a neuroscientist used to only dream about. They allow researchers to delineate the complex neural machinery that makes sense of sights and sounds, processes language and derives meaning from experience.
But something has been largely missing from these studies: other people. We humans are innately social, yet even social neuroscience, a field explicitly created to explore the neurobiology of human interaction, has not been as social as you would think. Just one example: no one has yet captured the rich complexity of two people’s brain activity as they talk together. “We spend our lives having conversation with each other and forging these bonds,” neuroscientist Thalia Wheatley of Dartmouth College says. “[Yet] we have very little understanding of how it is people actually connect. We know almost nothing about how minds couple.”
Previous limits on technology were a major obstacle to studying real human interaction. Brain imaging requires stillness, and scientific rigor demands a level of experimental control that is anything but natural. As a result, it is hard to generate high-quality data about one brain. Doing so for two brains is “more than twice as hard,” neuroscientist David Poeppel of New York University says. “You have to synchronize the machinery, the data and the data acquisition.”
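To make that synchronization problem concrete: before any two-brain comparison, both recordings have to sit on a common timebase. The sketch below is purely illustrative (the function name, sampling rate, and the shared trigger clock are assumptions of mine, not details from the researchers); it shows one minimal way, in Python, to resample two independently clocked streams so they can be compared sample by sample.

```python
import numpy as np

def align_streams(t1, x1, t2, x2, rate=100.0):
    """Resample two independently clocked recordings onto a shared
    timebase so they can be compared sample by sample.

    t1, t2: timestamps in seconds, assumed already mapped to a common
    reference clock (e.g., via a trigger pulse sent to both rigs).
    x1, x2: the recorded signals. Returns (t, y1, y2).
    """
    start = max(t1[0], t2[0])            # keep only the overlap window
    stop = min(t1[-1], t2[-1])
    t = np.arange(start, stop, 1.0 / rate)
    return t, np.interp(t, t1, x1), np.interp(t, t2, x2)

# Hypothetical usage, with sig1/sig2 from two different machines:
# t, a, b = align_streams(t_scanner1, sig1, t_scanner2, sig2)
```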
Group Brainprints
That is beginning to change. A growing cadre of neuroscientists is using sophisticated technology—and some very complicated math—to capture what happens in one brain, two brains, or even 12 or 15 at a time when their owners are engaged in eye contact, storytelling, joint attention focused on a topic or object, or any other activity that requires social give and take. Although the field of interactive social neuroscience is in its infancy, the hope remains that identifying the neural underpinnings of real social exchange will change our basic understanding of communication and ultimately improve education or inform treatment of the many psychiatric disorders that involve social impairments.
Nevertheless, the first study to successfully monitor two brains at the same time took place nearly 20 years ago. Physicist Read Montague, now at Virginia Tech, and his colleagues put two people in separate functional magnetic resonance imaging (fMRI) machines and observed their brain activity as they engaged in a simple competitive game in which one player (the sender) transmitted a signal about whether he or she had just seen the color red or green and the other player (the receiver) had to decide if the sender was telling the truth or lying. Correct guesses resulted in rewards. Montague called the technique hyperscanning, and his work proved it was possible to observe two brains at once.
Initially, Montague’s lead was followed mostly by other neuroeconomists rather than social neuroscientists. But the term hyperscanning is now applied to any brain imaging research that involves more than one person. Today the techniques that fit the bill include electroencephalography (EEG), magnetoencephalography and functional near-infrared spectroscopy. Use of these varied techniques, many of them quite new, has broadened the range of possible experiments and made hyperscanning less cumbersome and, as a consequence, much more popular.
Engagement Matters
Beyond the practical challenges of interactive neuroscience, a more philosophical question has circulated as to whether the neural information obtained from measuring people during social interaction is significantly different from scans taken when people are alone or acting simply as observers. Does it matter if the person you look at looks back? Is there a difference between speaking a sentence and speaking it to someone who is listening?
Yes, apparently there is. The evidence is growing, says psychiatrist and social neuroscientist Leonhard Schilbach of the Max Planck Institute of Psychiatry in Munich, that “social cognition is fundamentally different when you’re directly engaged with another person as opposed to observing another person.”
Demonstrating those differences does not necessarily require studies of more than one brain at a time, but it does require relatively naturalistic experiments that are challenging to design within the constraints imposed on standard laboratory protocols. Psychologist Elizabeth Redcay of the University of Maryland studies social interaction in autism, with a focus on middle childhood. Back in 2010, when she was a postdoctoral fellow working with Rebecca Saxe at the Massachusetts Institute of Technology, Redcay set up a pioneering experiment featuring one participant inside the scanner and another (actually a researcher) outside it interacting live through a video feed. Recorded videos of another interlocutor served as a control. In the live versus the recorded interactions, Redcay saw greater activation in brain areas involved in social cognition and reward.
Her subsequent studies have continued to establish differences in the way the interacting brain responds. In children’s brains, more of the regions involved in thinking about the mental states of others—mentalizing, in the vernacular—are engaged when they believe they are interacting with a peer than when they are not. In studies of joint attention, a critical component of social interaction, Redcay found that the mentalizing regions of the brain such as the temporal parietal junction responded differently when sharing attention rather than when looking at something by oneself. Now she wants to know if there are further differences in how the brains of individuals with autism interact. “Is the extent to which people engage those mentalizing regions related to their successes in a social interaction?” she wonders. It is too soon to say, but clearly, says Redcay, “you’re not getting the full story if you just rely on observer approaches.”
Schilbach has been one of the foremost proponents of what he calls second-person neuroscience. His studies have included virtual characters who seem to respond to a participant’s gaze. In such situations, “the so-called mentalizing network and the action-observation network seem to be much more closely connected [than we knew],” Schilbach says. “They influence each other, sometimes in a complementary and sometimes in an inhibitory fashion.” Schilbach has also found that even very simple acts such as gazing at another individual and believing they are gazing back—an interaction in which you sense that your own behavior has an effect on another person—spurs activity in the brain’s reward circuitry, particularly the ventral striatum. And the more rewarding we find a behavior, the more likely we are to repeat it.
What is happening in the other person’s brain? Eye contact was a logical place to look. Making eye contact activates the social brain and signals to another person that you are paying attention. It is one way we share intention and emotion. Norihiro Sadato of the National Institute for Physiological Sciences in Japan and his colleagues used hyperscanning to show, early in 2019, that eye contact prepares the social brain to empathize by activating the same areas of each person’s brain simultaneously: the cerebellum, which helps predict the sensory consequences of actions, and the limbic mirror system, a set of brain areas that become active both when we move any part of the body (including the eyes) and when we observe someone else’s movements. The limbic system, in general, underlies our ability to recognize and share emotion. In other words, it is critical to regulating our capacity for empathy.
The tales we tell each other are an ideal means of exploring the social glue that binds. Neuroscientist Uri Hasson of Princeton University conducted seminal experiments in brain coupling by using storytelling. In one such study, he put an individual in a scanner and had that person tell a story. Then he put someone new in the scanner and had the volunteer listen to a recording of the story told by the first person. Hasson compared the brain processing of speaker and listener across time, matching activity moment by moment, and he found evidence of the two brains coupling. “The brain of the listener becomes similar to the brain of the speaker,” Hasson says. And the more aligned the brains were, the greater the listener’s reported comprehension. Says Hasson, “Your brain as an individual is really determined by the brains you’re connected to.”
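The published analyses are considerably more sophisticated, but the core move, correlating speaker and listener time series region by region across a range of temporal lags, can be sketched in a few lines. Everything below (the array shapes, function name, and synthetic data) is an illustrative assumption rather than Hasson's actual pipeline:

```python
import numpy as np

def coupling_profile(speaker, listener, max_lag=5):
    """Correlate a listener's regional time series with a speaker's
    at a range of temporal lags (in scan volumes).

    speaker, listener: arrays of shape (n_timepoints, n_regions),
    e.g. z-scored BOLD signals from matched brain regions.
    Returns an (n_lags, n_regions) array of Pearson correlations.
    """
    n_t, n_r = speaker.shape
    lags = range(-max_lag, max_lag + 1)
    out = np.empty((len(lags), n_r))
    for i, lag in enumerate(lags):
        if lag >= 0:   # listener trails the speaker by `lag` volumes
            s, l = speaker[:n_t - lag], listener[lag:]
        else:          # listener anticipates the speaker
            s, l = speaker[-lag:], listener[:n_t + lag]
        for r in range(n_r):
            out[i, r] = np.corrcoef(s[:, r], l[:, r])[0, 1]
    return out

# Synthetic demo: the listener's signal echoes the speaker's three
# volumes later, so the correlation profile should peak at lag +3.
rng = np.random.default_rng(0)
speaker = rng.standard_normal((200, 1))
listener = np.roll(speaker, 3, axis=0) + 0.3 * rng.standard_normal((200, 1))
profile = coupling_profile(speaker, listener)
print("peak lag:", np.argmax(profile[:, 0]) - 5)  # expect 3
```

A peak at a positive lag in this toy corresponds to the listener's activity trailing the speaker's, the basic pattern the coupling studies describe.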
Hasson has recently joined forces with Dartmouth’s Wheatley to see if they can measure brains coupling during conversation. A good conversation, says Wheatley, means “creating new ideas together and experiences you couldn't have gotten to alone.” She wants to see that experience in the brain. Their study includes scanners at different universities connected online. (Most psychology departments only have one scanner.) With one person in each scanner, the subjects complete a story taking turns—one participant utters a few sentences, and the other picks up where the companion left off. If the scientists can capture brain states during this interaction, Wheatley says, they might be able to see how two brains alternately get closer and then move apart from each other during conversation.
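One plausible way to "see" two brains drawing together and drifting apart over a conversation, offered here as a toy illustration rather than the team's actual method, is a sliding-window correlation between the two participants' aligned signals:

```python
import numpy as np

def sliding_coupling(a, b, win=30, step=5):
    """Correlation between two signals inside a sliding window,
    tracing how coupling waxes and wanes over a conversation.

    a, b: 1-D arrays sampled on a shared timebase.
    Returns (window_centers, correlations).
    """
    centers, rs = [], []
    for start in range(0, len(a) - win + 1, step):
        rs.append(np.corrcoef(a[start:start + win],
                              b[start:start + win])[0, 1])
        centers.append(start + win // 2)
    return np.array(centers), np.array(rs)
```

Plotting the returned correlations against window centers would show coupling rising and falling across the exchange, for instance as the speakers trade turns.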
Beyond Pairs
Perhaps inevitably, neuroscientists have moved to studying not just two, but many brains at once. These experiments require the use of EEG because it is portable. Early studies showed that when we engage in group activities like concerts or movies, our brain waves become synchronized—the audience’s rapt attention means they process the symphonic finale or a love or fight scene in the same way. That is not all that surprising, but now scientists are applying the same approach in classrooms, where the findings could add to what we know about how students learn best.
In a series of studies in New York City high schools, a team of New York University researchers including Poeppel, Suzanne Dikker and Ido Davidesco took repeated EEG recordings from every student in a biology class over the course of a semester. They found that students’ brainwaves are more in sync with each other when they are more engaged in class. Brain-to-brain synchrony also reflects how much students like each other and the teacher—closer relationships lead to more synchronization. Their current study is examining whether levels of brain synchrony during class predict retention of material learned. “I think what we’re doing is very useful,” Poeppel says. “How [do we] use these techniques in a targeted way for STEM learning?”
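The researchers' actual synchrony metric is more elaborate, but the gist can be approximated by scoring a class as the average pairwise correlation between every two students' signals. The sketch below, with simulated data standing in for real EEG, is an assumption-laden toy rather than the study's method:

```python
import numpy as np
from itertools import combinations

def group_synchrony(signals):
    """Average pairwise Pearson correlation across a group.

    signals: array of shape (n_students, n_samples) holding one
    band-filtered EEG channel (or component) per student, recorded
    on a common clock. Higher values = more brain-to-brain synchrony.
    """
    pairs = combinations(range(signals.shape[0]), 2)
    r = [np.corrcoef(signals[i], signals[j])[0, 1] for i, j in pairs]
    return float(np.mean(r))

# Toy comparison: an "engaged" class whose members share a common
# driving signal versus a class producing independent noise.
rng = np.random.default_rng(1)
common = rng.standard_normal(1000)
engaged = common + 0.8 * rng.standard_normal((12, 1000))
bored = rng.standard_normal((12, 1000))
print(group_synchrony(engaged) > group_synchrony(bored))  # True
```

By construction, the class sharing a common signal scores higher than the one generating unrelated activity, which is the contrast the classroom studies report between engaged and disengaged lessons.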
Schilbach believes interactive neuroscience has real-life applications in psychiatry as well. It could make it possible to predict which therapist will work best with which patient, for example. And the focus on real-life situations helps ensure that any findings have value for patients. “As a psychiatrist,” Schilbach says, “I’m not interested in helping a person to get better on a particular social cognitive task. I’m trying to help that person to lead a happy and satisfying life.”