Friday, January 31, 2025

The Power of Small Brain Networks



You know that we could actually piece together a model logic processor out of all this.

It is certainly where we are going, just because we have it as our mathematics.

We are getting glimmers of just how this all works.  It actually took decades, and it is getting clearer.


The Power of Small Brain Networks


It only takes four neurons to achieve big things.

By Elena Renken
November 15, 2024

https://nautil.us/the-power-of-small-brain-networks-1138987/

Small may be mightier than we think when it comes to brains. This is what neuroscientist Marcella Noorman is learning from her research into tiny animals like fruit flies, whose brains hold around 140,000 neurons each, compared to the roughly 86 billion in the human brain.

In work published earlier this month in Nature Neuroscience, Noorman and colleagues showed that a small network of cells in the fruit fly brain was capable of completing a highly complex task with impressive accuracy: maintaining a consistent sense of direction. Smaller networks were thought to be capable of only discrete internal mental representations, not continuous ones. These networks can “perform more complex computations than we previously thought,” says Noorman, an associate at the Howard Hughes Medical Institute.


You know which way you’re facing even if you close your eyes and stand still.

The scientists monitored the brains of fruit flies as they walked on tiny rotating foam balls in the dark, and recorded the activity of a network of cells responsible for keeping track of head direction. This kind of brain network is called a ring attractor network, and it is present in both insects and in humans. Ring attractor networks maintain variables like orientation or angular velocity—the rate at which an object rotates—over time as we navigate, integrating new information from the senses and making sure we don’t lose track of the original signal, even when there are no updates. You know which way you’re facing even if you close your eyes and stand still, for example.

After finding that this small circuit in fruit fly brains—which contains only about 50 neurons in the core of the network—could accurately represent head direction, Noorman and her colleagues built models to identify the minimum size of a network that could still theoretically perform this task. Smaller networks, they found, required more precise signaling between neurons. But hundreds or thousands of cells weren’t necessary for this basic task. As few as four cells could form a ring attractor, they found.

“Attractors are these beautiful things,” says Mark Brandon of McGill University, who was not involved in the study. Ring attractor networks are a type of “continuous” attractor network, used not just to navigate, but also for memory, motor control, and many other tasks. “The analysis they did of the model is very thorough,” says Brandon of the study. If the findings extend to humans, they hint that even small brain circuits could be capable of more than researchers thought.

Noorman says a lot of neuroscience research focuses on large neural networks, but she was inspired by the tiny brain of the fruit fly. “The fly’s brain is capable of performing complex computations underlying complex behaviors,” she says. The findings may have implications for artificial intelligence, she says. “Certain kinds of computations might only require a small network,” she says. “And I think it’s important that we keep our minds open to that perspective.”

The Spiritual Consciousness of Christof Koch





An interesting discussion.  I do think that all life has consciousness, but is also deeply networked.  It is simple enough to imagine extending our own experience with computers.  We could not have imagined this a century ago.

Now consider something else.  Imagine a network as a consciousness embracing just our Earth.  What can such a consciousness perceive?  How about the thoughts of six billion of us?  How about the thoughts of all our ancestors as well?

Local perception is a convenience, but scaling up or down is completely different.  Again, we are ignoring the scaling problem.  We do not even have natural limits.

The Spiritual Consciousness of Christof Koch

What the neuroscientist is discovering is both humbling and frightening him.

By Steve Paulson
October 13, 2021




Consciousness is a thriving industry. It’s not just the meditation retreats and ayahuasca shamans. Or the conferences with a heady mix of philosophers, quantum physicists, and Buddhist monks. Consciousness is a buzzing business in neuroscience labs and brain institutes. But it wasn’t always this way. Just a few decades ago, consciousness barely registered as a credible subject for science.

Perhaps no one did more to legitimize its study than Francis Crick, who launched a second career in neurobiology after cracking the genetic code. In the 1980s Crick found a brilliant collaborator in the young scientist Christof Koch. In some ways, they made an unlikely team. Crick, a legend in science, was an outspoken atheist, while Koch, 40 years younger, was a Catholic yearning for ultimate meaning. Together, they published a series of pioneering articles on the neural correlates of consciousness until Crick died in 2004.


WHAT’S THE BUZZ: Bees have all the complicated brain components that humans have, but in a smaller package. “So yes, I do believe it feels like something to be a honey bee,” Christof Koch says.


Koch went on to a distinguished career at Caltech before joining the Allen Institute for Brain Science in Seattle. Today, as the president and chief scientific officer, he supervises several hundred scientists, engineers, and informatics experts trying to map the brain and figure out how our neural circuits process information. The Institute recently made news with the discovery of three giant neurons connecting many regions of the mouse brain, including one that wraps around the entire brain. The neurons extend from a set of cells known as the claustrum, which Crick and Koch maintained could act as a seat of consciousness.

Koch is one of the great thinkers about consciousness. He has a philosophical frame of mind and jumps readily from one big idea to the next. He can talk about the tough ethical decisions regarding brain-impaired patients and also zoom out to give a quick history of Christian thinking on the soul. In our conversation, he ranged over a number of far-out ideas—from panpsychism and runaway artificial intelligence to the consciousness of bees and even bacteria.



You’ve said you always loved dogs. Did growing up with a dog lead to your fascination with consciousness?

I’ve wondered about dogs since early childhood. I grew up in a devout Roman Catholic family, and I asked my father and then my priest, “Why don’t dogs go to heaven?” That never made sense to me. They’re like us in certain ways. They don’t talk, but they obviously have strong emotions of love and fear, hate and excitement, of happiness. Why couldn’t they be resurrected at the end of time?

Are scientific attitudes about animal consciousness simplistic?

The fact is, I don’t even know that you’re conscious. The only thing I know beyond any doubt—and this is one of the central insights of Western philosophy—is Cogito ergo sum. What Descartes meant is the only thing I’m absolutely sure of is my own consciousness. I assume you’re conscious because your behavior is similar to mine, and I could see your brain if I put you in an MRI scanner. When you have a patient who’s locked-in, who can’t talk to me, I have to infer it. The same with animals. I can see they’re afraid when it’s appropriate to be afraid, and they display all the behavioral traits of being afraid, including the release of hormones in their bloodstream. If you look at a piece of dog brain or mouse brain and compare that to a piece of human brain the same size, only an expert with a microscope can tell for sure that this is a dog brain or a human brain. You really have to be an expert neuroanatomist.

We share much of our evolutionary history with dogs and even dolphins. But what about lizards or ants? What about bacteria? Can they be conscious?

It becomes progressively more difficult. The brain of a bird or a lizard has a very different evolutionary history, so it becomes more difficult to assert without having a general theory. Ultimately, you need a theory that tells us which physical systems can be conscious. By the time you get to a worm, let alone to bacteria, you can believe that it feels like something to be a worm because that’s ultimately what consciousness is. If it feels like something to be a worm, then it’s conscious. Right now, most people believe it doesn’t feel like anything to be my iPhone. Yet it may well be true that it feels like something to be a bee. But it’s not easy to test that assertion in a scientific way.

What do you mean when you say “it feels like something?”

It feels like something to be you. I can’t describe it to you if you’re a zombie. If you were born blind, I can never describe what it means to see colors. You are simply unable to comprehend that. So it is with consciousness. It’s impossible to describe it unless you have it. And we have these states of consciousness unless we are deeply asleep or anesthetized or in a coma. In fact, it’s impossible not to be conscious of something. Even if you wake up discombobulated in a dark hotel room, you’re jet-lagged and your eyes are still closed, you are already there. Before there was just nothing, nada, rien. Then slowly some of your brain boots up and you realize, “Oh, I’m here. I’m in Beijing and I flew in last night.” The difference between nothing and something is a base-level consciousness.

Is this self-awareness?

It’s even much simpler. I might not even know who I am when I’m waking up. It takes time to boot up and realize who you are, where you are, what time of day it is. First, you open your eyes and just see darkness. Darkness is different from nothing. It’s not that I see darkness behind my head; I just don’t see at all. That’s what consciousness is. It’s a basic feeling.


We might be surrounded by consciousness everywhere and find it in places where we don’t expect.

You said bees could be conscious. They do amazing things, and yet they have tiny brains.

Yes, they do very complicated things. We know that individual bees can fly mazes. They can remember scents. They can return to a distant flower. In fact, they can communicate with each other, through a dance, about the location and quality of a distant food source. They have facial recognition and can recognize their beekeeper. Under normal conditions, they would never sting their beekeeper; it’s probably a combination of visual and olfactory cues.

Their brains contain roughly a million neurons. By comparison, our brains contain about 100 billion, so a hundred thousand times more. Yet the complexity of the bee’s brain is staggering, even though it’s smaller than a piece of quinoa. It’s roughly 10 times higher in terms of density than our cortex. They have all the complicated components that we have in our brains, but in a smaller package. So yes, I do believe it feels like something to be a honey bee. It probably feels very good to be dancing in the sunlight and to drink nectar and carry it back to their hive. I try not to kill bees or wasps or other insects anymore.

You’re talking about the consciousness of an individual bee—not the hive, which has another level of complexity.

I’m talking about the potential for sentience in individual bees. Would we exclude them because they can’t talk? Well, lots of people can’t talk. Babies can’t talk, impaired patients can’t talk. Because they don’t have a human brain? Well, that’s completely arbitrary. Yes, their evolution diverged from us 250 million years ago or so, but they share with us a lot of the basic metabolism and machinery of the brain. They have neurons, ionic channels, neurotransmitters, and dopamine just like we have.

So brain size is not the key factor in consciousness?

That’s entirely correct. In fact, there’s no principled reason to assume that brain size should be the be-all and end-all of consciousness.

We also know Neanderthals had bigger brains than the Homo sapiens who lived near them in Europe. Yet we survived and they didn’t.

Their brain was maybe 10 percent larger than our brain. We don’t know why we survived. Did we just outbreed them? Were we more aggressive? There’s some research showing that dogs play a role here. At the same time when Homo neanderthalensis became extinct—around 35,000 years ago—Homo sapiens domesticated the wolf and they became the two apex hunters. Homo sapiens and wolves/dogs started to collaborate. We became this ultra-efficient hunting cooperative because we now had the ability to be much more efficient at hunting down prey over long distances and exhausting them. So the creature with the larger brain didn’t survive and the one with the smaller brain did.


FATAL INTELLIGENCE: Given the probable existence of trillions of planets, why haven’t we detected life elsewhere? It’s likely, Christof Koch says, that sufficiently complex and intelligent life would destroy itself. NASA

Why were humans able to create civilizations that have transformed the planet?

We don’t have a precise answer. We have big brains and are, by some measure, the most intelligent species, at least in the short term. We’ll see whether we’ll actually survive in the long term, given our propensity for mass violence. And we’ve manipulated the planet to such an extent that we are now talking about entering a new geological age, the Anthropocene. But it’s unclear why whales or dolphins—some of which have bigger brains and more neurons in their cortex than we do—why they are not called smarter or more successful. Maybe because they have flippers and live in the ocean, which is a relatively static environment. With flippers, you’re unable to build sophisticated tools. Of course, human civilization is all about tools, whether it’s a little stone, an arrow, a bomb, or a computer.

So hands are crucial for their ability to manipulate tools.

You need not only a brain, but also hands that can manipulate the environment. Otherwise, you can think about the world but you can’t act upon it. That’s probably why this particular species of primate excelled and took over the planet.

There are fascinating questions about how deep consciousness goes. You’ve embraced the old philosophy of panpsychism. Isn’t this the idea that everything in nature has some degree of consciousness or mind?

Yes, there’s this ancient belief in panpsychism: “Pan” meaning “every,” “psyche” meaning “soul.” There are different versions of it depending on which philosophical or religious tradition you follow, but basically it meant that everything is ensouled. Now, I don’t believe that a stone is ensouled or a planet is ensouled. But if you take a more conceptual approach to consciousness, the evidence suggests there are many more systems that have consciousness—possibly all animals, all unicellular bacteria, and at some level maybe even individual cells that have an autonomous existence. We might be surrounded by consciousness everywhere and find it in places where we don’t expect it because our intuition says we’ll only see it in people and maybe monkeys and also dogs and cats. But we know our intuition is fallible, which is why we need science to tell us what the actual state of the universe is.


The Internet and runaway AI will not have our value system. It may not care at all about humans. Why should it?

Most scientists would dismiss panpsychism as ancient mythology. Why does this idea resonate for you?

It’s terribly elegant in its simplicity. You don’t say consciousness only exists if you have more than 42 neurons or 2 billion neurons or whatever. Instead, the system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe. Why is that so? I don’t know. Why does the universe follow the laws of quantum mechanics? I don’t know. Can I imagine a universe where the laws of quantum mechanics don’t hold? Yes, but I don’t happen to live in such a universe, so I believe our universe has certain types of complexity and a system that gives rise to consciousness. Suddenly the world is populated by entities that have conscious awareness, and that one simple principle leads to a number of very counterintuitive predictions that can, in principle, be verified.

So it all comes down to how complex the system is? And for the human brain, how its neurons and synapses are wired together?

It comes down to the circuitry of the brain. We know that most organs in your body do not give rise to consciousness. Your liver, for example, is very complicated, but it doesn’t seem to have any feelings. We also know that consciousness does not require your entire brain. You can lose 80 percent of your neurons. You can lose the little brain at the back of your brain called the cerebellum. There was recently a 24-year-old Chinese woman who discovered, when she had to get a brain scan, that she has absolutely no cerebellum. She’s one of the extremely rare cases of people born without a cerebellum, including deep cerebellar nuclei. She never had one. She talks in a somewhat funny way and she’s a bit ataxic. It took her several years to learn how to walk and speak, but you can communicate with her. She’s married and has a child. She can talk to you about her conscious experiences. So clearly you don’t need the cerebellum.

Yet the cerebellum has everything you expect of neurons. It has gorgeous neurons. In fact, some of the most beautiful neurons in the brain, so-called Purkinje cells, are found in the cerebellum. Why does the cerebellum not contribute to consciousness? It has a very repetitive and monotonous circuitry. It has 69 billion neurons, but they have simple feed-forward loops. So I believe the way the cerebellum is wired up does not give rise to consciousness. Yet another part of the brain, the cerebral cortex, seems to be wired up in a much more complicated way. We know it’s really the cortex that gives rise to conscious experience.

It sounds like you’re saying our intelligence comes from this wiring, not from some special substance in the neurons. Could a conscious system be made of something totally different?

That’s correct. There’s nothing inherently magical about the human brain. It obeys all the laws of physics like everything else in the universe. There isn’t anything supernatural that’s added to my brain or my cortex that gives rise to a conscious experience.

Is it like a computer?

A computer shares some similarities with the brain, but this is a metaphor and that can be dangerous. One is evolved, the other one is constructed. In the one case you have software and hardware. It’s much more difficult to make that distinction in the brain. I think we have to be cautious about comparisons between a brain and a computer. But in theory, a system that’s complex enough could be conscious. It may be possible that human-built artifacts would feel like something and would also experience the world.

The Internet is an extremely complex system. Could it feel happy or depressed?

If a computer or the Internet has sentience, the challenge is how we relate its conscious state to ours because its evolutionary history is radically different. It doesn’t have our senses or our reward systems. Of course, this is also a threat. The Internet and runaway AI will not have our value system. It may not care at all about humans. Why should it? We don’t care about ants or bugs. Most of us don’t even care about chickens or cows except when we want to eat them. This is a concern moving forward if we endow these entities not just with consciousness but intelligence. Is that really such a good idea?

We’re not the dominant species on the planet because we are wiser or swifter or more powerful. It’s because we’re more intelligent and ruthless. If we build intelligent systems that exceed even our intelligence, we may believe we can control them. “Oh yeah, I always have this kill-switch. Don’t worry, it’ll be OK.” Well, one day somebody’s going to say, “Oops, I didn’t want that. I didn’t mean that to happen.” And it may be our last invention.


I’m not a mystic. I’m a scientist. But this is a feeling I have. I find myself in a wonderful universe with a very positive and romantic outlook on life.

That’s the scenario in a lot of science fiction. But you really believe artificial intelligence could develop a certain level of complexity and wipe us out?

This is independent of the question of computer consciousness. Yes, if you have an entity that has enough AI and deep machine learning and access to the Cloud, etc., it’s possible in our lifetime that we’ll see creatures that we can talk to with almost the same range of fluidity and depth of conversation that you and I have. Once you have one of them, you replicate them in software and you can have billions of them. If you link them together, you could get superhuman intelligence. That’s why I think it behooves all of us to think hard about this before it may be too late. Yes, there’s a promise of untold benefits, but we all know human nature. It has its dark side. People will misuse it for their own purposes.

How do we build in those checks to make sure computers don’t rule the world?

That’s a very good question. The only reason we don’t have a nuclear bomb in every backyard is because you can’t build it easily. It’s hard to get the material. It takes a nation state and tens of thousands of people. But that may be different with AI. If current trends accelerate, it may be that 10 programmers in Timbuktu could unleash something truly malevolent onto mankind. These days, I’m getting more pessimistic about the fate of a technological species such as ours. Of course, this might also explain the Fermi paradox.

Remind us what the Fermi paradox is.

We have yet to detect a single intelligent species, even though we know there are probably trillions of planets. Why is that? Well, one explanation is it’s just extremely unlikely for life to arise and we’re the only one. But I think a more likely possibility is that any time you get life that’s sufficiently complex, with advanced technology, it has somehow managed to annihilate itself, either by nuclear war or by the rise of machines.

JUST LIKE HEAVEN: “In a cathedral, I get a feeling of luminosity out of the numinous,” says Christof Koch. Gaudi’s La Sagrada Familia is seen above. “You can get that feeling without being a Catholic.” Pixabay

You are a pessimist! You really think any advanced civilization is going to destroy itself?

If it’s very aggressive like ours and it’s based in technology. You can imagine other civilizations that are not nearly as aggressive and live more in harmony with themselves and nature. Some people have thought of it as a bottleneck. As soon as you develop technology to escape the boundary of the planet, there’s an argument that civilization will also develop computers and nuclear fusion and fission. Then the question is, can it grow up? Can it become a full-grown, mature adult without killing itself?

You have embraced Integrated Information Theory, which was developed by your colleague Giulio Tononi. What can this tell us about consciousness?

The Integrated Information Theory of consciousness derives a mathematical calculus and gives rise to something known as a consciousness meter, which a variety of clinical groups are now testing. If you have an anesthetized patient, or a patient who’s been in a really bad traffic accident, you don’t really know if this person is minimally conscious or in a vegetative state; you treat them as if they’re conscious, but they don’t respond in any meaningful way.

How can you be sure they’re conscious?

You’re never really sure. So you want a brain-based test that tells you if this person is capable of some experience. People have developed that based on this integrated information theory. That’s big progress. The current state of my brain influences what happens in my brain the next second, and the past state of my brain influences what my brain does right now. Any system that has this cause-effect power upon itself is conscious. It derives from a mathematical measure. It could be a number that’s zero, which means a system with no cause-effect power upon itself. It’s not conscious. Or you have systems whose “Phi” is different from zero. The Phi measures, in some sense, the maximum capacity of the system to experience something. The higher the number, the more conscious the system.
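The actual Phi calculus is far more involved than this, but the flavor of "cause-effect power upon itself" can be caricatured (this crude predictive-information gap is not Tononi's Phi) by comparing a two-bit system whose parts each predict their own futures with one whose dynamics exist only at the whole-system level:

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits from a list of equally likely (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration(update):
    """Whole-system predictive info minus the sum over single-node parts."""
    states = list(product([0, 1], repeat=2))
    pairs = [(s, update(s)) for s in states]
    whole = mutual_info(pairs)
    parts = sum(mutual_info([(s[i], update(s)[i]) for s in states])
                for i in range(2))
    return whole - parts

# Two independent bits that each copy themselves: the parts explain everything.
def copy(s): return (s[0], s[1])
# Two bits that swap: each node's future depends only on the *other* node.
def swap(s): return (s[1], s[0])
```

Here `integration(copy)` comes out to 0 bits, while `integration(swap)` comes out to 2 bits: cutting the swapping system into its parts destroys all of its predictive structure, so the information exists only in the whole.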

So you could assign a number to everything that might have some degree of consciousness—whether it’s an ant, a lizard, bacteria, or a vegetative human being?

Yes, you or me, the Dalai Lama or Albert Einstein.

The higher the number, the more conscious?

The number by itself doesn’t tell you it’s now thinking, or is conscious of an image or a smell. But it tells you the capacity of the system to have a conscious experience. In some deep philosophical sense, the number tells you how much it exists. The higher the number, the more the system exists for itself. There isn’t a Turing Test for consciousness. You have to look at the way the system is built. You have to look at the circuitry, not its behavior, whether it’s a computer or a biological brain. This has now been tested and validated in many patients, including locked-in patients who are fully conscious, people under anesthesia who are not conscious, people in deep sleep, and those in vegetative states or minimal-conscious states. So the question now is whether this can be turned into something practical that can be used at every clinic in the country or the world to test patients who’ve just been in a bad traffic accident.

Obviously, there are huge implications. Do you turn off the life-support machines?

First, does the patient suffer or is nobody home anymore? In the famous case of Terri Schiavo, we could tell the brain stem was still functioning but there wasn’t anybody home. Her consciousness had disappeared 15 years earlier.

Isn’t there still the old “mind-body problem?” How do three pounds of goo in the human brain, with its billions of neurons and synapses, generate our thoughts and feelings? There seems to be an unbridgeable gap between the physical world and the mental world.

No, it’s just how you look at it. The philosopher Bertrand Russell had this idea that physics is really just about external relationships—between a proton and electron, between planets and stars. But consciousness is really physics from the inside. Seen from the inside, it’s experience. Seen from the outside, it’s what we know as physics, chemistry, and biology. So there aren’t two substances. Of course, a number of mystics throughout the ages have taken this point of view.

It does look strange if you grew up like me, as a Roman Catholic, believing in a body and a soul. But it’s unclear how the body and the soul should interact. After a while, you realize this entire notion of a special substance that can’t be tracked by science—that I have but animals don’t have, which gets inserted during the developmental process and then leaves my body—sounds like wishful thinking and just doesn’t cohere with what we know about the actual world.

It sounds like you lost your religious faith as you learned about science.

I lost my religious faith as I matured. I still look fondly back upon it. I still love the religious music of Bach. I still get this feeling of awe. In a cathedral, I can get a feeling of luminosity out of the numinous. When I’m on a mountain top, when I hear a dog howling, I still wake up some mornings and say, “I’m amazed that I exist. I’m amazed there is this world.” But you can get that without being a Catholic.

Does that experience of awe or the numinous feel religious?

Not in a traditional sense. I was raised to believe in God, the Trinity, and particularly the Resurrection. Unfortunately, I now know four words: “No brain, never mind.” That’s bad news. Once my brain dies, unless I can somehow upload it into the Cloud, I die with it. I wish it were otherwise, but I’m not going to believe something if it’s opposed by all the facts.

A few years ago, you and some other scientists spent a week with the Dalai Lama. Was that a meaningful experience?

Yes, it was. There were thousands of monks in the Drepung Monastery who were listening to our exchange. This particular Tibetan Buddhist tradition is quite fascinating. I’m not a scholar of it, but they view the mind primarily from an interior perspective. They’ve developed very sophisticated ways of analyzing it that are different from our way. We take the external way of Western science, which is independent of the observer. But ultimately, we’re trying to approach the same thing. We’re trying to approach this phenomenon of conscious experience. They have no trouble with the idea of evolution and other creatures being sentient. I found that very heartening—in particular the Dalai Lama’s insistence on the primacy of science. I asked him, “What happens if science is in conflict with certain tenets of Buddhist faith?” He laughed and said, “Well, if this belief doesn’t accord with what science ultimately discovers about the universe, then we have to throw it out.”

But the Dalai Lama believes in reincarnation.

We talked about that. In fact, I said, “Well, I’m really sorry, Your Holiness, but I think we just have to agree that Western science shows that if there’s no physical carrier, you’re not going to get a mind. You’re not going to get memory because you need some mechanism to retain the memory.” I asked him, “Were you not reincarnated from the previous Dalai Lama?” And he just laughed and said, “Well, I don’t remember anything about that anymore.”

Has this scientific knowledge helped you sort out the deep existential questions about meaning, about why we’re here?

My last book is titled Confessions of a Romantic Reductionist. I’m a reductionist because I do what scientists do. I take a complex phenomenon and try to pull it apart and reduce it to something at a lower level. I’m also romantic in the sense that I believe I can decipher the distant contrails of meanings. I find myself in a universe that seems to be conducive to life—the Anthropic Principle. And for reasons I don’t understand, I also find myself in a universe that became conscious, ultimately reflecting upon itself. Who knows what might happen in the future if we continue to evolve without destroying ourselves? To what extent can we become conscious of the universe as a whole?

I don’t know who put all of this in motion. It’s certainly not the almighty God I was raised with. It’s a god that resides in this mystical notion of all-nothingness. I’m not a mystic. I’m a scientist. But this is a feeling I have. I find myself in a wonderful universe with a very positive and romantic outlook on life. If only we humans could make a better job of getting along with each other.

The Faster-Than-Light Flash That (Probably) Gave Birth to Our Universe


 



When I see these discussions, I am amazed at how everyone ignores the core reality: the act of creation is best defined as an act of consciousness, and this creates TIME and sublight matter.

Sublight tells us that all observed matter will also be bounded, just like a Galaxy.  Every Galaxy is produced by a local BIG BANG, and all matter is sublight and at best travels to the edge of its Galaxy.  I consider it most likely that every planet and star is also a BIG BANG.

The moment you recall sublight, the rest follows naturally.  In fact, Galaxies become obvious as a maximal sublight expression of our universe.


The Faster-Than-Light Flash That (Probably) Gave Birth to Our Universe

By CHARLIE WOOD

A few months ago, I wrote about one of the wildest and most controversial theories of reality, the multiverse. Today I’ll describe the less wild but still somewhat controversial theory that birthed it: cosmic inflation.

https://mailchi.mp/quantamagazine.org/why-colliding-particles-reveal-reality-4865987?e=69d36d2113

When astronomers observe the sky, they see a few features that cry out for explanation. One is that the universe appears to be as evenly mixed as the milk in a well-stirred cup of coffee. Similar numbers of galaxies lie in every direction, even though the universe doesn’t seem to have been around long enough for one side to mix with the other. This is the “horizon problem.”


Another is that based on observations of ancient light, space-time appears to have no measurable curvature. This “flatness” requires the universe to maintain a very special density of matter and energy for billions and billions of years. This is the “flatness problem.”


The standard Big Bang theory — the notion that the universe started out as a small, hot, dense patch of space that has been steadily expanding ever since — doesn’t address either issue. This gentle expansion doesn’t bring new regions of the cosmos into contact in a way that could facilitate mixing. It would also tend to push the density of matter and energy away from the “critical” density that keeps space-time flat.


In the 1980s, Alan Guth, Andrei Linde, Paul Steinhardt and others addressed these and other problems by adding a prologue to the Big Bang story. All we see, they posited, sprang from a subatomic speck of space where everything inside was thoroughly mixed up, a condition that solved the horizon problem. Then, for a tiny fraction of a second, this speck swelled exponentially, doubling in size many dozens of times. In this moment, the universe expanded at a rate far faster than the speed of light (a feat possible only for the fabric of space-time itself). This “inflation” stretched out any initial curvature, solving the flatness problem.
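To get a feel for what "doubling in size many dozens of times" does, here is a small illustrative sketch in Python. The specific count of 60 doublings is a common textbook figure, not a number from this newsletter, and the function names are mine:

```python
def expansion_factor(doublings: int) -> float:
    """Each doubling multiplies every length in the speck by 2."""
    return 2.0 ** doublings

def curvature_suppression(doublings: int) -> float:
    """Spatial curvature scales like 1/a^2, so each doubling
    divides any initial curvature by 4."""
    return expansion_factor(doublings) ** -2

# Sixty doublings stretch lengths by more than a factor of 10^18,
# and dilute curvature by more than 10^36 -- enough to make any
# initially curved patch look perfectly flat.
```

This is the arithmetic behind inflation's solution to the flatness problem: exponential stretching drives curvature toward zero far faster than ordinary expansion could.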


Decades of astronomical observations have aligned with inflation’s predictions, firmly establishing it as the leading theory of how the cosmos came to be. “I don’t think that one should seriously question inflation,” Zohar Komargodski, a theoretical physicist at Stony Brook University, told me on a recent call. “There is so much evidence.” Yet theorists eventually realized that inflation came with an awkward side effect: While the exponential growth may have stopped in our region, it likely would have continued in other regions, leading to the eternal production of bubble universes. This “multiverse” has turned some cosmologists against inflation. More on that in a future newsletter.


What's New and Noteworthy


One apparent advance occurred in 2014. Inflation should have stretched tiny ripples in space-time in a way that affects light traveling across the cosmos. And after about a decade of searching for the subtle signal using telescopes located at the South Pole, an international collaboration of scientists announced they had found it. Pioneers of inflation popped champagne in celebration of the theory’s most direct proof yet.


But within the year, the claim fell apart. Dust in our galaxy had twisted light much as inflation would have. Researchers continue to search for the ripples of inflation but haven’t found them yet. The theory doesn’t predict how strong these ripples should be. So while detecting them would strongly support the theory, not detecting them does little to rule it out.


A selling point of inflation is that it explains why the sky looks the way it does. As the universe swelled in size, tiny quantum fluctuations deposited dollops of energy here and there, which became today’s galaxies. So in principle, physicists can learn about the rules governing the quantum realm through careful cosmological measurements. Some of those rules are unknown, because they operate at distances too small for colliders to probe. Theoretical physicists are trying to figure out how to read those unknown rules from the smattering of galaxies in the sky. It’s a big challenge, but Quanta has covered developments in this research program in 2019 and 2021.


In 2022, two analyses of astronomical data found just such a hint of new quantum laws. By studying huge numbers of tetrads of galaxies, researchers counted more groups of one orientation than another — suggesting that quantum fluctuations during inflation were similarly lopsided in a way that tried-and-true quantum rules can’t explain. The discovery may not hold up to scrutiny, but if it’s confirmed it could be Nobel-worthy, Quanta reported at the time.

How Neanderthals Kept Our Ancestors Warm



I have seen one report suggesting a remote Neanderthal band out in the woods, clearly looking for mates.  The core advantage of humanity was simply numbers.  This literally absorbed the scattered Neanderthal population, just as today we watch First Nations populations also being absorbed.

The genetic information was absorbed by the women, allowing their children to exist.  Unwelcome aspects seem to then drop away.  We really do not have the data, but we do now need to look far more precisely at our animal models to discover how all this happens.

Any experiment with mice can select by cutting out various populations of males in the breeding population.

This is well worth doing because it will provide empirical direction for all forms of animal breeding.  Even an actual equation!




How Neanderthals Kept Our Ancestors Warm

New DNA studies reveal more benefits from our hominin friends

By Adam Piore January 24, 2025

https://nautil.us/how-neanderthals-kept-our-ancestors-warm-1185278/

Beneath a Medieval castle in Ranis, Germany, a cave sheltered the remains of six humans who died more than 45,000 years ago. Not long ago, scientists sequenced their genomes—the oldest known set of Homo sapiens DNA ever found in Europe. Not much is known about what the lives of these ancient people were like. But this much seems certain: They were probably very cold.




To stay alive in an Ice-Age environment more akin to present-day Siberia than Germany, the early humans—a mother, daughter, and four distant cousins—would have needed cultural and physical traits foreign to their ancestors in Africa. They likely wrapped themselves in hides and furs culled from woolly rhinoceroses, reindeer, and other big game killed on the steppes of their frigid home. Fire would have been important.




The recent analysis of the ancient DNA, derived from 13 bone fragments, suggests these early humans adapted to their icy surroundings with physical traits passed on by their former mates: Neanderthals. The results, reported in Nature last month, identified large segments of Neanderthal DNA in the human genome. A similar study published the same month in Science shows how Neanderthals helped keep some modern human ancestors warm. Both studies offer further evidence of how Neanderthal DNA helped those ancestors survive.




Neanderthal genes were passed on to humans that helped them spread across the world.




Early humans and Neanderthals hooked up outside of Africa, including in Europe, from about 50,000 to 43,000 years ago. (They mated in the Middle East as far back as 100,000 years ago.) In the recent Science paper, researchers show that Neanderthal genes related to skin color, metabolism, and immune function seemed to be the most common across the sample of early humans.




“Because Neanderthals were living outside of Africa for several thousand years before modern humans arrived there, they presumably were adapted to the climate and adapted to life outside Africa,” says geneticist Manjusha Chintalapati, a former postdoctoral fellow at the University of California, Berkeley, who is now at the company Ancestry DNA. “So when Neanderthals and humans interbred, genes were passed on to humans that helped them adapt to that climate and spread across the world.”




Similar findings have been reported before in other papers. But none had ever examined such a large sample of human DNA. The authors of the Science paper examined 59 previously sequenced ancient Homo sapiens who lived in Europe and Western and Central Asia over the past 45,000 years, and the complete genomes of 300 contemporary humans.




“The novelty in our study comes from the fact that we looked at these Neanderthal ancestry segments in all samples,” Chintalapati says. “Our study shows that these regions were at high frequency since probably a hundred generations after the initial event. So that was probably quite beneficial to humans.” The Neanderthal variants related to skin color conferred lighter skin, which likely made it easier to absorb vitamin D—crucial for bone health—in conditions of low sunlight.




Thanks to molecular biologist Svante Pääbo, we’ve known since 2010 that most early humans and Neanderthals were more than just neighbors. The pioneering researcher at the Max Planck Institute for Evolutionary Anthropology, in Germany, sequenced the first Neanderthal genome and subsequently won a Nobel Prize for the innovations that allowed him to do so. At the time, the revelation of crossbreeding surprised the world. But it also explained the origins of large chunks of DNA found at that time in humans of European ancestry, which were entirely absent in those native to Africa—chunks far too varied to have evolved gradually in humans on their own. Today scientists estimate that most present-day human genomes, including those of people living in Africa, contain at least some Neanderthal DNA.




Tony Capra, an evolutionary genomics professor at the University of California, San Francisco, has no doubt that a small portion of Neanderthal DNA likely made a big difference in Ice-Age Europe. He has spent the last decade combining high-powered computational techniques, genetic sequencing, and medical records databanks to analyze the effects of Neanderthal DNA on contemporary humans.




The most powerful genetic Neanderthal signals found to date have been in the immune system.




He has found, among other things, that the DNA affecting metabolic pathways—biochemical reactions linked together in a cell—changed the way most modern humans break down fat. Since the game these humans hunted in colder climes tended to have fatty deposits to keep them warm, genetic variants that might have helped early humans more quickly process fat for energy would have given them an edge.




Neanderthal DNA also likely helped modern humans survive threats that went beyond the challenges of the cold climate. One intriguing variant identified by Capra in 2016 relates to blood clotting. Using medical records, Capra and his team linked the variant to thrombosis, which can increase the risk of a heart attack or cancer.




But it’s not hard to imagine how humans might have benefited from having it, says Chris Stringer, an evolutionary anthropologist at London’s Natural History Museum. Life was rough then. “People were hunting dangerous animals,” Stringer says. “They were working with sharp stones for tools that could cut them. Women were giving birth without medical support. [They] picked [the variant] up because to have a gene that actually sped up the process of blood clotting was good news 50,000 years ago.” But modern sedentary lifestyles and longer lives come with a greater risk of thrombosis.




The variant, which also would have reduced the risk of infection by quickly sealing wounds, is just one of many that helped the body fight environmental pathogens, Stringer says. The most powerful genetic Neanderthal signals found to date have been in the immune system. Since Homo sapiens evolved in Africa, most of the natural defenses to pathogens and parasites they developed were endemic to the local conditions. Neanderthals had evolved defenses against microscopic threats in the new environment.




The conspicuous absence of Neanderthal genes suggests they were weeded out by the evolutionary process.




Most of the Neanderthal immune variants that persist in the genomes of humans code for certain proteins, known as human leukocyte antigens, that get expressed on the surface of most cells. These molecules bind to small fragments of compounds within the cell, and then display them on the cell surface. The compounds on display serve as identification markers, allowing patrolling immune cells to identify bodily threats and mount an immune response when pathogens are detected.




The immune system is among the fastest evolving parts of the body, and it benefits from having lots of genetic variation, “especially genetic variation from people that have seen different kinds of viruses or pathogens,” Stringer says. “Neanderthals had been living in Asia and Europe for hundreds of thousands of years before modern humans ever got there. And so by interbreeding within Neanderthals, we got some genetic variants that were preadapted to the pathogens and environments that they were living in.”




It’s hard to say how much credit Neanderthal genes should get for any single useful trait. “Even when we look at some of these positive effects, we can’t really say that we should thank Neanderthals entirely for some new adaptation,” Capra says. “They contributed some genetic variation that is a small fraction of all the genetic variation that controls that trait. So a lot of these traits I’m talking about, there are hundreds or thousands of different parts of the genome that influence them, and Neanderthals contribute a few of those.”




For Capra, the most interesting finding in the recent Science paper wasn’t what Neanderthal DNA did for some non-African early humans but what it failed to do. Vast stretches of the human genome—segments associated with essential biological functions, like sexual reproduction and social interactions—were entirely devoid of Neanderthal DNA, Capra says.




The conspicuous absence of Neanderthal genes suggests they were selected against, weeded out by the evolutionary process. And the speed with which that happened, he says, suggests those who inherited those genes were at a profound disadvantage and perished. What wasn’t working? Genes involved in male fertility, including many expressed in testis or on the X chromosome, are mostly without Neanderthal DNA. For Capra, this suggests that male hybrids may have been less fertile.




The results had Capra wondering what it was about humans, the ways they thought and behaved, that allowed them to survive when so many of their fellow hominins fell. Did Neanderthals have to die out? We may never know. But at least we’re seeing more clearly how Neanderthals live on today.

Thursday, January 30, 2025

The Jagged, Monstrous Function That Broke Calculus




It is worth the read.  He shows us that contemplation of a specific function led directly to his effective invention of analysis.  This still gives students headaches today, right about the time they think they are hot.

My own expansion of the Pythagorean metric came from creating and contemplating what I call the axial equation.  And this rigor led to the SPACE TIME pendulum.

So a little history is in order, and every mathematician knows him.  Fractal analysis is an emergent area of mathematical research, at least over my lifetime.

The Jagged, Monstrous Function That Broke Calculus

In the late 19th century, Karl Weierstrass invented a fractal-like function that was decried as nothing less than a “deplorable evil.” In time, it would transform the foundations of mathematics.




No matter how much you zoom in, the Weierstrass function gets more and more serrated.


Paul Chaikin for Quanta Magazine

Introduction


By Solomon Adams

Contributing Writer

January 23, 2025




Calculus is a powerful mathematical tool. But for hundreds of years after its invention in the 17th century, it stood on a shaky foundation. Its core concepts were rooted in intuition and informal arguments, rather than precise, formal definitions.

Two schools of thought emerged in response, according to Michael Barany, a historian of math and science at the University of Edinburgh. French mathematicians were by and large content to keep going. They were more concerned with applying calculus to problems in physics — using it to compute the trajectories of planets, for instance, or to study the behavior of electric currents. But by the 19th century, German mathematicians had begun to tear things down. They set out to find counterexamples that would undermine long-held assumptions, and eventually used those counterexamples to put calculus on more stable and durable footing.

One of these mathematicians was Karl Weierstrass. Though he showed an early aptitude for math, his father pressured him to study public finance and administration, with an eye toward joining the Prussian civil service. Bored with his university coursework, Weierstrass is said to have spent most of his time drinking and fencing; in the late 1830s, after failing to get his degree, he became a secondary school teacher, giving lessons in everything from math and physics to penmanship and gymnastics.

Weierstrass didn’t begin his career as a professional mathematician until he was nearly 40. But he would go on to transform the field by introducing a mathematical monster.
The Pillars of Calculus

In 1872, Weierstrass published a function that threatened everything mathematicians thought they understood about calculus. He was met with indifference, anger and fear, particularly from the mathematical giants of the French school of thought. Henri Poincaré condemned Weierstrass’ function as “an outrage against common sense.” Charles Hermite called it a “deplorable evil.”

To understand why Weierstrass’ result was so unnerving, it helps to first understand two of the most fundamental concepts in calculus: continuity and differentiability.

A continuous function is exactly what it sounds like — a function that has no gaps or jumps. You can trace a path from any point on such a function to any other without lifting your pencil.

Calculus is in large part about determining how quickly such continuous functions change. It works, loosely speaking, by approximating a given function with straight, nonvertical lines.


Mark Belan/Quanta Magazine

At any given point on this curve, you can draw a “tangent” line — a line that best approximates the curve near that point. The slope, or steepness, of the tangent line measures how quickly the function is changing at that point. You can define another function, called the derivative, that provides the slope of the tangent line at each point on your original function. If the derivative exists at every point, then the original function is said to be differentiable.

Functions that contain discontinuities are never differentiable: You won’t be able to draw a tangent line that approximates the gaps, meaning your derivative won’t exist there. But even continuous functions aren’t always differentiable at every point. Consider the “absolute value” function, which looks like this:

On the left side of this V-shaped curve, tangent lines slope downward. On the right side, they slope upward. At the bottom vertex, the slope abruptly changes directions. The function’s derivative does not exist at that point, even though it’s well defined everywhere else.
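These two ideas are easy to check numerically. The sketch below (function names are mine, not from the article) approximates a tangent slope with a difference quotient, and shows that for the absolute value function the left and right slopes at zero disagree, so no single tangent line, and hence no derivative, exists there:

```python
def slope(f, x, h=1e-6):
    """Approximate the tangent-line slope (the derivative) at x
    using a central difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def one_sided_slopes(f, x, h=1e-6):
    """Left and right difference quotients at x; the derivative
    exists only where these two limits agree."""
    left = (f(x) - f(x - h)) / h
    right = (f(x + h) - f(x)) / h
    return left, right

# Smooth example: the slope of x**2 at x = 3 is 6.
# Absolute value at 0: the left slope is -1, the right slope is +1,
# so abs(x) is continuous there but not differentiable.
```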

This didn’t faze most 19th-century mathematicians. They saw it as an isolated phenomenon: So long as your function is continuous, they claimed, there can only be finitely many points where the derivative is not defined. At all other points, the function should still be nice and smooth. In other words, a function can only zig and zag so much.

In fact, in 1806, a prominent French mathematician and physicist named André-Marie Ampère claimed that he’d proved this. For decades, his reasoning went unchallenged. Then along came Weierstrass.
Weierstrass’ Monster

Weierstrass discovered a function that, according to Ampère’s proof, should have been impossible: It was continuous everywhere yet differentiable nowhere.

He built it by adding together infinitely many wavelike “cosine” functions. The more terms he added, the more his function zigzagged — until ultimately, it changed direction abruptly at every point, resembling an infinitely jagged sawtooth comb.
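A minimal sketch of the construction in Python, evaluating a partial sum of the series. The parameter choices a = 0.5 and b = 13 satisfy Weierstrass' sufficient condition ab > 1 + 3π/2; the function name is mine:

```python
import numpy as np

def weierstrass(x, a=0.5, b=13.0, n_terms=50):
    """Partial sum of W(x) = sum_n a^n * cos(b^n * pi * x).
    Each added term contributes a smaller but much faster wiggle;
    in the infinite limit the curve is continuous everywhere yet
    differentiable nowhere."""
    n = np.arange(n_terms, dtype=float)
    return float(np.sum(a**n * np.cos(b**n * np.pi * x)))

# Because |a^n cos(...)| <= a^n, the series converges absolutely,
# so W is well defined and bounded by 1/(1 - a); e.g. W(0) = 2.
```

Zooming in on a plot of this partial sum reproduces the behavior described below: regions that look smooth at one magnification turn jagged at the next.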

Many mathematicians dismissed the function. It was an anomaly, they said — the work of a pedant, mathematically useless. They couldn’t even visualize it. At first, when you try to plot the graph of Weierstrass’ function, it looks smooth in certain regions. Only by zooming in will you see that those regions are jagged, too, and that they’ll continue to get more serrated and badly behaved (what mathematicians call “pathological”) with each additional magnification.

But Weierstrass had proved beyond doubt that, though his function had no discontinuities, it was never differentiable. To show this, he first revisited the definitions of “continuity” and “differentiability” that had been formulated decades earlier by the mathematicians Augustin-Louis Cauchy and Bernard Bolzano. These definitions relied on vague, plain-language descriptions and inconsistent notation, making them easy to misinterpret.



Karl Weierstrass didn’t begin his mathematical career until he was nearly 40. His dedication to rigor and logic ultimately led to the birth of modern analysis.


Conrad Fehr/Public Domain

So Weierstrass rewrote them, using precise language and concrete mathematical formulas. (Every calculus student learns the epsilon-delta definition of a limit; it was Weierstrass who introduced the modern version of it and used it as the foundation for his definitions of continuity and differentiability.)
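In modern notation, the definition Weierstrass formalized reads:

```latex
% Continuity of f at a point x_0, in epsilon-delta form:
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\text{such that}\;
|x - x_0| < \delta \;\implies\; |f(x) - f(x_0)| < \varepsilon
```

Every tolerance on the output (ε) must be achievable by some tolerance on the input (δ), with no appeal to pictures or intuition.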

He was then able to show that his function satisfied his more rigorous definition of continuity. At the same time, he could also prove that at every point, his new formal definition of the function’s derivative never had a finite value; it always “blew up” to infinity. In other words, continuity did not imply differentiability. His function was just as monstrous as mathematicians had feared.

The proof demonstrated that calculus could no longer rely on geometric intuition, as its inventors had done. It ushered in a new standard for the subject, one that was rooted in the careful analysis of equations. Mathematicians were forced to follow in Weierstrass’ footsteps, further sharpening their definition of functions, their understanding of the relationship between continuity and differentiability, and their methods for computing derivatives and integrals. This work to standardize calculus has since grown into the field known as analysis; Weierstrass is considered one of its founders.



But his function’s legacy extends far beyond the foundations of calculus and analysis. It revealed that mathematics is full of monsters: impossible-seeming functions, strange objects (it’s one of the earliest examples of a fractal), wild behaviors. “There’s a whole universe of possibilities, and the Weierstrass function is supposed to be opening your eyes to it,” said Philip Gressman of the University of Pennsylvania.

It also turned out to have many practical applications. In the early 20th century, physicists wanted to study Brownian motion, the random movement of particles in a liquid or gas. Because this movement is continuous but not smooth — characterized by rapid and infinitely tiny fluctuations — functions like Weierstrass’ were perfect for modeling it. Similarly, such functions have been used to model uncertainty in how people make decisions and take risks, as well as the complicated behavior of financial markets.
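A minimal random-walk sketch, illustrative only and not the models used in those applications, of why such functions fit Brownian motion: the increments scale like the square root of the time step, so the path stays continuous in the limit while its local slopes, of order 1/√dt, blow up as the time step shrinks:

```python
import random

def brownian_path(n_steps, dt=1e-4, seed=0):
    """Random-walk approximation of Brownian motion: Gaussian
    increments with standard deviation sqrt(dt).  The path is
    continuous in the limit, but rough at every scale, much like
    the Weierstrass function."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path
```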

Much like Weierstrass himself, the consequences of his function have sometimes been late to bloom. But they’re continuing to shape mathematics and its applications today.

Computer Scientists Prove That Heat Destroys Quantum Entanglement


Finally we can stop the handwaving around entanglement.  We have at least one limit.

And just how well do we really understand heat?

So much of our physics is done at arm's length, and so much cannot be really correct.  That is why my Cloud cosmology is built upon a rigorous mathematica imagining the SPACE TIME Pendulum, which creates matter and TIME.  We can build and simulate from that and ask questions we can answer.


Computer Scientists Prove That Heat Destroys Quantum Entanglement

While devising a new quantum algorithm, four researchers accidentally established a hard limit on the "spooky" phenomenon.


Kristina Armitage/Quanta Magazine



By Ben Brubaker

Staff Writer




August 28, 2024




https://www.quantamagazine.org/computer-scientists-prove-that-heat-destroys-entanglement-20240828/


Nearly a century ago, the physicist Erwin Schrödinger called attention to a quirk of the quantum world that has fascinated and vexed researchers ever since. When quantum particles such as atoms interact, they shed their individual identities in favor of a collective state that’s greater, and weirder, than the sum of its parts. This phenomenon is called entanglement.




Researchers have a firm understanding of how entanglement works in idealized systems containing just a few particles. But the real world is more complicated. In large arrays of atoms, like the ones that make up the stuff we see and touch, the laws of quantum physics compete with the laws of thermodynamics, and things get messy.




At very low temperatures, entanglement can spread over long distances, enveloping many atoms and giving rise to strange phenomena such as superconductivity. Crank up the heat, though, and atoms jitter about, disrupting the fragile links that bind entangled particles.




Quanta Science Podcast

Quantum entanglement lets atoms copy each others’ moves, but if the particle dance floor overheats, things come to a quick halt.

Physicists have long struggled to pin down the details of this process. Now, a team of four researchers has proved that entanglement doesn’t just weaken as temperature increases. Rather, in mathematical models of quantum systems such as the arrays of atoms in physical materials, there’s always a specific temperature above which it vanishes completely. “It’s not just that it’s exponentially small,” said Ankur Moitra of the Massachusetts Institute of Technology, one of the authors of the new result. “It’s zero.”




Researchers had previously observed hints of this behavior and dubbed it the “sudden death” of entanglement. But their evidence was mostly indirect. The new finding establishes a much stronger limit on entanglement in a mathematically rigorous way.




Curiously, the four researchers behind the new result aren’t even physicists, and they didn’t set out to prove anything about entanglement. They’re computer scientists who stumbled on the proof accidentally while developing a new algorithm.




Regardless of their intent, the results have excited researchers in the area. “It’s a very, very strong statement,” said Soonwon Choi(opens a new tab), a physicist at MIT. “I was very impressed.”




Finding Equilibrium

The team made their discovery while exploring the theoretical capabilities of future quantum computers — machines that will exploit quantum behavior, including entanglement and superposition, to perform certain calculations far faster than the conventional computers we know today.




One of the most promising applications of quantum computing is in the study of quantum physics itself. Let’s say you want to understand the behavior of a quantum system. Researchers need to first develop specific procedures, or algorithms, that quantum computers can use to answer your questions.





Ewin Tang helped devise a new fast algorithm for simulating how certain quantum systems behave at high temperatures.




Xinyu Tan

But not all questions about quantum systems are easier to answer using quantum algorithms. Some are equally easy for classical algorithms, which run on ordinary computers, while others are hard for both classical and quantum ones.




To understand where quantum algorithms and the computers that can run them might offer an advantage, researchers often analyze mathematical models called spin systems, which capture the basic behavior of arrays of interacting atoms. They then might ask: What will a spin system do when you leave it alone at a given temperature? The state it settles into, called its thermal equilibrium state, determines many of its other properties, so researchers have long sought to develop algorithms for finding equilibrium states.




Whether those algorithms really benefit from being quantum in nature depends on the temperature of the spin system in question. At very high temperatures, known classical algorithms can do the job easily. The problem gets harder as temperature decreases and quantum phenomena grow stronger; in some systems it gets too hard for even quantum computers to solve in any reasonable amount of time. But the details of all this remain murky.
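As a toy illustration of the two notions at play here, the sketch below builds the thermal equilibrium (Gibbs) state of a two-qubit Heisenberg spin model at temperature T and measures its entanglement via the negativity of the partial transpose. The model and all names are my own choices, vastly smaller than the systems the paper treats, but it shows the same qualitative behavior: positive entanglement at low temperature, exactly zero above a finite threshold:

```python
import numpy as np

# Pauli matrices and a two-spin Heisenberg coupling H = XX + YY + ZZ
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

def gibbs_state(H, T):
    """Thermal equilibrium state rho = exp(-H/T) / Z."""
    w, v = np.linalg.eigh(H)
    boltz = np.exp(-(w - w.min()) / T)   # shift spectrum for stability
    rho = (v * boltz) @ v.conj().T
    return rho / np.trace(rho).real

def negativity(rho):
    """Entanglement negativity: minus the sum of the negative
    eigenvalues of the partial transpose on the second qubit.
    It is zero for every unentangled two-qubit state."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(pt)
    return float(-evals[evals < 0].sum())
```

At low T the Gibbs state is nearly the maximally entangled singlet (negativity close to 1/2); at high T it approaches the featureless maximally mixed state and the negativity is exactly zero, a two-qubit echo of the sudden-death result described below.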






“When do you go to the space where you need quantum, and when do you go to the space where quantum doesn’t even help you?” said Ewin Tang, a researcher at the University of California, Berkeley and one of the authors of the new result. “Not that much is known.”




In February, Tang and Moitra began thinking about the thermal equilibrium problem together with two other MIT computer scientists: a postdoctoral researcher named Ainesh Bakshi and Moitra’s graduate student Allen Liu. In 2023, they’d all collaborated on a groundbreaking quantum algorithm for a different task involving spin systems, and they were looking for a new challenge.




“When we work together, things just flow,” Bakshi said. “It’s been awesome.”




Before that 2023 breakthrough, the three MIT researchers had never worked on quantum algorithms. Their background was in learning theory, a subfield of computer science that focuses on algorithms for statistical analysis. But like ambitious upstarts everywhere, they viewed their relative naïveté as an advantage, a way to see a problem with fresh eyes. “One of our strengths is that we don’t know much quantum,” Moitra said. “The only quantum we know is the quantum that Ewin taught us.”




The team decided to focus on relatively high temperatures, where researchers suspected that fast quantum algorithms would exist, even though nobody had been able to prove it. Soon enough, they found a way to adapt an old technique from learning theory into a new fast algorithm. But as they were writing up their paper, another team came out with a similar result: a proof that a promising algorithm developed the previous year would work well at high temperatures. They’d been scooped.




Sudden Death Reborn

A bit bummed that they’d come in second, Tang and her collaborators began corresponding with Álvaro Alhambra, a physicist at the Institute for Theoretical Physics in Madrid and one of the authors of the rival paper. They wanted to work out the differences between the results they’d achieved independently. But when Alhambra read through a preliminary draft of the four researchers’ proof, he was surprised to discover that they’d proved something else in an intermediate step: In any spin system in thermal equilibrium, entanglement vanishes completely above a certain temperature. “I told them, ‘Oh, this is very, very important,’” Alhambra said.





From left: Allen Liu, Ainesh Bakshi and Ankur Moitra collaborated with Tang, drawing on their background in a different branch of computer science. “One of our strengths is that we don’t know much quantum,” Moitra said.





The team quickly revised their draft to highlight the accidental result. “It turns out that this just falls out of our algorithm,” Moitra said. “We get more than what we bargained for.”




Researchers had observed this sudden death of entanglement since the 2000s, in experiments and simulations on ordinary classical computers. But none of those earlier works had been able to measure the disappearance of entanglement directly, and they had studied the phenomenon only in small systems, which reveal little about how entanglement behaves in the large systems that matter most.




“It could have been that for larger and larger systems you would have to go to higher and higher temperatures to see lack of entanglement,” Alhambra said. In that case, the sudden-death phenomenon might happen at such high temperatures as to be irrelevant in real materials. The only previous theoretical limit, from 2003, left open that possibility. Instead, Tang and her collaborators showed that the temperature at which entanglement vanishes doesn’t depend on the total number of atoms in the system. The only thing that matters is the details of the interactions between nearby atoms.
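The sudden death the team characterized can be seen even in the smallest possible spin system. The sketch below is an illustrative toy, not the researchers' algorithm: it builds the thermal (Gibbs) state of a two-qubit antiferromagnetic Heisenberg model in Python and computes its entanglement negativity, which for two qubits is zero exactly when the state is unentangled. The negativity doesn't merely shrink as the temperature rises; it hits exactly zero above a finite critical temperature (about 3.64 in units where the coupling strength and Boltzmann's constant are 1).

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-spin antiferromagnetic Heisenberg Hamiltonian: H = X.X + Y.Y + Z.Z
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

def thermal_state(H, beta):
    """Gibbs state rho = exp(-beta * H) / Z, via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * (E - E.min()))  # shift energies for numerical stability
    w /= w.sum()
    return (V * w) @ V.conj().T        # V diag(w) V^dagger

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose over the
    second qubit. For a pair of qubits, this is zero if and only if the
    state is unentangled (the PPT criterion is exact in dimension 2x2)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(pt)
    return -ev[ev < 0].sum()

# Low temperature (beta = 1): clearly entangled.
# High temperature (beta = 0.1, i.e. T = 10 > T_c ~ 3.64): negativity is
# exactly zero -- the entanglement has died, not just faded.
print(negativity(thermal_state(H, 1.0)))   # > 0
print(negativity(thermal_state(H, 0.1)))   # 0 (up to floating-point error)
```

Two qubits are of course exactly the small-system regime the earlier works covered; the new proof's point is that the same sharp cutoff persists no matter how many spins you add.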





Álvaro Alhambra, a physicist working on the same problem as Tang, Moitra, Bakshi and Liu, realized they had accidentally proved a new result about quantum entanglement while developing their algorithm.





The approach they used in their proof was itself unusual. Most algorithms for finding thermal equilibrium states are inspired by the way real physical systems approach equilibrium. But Tang and company used techniques far removed from quantum theory.




“That’s what’s so amazing about this paper,” said Nikhil Srivastava, a computer scientist at Berkeley. “The proof kind of ignores the physics.”




The Search Continues

The four researchers’ proof that high-temperature spin systems lack any entanglement helps explain another interesting feature of their new algorithm: Very little of it is actually quantum. True, the algorithm’s output — a full description of how the atoms in a spin system are oriented in thermal equilibrium — is too unwieldy to store on a classical machine. But other than the last step that generates this output, every part of the algorithm is classical.




“It’s essentially the most trivial quantum computation,” Liu said.




Tang has a long track record of discovering “dequantization” results — proofs that quantum algorithms aren’t actually necessary for many problems. She and her collaborators weren’t trying to do that this time, but the proof of vanishing entanglement that they stumbled into amounts to an even more extreme version of dequantization. It’s not just that quantum algorithms don’t offer any advantage in a specific problem involving high-temperature spin systems — there’s nothing quantum about those systems whatsoever.




But that doesn’t mean quantum computing researchers should lose hope. Two recent papers identified examples of low-temperature spin systems in which quantum algorithms for measuring equilibrium states outperform classical ones, though it remains to be seen how widespread this behavior is. And even though Bakshi and his collaborators proved a negative result, the unorthodox method they used to get there indicates that fruitful new ideas can come from unexpected places.




“We can be optimistic that there are crazy new algorithms to be discovered,” Moitra said. “And that in the process, we can discover some beautiful mathematics.”