
Tuesday, January 2, 2024

Time is an object


In order for consciousness to make a single decision, it must first create TIME, and that is first created by producing the TIME SPACE Pendulum.

Time then is a succession of such creations filling the void, because any such act produces several potential successor acts of creation imperfectly packed. Sort of looks like a BIG BANG filling the Void. Understand, though, that all this is sublight and also produces light-speed photons as well.

Further self-assembly and decay then produce the universe we see. All such acts of creation end up producing a Galaxy of sublight particles which we can see.

In the end, TIME looks like the smallest possible scale in an empirical universe and is uniform within its Galaxy, at least.

Time as we see it has obviously expanded, but the rate is declining by the inverse of observed size.


Time is an object

Not a backdrop, an illusion or an emergent phenomenon, time has a physical size that can be measured in laboratories


Red-eyed tree frog, near Arenal Volcano, Costa Rica. Photo by Ben Roberts/Panos Pictures


Sara Imari Walker is an astrobiologist and theoretical physicist at Arizona State University, where she is deputy director of the Beyond Center for Fundamental Concepts in Science and professor in the School of Earth and Space Exploration. She is also external professor at the Santa Fe Institute and a fellow at the Berggruen Institute.

Lee Cronin  is Regius Chair of Chemistry at the University of Glasgow in Scotland and CEO of Chemify.


5,100 words

Published in association with Santa Fe Institute, an Aeon Strategic Partner


A timeless universe is hard to imagine, but not because time is a technically complex or philosophically elusive concept. There is a more structural reason: imagining timelessness requires time to pass. Even when you try to imagine its absence, you sense it moving as your thoughts shift, your heart pumps blood to your brain, and images, sounds and smells move around you. The thing that is time never seems to stop. You may even feel woven into its ever-moving fabric as you experience the Universe coming together and apart. But is that how time really works?

According to Albert Einstein, our experience of the past, present and future is nothing more than ‘a stubbornly persistent illusion’. According to Isaac Newton, time is nothing more than backdrop, outside of life. And according to the laws of thermodynamics, time is nothing more than entropy and heat. In the history of modern physics, there has never been a widely accepted theory in which a moving, directional sense of time is fundamental. Many of our most basic descriptions of nature – from the laws of movement to the properties of molecules and matter – seem to exist in a universe where time doesn’t really pass. However, recent research across a variety of fields suggests that the movement of time might be more important than most physicists had once assumed.

A new form of physics called assembly theory suggests that a moving, directional sense of time is real and fundamental. It suggests that the complex objects in our Universe that have been made by life, including microbes, computers and cities, do not exist outside of time: they are impossible without the movement of time. From this perspective, the passing of time is not only intrinsic to the evolution of life or our experience of the Universe. It is also the ever-moving material fabric of the Universe itself. Time is an object. It has a physical size, like space. And it can be measured at a molecular level in laboratories.

The unification of time and space radically changed the trajectory of physics in the 20th century. It opened new possibilities for how we think about reality. What could the unification of time and matter do in our century? What happens when time is an object?


For Newton, time was fixed. In his laws of motion and gravity, which describe how objects change their position in space, time is an absolute backdrop. Newtonian time passes, but never changes. And it’s a view of time that endures in modern physics – even in the wave functions of quantum mechanics time is a backdrop, not a fundamental feature. For Einstein, however, time was not absolute. It was relative to each observer. He described our experience of time passing as ‘a stubbornly persistent illusion’. Einsteinian time is what is measured by the ticking of clocks; space is measured by the ticks on rulers that record distances. By studying the relative motions of ticking clocks and ticks on rulers, Einstein was able to combine the concepts of how we measure both space and time into a unified structure we now call ‘spacetime’. In this structure, space is infinite and all points exist at once. But time, as Einstein described it, also has this property, which means that all times – past, present and future – are equally real. The result is sometimes called a ‘block universe’, which contains everything that has and will happen in space and time. Today, most physicists support the notion of the block universe.

But the block universe was cracked before it even arrived. In the early 1800s, nearly a century before Einstein developed the concept of spacetime, Nicolas LĂ©onard Sadi Carnot and other physicists were already questioning the notion that time was either a backdrop or an illusion. These questions would continue into the 19th century as physicists such as Ludwig Boltzmann also began to turn their minds to the problems that came with a new kind of technology: the engine.

Though engines could be mechanically reproduced, physicists didn’t know exactly how they functioned. Newtonian mechanics were reversible; engines were not. Newton’s solar system ran equally well moving forward or backward in time. However, if you drove a car and it ran out of fuel, you could not run the engine in reverse, take back the heat that was generated, and unburn the fuel. Physicists at the time suspected that engines must be adhering to certain laws, even if those laws were unknown. What they found was that engines do not function unless time passes and has a direction. By exploiting differences in temperature, engines drive the movement of heat from warm parts to cold parts. As time moves forward, the temperature difference diminishes and less ‘work’ can be done. This is the essence of the second law of thermodynamics (also known as the law of entropy) that was proposed by Carnot and later explained statistically by Boltzmann. The law describes the way that less useful ‘work’ can be done by an engine over time. You must occasionally refuel your car, and entropy must always increase.

Do we really live in a universe that has no need for time as a fundamental feature?

This makes sense in the context of engines or other complex objects, but it is not helpful when dealing with a single particle. It is meaningless to talk about the temperature of a single particle because temperature is a way of quantifying the average kinetic energy of many particles. In the laws of thermodynamics, the flow and directionality of time are considered an emergent property rather than a backdrop or an illusion – a property associated with the behaviour of large numbers of objects. While thermodynamic theory introduced how time should have a directionality to its passage, this property was not fundamental. In physics, ‘fundamental’ properties are reserved for those properties that cannot be described in other terms. The arrow of time in thermodynamics is therefore considered ‘emergent’ because it can be explained in terms of more fundamental concepts, such as entropy and heat.

Charles Darwin, working between the steam engine era of Carnot and the emergence of Einstein’s block universe, was among the first to clearly see how life must exist in time. In the final sentence from On the Origin of Species (1859), he eloquently captured this perspective: ‘[W]hilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been and are being evolved.’ The arrival of Darwin’s ‘endless forms’ can be explained only in a universe where time exists and has a clear directionality.

During the past several billion years, life has evolved from single-celled organisms to complex multicellular organisms. It has evolved from simple societies to teeming cities, and now a planet potentially capable of reproducing its life on other worlds. These things take time to come into existence because they can emerge only through the processes of selection and evolution.

We think Darwin’s insight does not go deep enough. Evolution accurately describes changes observed across different forms of life, but it does much more than this: it is the only physical process in our Universe that can generate the objects we associate with life. This includes bacteria, cats and trees, but also things like rockets, mobile phones and cities. None of these objects fluctuates into existence spontaneously, despite what popular accounts of modern physics may claim can happen. These objects are not random flukes. Instead, they all require a ‘memory’ of the past to be made in the present. They must be produced over time – a time that continually moves forward. And yet, according to Newton, Einstein, Carnot, Boltzmann and others, time is either nonexistent or merely emergent.

The times of physics and of evolution are incompatible. But this has not always been obvious because physics and evolution deal with different kinds of objects. Physics, particularly quantum mechanics, deals with simple and elementary objects: quarks, leptons and force carrier particles of the Standard Model. Because these objects are considered simple, they do not require ‘memory’ for the Universe to make them (assuming sufficient energy and resources are available). Think of ‘memory’ as a way to describe the recording of actions or processes that are needed to build a given object. When we get to the disciplines that engage with evolution, such as chemistry and biology, we find objects that are too complex to be produced in abundance instantaneously (even when energy and materials are available). They require memory, accumulated over time, to be produced. As Darwin understood, some objects can come into existence only through evolution and the selection of certain ‘recordings’ from memory to make them.

This incompatibility creates a set of problems that can be solved only by making a radical departure from the current ways that physics approaches time – especially if we want to explain life. While current theories of quantum mechanics can explain certain features of molecules, such as their stability, they cannot explain the existence of DNA, proteins, RNA, or other large and complex molecules. Likewise, the second law of thermodynamics is said to give rise to the arrow of time and explanations of how organisms convert energy, but it does not explain the directionality of time, in which endless forms are built over evolutionary timescales with no final equilibrium or heat-death for the biosphere in sight. Quantum mechanics and thermodynamics are necessary to explain some features of life, but they are not sufficient.

These and other problems led us to develop a new way of thinking about the physics of time, which we have called assembly theory. It describes how much memory must exist for a molecule or combination of molecules – the objects that life is made from – to come into existence. In assembly theory, this memory is measured across time as a feature of a molecule by focusing on the minimum memory required for that molecule (or molecules) to come into existence. Assembly theory quantifies selection by making time a property of objects that could have emerged only via evolution.

We began developing this new physics by considering how life emerges through chemical changes. The chemistry of life operates combinatorially as atoms bond to form molecules, and the possible combinations grow with each additional bond. These combinations are made from approximately 92 naturally occurring elements, which chemists estimate can be combined to build as many as 10^60 different molecules – 1 followed by 60 zeroes. To become useful, each individual combination would need to be replicated billions of times – think of how many molecules are required to make even a single cell, let alone an insect or a person. Making copies of any complex object takes time because each step required to assemble it involves a search across the vastness of combinatorial space to select which molecules will take physical shape.

Combinatorial spaces seem to show up when life exists

Consider the macromolecular proteins that living things use as catalysts within cells. These proteins are made from smaller molecular building blocks called amino acids, which combine to form long chains typically between 50 and 2,000 amino acids long. If every possible 100-amino-acid-long protein was assembled from the 20 most common amino acids that form proteins, the result would not just fill our Universe but 10^23 universes.
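The estimate above is easy to reproduce as arithmetic. This is only an illustrative sketch; the 20-amino-acid alphabet and 100-residue chain length are taken from the paragraph above, and the variable names are my own:

```python
# Count the distinct 100-residue chains that can be built from the
# 20 most common protein-forming amino acids: 20 choices per position,
# 100 positions, so 20**100 possible chains.
n_proteins = 20 ** 100

# Roughly 1.27e+130 -- a 131-digit number.
print(f"{n_proteins:.2e}")
```

For comparison, the observable Universe is commonly estimated to hold around 10^80 atoms, which is why even one copy of each chain would overfill it by a vast margin.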


Photo by Donna Enriquez/Flickr

The space of all possible molecules is hard to fathom. As an analogy, consider the combinations you can build with a given set of Lego bricks. If the set contained only two bricks, the number of combinations would be small. However, if the set contained thousands of pieces, like the 5,923-piece Lego model of the Taj Mahal, the number of possible combinations would be astronomical. If you specifically needed to build the Taj Mahal according to the instructions, the space of possibilities would be limited, but if you could build any Lego object with those 5,923 pieces, there would be a combinatorial explosion of possible structures that could be built – the possibilities grow exponentially with each additional block you add. If you connected two Lego structures you had already built every second, you would not be able to exhaust all possible objects of the size of the Lego Taj Mahal set within the age of the Universe. In fact, any space built combinatorially from even a few simple building blocks will have this property. This includes all possible cell-like objects built from chemistry, all possible organisms built from different cell-types, all possible languages built from words or utterances, and all possible computer programs built from all possible instruction sets. The pattern here is that combinatorial spaces seem to show up when life exists. That is, life is evident when the space of possibilities is so large that the Universe must select only some of that space to exist. Assembly theory is meant to formalise this idea. In assembly theory, objects are built combinatorially from other objects and, just as you might use a ruler to measure how big a given object is spatially, assembly theory provides a measure – called the ‘assembly index’ – to measure how big an object is in time.
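The ‘assembly index’ described above has a convenient toy analogue for strings of characters: the minimum number of pairwise joining operations needed to build a string from its individual characters, when every intermediate product can be reused at no cost. The breadth-first search below is my own illustrative construction on strings, not the authors’ molecular algorithm, but it captures the same counting idea:

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of pairwise joins needed to build `target` from
    its individual characters, reusing any previously built substring
    for free. Exact computation is a search problem, so keep `target`
    short; this is a toy analogue of the molecular assembly index."""
    if len(target) <= 1:
        return 0
    start = frozenset(target)        # the basic building blocks
    frontier = {start}
    seen = {start}
    steps = 0
    while frontier:
        steps += 1
        nxt = set()
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                # Prune joins that can never appear inside the target.
                if joined not in target:
                    continue
                if joined == target:
                    return steps
                new_state = state | {joined}
                if new_state not in seen:
                    seen.add(new_state)
                    nxt.add(new_state)
        frontier = nxt
    raise ValueError("unreachable for non-empty targets")
```

Reuse is what makes the index interesting: `"aaaa"` needs only 2 joins (`a+a`, then `aa+aa`), and `"banana"` needs 4 rather than 5 because the fragment `"na"` is built once and used twice.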

The Lego Taj Mahal set is equivalent to a complex molecule in this analogy. Reproducing a specific object, like a Lego set, in a way that isn’t random requires selection within the space of all possible objects. That is, at each stage of construction, specific objects or sets of objects must be selected from the vast number of possible combinations that could be built. Alongside selection, ‘memory’ is also required: information is needed in the objects that exist to assemble the specific new object, which is implemented as a sequence of steps that can be completed in finite time, like the instructions required to build the Lego Taj Mahal. More complex objects require more memory to come into existence.

In assembly theory, objects grow in their complexity over time through the process of selection. As objects become more complex, their unique parts will increase, which means local memory must also increase. This ‘local memory’ is the causal chain of events in how the object is first ‘discovered’ by selection and then created in multiple copies. For example, in research into the origin of life, chemists study how molecules come together to become living organisms. For a chemical system to spontaneously emerge as ‘life’, it must self-replicate by forming, or catalysing, self-sustaining networks of chemical reactions. But how does the chemical system ‘know’ which combinations to make? We can see ‘local memory’ in action in these networks of molecules that have ‘learned’ to chemically bind together in certain ways. As the memory requirements increase, the probability that an object was produced by chance drops to zero because the number of alternative combinations that weren’t selected is just too high. An object, whether it’s a Lego Taj Mahal or a network of molecules, can be produced and reproduced only with memory and a construction process. But memory is not everywhere, it’s local in space and time. This means an object can be produced only where there is local memory that can guide the selection of which parts go where, and when.

In assembly theory, ‘selection’ refers to what has emerged in the space of possible combinations. It is formally described through an object’s copy number and complexity. Copy number or concentration is a concept used in chemistry and molecular biology that refers to how many copies of a molecule are present in a given volume of space. In assembly theory, complexity is as significant as the copy number. A highly complex molecule that exists only as a single copy is not important. What is of interest to assembly theory are complex molecules with a high copy number, which is an indication that the molecule has been produced by evolution. This complexity measurement is also known as an object’s ‘assembly index’. This value is related to the amount of physical memory required to store the information to direct the assembly of an object and set a directionality in time from the simple to the complex. And, while the memory must exist in the environment to bring the object into existence, in assembly theory the memory is also an intrinsic physical feature of the object. In fact, it is the object.

Life is stacks of objects building other objects that build other objects – it’s objects building objects, all the way down. Some objects emerged only relatively recently, such as synthetic ‘forever chemicals’ made from organofluorine chemical compounds. Others emerged billions of years ago, such as photosynthesising plant cells. Different objects have different depths in time. And this depth is directly related to both an object’s assembly index and copy number, which we can combine into a number: a quantity called ‘Assembly’, or A. The higher the Assembly number, the deeper an object is in time.
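In the authors’ later technical work, assembly index and copy number are combined into a single ensemble quantity, A. The formula below follows one published formulation; treat it as an assumption here, since this essay does not spell it out, and the function name and input layout are my own:

```python
import math

def assembly(objects):
    """Assembly number A of an ensemble of objects, following one
    published formulation (an assumption, not stated in this essay):
    A = sum over objects of e^(a_i) * (n_i - 1) / N_total,
    where a_i is an object's assembly index, n_i its copy number,
    and N_total the total number of objects in the ensemble.
    `objects` is an iterable of (assembly_index, copy_number) pairs."""
    n_total = sum(n for _, n in objects)
    return sum(math.exp(a) * (n - 1) / n_total for a, n in objects)
```

Note the (n - 1) factor: a highly complex molecule that exists as a single copy contributes nothing, which encodes the essay’s point that complexity matters only when it is reproduced.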

To measure assembly in a laboratory, we chemically analyse an object to count how many copies of a given molecule it contains. We then infer the object’s complexity, known as its molecular assembly index, by counting the number of parts it contains. These molecular parts, like the amino acids in a protein chain, could be estimated purely theoretically, yielding a theoretical assembly number. But we are not inferring theoretically. We are ‘counting’ the molecular components of an object using three visualising techniques: mass spectrometry, infrared and nuclear magnetic resonance (NMR) spectroscopy. Remarkably, the number of components we’ve counted in molecules maps onto their theoretical assembly numbers. This means we can measure an object’s assembly index directly with standard lab equipment.

A high Assembly number – a high assembly index and a high copy number – indicates that an object can be reliably made by something in its environment. This could be a cell that constructs high-Assembly molecules like proteins, or a chemist who makes molecules with an even higher Assembly value, such as the anti-cancer drug Taxol (paclitaxel). Complex objects with high copy numbers did not come into existence randomly but are the result of a process of evolution or selection. They are not formed by a series of chance encounters, but by selection in time. More specifically, a certain depth in time.

It’s like throwing the 5,923 Lego Taj Mahal pieces in the air and expecting them to come together spontaneously

This is a difficult concept. Even chemists find this idea hard to grasp since it is easy to imagine that ‘complex’ molecules form by chance interactions with their environment. However, in the laboratory, chance interactions often lead to the production of ‘tar’ rather than high-Assembly objects. Tar is a chemist’s worst nightmare, a messy mixture of molecules that cannot be individually identified. It is found frequently in origin-of-life experiments. In the US chemist Stanley Miller’s ‘prebiotic soup’ experiment in 1953, the amino acids that formed at first turned into a mess of unidentifiable black gloop if the experiment was run too long (and no selection was imposed by the researchers to stop chemical changes taking place). The problem in these experiments is that the combinatorial space of possible molecules is so vast for high-Assembly objects that no specific molecules are produced in high abundance. ‘Tar’ is the result.

It’s like throwing the 5,923 pieces from the Lego Taj Mahal set in the air and expecting them to come together, spontaneously, exactly as the instructions specify. Now imagine taking the pieces from 100 boxes of the same Lego set, throwing them into the air, and expecting 100 copies of the exact same building. The probabilities are incredibly low and might be zero, if assembly theory is on the right track. It is as likely as a smashed egg spontaneously reforming.

But what about complex objects that occur naturally without selection or evolution? What about snowflakes, minerals and complex storm systems? Unlike objects generated by evolution and selection, these do not need to be explained through their ‘depth in time’. Though individually complex, they do not have a high Assembly value because they form randomly and require no memory to be produced. They have a low copy number because they never exist in identical copies. No two snowflakes are alike, and the same goes for minerals and storm systems.

Assembly theory not only changes how we think about time, but how we define life itself. By applying this approach to molecular systems, it should be possible to measure if a molecule was produced by an evolutionary process. That means we can determine which molecules could have been made only by a living process, even if that process involves chemistries different to those on Earth. In this way, assembly theory can function as a universal life-detection system that works by measuring the assembly indexes and copy numbers of molecules in living or non-living samples.

In our laboratory experiments, we found that only living samples produce high-Assembly molecules. Our teams and collaborators have reproduced this finding using an analytical technique called mass spectrometry, in which molecules from a sample are ‘weighed’ in an electromagnetic field and then smashed into pieces using energy. Smashing a molecule to bits allows us to measure its assembly index by counting the number of unique parts it contains. Through this, we can work out how many steps were required to produce a molecular object and then quantify its depth in time with standard laboratory equipment.

To verify our theory that high-Assembly objects can be generated only by life, the next step involved testing living and non-living samples. Our teams have been able to take samples of molecules from across the solar system, including diverse living, fossilised and abiotic systems on Earth. These solid samples of stone, bone, flesh and other forms of matter were dissolved in a solvent and then analysed with a high-resolution mass spectrometer that can identify the structure and properties of molecules. We found that only living systems produce abundant molecules with an assembly index above an experimentally determined value of 15 steps. The cut-off between 13 and 15 is sharp, meaning that molecules made by random processes cannot get beyond 13 steps. We think this is indicative of a phase transition where the physics of evolution and selection must take over from other forms of physics to explain how a molecule was formed.
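The decision rule described in this paragraph can be sketched in a few lines. The threshold of 15 steps comes from the text; the function, the input layout and the minimal copy-number condition are illustrative assumptions, not a real instrument API:

```python
# Experimentally determined cut-off from the essay: random (abiotic)
# processes were not observed to exceed an assembly index of ~13,
# while living samples produce abundant molecules at 15 or above.
THRESHOLD = 15

def shows_selection(molecules):
    """Flag a sample as bearing signs of selection/evolution.
    `molecules`: iterable of (assembly_index, copy_number) pairs
    inferred from mass-spectrometry data (hypothetical layout).
    Both conditions matter: the molecule must be complex AND
    present in multiple copies, since one-off complexity can
    arise by chance."""
    return any(a >= THRESHOLD and n > 1 for a, n in molecules)
```

A sample full of simple molecules, however abundant, is not flagged; neither is a single stray complex molecule.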

These experiments verify that only objects with a sufficiently high Assembly number – highly complex and copied molecules – seem to be found in life. What is even more exciting is that we can find this information without knowing anything else about the molecule present. Assembly theory can determine whether molecules from anywhere in the Universe were derived from evolution or not, even if we don’t know what chemistry is being used.

The possibility of detecting living systems elsewhere in the galaxy is exciting, but more exciting for us is the possibility of a new kind of physics, and a new explanation of life. As an empirical measure of objects uniquely producible by evolution, Assembly unlocks a more general theory of life. If the theory holds, its most radical philosophical implication is that time exists as a material property of the complex objects created by evolution. That is, just as Einstein radicalised our notion of time by unifying it with space, assembly theory points to a radically new conception of time by unifying it with matter.

Assembly theory explains evolved objects, such as complex molecules, biospheres, and computers

It is radical because, as we noted, time has never been fundamental in the history of physics. Newton and some quantum physicists view it as a backdrop. Einstein thought it was an illusion. And, in the work of those studying thermodynamics, it’s understood as merely an emergent property. Assembly theory treats time as fundamental and material: time is the stuff out of which things in the Universe are made. Objects created by selection and evolution can be formed only through the passing of time. But don’t think about this time like the measured ticking of a clock or a sequence of calendar years. Time is a physical attribute. Think about it in terms of Assembly, a measurable intrinsic property of a molecule’s depth or size in time.

This idea is radical because it also allows physics to explain evolutionary change. Physics has traditionally studied objects that the Universe can spontaneously assemble, such as elementary particles or planets. Assembly theory, on the other hand, explains evolved objects, such as complex molecules, biospheres, and computers. These complex objects exist only along lineages where information has been acquired specific to their construction.

If we follow those lineages back, beyond the origin of life on Earth to the origin of the Universe, it would be logical to suggest that the ‘memory’ of the Universe was lower in the past. This means that the Universe’s ability to generate high-Assembly objects is fundamentally limited by its size in time. Just as a semi-trailer truck will not fit inside a standard home garage, some objects are too large in time to come into existence in intervals that are smaller than their assembly index. For complex objects like computers to exist in our Universe, many other objects needed to form first: stars, heavy elements, life, tools, technology, and the abstraction of computing. This takes time and is critically path-dependent due to the causal contingency of each innovation made. The early Universe may not have been capable of computation as we know it, simply because not enough history existed yet. Time had to pass and be materially instantiated through the selection of the computer’s constituent objects. The same goes for Lego structures, large language models, new pharmaceutical drugs, the ‘technosphere’, or any other complex object.

The consequences of objects having an intrinsic material depth in time are far-reaching. In the block universe, everything is treated as static and existing all at once. This means that objects cannot be ordered by their depth in time, and selection and evolution cannot be used to explain why some objects exist and not others. Re-conceptualising time as a physical dimension of complex matter, and setting a directionality for time, could help us solve such questions. Making time material through assembly theory unifies several perplexing philosophical concepts related to life in one measurable framework. At the heart of this theory is the assembly index, which measures the complexity of an object. It is a quantifiable way of describing the evolutionary concept of selection by showing how many alternatives were excluded to yield a given object. Each step in the assembly process of an object requires information, memory, to specify what should and shouldn’t be added or changed. In building the Lego Taj Mahal, for example, we must take a specific sequence of steps, each directing us toward the final building. Each misstep is an error, and if we make too many errors we cannot build a recognisable structure. Copying an object requires information about the steps that were previously needed to produce similar objects.

This makes assembly theory a causal theory of physics, because the underlying structure of an assembly space – the full range of required combinations – orders things in a chain of causation. Each step relies on a previously selected step, and each object relies on a previously selected object. If we removed any steps in an assembly pathway, the final object would not be produced. Buzzwords often associated with the physics of life, such as ‘theory’, ‘information’, ‘memory’, ‘causation’ and ‘selection’, are material because objects themselves encode the rules to help construct other ‘complex’ objects. This could be the case in mutual catalysis where objects reciprocally make each other. Thus, in assembly theory, time is essentially the same thing as information, memory, causation and selection. They are all made physical because we assume they are features of the objects described in the theory, not the laws of how these objects behave. Assembly theory reintroduces an expanding, moving sense of time to physics by showing how its passing is the stuff complex objects are made of: the size of the future increases with complexity.

This new conception of time might solve many open problems in fundamental physics. The first and foremost is the debate between determinism and contingency. Einstein famously said that God ‘does not play dice’, and many physicists are still forced to conclude that determinism holds, and our future is closed. But the idea that the initial conditions of the Universe, or any process, determine the future has always been a problem. In assembly theory, the future is determined, but not until it happens. If what exists now determines the future, and what exists now is larger and more information-rich than it was in the past, then the possible futures also grow larger as objects become more complex. This is because there is more history existing in the present from which to assemble novel future states. Treating time as a material property of the objects it creates allows novelty to be generated in the future.

Novelty is critical for our understanding of life as a physical phenomenon. Our biosphere is an object that is at least 3.5 billion years old by the measure of clock time (Assembly is a different measure of time). But how did life get started? What allowed living systems to develop intelligence and consciousness? Traditional physics suggests that life ‘emerged’. The concept of emergence captures how new structures seem to appear at higher levels of spatial organisation that could not be predicted from lower levels. Examples include the wetness of water, which is not predicted from individual water molecules, or the way that living cells are made from individual non-living atoms. However, the objects traditional physics considers emergent become fundamental in assembly theory. From this perspective, an object’s ‘emergent-ness’ – how far it departs from a physicist’s expectations of elementary building blocks – depends on how deep it lies in time. This points us toward the origins of life, but we can also travel in the other direction.

If we are on the right track, assembly theory suggests time is fundamental. It suggests change is not measured by clocks but is encoded in chains of events that produce complex molecules with different depths in time. Assembled from local memory in the vastness of combinatorial space, these objects record the past, act in the present, and determine the future. This means the Universe is expanding in time, not space – or perhaps space emerges from time, as many current proposals from quantum gravity suggest. Though the Universe may be entirely deterministic, its expansion in time implies that the future cannot be fully predicted, even in principle. The future of the Universe is more open-ended than we could have predicted.

Time may be an ever-moving fabric through which we experience things coming together and apart. But the fabric does more than move – it expands. When time is an object, the future is the size of the Universe.

Tuesday, March 28, 2023

Are coincidences real?



They are not real at all.  They are instead direct evidence of intervention from the other side, which certainly has skin in the game and its outcome.

Of course, we also have scientists claiming that life chemistry happens by coincidence.  The slightest investigation makes all that absurd, just as it does these claimed stories.

Over and over again, the right person arrives to intervene in an emergency.  Just how many right people were available?  We even have a culture accepting the idea of passing it on.  Wow.



Are coincidences real?

I am an unequivocal rationalist and yet I still want to see something strange and wonderful in life’s weird coincidences


Photo by Ernst Haas/Getty

Paul Broks is an English neuropsychologist-turned-freelance writer. His work has appeared in Prospect, The Times and The Guardian, among others. He is the author of Into the Silent Land (2002) and The Darker the Night, the Brighter the Stars: a Neuropsychologist’s Odyssey Through Consciousness (2018). He lives in Bath, UK.





https://aeon.co/essays/how-should-we-understand-the-weird-experience-of-coincidence?


In the summer of 2021, I experienced a cluster of coincidences, some of which had a distinctly supernatural feel. Here’s how it started. I keep a journal and record dreams if they are especially vivid or strange. It doesn’t happen often, but I logged one in which my mother’s oldest friend, a woman called Rose, made an appearance to tell me that she (Rose) had just died. She’d had another stroke, she said, and that was it. Come the morning, it occurred to me that I didn’t know whether Rose was still alive. I guessed not. She’d had a major stroke about 10 years ago and had gone on to suffer a series of minor strokes, descending into a sorry state of physical incapacity and dementia.

I mentioned the dream to my partner over breakfast, but she wasn’t much interested. We were staying in the Midlands at the time in the house where I’d spent my later childhood years. The place had been unoccupied for months. My father, Mal, was long gone, and my mother, Doreen, was in a care home drifting inexorably through the advanced stages of Alzheimer’s. We’d just sold the property we’d been living in, and there would be a few weeks’ delay in getting access to our future home, so the old house was a convenient place to stay in the meantime.

I gave no further thought to my strange dream until, a fortnight later, we returned from the supermarket to find that a note had been pushed through the letterbox. It was addressed to my mother, and was from Rose’s daughter, Maggie. Her mother, she wrote, had died ‘two weeks ago’. The funeral would be the following week. I handed the note to my partner and reminded her of my dream. ‘Weird,’ she said, and carried on unloading the groceries. Yes, weird. I can’t recall the last time Rose had entered my thoughts, and there she was, turning up in a dream with news of her own death.

So, what am I to make of this? Here’s one interpretation. Rose died, and her disembodied spirit felt the need to tell me and found its way into my dream. Perhaps she had first tried to contact Doreen, but for one reason or another – the impenetrable wreckage of a damaged brain? – couldn’t get through. Here’s another interpretation. The whole chain of events occurred by sheer coincidence, a chance concatenation of happenings with no deeper significance. There’s nothing at all supernatural about it.

If you ask me which of those two interpretations I prefer, it would, unequivocally, be the second. But here’s the thing. There is a part of me that, despite myself, wants to entertain the possibility that the world really does have supernatural dimensions. It’s the same part of me that gets spooked by ghost stories, and that would feel uneasy about spending a night alone in a morgue. I don’t believe the Universe contains supernatural forces, but I feel it might. This is because the human mind has fundamentally irrational elements. I’d go as far as to say that magical thinking forms the basis of selfhood. Our experience of ourselves and other people is essentially an act of imagination that can’t be sustained through wholly rational modes of thought. We see the light of consciousness in another’s eyes and, irresistibly, imagine some ethereal self behind those eyes, humming with feelings and thoughts, when in fact there’s nothing but the dark and silent substance of the brain. We imagine something similar behind our own eyes. It’s a necessary illusion, rooted deep in our evolutionary history. Coincidence, or rather the experience of coincidence, triggers magical thoughts that are equally deep-rooted.

The term ‘coincidence’ covers a wide range of phenomena, from the cosmic (in a total solar eclipse, the disk of the Moon and the disk of the Sun by sheer chance appear to have precisely the same diameter) to the personal and parochial (my granddaughter has the same birthday as my late wife). On the human, experiential, scale, a broad distinction can be drawn between serendipity – timely, but unplanned, discoveries or development of events – and what the 20th-century Lamarckian biologist and coincidence collector Paul Kammerer called seriality, which he defined as ‘a lawful recurrence of the same or similar things or events … in time and space’.

The biography of the actor Anthony Hopkins contains a striking example of a serendipitous coincidence. On first hearing he’d been cast to play a part in the film The Girl from Petrovka (1974), Hopkins went in search of a copy of the book on which it was based, a novel by George Feifer. He combed the bookshops of London in vain and, somewhat dejected, gave up and headed home. Then, to his amazement, he spotted a copy of The Girl from Petrovka lying on a bench at Leicester Square station. He recounted the story to Feifer when they met on location, and it transpired that the book Hopkins had stumbled upon was the very one that the author had mislaid in another part of London – an advance copy full of red-ink amendments and marginal notes he’d made in preparation for a US edition.

Hollywood provides another choice example of seriality. L Frank Baum was a prolific children’s author, best-known for The Wonderful Wizard of Oz (1900). He didn’t live to see his novel turned into the iconic musical fantasy film, yet he reputedly had a remarkable coincidental connection with the movie. The actor Frank Morgan played five roles in The Wizard of Oz (1939), including the eponymous Wizard. He makes his first appearance in the sepia-toned opening sequences as Professor Marvel, a travelling fortune-teller. Movie lore says that, when it came to screen testing, the coat he was wearing was considered too pristine for an itinerant magician. So the wardrobe department was sent on a thrift-shop mission to find something more suitable, and returned with a whole closetful of possibilities. The one they settled on, a Prince Albert frock coat with worn velvet collars, was a perfect fit for the actor. Only later was it apparently discovered that, sewn into the jacket, was a label bearing the inscription: ‘Made by Hermann Bros, expressly for L Frank Baum’. Baum had died some 20 years before the film was released but the coat’s provenance was allegedly authenticated by his widow, Maud, who accepted it as a gift when the film was completed.

While some coincidences seem playful, others feel inherently macabre

Some coincidences seem to contain an element of humour as if engineered by a capricious spirit purely for its own amusement. Not long after first moving to Bath in 2016, I made a dash across the busy London Road, misjudged the height of the kerb on the other side, tripped, fell awkwardly, and fractured my right arm. Over the next five years, I lived variously in Bath, rural Worcestershire and London. Soon after moving back to Bath on a more permanent basis, I noticed a stylish mahogany chair in the window of a charity shop on London Road, went straight in and bought it. I thought I’d have no trouble lugging the chair back to my flat half a mile away, but it turned out to be heavier than I expected and awkward to carry. As I was crossing the road where I’d had my fall five years previously, the chair slipped my grip, crashed to the ground and splintered its right arm. Hear the chuckles of the coincidence imp.


The black dog at John Bonham’s headstone. Photograph by David Sillitoe and courtesy of Guardian News & Media Ltd

While some coincidences seem playful, others feel inherently macabre. In 2007, the Guardian journalist John Harris set out on ‘an intermittent rock-grave odyssey’ visiting the last resting places of revered UK rock musicians. About halfway through, he went to the tiny village of Rushock in Worcestershire to gather thoughts at the headstone of the Led Zeppelin drummer John Bonham, who died at the age of 32 on 25 September 1980, after consuming a prodigious quantity of alcohol. A Guardian photographer had visited the grave a few days earlier to get a picture to accompany the piece. It was, writes Harris, ‘an icy morning that gave the churchyard the look of a scene from The Omen’ and, fitting with one of the key motifs of that film, the photographer was ‘spooked by the appearance of an unaccompanied black dog, which urinates on the gravestone and then disappears’. ‘Black Dog’ (1971) happens to be the title of one of the most iconic songs in the Led Zeppelin catalogue.

If we picture a continuum of coincidences from the trivial to the extraordinary, both the Hopkins and the Baum examples would surely be located towards the strange and unusual end. My ‘broken arms’ coincidence tends towards the trivial. Other, still more mundane examples are commonplace. You get chatting to a stranger on a train and discover you have an acquaintance in common. You’re thinking of someone and, in the next breath they call you. You read an unusual word in a magazine and, simultaneously, someone on the radio utters the same word. Such occurrences might elicit a wry smile, but the weirder ones can induce a strong sense of the uncanny. The world momentarily seems full of strange connections and forces.

It’s a state of mind resembling apophenia – a tendency to perceive meaningful, and usually sinister, links between unrelated events – which is a common prelude to the emergence of psychotic delusions. Individual differences may play a part in the experience of such coincidences. Schizotypy is a dimension of personality characterised by experiences that in some ways echo, in muted form, the symptoms of psychosis, including magical ideation and paranormal belief. There is evidence to suggest that, within the general population, people who score high on measures of schizotypy may also be more prone to experiencing meaningful coincidences and magical thinking. Perhaps schizotypal individuals are also more powerfully affected by coincidence. Someone scoring high on measures of schizotypy would perhaps be more spooked by a death dream than I (a low scorer) was.

I have set naturalism and the supernatural in binary opposition but perhaps there is a third way. Let’s call it the supranatural stance. This was the position adopted, in different ways, by Kammerer and by the Swiss psychologist Carl Jung. Arthur Koestler’s The Roots of Coincidence (1972) introduced Kammerer’s work to the English-speaking world and was influential in reviving interest in Jung’s ideas. Kammerer began recording coincidences in 1900, most of them mind-numbingly trivial. For example, he notes that, on 4 November 1910, his brother-in-law attended a concert, and number 9 was both his seat number and the number of his cloakroom ticket. The following day he went to another concert, and his seat and cloakroom ticket numbers were both 21.

Kammerer’s book Das Gesetz der Serie (1919), or ‘The Law of Seriality’, contains 100 samples of coincidences that he classifies in terms of typology, morphology, power and so on, with, as Koestler puts it, ‘the meticulousness of a zoologist devoted to taxonomy’. The second half of the book is devoted to theory. Kammerer’s big idea is that, alongside causality, there is an acausal principle at work in the Universe, somewhat analogous to gravity but, whereas gravity acts universally on mass, this universal acausal force, as Koestler puts it, ‘acts selectively on form and function to bring similar configurations together in space and time; it correlates by affinity.’ Kammerer sums things up as follows: ‘We thus arrive at the image of a world-mosaic or cosmic kaleidoscope, which, in spite of constant shufflings and rearrangements, also takes care of bringing like and like together.’ This seems far-fetched but Albert Einstein, for one, took Kammerer seriously, describing his book as ‘original and by no means absurd’.

The theory of synchronicity, or meaningful coincidence, proposed by Jung follows a similar line. It took shape over several decades through a confluence of ideas streaming in from philosophy, physics, the occult and, not least, from the wellsprings of magical thinking that bubbled in the depths of Jung’s own prodigiously creative and, at times, near-psychotic mind. Certain coincidences, he suggests, are not merely a random coming-together of unrelated events, nor are the events causally linked. They are connected acausally by virtue of their meaning. Synchronicity was the ‘acausal connecting principle’.

The coincidence of the dream and the insect’s intrusion was the key to therapeutic progress

According to the physicist and historian of science Arthur I Miller’s book Deciphering the Cosmic Number: The Strange Friendship of Wolfgang Pauli and Carl Jung (2009), Jung considered this to be one of the best ideas he ever had, and cites Einstein as an influence. In the early years of the 20th century, Einstein was on several occasions a dinner guest at the Jung family home in Zurich, making a strong impression. Jung traces a direct link between those dinners with Einstein and his dialogue, some 30 years later, with the Nobel prize-winning physicist Wolfgang Pauli, a dialogue that brought the concept of synchronicity to fruition.

Jung’s collaboration with Pauli was an unlikely coalition: Jung, the quasi-mystic psychologist, a psychonaut whose deep excursions into his own unconscious mind he deemed the most significant experiences of his life; and Pauli, the hardcore theoretical physicist who was influential in reshaping our understanding of the physical world at its subatomic foundations. Following his mother’s suicide and a brief, unhappy marriage to a cabaret dancer who left him for a chemist (‘Had she taken up with a bullfighter, I would have understood, but such an ordinary chemist…’), Pauli suffered a psychological crisis. Even as he was producing his most important work in physics (formulating the ‘Pauli exclusion principle’; postulating the existence of the neutrino), he was succumbing to bouts of heavy drinking and getting into fights.

Pauli turned for help to Jung who happened to live nearby. His therapy involved the recording of dreams, a task at which he proved himself to be remarkably adept, being able to remember complex dreams in exquisite detail. For his part, Jung saw an opportunity. Not only was Pauli an extraordinary chronicler of dreams, but he was also a willing guide to the arcane realm of subatomic physics. Meanwhile, Pauli saw synchronicity as a way of approaching some fundamental questions in quantum mechanics, not least the mystery of quantum entanglement, by which sub-atomic particles may correlate instantaneously, and acausally, at any distance. From their discussions of synchronicity emerged the Pauli-Jung conjecture, a form of double-aspect theory of mind and matter, which viewed the mental and the physical as different aspects of a deeper underlying reality.

Jung was the first to bring coincidences into the frame of psychological enquiry, and made use of them in his analytic practice. He offers an anecdote about a golden beetle as an illustration of synchronicity at work in the clinic. A young woman is recounting a dream in which she was given a golden scarab, when Jung hears a gentle tapping at the window behind him and turns to see a flying insect knocking against the windowpane. He opens the window and catches the creature as it flies into the room. It turns out to be a rose chafer beetle, ‘the nearest analogy to a golden scarab that one finds in our latitudes’. The incident proved to be a transformative moment in the woman’s therapy. She had, says Jung, been ‘an extraordinarily difficult case’ on account of her hyper-rationality and, evidently, ‘something quite irrational was needed’ to break her defences. The coincidence of the dream and the insect’s intrusion was the key to therapeutic progress. Jung adds that the scarab is ‘a classic example of a rebirth symbol’ with roots in Egyptian mythology.

Whereas Kammerer hypothesised impersonal, acausal factors intersecting with the causal nexus of the Universe, Jung’s acausal connecting principle was enmeshed with the psyche, specifically with the archetypes of the collective unconscious. In Jung’s wider theorising, these archetypes are primordial structures of the mind common to all human beings. Resurrecting an ancient term, Jung envisioned an unus mundus, a unitary or one world, in which the mental and physical are integrated, and where the archetypes are instrumental in shaping both mind and matter. It’s a bold vision but where, we are bound to ask, is the evidence for any of this? Beyond anecdote, there is none. Pauli saw archetypal influence in the scientific theories of Johannes Kepler, the father of modern astronomy and, as the evolutionary psychiatrist Anthony Stevens argues in Private Myths (1995), a case can be made for grounding archetypes biologically by analogy with the innate releasing mechanisms identified by ethologists. If so, there is more than a grain of plausibility in the suggestion that archetypal structures have an influence in shaping thought and behaviour. But the entire Universe? Pauli aside, the idea of synchronicity received little support from the wider scientific community.

Contemporary cognitive science offers a more secure, if less colourful, conceptual framework for making sense of the experience of coincidence. We are predisposed to encounter coincidences because their detection, it might be said, reflects the basic modus operandi of our cognitive and perceptual systems. The brain seeks patterns in the flow of sensory data it receives from the world. It infuses the patterns it detects with meaning and sometimes agency (often misplaced) and, as a part of this process, it forms beliefs and expectations that serve to shape future perceptions and behaviour. Coincidence, in the simple sense of co-occurrence, informs pattern-detection, especially in terms of identifying causal relationships, and so enhances predictability. The ‘world’ does not simply present itself through the windowpanes of the eyes and channels of the other senses. The brain’s perceptual systems are proactive. They construct a model of the world by continually attempting to match incoming, ‘bottom-up’ sensory data with ‘top-down’ anticipations and predictions. Raw sensory data serve to refine the brain’s best guesses as to what’s happening, rather than building the world afresh with each passing moment. The brain, simply put, is constantly on the lookout for coincidence.

You drive a different car for the first time, and suddenly the same make and model seems to be all over the place

From a wide-ranging survey of psychological and neurocognitive research, Michiel van Elk, Karl Friston and Harold Bekkering conclude that the overgeneralisation of such predictive models plays a crucial part in the experience of coincidence. Primed by deeply ingrained cognitive biases (self-attributional, confirmational, attentional, and so forth) and ill-equipped to make accurate estimates of chance and probability, we are innately inclined to see (and feel) patterns and connections where they simply don’t exist. ‘Innately inclined’ because, in evolutionary terms, the tendency to over-detect coincidences is adaptive. Failure to detect contingencies between related events – for example, rustling in the undergrowth/proximity of a predator – is generally more costly than an erroneous inference of a relationship between unrelated events. Another driver of coincidence is what the linguist Arnold Zwicky calls the ‘frequency illusion’, a term originating in a blog post but that has since found its way into the Oxford English Dictionary:

frequency illusion n. a quirk of perception whereby a phenomenon to which one is newly alert suddenly seems ubiquitous.

You might encounter a word for the first time, and then read or hear it later the same day. Or, you drive a different car for the first time, and suddenly the same make and model seems to be all over the place. This is due to a combination of two well-understood psychological processes: selective attention (homing in on salient objects and events); and confirmation bias (seeking out objects and events that support our beliefs and perceptions, while ignoring evidence to the contrary).

Van Elk and colleagues were not the first to signal the unreliability of intuitive judgments of probability as a factor in the perception of coincidence. Various authors before them – eg, Stuart Sutherland in his book Irrationality (1992) – have suggested that paranormal beliefs, including the belief that some coincidences are supernatural, arise because of failures of intuitive probability. The so-called birthday problem or paradox, a staple of introductory classes in probability theory, reliably exposes the flaws of our intuitions. It asks what is the likelihood that two people will share a birthday in randomly selected groups. Most people are surprised to learn that a gathering of only 23 people is required for the chances of two of them sharing a birthday to exceed 50 per cent. I’d been meaning for some time to try a simple empirical exercise involving ‘deathdays’ to mirror the birthday problem (an idea inspired by a conversation with the psychologist Nicholas Humphrey). Since I found myself once more staying briefly at my parents’ old house, a short drive from Rushock, I decided I would pay a visit to St Michael and All Angels’ churchyard and use the Led Zeppelin drummer’s grave as the starting point for my research, for no reason other than the vague pull of that black dog story.
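The birthday figure quoted above is easy to verify directly. A short calculation (assuming 365 equally likely birthdays and ignoring leap years) multiplies out the probability that all n birthdays are distinct:

```python
def shared_birthday_prob(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday,
    assuming birthdays are uniform over `days` and independent."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct
```

With n = 23 this gives about 0.507, just over the 50 per cent threshold the text mentions, while a group of 22 falls just short at about 0.476.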

Bonham’s headstone is easy to locate on the north side of the church, festooned as it is with drumsticks and cymbals left as offerings by the many pilgrims who make their way to the shrine from around the world. The grave lies in the shade of a spreading, blue-needled conifer and, to the right, there’s a row of three other graves. So just four graves in total (there is also a small, sandcastle-like monument at the base of the tree trunk, which I discounted for lack of name and dates). The plan was to conduct a self-terminating search. Starting with Bonham’s headstone, and with my notebook in hand, I would inspect the other graves in the row and then the rows behind and in front, working my way methodically around the graveyard, until I found any two matching dates of death, but my mission ended almost as soon as it had begun. I needed to go no further than the four graves (with five occupants) in Bonham’s row. The occupants of the two on the right shared 29 September as their date of death (21 years apart). I wish I could report that the mysterious black dog made an appearance, but it didn’t.

Turning to the probability of dream coincidences, suppose for the sake of argument that the probability of a dream coincidentally matching real-world events is 1-in-10,000, and that only one dream per night is remembered. The probability of a ‘matching’ dream on any given night is 0.0001 (ie, 1-in-10,000), meaning that the probability of a ‘non-matching’ dream is 0.9999. The probability of two consecutive nights with non-matching dreams is 0.9999 x 0.9999. The probability of having non-matching dreams every night for a whole year is 0.9999 multiplied by itself 365 times, which is 0.9642. Rounding up, this means that there is a 3.6 per cent chance of any given person having a dream that matches or ‘predicts’ real-world events over the course of a year. Over a period of 20 years, the odds of having a matching/precognitive dream would be greater than even.
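The arithmetic in this paragraph can be reproduced in a few lines (the 1-in-10,000 nightly figure is, as the text says, assumed purely for the sake of argument):

```python
p_match = 1 / 10_000        # assumed nightly chance of a "matching" dream
p_miss_night = 1 - p_match  # 0.9999, chance of no match on a given night

p_miss_year = p_miss_night ** 365               # about 0.9642, as in the text
p_match_year = 1 - p_miss_year                  # about 3.6 per cent per year
p_match_20yr = 1 - p_miss_night ** (365 * 20)   # better than even over 20 years
```

Note that the per-year figure comes from compounding independent nights, not from multiplying 0.0001 by 365, though for small probabilities the two are close.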

Rose, the woman in the death dream I experienced, was 90 years old, and the chances of a 90-year-old woman in the UK dying before her 91st birthday are around 1-in-6, which is to say, not unlikely. Given her medical history, the likelihood that Rose would die before her 91st birthday was probably much greater than that. But why should I dream about her in the first place? It’s true, I hadn’t been consciously thinking about Rose but, staying in my childhood home, there would have been many implicit reminders. She used to live close by and came to our house often. Also, visiting my ailing mother more often than usual at her care home would have me thinking about death at both conscious and unconscious levels, and perhaps (unconsciously) about her friendship with Rose.

Attempts at understanding coincidence thus range from extravagant conjectures conceiving of acausal forces influencing the fundamental workings of the Universe, to sober cognitive studies deconstructing the basic mechanisms of the mind. But there is something else to consider. Remarkable coincidences happen because, well, they happen, and they happen without inherent meaning and independently of the workings of the pattern-hungry brain. As the statistician David Hand puts it, ‘extremely improbable events are commonplace’. He refers to this as the improbability principle, one with different statistical strands, including the law of truly large numbers, which states that, ‘with a large enough number of opportunities, any outrageous thing is likely to happen.’ Every week, there are many lottery jackpot winners around the globe, each with odds of winning at many millions-to-one against. And, in defiance of truly phenomenal odds, several people have won national and state lottery jackpots on more than one occasion.
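The lottery illustration follows from the same kind of arithmetic. With hypothetical round numbers (the odds and ticket volume below are assumptions for illustration, not figures from any particular lottery), a jackpot winner in any given week is close to certain:

```python
p_jackpot = 1 / 14_000_000     # assumed single-ticket jackpot odds
tickets_per_week = 30_000_000  # assumed tickets sold in one week

# Chance that at least one ticket hits the jackpot this week.
p_some_winner = 1 - (1 - p_jackpot) ** tickets_per_week

# Average number of jackpot winners per week.
expected_winners = p_jackpot * tickets_per_week
```

Under these assumptions, some ticket wins in a given week with probability of roughly 0.88, and about two winners are expected on average, even though each individual ticket faces 14-million-to-1 odds.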

Set squat on the back of my armchair was a golden beetle, like the one in Jung’s consulting room

I am a naturalist, but coincidences give me a glimpse of what the supernaturalist sees, and my worldview is briefly challenged. Soon, though, for good or ill, I am back on my usual track. One final coincidence story from my personal archive illustrates this point. It concerns a meta-coincidence, that is, a coincidence about a coincidence. It was a warm afternoon in mid-June, and I was feeling sorry for myself. My partner had walked out on me just the week before, and I thought a good way to deal with self-pity would be to launch into a new project. I would do some research into the psychology of coincidence. So, there I was, settled in an armchair surrounded by books and articles on the subject, including Koestler’s The Roots of Coincidence. Among other things, I’d been reading his account of Jung’s golden scarab story.


Paul Broks’s Rose Chafer beetle. Courtesy the author

In need of coffee, I set Koestler aside and went to the kitchen, returning to find, set squat on the back of my armchair, a golden beetle, a rose chafer like the one that had made its way through the window of Jung’s consulting room. It must have flown in through the wide-open balcony door. I quickly took a picture in case the insect took flight again, and then nudged it onto my palm to return it to the wild, but it simply rolled onto its back and lay motionless. Dead.

I sent the picture to my ex, and asked how she was doing. She didn’t reply, but later that evening called with unsettling news. Zoe, an acquaintance of ours, had that afternoon hanged herself from a tree in her ex-partner’s garden. My brain by now was in magical thinking mode, and I said I couldn’t help but link Zoe’s death to the appearance, and death, of the golden beetle. I didn’t believe there was a link, of course, but I felt there might be. Believing and feeling. There was something else at the back of my mind. In Greek mythology, all that king Midas touched turned to gold. His daughter’s name was Zoe, and she too was turned to gold.

Ah, but rose chafers are quite common in the south of England; they are active in warm weather; the balcony opens out on a water meadow (a typical rose chafer habitat); et cetera. And it has since been suggested to me that the beetle was quite likely ‘playing dead’ rather than truly dead. Perhaps, after I’d thrown it back out onto the meadow, there was a ‘rebirth’ of the kind these creatures are said to symbolise.

Weird, though.

Saturday, March 11, 2023

What Plants Are Saying About Us



This is really different.  What if the most important aspect of the human brain happens to be its greatly extended surface area?  It turns out that we are talking about 1,500 to 2,000 square centimeters, or almost two large pages of newspaper.

Now imagine a field of dandelions with their massive heads of petals.  Certainly enough to provide potential cognition for the God of the dandelions, which is something encountered along with the Green Man.  All of a sudden, area and affinity matter for cognition.

All of a sudden, plant cognition is not so unlikely.  Can we share our intents?


What Plants Are Saying About Us

Your brain is not the root of cognition.

BY AMANDA GEFTER

March 7, 2023


I was never into house plants until I bought one on a whim—a prayer plant, it was called, a lush, leafy thing with painterly green spots and ribs of bright red veins. The night I brought it home I heard a rustling in my room. Had something scurried? A mouse? Three jumpy nights passed before I realized what was happening: The plant was moving. During the day, its leaves would splay flat, sunbathing, but at night they’d clamber over one another to stand at attention, their stems steadily rising as the leaves turned vertical, like hands in prayer.

“Who knew plants do stuff?” I marveled. Suddenly plants seemed more interesting. When the pandemic hit, I brought more of them home, just to add some life to the place, and then there were more, and more still, until the ratio of plants to household surfaces bordered on deranged. Bushwhacking through my apartment, I worried whether the plants were getting enough water, or too much water, or the right kind of light—or, in the case of a giant carnivorous pitcher plant hanging from the ceiling, whether I was leaving enough fish food in its traps. But what never occurred to me, not even once, was to wonder what the plants were thinking.


To understand how human minds work, he started with plants.

I was, according to Paco Calvo, guilty of “plant blindness.” Calvo, who runs the Minimal Intelligence Lab at the University of Murcia in Spain where he studies plant behavior, says that to be plant blind is to fail to see plants for what they really are: cognitive organisms endowed with memories, perceptions, and feelings, capable of learning from the past and anticipating the future, able to sense and experience the world.

It’s easy to dismiss such claims because they fly in the face of our leading theory of cognitive science. That theory goes by names like “cognitivism,” “computationalism,” or “representational theory of mind.” It says, in short, the mind is in the head. Cognition boils down to the firings of neurons in our brains.

And plants don’t have brains.

“When I open up a plant, where could intelligence reside?” Calvo says. “That’s framing the problem from the wrong perspective. Maybe that’s not how our intelligence works, either. Maybe it’s not in our heads. If the stuff that plants do deserves the label ‘cognitive,’ then so be it. Let’s rethink our whole theoretical framework.”

Calvo wasn’t into plants, either. Not at first. As a philosopher, he was busy trying to understand human minds. When he began studying cognitive science in the 1990s, the dominant view was the brain was a kind of computer. Just as computers represent data in transistors, which can be in “on” or “off” states corresponding to 0s and 1s, brains were thought to represent data in the states of their neurons, which could be “on” or “off” depending on whether they fire. Computers manipulate their representations according to logical rules, or algorithms, and brains, by analogy, were believed to do the same.1

But Calvo wasn’t convinced. Computers are good at logic, at carrying out long, precise calculations—not exactly humanity’s shining skill. Humans are good at something else: noticing patterns, intuiting, functioning in the face of ambiguity, error, and noise. While a computer’s reasoning is only as good as the data you feed it, a human can intuit a lot from just a few vague hints—a skill that surely helped on the savannah when we had to recognize a tiger hiding in the bushes from just a few broken stripes. “My hunch was that there was something really wrong, something deeply distorted about the very idea that cognition had to do with manipulating symbols or following rules,” Calvo says.

THE PLANT WHISPERER: Paco Calvo once studied artificial intelligence to determine whether it could help unlock secrets of cognition. He decided it couldn’t. Plants were the key. Courtesy of Universidad de Murcia.

Calvo went to the University of California San Diego to work on artificial neural networks. Rather than dealing in symbols and algorithms, neural networks represent data in large webs of associations, where one wrong digit doesn’t matter so long as more of them are right, and from a few sketchy clues—stripe, rustle, orange, eye—the network can bootstrap a half-decent guess—tiger!

Artificial neural networks have led to breakthroughs in machine learning and big data, but they still seemed, to Calvo, a far cry from living intelligence. Programmers train the neural networks, telling them when they’re right and when they’re wrong, whereas living systems figure things out for themselves, and with small amounts of data to boot. A computer has to see, say, a million pictures of cats before it can recognize one, and even then all it takes to trip up the algorithm is a shadow. Meanwhile, you show a 2-year-old human one cat, cast all the shadows you want, and the toddler will recognize that kitty.

“Artificial systems give us nice metaphors,” Calvo says. “But what we can model with artificial systems is not genuine cognition. Biological systems are doing something entirely different.”

Calvo was determined to find out what that was, to get at the essence of how real biological systems perceive, think, imagine, and learn. Humans share a long evolutionary history with other forms of life, other forms of mind, so why not start with the most basic living systems and work from the bottom up? “If you study systems that look way different and yet you find similarities,” Calvo says, “maybe you can put your finger on what is truly at stake.”

So Calvo traded neural networks for a green thumb. To understand how human minds work, he was going to start with plants.

It turns out it’s true: Plants do stuff.

For one thing, they can sense their surroundings. Plants have photoreceptors that respond to different wavelengths of light, allowing them to differentiate not only brightness but color. Tiny grains of starch in organelles called amyloplasts shift around in response to gravity, so the plants know which way is up. Chemical receptors detect odor molecules; mechanoreceptors respond to touch; the stress and strain of specific cells track the plant’s own ever-changing shape, while the deformation of others monitors outside forces, like wind. Plants can sense humidity, nutrients, competition, predators, microorganisms, magnetic fields, salt, and temperature, and can track how all of those things are changing over time. They watch for meaningful trends—Is the soil depleting? Is the salt content rising?—then alter their growth and behavior through gene expression to compensate.


Plants can distinguish self from non-self, stranger from kin.

Plants’ abilities to sense and respond to their surroundings lead to what seems like intelligent behavior. Their roots can avoid obstacles. They can distinguish self from non-self, stranger from kin. If a plant finds itself in a crowd, it will invest resources in vertical growth to remain in light; if nutrients are on the decline, it will opt for root expansion instead. Leaves munched on by insects send electrochemical signals to warn the rest of the foliage,2 and they’re quicker to react to threats if they’ve encountered them in the past. Plants chat among themselves and with other species. They release volatile organic compounds with a lexicon, Calvo says, of more than 1,700 “words”—allowing them to shout things that a human might translate as “caterpillar incoming” or “*$@#, lawn mower!”

Their behavior isn’t merely reactive—plants anticipate, too. They can turn their leaves in the direction of the sun before it rises, and accurately trace its location in the sky even when they’re kept in the dark. They can predict, based on prior experience, when pollinators are most likely to show up and time their pollen production accordingly. A plant’s form is a record of its history. Its cells—shaped by experience—remember.

Chat? Anticipate? Remember? It’s tempting to tame all those words with scare quotes, as if they can’t mean for plants what they mean for us. For plants, we say, it’s biochemistry, just physiology and brute mechanics—as if that’s not true for us, too.

Besides, Calvo says, plant behavior can’t be reduced to mere reflexes. Plants don’t react to stimuli in predetermined ways—they’d never have made it this far, evolutionarily speaking, if they did. Having to deal with a changing environment while being rooted to one spot means having to set priorities, strike compromises, change course on the fly.

Consider stomata: tiny pores on the undersides of leaves. When the pores are open, carbon dioxide floods in—that’s good, that’s breathing—but water vapor can escape. So how open should the stomata be at any given time? It depends on the availability of water in the soil—if there’s plenty more for the taking, it’s worth letting the carbon dioxide in. If the dirt’s dry, the leaves have to retain water. For the leaves to make that decision, the roots have to tell them about the availability of water. The leaves communicate their own needs to the roots in turn, encouraging them, for example, to form symbiotic relationships with specific microorganisms in the soil.3
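The stomatal trade-off can be caricatured in a few lines of code. This is a deliberately crude sketch, not plant physiology: the function name, the linear rule, and the 0-to-1 soil-water signal are all inventions for illustration.

```python
def stomatal_aperture(soil_water: float, co2_demand: float) -> float:
    """Toy trade-off rule (hypothetical): open the stomata in proportion
    to CO2 demand, scaled down as the soil-water signal reported by the
    roots (0 = parched, 1 = saturated) becomes scarce."""
    if not 0.0 <= soil_water <= 1.0:
        raise ValueError("soil_water must be in [0, 1]")
    # Dry soil: water retention dominates. Wet soil: gas exchange wins.
    return co2_demand * soil_water

# Dry soil forces near-closed pores even when CO2 demand is high.
print(stomatal_aperture(soil_water=0.1, co2_demand=1.0))  # 0.1
```

The point of the toy is only that the aperture is a compromise between two signals arriving from different parts of the plant, not a fixed reflex to either one.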

If a plant could respond to sensory information on a one-to-one basis—when the light does x, the plant does y—it would be fair to think of plants as mere automatons, operating without thought, without a point of view. But in real life, that’s never the case. Like all organisms, plants are immersed in dynamic, precarious environments, forced to confront problems with no clear solutions, betting their lives as they go. “A biological system is never exposed to just a single source of stimulation,” Calvo says. “It always has to make a compromise among different things. It needs some kind of valence, a higher-level perspective. And that’s the entry to sentience.”

Sentience?

Are plants clever? Maybe. Adaptive? Sure. But sentient? Aware? Conscious? Listen closely and you can hear the scoffing.

To feel alive, to have a subjective experience of your surroundings, to be an organism whose lights are on and someone’s home—that’s reserved for creatures with brains, or so says traditional cognitive science. Only brains, the theory goes, can encode mental representations, models of the world that brains experience as the world. As Jon Mallatt, a biologist at the University of Washington, and colleagues put it in their 2021 critique of Calvo’s work, “Debunking a Myth: Plant Consciousness,” to be conscious requires “experiencing a mental image or representation of the sensed world,” which brainless plants have no means of doing.4

But for Calvo, that’s exactly the point. If the representational theory of the mind says that plants can’t perform intelligent, cognitive behaviors, and the evidence shows that plants do perform intelligent, cognitive behaviors, maybe it’s time to rethink the theory. “We have plants doing amazing things and they have no neurons,” he says. “So maybe we should question the very premise that neurons are needed for cognition at all.”

The idea that the mind is in the brain comes to us from Descartes. The 17th-century philosopher invented our modern notion of consciousness and confined it to the interior of the skull. He saw the mind and brain as separate substances, but with no direct access to the world. The mind was reliant on the brain to encode and represent the world or conjure up its best guess as to what the world might be, based on ambiguous clues trickling in through unreliable senses. What Descartes called “cerebral impressions” are today’s “mental representations.” As cognitive scientist Ezequiel Di Paolo writes, “Western philosophical tradition since Descartes has been haunted by a pervasive mediational epistemology: the widespread assumption that one cannot have knowledge of what is outside oneself except through the ideas one has inside oneself.”5

Modern cognitive science traded Descartes’ mind-body dualism for brain-body dualism: The body is necessary for breathing, eating, and staying alive, but it’s the brain alone, in its dark, silent sanctuary, that perceives, feels, and thinks. The idea that consciousness is in the brain is so ingrained in our science, in our everyday speech, even in popular culture that it seems almost beyond question. “We just don’t even notice that we are adopting a view that is still a hypothesis,” says Louise Barrett, a biologist at the University of Lethbridge in Canada who studies cognition in humans and other primates.


We should question whether neurons are needed for cognition at all.

Barrett, like Calvo, is one of an increasing number of scientists and philosophers questioning that hypothesis because it doesn’t comport with a biological understanding of living organisms. “We need to get away from thinking of ourselves as machines,” Barrett says. “That metaphor is getting in the way of understanding living, wild cognition.”

Instead, Barrett and Calvo draw from a set of ideas referred to as “4E cognitive science,” an umbrella term for a bunch of theories that all happen to start with the letter “E.” Embodied, embedded, extended, and enactive cognition—what they have in common (besides “E”s) is a rejection of cognition as a purely brainbound affair. Calvo is also inspired by a fifth “E”: ecological psychology, a kindred spirit to the canonical four. It’s a theory of how we perceive without using internal representations.

In the standard story of how vision works, it’s the brain that does the heavy lifting of creating a visual scene. It has to, the story goes, because the eyes contribute so little information. In a given visual fixation, the pattern of light in focus on the retina amounts to a two-dimensional area the size of a thumbnail at arm’s length. And yet we have the impression of being immersed in a rich three-dimensional scene. So it must be that the brain “fills in” the missing pieces, making inferences from scant data and offering up its best hallucination for who-knows-who to “see,” who-knows-how.

Dating back to the work of psychologist James Gibson in the 1960s, ecological psychology offers a different story. In real life, it says, we never deal with static images. Our eyes are always moving, darting back and forth in tiny bursts called saccades so quick we don’t even notice. Our heads move, too, as do our bodies through space, so what we’re confronted with is never a fixed pattern of light but what Gibson called an “optic flow.”

To “see,” according to ecological psychology, is not to form a picture of the world in your head. It stresses that patterns of light on the retina change relative to your movements. It’s not the brain that sees, but the whole animate body. The result of “seeing” is never a final image for an internal mind to contemplate in its secret lair, but an adaptive, ongoing engagement with the world.

Plants don’t have eyes exactly, but flows of light and energy impinge on their senses and transform in predictable ways relative to the plants’ own movements. Of course, to notice that, you first have to notice that plants move.

“If you think that plants are sessile,” or stationary, Calvo says, “just sitting there, taking life as it comes, it’s difficult to visualize the idea that they are generating these flows.”

Plants appear sessile to us only because they move slowly. Quick movements—like the nightly shuffle of my prayer plant—can be accomplished by altering the water content in certain cells to change the tension in a stem, or to stiffen a branch under the weight of heavy snow. Most plant movement, though, occurs through growth. Since they can’t pick up their roots and walk away, plants change location by growing in a new direction. We humans are basically stuck with the shape of our bodies, but at least we can move around; plants can’t move around, but they can grow into whatever shape best suits them. This “phenotypic plasticity,” as it’s called, is why it’s critical for plants to be able to plan ahead.

“If you spend all this time growing a tendril in a particular direction,” Barrett says, “you can’t afford to get it wrong. That’s why prediction does seem very important. It’s like my granddad said; maybe all granddads say this: ‘measure twice, cut once.’ ”

Phenotypic plasticity is a powerful but slow process—to see it, you have to speed it up. So Calvo makes time-lapse recordings, in which slow and seemingly random growth blooms into what appears to be purposeful behavior. One of his time-lapse videos shows a climbing bean growing in search of a pole. The vine circles aimlessly as it grows. Hours are compressed into minutes. But when the plant senses a pole, everything changes: It pulls itself back, like a fisherman casting a line, then flings itself straight for the pole and makes a grab.

“Once movement becomes conspicuous by speeding it up,” Calvo says, “you see that certainly plants are generating flows with their movement.”

By using these flows to guide their movements, plants accomplish all kinds of feats, such as “shade avoidance”—steering clear of over-populated areas where there’s too much competition for photosynthesis. Plants, Calvo explains, absorb red light but reflect far-red light. As a plant grows in a given direction, it can watch how the ratio of red to far-red light varies and change directions if it finds itself heading for a crowd.

“They are not storing an image of their surroundings to make computations,” Calvo says. “They’re not making a map of the vicinity and plotting where the competition is and then deciding to grow the other way. They just use the environment around them.”
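The red to far-red rule lends itself to a one-line decision sketch. Everything here is hypothetical for illustration (the function name, the threshold of 1.0, the unitless intensities); the real signal is a continuous gradient the growing tip follows, not a boolean.

```python
def should_redirect(red: float, far_red: float, threshold: float = 1.0) -> bool:
    """Hypothetical rule of thumb: neighboring plants absorb red light
    and reflect far-red, so a falling red:far-red ratio along the current
    growth direction signals a crowd ahead."""
    if far_red <= 0:
        raise ValueError("far_red must be positive")
    return (red / far_red) < threshold

# Open ground: red dominates over far-red, keep growing this way.
print(should_redirect(red=1.2, far_red=1.0))  # False
# Crowded patch: reflected far-red dominates, change direction.
print(should_redirect(red=0.6, far_red=1.0))  # True
```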


We dismiss a plant’s behavior as brute mechanics—as if that’s not true for us, too.

That might seem to be a far cry from how humans perceive the world—but according to 4E cognition, the same principles apply. Humans don’t perceive the world by forming internal images either. Perception, for the E’s, is a form of sensorimotor coordination. We learn the sensory consequences of our movements, which in turn shapes how we move.

Just watch an outfielder catch a fly ball.6 Standard cognitive science would say the athlete’s brain computes the ball’s projectile motion and predicts where it’s going to land. Then the brain tells the body what to do, the mere output of a cognitive process that took place entirely inside the head. If all that were true, the player could just make a beeline to that spot—running in a straight line, no need to watch the ball—and catch.

But that’s not what outfielders do. Instead, they move their bodies, constantly shuffling back and forth and watching how the position of the ball changes as they move. They do this because if they can keep the ball’s speed steady in their field of vision—canceling out the ball’s acceleration with their own—they and the ball will end up in the same spot. The player doesn’t have to solve differential equations on a mental model—the movement of her body relative to the ball solves the problem for her in active engagement, in real time. As the MIT roboticist Rodney Brooks wrote in a landmark 1991 paper, “Intelligence Without Representation,” “Explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.”7
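The outfielder’s heuristic, known in the literature as optical acceleration cancellation, can be checked with a few lines of arithmetic rather than differential equations. For an idealized projectile with no air resistance, a fielder standing exactly at the landing spot sees the tangent of the ball’s elevation angle rise at a perfectly constant rate (zero optical acceleration), while a misplaced fielder sees it accelerate. A minimal sketch, with illustrative launch numbers:

```python
import math

def tan_elevation(t, v0, angle_deg, observer_x, g=9.81):
    """Tangent of the ball's elevation angle as seen by a stationary
    fielder at observer_x (ball launched from the origin)."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = vx * t
    y = vy * t - 0.5 * g * t * t
    return y / (observer_x - x)

v0, ang, g = 30.0, 45.0, 9.81
flight = 2 * v0 * math.sin(math.radians(ang)) / g    # time aloft
landing = v0 * math.cos(math.radians(ang)) * flight  # horizontal range

for x_obs, label in [(landing, "at landing spot"), (landing - 15, "too shallow")]:
    ts = [0.2, 0.4, 0.6]
    tans = [tan_elevation(t, v0, ang, x_obs) for t in ts]
    accel = tans[2] - 2 * tans[1] + tans[0]  # discrete second difference
    print(f"{label}: optical acceleration ~ {accel:.6f}")
```

So keeping that acceleration at zero by shuffling back and forth is equivalent to ending up where the ball lands, with no internal trajectory model required.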

If cognition is embodied, extended, embedded, enactive, and ecological, then what we call the mind is not in the brain. It is the body’s active engagement with the world, made not of neural firings alone but of sensorimotor loops that run through the brain, body, and environment. In other words, the mind is not in the head. Calvo likes to quote the psychologist William Mace: “Ask not what’s inside your head, but what your head’s inside of.”

When I first encountered the 4E theories, I couldn’t help thinking of consciousness. If the mind is embodied, extended, embedded, etcetera, does consciousness—that magical, misty stuff—seep out of the confines of the skull, permeate the body, pour like smoke from the ears, and leak out into the world? But then I realized that way of thinking was a hangover from the traditional view, where consciousness was treated as a noun, as something that could be located in a particular place.

“Cognition is not something that plants—or indeed animals—can possibly have,” Calvo writes in his new book, Planta Sapiens.8 “It is rather something created by the interaction between an organism and its environment. Don’t think of what’s going on inside the organism, but rather how the organism couples to its surroundings, for that is where experience is created.”

The mind, in that sense, is better understood as a verb. As the philosopher Alva NoĂ«, who works in embodied cognition, puts it, “Consciousness isn’t something that happens inside us: It is something we do.”9

And we do it in order to keep on living. The need to stay alive, to tread in far-from-equilibrium water—that is what separates us from machines. “Wild cognition,” as Barrett puts it, is more akin to a candle flame than to a computer. “We are ongoing processes resisting the second law of thermodynamics,” she says. We are candles desperately working to re-light ourselves, while entropy does its damnedest to blow us out. Machines are made—one and done—but living things make themselves, and they have to remake themselves so long as they want to keep living.


I felt like an active life form, tendrilled and strange.

The Chilean biologists Humberto Maturana and Francisco Varela—founding fathers of embodied and enactive cognition—coined the term “autopoiesis” to capture this property of self-creation. A cell—the fundamental unit of life—serves as the prime example.

Cells consist of metabolic networks that churn out the very components of those networks, including the cell membrane, which the network continuously builds and rebuilds, while the membrane, in turn, allows the network to function without oozing back into the world. To keep its metabolism going, the cell needs to be in constant exchange with its environment, drawing in resources and tossing out waste, which means the membrane has to let things pass through it. But it can’t do it indiscriminately. The cell has to take a stance on the world, to view it as a place of value, full of things that are “good” and “bad,” “useful” and “harmful,” where such terms are never absolute but dependent on the cell’s ever-changing needs and the environment’s ever-changing dynamics.

These valences, Calvo says, are the stirrings of sentience. They are distinctions that carve out (or “enact”) a world in a process that 4E cognitive scientists call “sense-making.” The act of making valenced distinctions in the world, which allow you to draw the boundary between self and other, is the primordial cognitive act from which all higher levels of cognition ultimately derive. The same act that keeps a living system living is the act by which, as NoĂ« puts it, “the world shows up for us.”

“You start with life,” says Evan Thompson, a philosopher at the University of British Columbia and one of the founders of the enactive approach. “Being alive means being organized in a certain way. You’re organized to have a certain autonomy, and that immediately carves out a world or a domain of relevance.” Thompson calls this “life-mind continuity.” Or as Calvo puts it, echoing the 19th-century psychologist Wilhelm Wundt, “Where there is life there is already mind.”

From a 4E perspective, minds come before brains. Brains come into the picture when you have multicellular, mobile organisms—not to represent the world or give rise to consciousness, but to forge connections between sensory and motor systems so that the organism can act as a singular whole and move through its environment in ways that keep its flame lit.

“The brain fundamentally is a life regulation organ,” Thompson says. “In that sense, it’s like the heart or the kidney. When you have animal life, it’s crucially dependent for the regulation of the body, its maintenance, and all its behavioral capacities. The brain is facilitating what the organism does. Words like cognition, memory, attention, or consciousness—those words for me are properly applied to the whole organism. It’s the whole organism that’s conscious, not the brain that’s conscious. It’s the whole organism that attends or remembers. The brain makes animal cognition possible, it facilitates and enables it, but it’s not the location of it.”

A bird needs wings to fly, Thompson says, but the flight is not in the wings. Disembodied wings in a vat could never fly—it’s the whole bird, in interaction with the air currents shaped by its own movements, that takes to the sky.


What we model with artificial systems is not genuine cognition.

“Plants are a different strategy of multicellularity than animals,” Thompson says. They don’t have brains, but according to Calvo they have something just as good: complex vascular systems, with networks of connections arranged in layers not unlike a mammalian cortex. In the root apex—a small region in the tip of a plant’s root—sensory and motor signals are integrated through electrochemical activity using molecules similar to the neurotransmitters in our brains, with plant cells firing off action potentials similar to a neuron’s, only slower. Like the human brain, the root apex allows the plant to integrate all of its sensory flows in order to produce new behavior that will generate new flows in ways that keep the plant adaptively coupled to the world.

The similar roles played by an animal’s nervous system and a plant’s vascular system help explain why the same anesthetics can put both animals and plants to sleep, as Calvo demonstrated using a Venus flytrap in a bell jar. Normally, the plant’s traps snap shut when an unfortunate insect triggers one of its sensor hairs, which protrude from the trap’s mouth like sharks’ teeth. (Actually, the clever plant awaits the triggering of a second hair within seconds of the first before expending the costly energy to bite. Once closed, it awaits three more triggers—to ensure there’s a decent bug buzzing around in there—before it releases acidic enzymes to digest its meal. As Calvo sums it up, “They can count to five!”) Using surface electrodes, Calvo watched as the triggered hairs sent electric spikes zapping through the plant, sparking its motor system to react. With anesthesia, all of that stopped. Calvo tickled the trap’s hairs and it just sat there, its mouth agape. The electrode reading flatlined.

“The anesthesia prevents the cell from firing an action potential,” Calvo explains. “That happens in both plants and animals.” It’s not that the anesthetic is turning down the dial of consciousness inside the brain or root apex, it’s just severing the links between sensory inputs and motor outputs, preventing the organism from engaging as a singular whole with its environment. Once “woken,” though, the groggy Venus flytraps quickly returned to their usual behavior.
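The trap’s trigger logic, as described above, reads like a tiny state machine: two touches within a short window close the trap, and three more confirm prey before digestion begins. A toy sketch, where the `FlytrapCounter` class and its 20-second window are made up for illustration (the article only says the second touch must come within seconds of the first):

```python
class FlytrapCounter:
    """Toy state machine for the trap's counting behavior.
    Touch timestamps older than `window` seconds are forgotten."""

    def __init__(self, window=20.0):
        self.window = window      # illustrative guess, not a measured value
        self.touches = []         # recent touch timestamps
        self.closed = False
        self.digesting = False

    def touch(self, t):
        # Forget stale touches, then record this one.
        self.touches = [s for s in self.touches if t - s <= self.window]
        self.touches.append(t)
        if not self.closed and len(self.touches) >= 2:
            self.closed = True        # second touch: snap shut
        elif self.closed and len(self.touches) >= 5:
            self.digesting = True     # fifth touch: release enzymes

trap = FlytrapCounter()
for t in [0, 1, 2, 3, 4]:  # five touches in quick succession
    trap.touch(t)
print(trap.closed, trap.digesting)  # True True
```

A single stray touch leaves the trap open, which is the energetic point of counting: the plant only pays the cost of closing, and the larger cost of digesting, when the evidence accumulates.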

“Clearly,” Thompson says, “plants are self-organizing, self-maintaining, self-regulating, highly adaptive, they engage in complex signaling among each other, within species and across species, and they do that within a framework of multicellularity that’s different from animal life but exhibits all the same things: autonomy, intelligence, adaptivity, sense-making.” From a 4E perspective, Thompson says, “there’s no problem in talking about plant cognition.”

In the end, Calvo’s critics are right: Plants aren’t using brains to form internal representations. They have no private, conscious worlds locked up inside them. But according to 4E cognitive science, neither do we.

“The mistake was to think that cognition was in the head,” Calvo says. “It belongs to the relationship between the organism and its environment.”

After talking with Calvo, I looked around my apartment overrun with plants—at the pothos and bromeliads, rocktrumpet vines and staghorn ferns, at the peace lilies and crowns of thorns, snake plants, Monstera, ZZs, and palms—and they suddenly appeared very different. For one thing, Calvo had told me to think of plants as being upside-down, with their “heads” plunged into the soil and their limbs and sex organs sticking up and flailing around. Once you look at a plant that way, it’s hard to unsee it. But more to the point, the plants appeared to me now not as objects, but as subjects—as living, striving beings trying to make it in the world—and I found myself wondering whether they felt lonely in their pots, or panicked when I forgot to water them, or dizzy when I rotated them on the windowsill.

It wasn’t just the plants. I felt myself differently, too: less like a passive spectator, snug inside my skull, and more like an active life form, tendrilled and strange, moving through the world as the world moved through me.

“Plants are not that different from us after all,” Calvo had told me, “not because I’m beefing them up to make them more similar to us, but because I’m rethinking what human perception is about. I’m neither inflating them nor deflating us but putting us all on the same page.”

It was hard not to wonder whether, from that page, the story of our planet might unfold differently. The “E” approaches ask us to question what we are, how intimately we’re entangled with the world, and whether we can rightly see ourselves as standing apart from nature or whether the destruction we wreak is steadily diminishing our own wild cognition.

“Human nature,” wrote John Dewey, the pragmatist philosopher, “exists and operates in an environment. And it is not ‘in’ that environment as coins are in a box, but as a plant is in the sunlight and soil. It is of them.”10

Amanda Gefter is a science writer and author of Trespassing on Einstein’s Lawn. She lives in Watertown, Massachusetts.

Lead illustration by Deena So’Oteh

References

1. Gefter, A. The man who tried to redeem the world with logic. Nautilus (2015).

2. Pennisi, E. Plants communicate distress using their own kind of nervous system. Science (2018).

3. Tsikou, D., et al. Systemic control of legume susceptibility to rhizobial infection by a mobile microRNA. Science 362, 233-236 (2018).

4. Mallatt, J., Blatt, M.R., Draguhn, A., Robinson, D.G., & Taiz, L. Debunking a myth: plant consciousness. Protoplasma 258, 459-476 (2021).

5. Di Paolo, E. Sensorimotor Life. Oxford University Press, Oxford, United Kingdom (2017).

6. Wilson, A.D. & Golonka, S. Embodied cognition is not what you think it is. Frontiers in Psychology 4, 58 (2013).

7. Brooks, R.A. Intelligence without representation. Artificial Intelligence 47, 139-159 (1991).

8. Calvo, P. Planta Sapiens: The New Science of Plant Intelligence. W. W. Norton & Co, New York, NY (2023).

9. Noë, A. Out of Our Heads. Hill and Wang, New York, NY (2010).

10. Dewey, J. Human Nature and Conduct: An Introduction to Social Psychology. H. Holt and Company, New York, NY (1922).