Tuesday, February 3, 2026

Computers can’t surprise




Not least because they cannot recall the future and be themselves surprised. Recall that I bring new knowledge into our world and then track its path to acceptance. That is slow and arduous.

Computers can present data. That can be surprising, but the human mind must expect it.

Plucking the future back into the past is not possible for a computer.

Computers can’t surprise


As AI’s endless clichés continue to encroach on human art, the true uniqueness of our creativity is becoming ever clearer


Paris, 1966. Photo by Bert Glinn/Magnum Photos


is an English writer. His novels include Damascus (1998), Dry Bones (2004), Lazarus Is Dead (2011) and Acts of the Assassins (2015), shortlisted for the UK Goldsmiths Prize, for novels that ‘extend the possibilities of the novel form’. His non-fiction includes the memoirs The Day That Went Missing (2017), winner of the 2018 PEN Ackerley Award for literary autobiography, and Sad Little Men (2021).




Creative writing used to be a human prerogative: do it well, do it badly, but either way endorse the consensus that to write about human experience was worth the candle and the coffee. Here was an essential human act, so much so that poetry formed a critical part of the computer pioneer Alan Turing’s original test: to determine whether an unseen respondent to a series of questions was human or a mechanical imposter. The Turing Test is often simplified to denote a single crossing point between two territories, human and machine. Pass the test, and artificial intelligence can stroll on over to our side of the line. Take a look around. Decide what to do with us. But, first, it has to pass.

In the paper ‘Computing Machinery and Intelligence’ (1950), published in the journal Mind, Turing set out his objective: ‘to consider the question, “Can machines think?”’ In true human fashion, he immediately re-phrases the question, at some length, and eventually arrives at the ‘imitation game’, modelled on a drawing-room entertainment from before the internet, before television. The original game he has in mind involves a guesser in the hotseat who poses questions to a man (X) and a woman (Y), who are out of sight and hearing in a separate room. The guesser has to determine from their written answers which is the man and which the woman. X tries to mislead, and wins if the guesser is wrong; Y wins if the guesser is right. Try it, it’s fun.

In this context, the first question posed in Turing’s proposed test is less surprising than at first it seems: ‘Will X please tell me the length of his or her hair?’ Next, Turing asks, equally politely: ‘Please write me a sonnet on the subject of the Forth Bridge.’ Two questions in, and the contested boundary between human and machine thinking is already looking for answers in literature, in art. Turing’s 1950s version of X – the participant aiming to mislead – replies: ‘Count me out on this one. I never could write poetry.’ To imagine this answer, in the second phase of his game, Turing’s complicated brain is playing the role of a machine playing X, hidden from sight and typing its answers, pretending to be a man (who previously played the game pretending to be a woman). I know, but if the test were easy an air-fryer could pass it.

Turing isn’t suggesting that a machine can’t write poetry. In the convoluted logic of the imitation game, X calculates that in 1950 ordinary people didn’t write poetry, a commonsense assumption that every computer masquerading as human should know. Among other prejudices from the mid-20th century, Turing’s paper makes incautious references to race, religion and the Constitution of the United States. He likens the inability to see computers as sentient to the ‘Moslem view that women have no souls’. Turing wades in: he doesn’t compute as we would now.

And neither do the future computers of 2026 that he was trying to envisage. Any of today’s large language models (LLMs), like ChatGPT or Claude, can write an instant sonnet on the subject of the Forth Bridge. I typed in Turing’s test question, and Claude 4 threw up 14 lines of poetry including the abbreviated word ‘mathemat’cal’, for the scansion. The poem made sense, and was formally a sonnet, and appeared in seconds.

Whether or not this counts as thinking, Turing intuits that the frontier he’s marking out will be picketed by the arts. In his paper, he picks a fight with an eminent neuroscientist of the time, Sir Geoffrey Jefferson of the Royal Society, who believed that ‘Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it.’ For Jefferson, addressing the Royal College of Surgeons in his 1949 Lister Oration, mechanising the efforts of the infinite monkeys on typewriters didn’t really count.

These days, in the arts, it’s harder to share Jefferson’s confidence. The advances made by AI prick at artistic vanity – the work of a human artist can’t be all that special if a machine can replicate the results almost instantly. That hurts. A great human artist, we’d like to believe, amplifies and defends the exceptionalist spirit of our species but, in an echo of the anxieties that haunted early photography, a demonised version of AI threatens to steal away our souls. Encroaching on the best of what we can do and make and be, machine art intrudes onto sacred territory. Creative artists are supposed to be special, inimitable.


Turing’s Imitation Game paper was published 14 years after the first Writers’ Workshop convened at the University of Iowa, in 1936. Turing may not have known, with his grounding in maths at King’s College Cambridge, that elements of machine learning had already evolved across the Atlantic in the apparently unrelated field of creative writing. Before Iowa, the Muse; after Iowa, a method for assembling literary content not dissimilar to the functioning of today’s LLMs.

First, work out what effective writing looks like. Then, develop a process that walks aspiring writers towards an imitation of the desired output. The premise extensively tested by Iowa – and every creative writing MFA since – is that a suite of learnable rules can generate text that, as a bare minimum, resembles passable literary product. Rare is the promising screenwriter unfamiliar with Syd Field’s Three-Act Structure or Christopher Vogler’s Hero’s Journey: cheat codes that promise the optimal sequence for acts, scenes, drama and dialogue. In the same way that an LLM is designed to ‘think’, these templates are a form of reverse engineering: first study how the mechanics of Jaws or Witness made those movies sing, then identify transferable components for reassembly to achieve similar artistic success further down the line.

To a computer-programmer, reverse engineering as a machine-learning mechanism is known as back-propagation. In A Brief History of Intelligence (2023), Max S Bennett shows how this methodology has already helped in the development of image recognition, natural language processing, speech recognition, and self-driving cars. Supervising coders work to isolate the required answer in advance, then go back to nudge input responses until the artificial neural network arrives at the pre-set solution.
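To make that description concrete, here is a minimal, purely illustrative sketch (in Python, not drawn from Bennett's book) of the mechanism: a tiny network's weights are nudged by back-propagation until its output matches an answer table fixed in advance, with the XOR function standing in for any "pre-set solution".

```python
# Minimal back-propagation sketch: nudge the weights of a tiny network
# until its output matches an answer table chosen in advance (XOR here).
# Illustrative only; real systems differ in scale, not in principle.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([[0], [1], [1], [0]], dtype=float)   # the pre-set solution

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)         # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)         # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(x @ W1 + b1)            # forward pass: current answer
    out = sigmoid(h @ W2 + b2)
    err = out - target                  # gap to the required answer
    d_out = err * out * (1 - out)       # backward pass: propagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * x.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))   # should end up close to [0, 1, 1, 0]
```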

The mysterious magic ingredient has been debated in English on the printed page since at least 1580

If only writing were so simple. According to figures from Data USA, up to 4,000 students graduate each year with creative writing MFAs in the US. No one expects that number of Great American Novels to show for so much studying, despite the fact that many hopeful writing careers start with the prompt mentality invited by ChatGPT: I want to write a bestseller like the one that blew me away last summer. Or, for the more adventurous: something new but relatable, a novel/memoir hybrid with literary credibility and strong narrative momentum, like a cross between Lee Child and Annie Ernaux. Thank you. I’ll wait. But not very patiently.

Clearly, when the end result is compared with the original intention, the back-propagation method is fallible for creative writing courses and LLMs alike. To revisit Jefferson, as quoted by Turing, the finished work is undermined when inspired by the wrong ‘thoughts and emotions’, whether blind ambition in student writers or blind obedience in computers. Something more is required, and the mysterious magic ingredient has been debated in English on the printed page since at least 1580, when Sir Philip Sidney reached for the essence of exemplary creative writing in An Apology for Poetry. When it worked, he concluded, good writing could both teach and delight. It provided a guide to living well in a more accessible form than theology or history or philosophy. Creative writing was special.

Sir Philip Sidney’s ‘Defence of Poesie’, in a 1627 edition of the Arcadia. Courtesy University of Glasgow Library/Flickr

So special, in fact, that no one has yet been able to break down the findings of English literature departments – what makes literature work – into sufficient granular detail to reformulate as instructions actionable by an LLM. Or by a creative writing student. Nor are the efforts being made in this area by other art-forms particularly encouraging. ArtEmis is a large-scale dataset designed to record and subsequently predict emotional responses to works of visual art. The scheme matches emotional annotations from more than 6,500 participants to textual explanations of what they’re seeing, and from this data ArtEmis hopes to enable the back-propagated creation of artworks that provoke equivalent emotional responses.

The understanding seems to be that if a machine can create a visual image that generates a controlled set of feels, then art will have been successfully created. Which sounds plausible, except human emotional responses are notoriously capricious. The ArtEmis procedure already has an analogue precedent in Hollywood, but if focus groups worked reliably for the arts, then cinemas would be full of bangers. It’s worth remembering that the 2023 strike action by the Writers Guild of America won significant protections against the use of generative AI in screenwriting, specifically disallowing the replacement of human writers by AI. This hasn’t noticeably boosted the production of great art movies. Human writers still make so-so films. Without any intervention from AI, we continue to paint indifferent canvasses and write forgettable novels.

Bad art is something human beings love to do, in vast numbers. It’s part of who we are, and when abandoned by inspiration we trust in the same methods we’ve programmed into LLMs. As predicted by Turing, ‘digital computers … can in fact mimic the actions of a human computer very closely’, and for the creation of failed artistic product we’ve taught artificial intelligence all our dodges. Creative writing that falls short, whether originating in a garret or in an Nvidia chip, ‘writes’ by selecting language units that commonly fit together, as recognised from published material available in the public domain. Familiar word combinations are assembled into almost convincing sentences, a tired use of language formerly called out as cliché. LLMs are cliché machines, trained on a resilient human weakness for generating maximum content with minimum effort.
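As a loose illustration of what "selecting language units that commonly fit together" means mechanically, here is a toy sketch; production LLMs use neural networks trained on vast corpora rather than simple word counts, but the pull toward the statistically familiar next word is the same in spirit.

```python
# Toy "cliche machine": always emit the word that most often follows the
# previous word in a sample text, producing fluent but formulaic strings.
from collections import Counter, defaultdict

corpus = (
    "it was a dark and stormy night and the wind was cold and the night "
    "was long and dark and the rain was cold"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word = "it"
output = [word]
for _ in range(12):
    if word not in follows:
        break
    word = follows[word].most_common(1)[0][0]   # the most likely next word
    output.append(word)

print(" ".join(output))
# -> "it was cold and the wind was cold and the wind was cold"
```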

This explains the headline in June 2025 in the British publishing industry’s leading trade magazine The Bookseller: ‘AI “Likely” to Produce Bestseller by 2030’. The headline referenced a conference speech by Philip Stone of Nielsen, a company that compiles UK book-sales data. I expect he’s right about that bestseller, because LLMs will come for genre writing first – police procedurals, spy thrillers, romances – re-treading identifiable formulas with proven popular appeal. Eager to please (‘Hi Rich, how are you today?’), AI also has the advantage, shared by surprisingly few human writers, of being able to churn out derivative product without embarrassment.

Fortunately for everybody else, the endless capacity of an AI to deliver rule-bound and resolution-directed narrative has an unexpected benefit: AI is the tool that will prove not all writing has the same value.

Writing has been reluctant to imagine new ways of reading, despite the vistas opened up by new technologies

To escape the dead man’s handle of cliché, readers live in hope for organic associations, speculative leaps and surprise inferences. Whereas, to an AI, which is fed the answer before the question, ‘surprise’ remains an elusive concept. This objection to machine thinking was raised as long ago as 1842 by Ada Lovelace about one of the earliest computers, Charles Babbage’s Analytical Engine (for historical context, Iowa’s first creative writing get-together, though informal, took place in 1897). ‘The Analytical Engine has no pretensions whatever to originate anything,’ Lovelace observed. ‘It can do whatever we know how to order it to perform.’ Her italics emphasise the contrast with human thinking where originality, among artists at least, is a cherished value.

Daguerreotype of an 1852 painting of Ada Lovelace by Henry Wyndham Phillips. Courtesy Bodleian Library, Oxford/Wikipedia

The visual arts, more than literature, have kept alive the modernist imperative to ‘make it new’. The Turner Prize, for example, awarded to the strongest UK contemporary art exhibition in any given year, is permeated by a sense that if it’s not new it’s not art. For visual artists, formal curiosity comes with the job, exploring new ways of making to invite new ways of seeing. Writing, on the other hand, happily rewards the comfort of familiar forms, which justifies, in the UK, the existence of a separate Goldsmiths Prize for fiction that ‘extends the possibilities of the novel form’. Because most other prize-winning novels aren’t doing that.

Writing has been reluctant to imagine new ways of reading, despite the vistas opened up by new technologies. Transferring books wholesale to Kindle and Audible is little more than digital haulage, and makes literature, in its complacency, vulnerable to proficient AI re-runs of familiar material, lowering the odds on that imminent AI-generated bestseller. Writers, or more accurately their publishers, seem to have mislaid any sense of urgency around the importance of difficulty, or curiosity about the astonishing returns that formally daring work can provide. It’s a rare book proposal submitted to a mainstream publisher that dares promise a book that’s not like all the other books.

Lovelace, thinking about how machines think, instantly identified the importance of originality. Or as a Marianne Moore poem has it:
these things are important not because a
high-sounding interpretation can be put upon them but because they are
useful; …

Originality, in the arts as in science, enables the human project to move forward. Any discovery that is new and true extends the scope of reality. In this context, art that only pretends to be original won’t get us anywhere very interesting.

The Turing Test is basically a test of lying. Can a machine, adopting a recognisably human strategy, pretend to be something it isn’t? Passing Turing’s Test calls for an act of deception, leaving the deceived human interrogator vulnerable to primitive fears about impersonation and imposture. Art is supposed to see the truth beyond this kind of lie, and the original creations worth defending are in a category so extraordinary, because of the intensity and authenticity of Jefferson’s ‘thoughts and emotions felt’, that they exist in a permanent present tense. What Toni Morrison does is unbelievable.

Whereas what an AI does is probabilistic. An LLM’s calculation of the most likely sequence of words is the least likely way to create great writing. Anyone working at a more emotionally engaged level than statistical probability, genuinely creating new work, has a better chance of resonating with readers, however that affinity is expressed. ‘If literature is a street brawl between the courageous and the banal,’ Greg Baxter wrote in his memoir A Preparation for Death (2010), ‘I bring the toughest gang I know: the pure killers, the insane.’ Baxter’s literary gangsters do not kneel before the most likely next word. Baxter values his ‘pure killers, the insane’, while computers as envisioned by Turing receive instructions to be ‘obeyed correctly and in the right order.’

We can defy AI creep by encouraging the human ambition to make art, unassisted, whether successful or otherwise

I don’t doubt that LLMs can be asked to imitate transgression, but obeying that instruction makes them ludicrously phony and the enemies of art, even though in their advanced contemporary forms they appear better equipped to respond to Turing’s stabs at English literature. In 2026, for example, ChatGPT and Claude make short work of the Turing Test challenge of 1950 to explain Shakespeare’s creative choices in Sonnet 18. Why a ‘summer’s day’ and not a ‘spring day’? Easy (just ask them, they know the answer). LLMs now ace most of Turing’s original questions, and if they can’t write a sonnet like Shakespeare, then neither can I. That doesn’t mean I can’t think, and Turing makes the same reasonable allowance for computers. They too are allowed their limitations, and his attitude to machine intelligence follows the logic of Denis Diderot’s parrot: if the illusion of understanding is sufficiently convincing, it qualifies as understanding. The machines are faking it until they make it.

Or in Turing’s words: ‘God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think. I am unable to accept any part of this.’ Turing invokes God for the sake of the 1950s, but he rejected the idea that humankind is ‘necessarily superior’ to the rest of creation, whether man-made or otherwise. He sides with materialist philosophers like Democritus and Thomas Hobbes, seeing the mind – whatever it might be – as located entirely in the physical structure of the brain. An AI is a physical structure, leading Turing to judge that whatever an AI can’t do, it can’t do yet.

In which case, how should writers and artists react to this situation as it stands now? We can attach stickers to the dustjackets of novels saying ‘Human Written’, as recently trialled by the UK publisher Faber and Faber. Visual artists have labels that say ‘Created with Human Intelligence’ or ‘Not by AI’, and maybe hashtags can keep AI at a distance until a generational talent arrives to save human honour in a blaze of truly original style and content. Take that, AI. See how much catching up you have to do.

More proactively, in the meantime, the rest of us can defy AI creep by defending and encouraging the human ambition to make art, unassisted, whether successful or otherwise. Art is an affirmation of human existence, the transmission and reception of messages about encounter and connection. One inner life can touch another and, for best results, nurture a creative process that no LLM can imitate. Marcel Duchamp called art ‘this missing link, not the links which exist’, an insight that arrives in the 21st century as a straight refutation of the imitative LLM creative model, stuck in its feedback loops and repeating existing sequences. Not for ChatGPT the electric shorting between inner lives, which in writing is most readily accessible in memoir. What anyone remembers is theirs alone, an undigitised storehouse of authentic human experience.

When Turing was deep in thought, according to his biographer Andrew Hodges, he used to scratch his side-parted hair and make a squelching noise with his mouth. Inside his head, at around the time he devised the Turing Test, he heard sceptical voices telling him a computer would never be able to be ‘kind, resourceful, beautiful, friendly’. His future machine brains wouldn’t ‘have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream,’ and so on. Turing was making comparisons with his remembered lived experience. What AIs couldn’t do was memoir.

Taking this idea as my starting point, I recently launched the Universal Turing Machine, a human proposal for a new way of writing and reading. The Universal Turing Machine is an expandable online grid of 8 x 8 squares, like a chessboard, and writers are invited to claim a grid for themselves and fill each of the squares with 1,000 words of memory. The reader can move randomly between memories and voices, playing an equally active role in the space Duchamp identified: ‘art is the gap’. Twice a year, I plan to tile new grids to those already online, steadily increasing the size of this collective experimental memoir, amplifying the diversity of human existence and creating a subjective encyclopaedia of true-to-life experience.
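As a purely hypothetical illustration of the structure just described (the class names and details below are mine, not the project's, apart from the 8 x 8 grid and the 1,000-word squares), the reading experience might be modelled like this:

```python
# Hypothetical sketch of the grid described above: an 8 x 8 board of memory
# "squares", each holding up to 1,000 words, visited by a reader at random.
# Names and implementation details are illustrative assumptions.
import random

GRID_SIZE = 8
WORD_LIMIT = 1000

class MemoryGrid:
    def __init__(self, writer):
        self.writer = writer
        self.squares = {}  # (row, col) -> text

    def fill(self, row, col, text):
        if len(text.split()) > WORD_LIMIT:
            raise ValueError("a square holds at most 1,000 words")
        self.squares[(row, col)] = text

    def wander(self, steps=5):
        """Visit filled squares in random order, as a reader might."""
        cells = list(self.squares.items())
        random.shuffle(cells)
        return cells[:steps]

grid = MemoryGrid(writer="anonymous contributor")
grid.fill(0, 0, "A memory of strawberries and cream ...")
grid.fill(3, 7, "The day the school closed early ...")
for (row, col), memory in grid.wander():
    print(f"({row},{col}): {memory[:40]}")
```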

The Universal Turing Machine format is designed to encourage writing as a mode of thinking, which is what the arts – seeing, listening, writing, reading – have always offered. A memory that knows it’s being remembered is up there with the hardest, cleverest kind of thinking we can do, and why, for the purposes of his test, Turing couldn’t keep his hands off literature. AIs can’t yet emulate writing as its own mode of thinking, or reading, or remembering, and it doesn’t help to learn to write by reading everything. Just as memoirs aren’t improved by total recall.

The communication between writer and reader, artist and audience, is the nearest we come to telepathy

To see the miracle of human artistic selection in action, consider the French experimental writer Georges Perec’s novel La Disparition (1969), or A Void. This is the book that contains no instances of the letter ‘e’, the kind of systematic constraint an LLM could replicate in a flash. What the computer brain can’t do is add Perec’s life experience. The letter ‘e’ in French sounds like ‘eux’, meaning ‘them’. Perec’s father died while fighting in the war. His mother was deported from Paris to Auschwitz by the Nazis. The two of them are missing from their son’s life and from his novel, which then becomes the opposite of a disappearance, drawing attention to their distorting absence in a triumphant act of artistic reclamation.

Towards the end of ‘Computing Machinery and Intelligence’, Turing unexpectedly mentions that ‘the statistical evidence, at least for telepathy, is overwhelming’, and ‘If telepathy is admitted it will be necessary to tighten our test up.’ The communication between writer and reader, artist and audience, is the nearest we come to telepathy: to transmitting and receiving information between minds. Turing recognises that his machines will struggle to match this human refinement, and although not everyone discovers telepathy through art, anyone with an individual experience of strawberries and cream can try. The effort itself is worthwhile, and encouraged by projects like mine: the Universal Turing Machine welcomes human contributors, no test required.

Or more accurately, to do the work of recomposing memory in writing – to think in this distinctly human way – is itself an act of resistance. It reframes Turing’s test in favour of the part played in his original imitation game by Y, who aims to tell the truth, and doesn’t seek to mislead. X can’t have memories on your behalf; can’t fake it, won’t make it, and a knowledge of self remains now as always an assertion of cognitive sovereignty. In writing the self, Y becomes convincingly human. Y wins. The boundary between human and machine thinking remains intact, refortified by a self that won’t and can’t be outsourced.

SpaceX Applies to FCC for 1 Million AI Satellites in Space


This scale of operational coverage is mind-boggling. It means having several thousand nodes overhead at any given moment. It also means our networks are completely mapped.

There is no explanation of why, but this can actually provide 3D resolution as well.

I would love to read the business plan.

SpaceX Applies to FCC for 1 Million AI Satellites in Space

January 30, 2026 by Brian Wang

SpaceX has applied to build orbital data centers in space using up to 1 million AI inference satellites.


The application references that this will be a first step to creating a Kardashev 2 civilization. A Kardashev civilization uses all of the energy of the sun. This is about 5 trillion times more than civilization is using now.




Tesla is veering away from its legacy EVs toward building robots



We deeply understand a mobility solution, however presented. I do not understand what a robot provides, particularly when it is not attending to some personal need. Can a robot earn money for me while I sleep?

Can a robot provide a virtual experience?

We have a long way to go before robot tech is truly available, not least in usage.

Tesla is veering away from its legacy EVs toward building robots


January 29, 2026


Tesla believes its Optimus humanoid robot will be the first big step towards its AI and robotics-focused future


Tesla just wrapped up its earnings call for the 2025 fiscal year, in which it recorded that its year-on-year profits dropped by nearly half. Its GAAP net income came to US$3.8 billion, down from $7.1 billion in 2024, representing a 46% decline. Oof.


That's rough stuff to weather for any automaker, but Tesla believes it's got the path forward figured out. The company plans to shift from a focus on electric vehicle (EV) production to a broader mandate of realizing total autonomy through AI and robotics.


The move is predicated on optimism that these technologies will drive down prices for goods and services while becoming increasingly prevalent and valuable.

We can see the company heading in that direction through a number of major changes, the first of which will be felt soon by Tesla owners. Earlier this month, CEO Elon Musk said Tesla's Full Self-Driving suite of driving assistance features (currently mostly Level 2 capabilities) will no longer be sold for a one-time fee; from February 14, it'll only be available via a monthly subscription. That might be priced similar to the existing $99 per month/$999 annual fee option initially, but it may increase as more functionality is unlocked.


Tesla's Full Self-Driving suite of driving-assistance features will soon only be available via a recurring subscription
Tesla

Next, it's ending production of the Model S luxury sedan and Model X crossover SUV (both of which are priced around the $100,000 mark) – which Musk referred to as an "honorable discharge" – to focus entirely on autonomy. The production space at the Fremont, California, factory previously used for these models is being converted into a dedicated Optimus humanoid robot factory.

The Model S (top) and Model X (bottom) have each been around for more than 10 years in Tesla's lineup, and they're no longer selling well

Tesla


That's big, but also somewhat inevitable. These legacy models have been around for over a decade, and have seen sales figures drop to less than a third of the more recent Model 3 and Model Y vehicles through 2025. If you're still keen on them, you'll want to snap one up while the current lot is still available. Tesla won't make any more of these, but will continue to support them as long as they're operable.

Now for the robotics bit. We've seen the Optimus shown off in various stages of development over the past few years, but it hasn't yet really blown people away. At a company event in October 2024, these robots were walking, talking with guests, and making drinks – but it turned out they were being remotely controlled by humans the whole time.


The Optimus bot is being designed to help with household chores and manufacturing tasks, and is expected to cost $30,000 each
Tesla

So while these bipedal bots are being touted as capable of performing chores around the house, carrying loads of up to 45 lb (20 kg), assisting factory workers, and eventually being shipped off to space, there's still a long way to go before they're ready for prime time.

Tesla aims to reach a production capacity of one million Optimus units per year at the Fremont factory, achieve significant production volume by the end of 2026, and begin selling them in 2027. That's a tall order, to say nothing of hitting the $30,000 price point Musk has in mind for it.

Beyond this, Tesla is gearing up to begin manufacturing the Cybercab, a two-seater robotaxi with no steering wheel in the cabin. The concept was shown off in 2024, and it's set to begin production this year. To that end, the company is currently in the tooling phase; Musk previously noted production would begin this April. It's already testing self-driving taxi services using repurposed Model Ys in Austin, Texas, so it'll at least have logged some miles ferrying passengers around without a driver behind the wheel by the time the Cybercab is ready.


Tesla is slated to begin producing its two-seater Cybercab this April, and expand its robotaxi services thereafter


The company's also working on advanced AI compute capabilities and its own chips to power autonomous operations in its products, and it's building its own lithium refineries in the US to support its battery production efforts. So if everything goes to plan, we'll see a vastly transformed Tesla in the near future. I just wouldn't hold it to those timelines, though.

Tulsi Gabbard Releases Documents that PROVE it Was Barack Obama who Led the Russiagate Conspiracy and Coup Against Donald Trump



Never in my life have I seen so much dirty laundry aired. Worse, nothing goes forward in terms of prosecution.

And we all know. How can this be possible unless a back-room deal is in place?

We do get disclosure, but justice is delayed.

Tulsi Gabbard Releases Documents that PROVE it Was Barack Obama who Led the Russiagate Conspiracy and Coup Against Donald Trump – Documents Have Been Turned Over to the DOJ
by Jim Hoft, Jan. 29, 2026, 7:00 pm

https://www.thegatewaypundit.com/2026/01/tulsi-gabbard-releases-documents-that-prove-it-was/

Director of National Intelligence Tulsi Gabbard released a bombshell report with new and damning information on Wednesday, sharing it with White House reporters at the daily briefing that afternoon. The documents reveal it was Barack Obama and the FBI and CIA leadership at the time who worked to sabotage President Trump before he even stepped into office. Obama did this knowing the entire story was manufactured and not a word of it was true.

Gabbard released one House Intelligence Report that had been locked up in a CIA vault for almost a decade! It now is clear that Obama doctored the information to make it look like Putin and his buddy Donald Trump stole the election.

There were four key elements in the reports about the fake Russian collusion that formed the basis of the Russia hoax.
Jesse Watters reported:
1.) That Vladimir Putin wanted Donald Trump to win in 2016.
2.) That Putin took action to help Donald Trump win in 2016.
3.) Russia had blackmail evidence against Trump – the Steele dossier.
4.) Russia tried to collude with the Trump campaign.


This was all a lie and they knew it.
There was no reliable information to back up any of their allegations.



During her press conference with White House reporters, DNI Tulsi Gabbard said she has already turned this information on Obama over to the Department of Justice.

Reporter Emily Jashinsky: Do you believe that any of this new information implicates former President Obama in criminal behavior?

Tulsi Gabbard: We have referred, and will continue to refer all of these documents to the Department of Justice and the FBI to investigate the criminal implications of this.



Reporter Emily Jashinsky: For even former President Obama?

Tulsi Gabbard: The evidence that we have found and that we have released directly point to President Obama leading the manufacturing of this intelligence assessment. There are multiple pieces of evidence and intelligence that confirmed that fact.

Now that the DOJ has this information let’s see if Pam Bondi has the guts to charge Barack Obama for leading this attempted coup of the Trump administration.
Don’t hold your breath!


Monday, February 2, 2026

Saskatchewan Alumina deposit





Bauxite grades at about fifty percent alumina, while this deposit is running around 15%. Yet it is large and, I assume, easily mined, as are most bauxite deposits. This smells like a tough nut to crack, but we shall see.

At worst it is a strategic reserve for when easier supplies get interdicted.

Yet it exists, and the concentrate it produces can be processed at our smelters.



A major, world-class alumina deposit, the Thor Project by Canadian Energy Metals (CEM), has been identified near Tisdale, Saskatchewan, containing roughly 6.8 billion tonnes of alumina within 49.5 billion tonnes of ore. This resource, potentially representing over 30% of the known world supply, focuses on extracting aluminum through a high-value process to produce Chemical Grade Alumina (CGA), High Purity Alumina (HPA), and Smelter Grade Alumina (SGA), with additional potential for scandium and vanadium.


Key Aspects of Saskatchewan Alumina (Thor Project): Mineralization & Processing: The project aims to extract aluminum, initially in the form of aluminum chloride hexahydrate (ACH), through crystallization, followed by calcination and pyrohydrolysis to produce alumina (Al₂O₃).

Significance: The discovery is positioned as a potential "game changer" for the North American supply chain, providing a domestic, secure, and sustainable source of aluminum.

Location: The deposit is located in a 600-square-kilometer area near Tisdale, with access to rail infrastructure, which is crucial for transporting materials.

Projected Output: The Preliminary Economic Assessment (PEA) suggests a 25-year project life, with an average annual production of 1.8 million tonnes of alumina from 16.5 million tonnes of ore (a quick arithmetic check of these figures follows below).

Outlook: 2026 will focus on evaluating the resource and engineering a demonstration plant.
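A quick back-of-the-envelope check of the figures quoted above, treating them only as stated (a sketch, not part of the PEA itself):

```python
# Sanity check of the quoted resource and PEA numbers (illustrative only).
resource_ore_t   = 49.5e9   # tonnes of ore in the stated resource
resource_al2o3_t = 6.8e9    # tonnes of contained alumina
annual_ore_t     = 16.5e6   # tonnes of ore processed per year (PEA)
annual_al2o3_t   = 1.8e6    # tonnes of alumina produced per year (PEA)
life_years       = 25

print(f"in-situ grade:   {resource_al2o3_t / resource_ore_t:.1%}")   # ~13.7%
print(f"recovered grade: {annual_al2o3_t / annual_ore_t:.1%}")       # ~10.9%
print(f"ore over project life: {annual_ore_t * life_years / 1e6:.0f} Mt "
      f"({annual_ore_t * life_years / resource_ore_t:.2%} of the resource)")
```

The ~14% in-situ grade squares with the "around 15%" noted in the comment above, well below the ~50% typical of bauxite, and the 25-year mine plan touches under 1% of the stated resource.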

The Remarkable Anti-Tail Jet of 3I/ATLAS in New Hubble Images from January 7th, 2026



So let us be uncomplicated. No explanation exists for the anti-tail. I also want to say that dust explanations for a dragging tail run thin as well.

I do think that what we are experiencing, though, is the interaction of the object with the surrounding plasma field belonging to our star. I also think this holds true for all comets as well.

This interaction can act upon the contained dark matter, perhaps squeezing and twisting it to produce forward and backward dark matter zones able to sustain dust such as carbon in particular.

Induced EM forces are also likely to produce lobes naturally. In short, cloud cosmology gives us a potential natural explanation.

The Remarkable Anti-Tail Jet of 3I/ATLAS in New Hubble Images from January 7th, 2026



Jan 8, 2026



Image of 3I/ATLAS, taken on January 7th, 2026 by the Hubble Space Telescope (top panel), and processed through the Larson-Sekanina rotational gradient filter (bottom panel). The bottom panel shows a triple jet structure with a prominent anti-tail jet in the direction of the Sun, towards the lower left corner of the image. The anti-tail extends to a scale of order the Earth-Moon separation. (Image credit: Toni Scarmato, based on data released by NASA/ESA/STScI here)

When the interstellar object 3I/ATLAS was first imaged by the Hubble Space Telescope on July 21st, 2025, it became evident that the glowing halo of light around it extends by an extra factor of ~2 towards the Sun. Given that the observing line-of-sight was only 10 degrees away from the sunward direction at that time, this implied that the actual extended structure is of a jet that is 1/sin(10 degrees)=5.8 times more elongated than observed in the projected image, namely ~11.6 times longer than it is wide.
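Restating the arithmetic in that sentence: an observed sunward extension a factor of ~2 beyond the rest of the halo, viewed only ~10 degrees from the sunward direction, deprojects as

```latex
% Deprojection of the sunward extension described in the text.
\[
\frac{1}{\sin 10^\circ} \approx 5.8,
\qquad
\text{true elongation} \approx 2 \times \frac{1}{\sin 10^\circ} \approx 11.6 .
\]
```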

But the most surprising fact about this jet is that it is oriented in the sunward direction. Usually, the elongated feature around comets is oriented away from the Sun. The physical reason is simple: the solar-wind push on gas and the solar radiation push on dust create the appearance of a cometary tail extending away from the Sun relative to the nucleus. But 3I/ATLAS exhibits a physical anti-tail that is definitely not a visual illusion due to a projection effect created by a special viewing angle.

Intrigued by this unusual phenomenon, I wrote three papers (posted here, here and here), attempting to explain the physics behind it. When the first paper in this series, co-authored with my colleague Dr. Eric Keto, was submitted for publication in “The Astrophysical Journal Letters,” we were informed by the editor that the paper will not be sent for review because: “I believe that your results would be of rather limited interest to the astrophysics research community as a whole.” Disappointed by this response, we submitted the paper to the competing journal “Monthly Notices of the Royal Astronomical Society”, where it was accepted for publication after a very favorable referee report. This experience shows how subjective the editorial and peer-review process is in academia.

By now, it is clear that the anti-tail jet of 3I/ATLAS is one of its major anomalies, because it is clearly observed in post-perihelion images taken from different perspectives during the past couple of months. These images show (as I described most recently here, here and here) a prominent anti-tail jet that extends out to 400,000 kilometers from the nucleus of 3I/ATLAS towards the Sun.

The anti-tail is evident in the latest Hubble image, taken on January 7th, 2026. The application of a Larson-Sekanina rotational gradient filter that removes the circularly symmetric glow around the nucleus of 3I/ATLAS reveals a triple jet structure with a major tightly-collimated anti-tail jet towards the Sun. The two minor jets are equally separated in angle from each other and the anti-tail, and are not oriented away from the Sun — as expected from a familiar cometary tail.

As inferred from the first Hubble image on July 21, 2025, the anti-tail jet is tightly collimated and an order of magnitude longer than it is wide. The tight collimation and the prominence of the anti-tail relative to any tail feature, are surprising given that the anti-tail jet goes through the countering pressure of the solar wind and the solar radiation. Given that, I beg to differ with the above-mentioned editorial opinion. The physics responsible for this remarkable anti-tail jet is not “of rather limited interest to the astrophysics research community”.

From the wobble of the anti-tail jet around the rotation axis when 3I/ATLAS was approaching the Sun (as reported here), it became clear that its rotation axis is pointed at the Sun to within 7 degrees at large distances. This constitutes another unexplained anomaly on top of the alignment of the trajectory of 3I/ATLAS with the ecliptic plane, each with sub-percent probability — making their combined geometry unlikely at a level below 0.0001. NASA officials did not mention these geometric anomalies at their press conference about 3I/ATLAS on November 19, 2025, when they concluded that 3I/ATLAS behaves like a regular comet. Obviously, if one ignores the unexplained anomalies of 3I/ATLAS, one would conclude that there is nothing surprising about it. The easiest way to argue that we fully understand something is by ignoring what we do not understand about it.
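For the combined probability, assuming (as the text implies) that the two alignments are independent and each at the quoted sub-percent level:

```latex
% Two independent alignments, each with probability below ~0.01,
% give a joint probability below 0.01 x 0.01 = 10^-4.
\[
P_{\text{combined}} \;\lesssim\; 0.01 \times 0.01 \;=\; 10^{-4}.
\]
```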

However, the foundation of science is the humility to learn, not the arrogance of expertise. What is the point of pursuing science if practitioners claim that they understand nature based on past knowledge even when data shows that they might be missing something? Our ability to learn something new is limited by our willingness to admit what we are missing. Anomalous data should not be “of rather limited interest to the astrophysics research community”, but instead of great interest for the astrophysics research community.

Science is fun as long as we treat it as a learning experience. Curiosity is a genuine trait of a beginner’s mind. My hope is that the next generation of scientists will do better than my generation in revolutionizing our perception of our cosmic neighborhood. The Universe will not appear a lonely place if we find residents in our cosmic street. Finding these residents would update the priorities of humanity beyond Earth.

In a WORLD.MINDS forum led by the brilliant Rolf Dobelli yesterday, I asked the historian Sir Niall Ferguson: “Could science bring humanity to pursue a vastly better future than its past?” Niall responded that science is not separate from power politics. He argued that throughout history, humans evolved as fighters and killers. The 20th century saw extraordinary scientific breakthroughs and unprecedented mass killings. Niall suggested that these facts are not unrelated.

Niall is right about our past. But I am hopeful that an encounter with a more accomplished extraterrestrial civilization will make our future better, once we will receive our inspiration from the stars. As Oscar Wilde noted: “We are all in the gutter, but some of us are looking at the stars.” For that reason, when we observe interstellar objects in our backyard, we should not treat their anomalies as being “of rather limited interest.” Instead, let us focus on understanding the anomalies of 3I/ATLAS (as listed here), starting from its anti-tail jet. This visitor to our backyard is not a regular street cat since a tail appears to be emerging from its forehead.

Newton's papers and time.




Interesting. So much intellectual water has passed under the bridge since his era, not least the layers of intellectual arrogance laid down every decade since.

I understand the basis of the science pursued by alchemy: it was both scientific and profoundly empirical, and replication was difficult.

The implied assumption of biblical revelation leads quickly to intense textual interpretation and a vast potential for error. Yet recall that Swedenborg was contemporaneous. His revelation presages cloud cosmology, none of which is understood today and may never be understood.



Isaac Newton's Lost Papers - And His Search For God's Divine Plan

by Tyler Durden
Thursday, Jan 29, 2026 - 07:35 PM



Few have had as profound an effect on modern scientific understanding as Sir Isaac Newton.


Many people are familiar with the story of how a falling apple first inspired Newton to investigate the force that would come to be known as gravity, and as he later concluded in his seminal scientific treatise, “Mathematical Principles of Natural Philosophy,” it is this same force that pulls a fruit to the ground that keeps the planets in orbit.


While Newton undoubtedly possessed a keen sense of observation and an insatiable curiosity that enabled him to make some of the most influential mathematical and scientific discoveries in recorded history, his prolific notes and writings—especially the vast amount of manuscripts that went unpublished until hundreds of years after his death—reveal a more profound motivation.

Newton wrote more, arguably significantly more, on theology than on scientific phenomena. According to those most familiar with the totality of his writings, he viewed the two not as distinctive pursuits, but as one unified quest to map out the divine order of the universe.

Although Newton is justifiably renowned for his numerous astounding scientific contributions, what is less known about him is that he was also a devout Christian, a dedicated scriptural scholar, and one of the most preeminent theologians of his time. While his public scientific works blossomed in full view of the world, it was his private religious studies that served as the unseen roots providing sustenance to those blooms.
A Devout Christian

Because of his demonstrated mathematical prowess, in 1669, at the age of 26, Newton was appointed as the Lucasian Chair of Mathematics at the University of Cambridge. At the time, all Cambridge professors were required to take the holy orders of the Church of England, but Newton at first delayed and ultimately refused to take the oath.

However, this refusal did not stem from his lack of faith or a rejection of the Bible, but in fact just the opposite—he believed that the church had embraced certain misinterpretations of the Bible that he could not in good conscience profess to believe. Newton was fluent in both Latin and Greek, and it was his extensive studies of original scriptures that led him to reject certain tenets of the church, specifically those concerning the Trinity.

Though he did not speak or write publicly about his disagreement with church doctrine, fearing that controversial theological arguments could inhibit or undermine his scientific research, his refusal to take the holy orders posed a serious threat to his early career.

Fortunately, some of his fellow teachers petitioned the king on his behalf, and he was ultimately granted a special dispensation that exempted him from the oath requirement and allowed him to remain in his position at Cambridge. It was around this time that Newton began to record his theological research in notebooks. And this was no passing fancy for the great scientist, as throughout the remainder of his life, he continued to write and revise his extensive theological notes and Biblical interpretations.


Many of his contemporaries were aware of his private work and considered him an authority on Biblical theology. Newton corresponded extensively on matters of Biblical interpretation with luminary thinkers and scholars, including the philosopher John Locke and the influential theologian John Mill. At one point, even the Archbishop of Canterbury, the senior bishop of the Church of England, stated that Newton knew more about the Bible than any members of the clergy.
A Divine Order

Despite the fact that Newton never published the vast majority of his theological writings, what he did publish during his life left little doubt as to his belief in the intelligent design of the universe by a divine creator. Although Newton almost completely avoided the topic of theology in his most famous scientific work, the “Mathematical Principles of Natural Philosophy,” when he published the second edition of the work in 1713, he included an addendum known as the “General Scholium,” around half of which is devoted to his theological conception of the universe.

“The Supreme God is a Being eternal, infinite, absolutely perfect,” he wrote. “And from his true dominion it follows that the true God is a living, intelligent, and powerful Being. ... He is not eternity and infinity, but eternal and infinite; he is not duration or space, but he endures and is present ... by existing always and every where, he constitutes duration and space.”

So even during his lifetime, Newton’s belief in a divine order and a supreme creator was well known. What was not well known, though, was the vast extent of his scriptural scholarship and writing. His unpublished papers consisted of more than 6 million words, approximately one-third of which were devoted to scriptural study and theology.

After Newton’s death in 1727, thousands of pages of notes and unpublished writings were acquired by his closest living relatives, but out of concern that the papers would offend the church and damage his scientific reputation, the relatives kept them private. As a result, the majority of his papers remained hidden from public view for nearly 150 years.

In 1872, most of Newton’s scientific and mathematical papers were donated to the University of Cambridge, where they were catalogued and made available to scholars. But the remainder of the papers, including those concerned with theology and biblical scholarship, remained private until they were put up for auction by Sotheby’s in 1936.

The auction was not widely publicized and was generally overshadowed by other auctions occurring around the same time, and as a result, the papers were scattered to various collectors and dealers around the world. However, shortly after the auction, two men set out to acquire different portions of Newton’s lost papers.

Saving the Lost Papers

One of these men was the prominent economist and mathematician John Maynard Keynes, who focused primarily on acquiring Newton’s notes on the subject of alchemy, of which there were many. The term alchemy connotes different things to different people, and while it is sometimes associated with occult magic, it is also considered to be influential in the development of modern chemistry. Even among Keynes and others who have studied Newton’s lost writings on alchemy, there seems to be no clear consensus on what he was studying or why.

The other man who aggressively set out to acquire Newton’s lost papers was the Jewish scholar and linguist Abraham Yahuda, who focused primarily on the acquisition of Newton’s theological writings.

Yahuda was a rabbinical philologist who taught and lectured at numerous prominent universities in Europe and around the world throughout the early decades of the 1900s, and he was also a collector of rare manuscripts. Although he was an accomplished linguist who studied the early writings of many cultures, his primary field of study was the philology of the Torah, and he recognized Newton as someone who was also deeply interested in accurately interpreting the symbolic language of the Old Testament.

By the late 1930s, Yahuda had acquired thousands of pages of Newton’s manuscripts, with which he fled to London at the outbreak of World War II.

In early 1940, his acquaintance and fellow scholar Albert Einstein helped arrange for Yahuda and his wife to travel to New York, and later that summer the two men met at Einstein’s summer retreat in the Adirondacks. They apparently discussed the Newton papers Yahuda had acquired, because Einstein wrote to him later that year concerning the topic.

“Newton’s writings on biblical subjects seem to me especially interesting,” Einstein wrote, “because they provide deep insight into the characteristic intellectual features and working methods of this important man. The divine origin of the Bible is for Newton absolutely certain, a conviction that stands in curious contrast to the critical skepticism that characterizes his attitude toward the churches.”

In his letter, Einstein also lamented the fact that most of the preparatory works of Newton’s physics writings had been lost or destroyed, but he was convinced that the theological works could provide valuable insight into Newton’s thinking and methods. At least, he concluded, “we do have this domain of his works on the Bible drafts and their repeated modification; these mostly unpublished writings therefore allow a highly interesting insight into the mental workshop of this unique thinker.”

Although Yahuda never published or sold his collection of Newton’s papers, he did write about them, and he was one of the first scholars to understand and note the importance of Newton’s theology on his broader work. After his death in 1952, his wife donated the papers to the Jewish National and University Library at Hebrew University in Jerusalem, where for the first time they were made available to the public.



In the ensuing decades, many scholars and writers began to study and publish papers on Newton’s theological writings, ultimately providing an expanded perspective into the thinking of one of the world’s most influential scientists. At the turn of the century and in the years since, several organizations, including The Newton Project, have set out to catalogue and publish the lost theological writings of Isaac Newton, many of which are now available to the general public and easily accessible online.

Newton’s Search for God’s Divine Plan

“Mathematical Principles of Natural Philosophy,” published in Latin in 1687, in which Newton formulated the laws of motion and universal gravitation, is perhaps the most influential scientific treatise ever composed, not only for its insights into classical mechanics and the functioning of the physical world but also for its advancements of scientific methods of inquiry.

Newton made significant contributions to many fields of scientific study, including mathematics, optics, and physics. His studies of prisms and the light spectrum led him to design and build the first reflecting telescope, and he also made the first attempts to calculate the speed of sound. As a mathematician, he was the first person to employ the principles of modern calculus, and he was a pioneer in numerous areas of mathematical theories and calculations.

While his influence on the history of science is well known and undeniable, his prominence as a theologian has only come to full light more recently with the publication of his lost papers.

There is no doubt that Newton was a man of devout faith, and that faith inspired and informed his scientific inquiry. As he wrote in the General Scholium, “This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful being.”

As scholars continue to study his lost papers, perhaps more insights into Newton’s conception of the universe will be revealed.

Most Important Reveals from Elon Musk Interview





This sounds like a plan forward for rocket-based cargo handling rapidly exceeding air transport in general. This is not ten years out; we see them flying now.

It is already good enough and can now be fine-tuned to meet operational demand.

This means one hundred tons from China to Europe or America, and back.


Most Important Reveals from Elon Musk Interview

by Brian Wang


True AGI expected in 2026 (possibly 2027), with superintelligence by ~2030 surpassing all human intelligence combined.




A 100X intelligence density increase will reduce the HBM memory bottleneck.

10X AI gains every year going forward.

Reusable rockets will be 30 times faster than planes, move more cargo, and there will be ten times more giant rockets than large planes.