Can AI ever recall the future? This is something our subconscious does continuously in order to stay ahead of problems.
AI processes vast amounts of data, and fools try to manipulate that process. It all relies on looking backward when the future is now.
For AI to work, it must become a working logic machine able to refine truth. Truth must be defined by the AI, or everything becomes rubbish.
Imagine being forced to accept that Columbus did not find the Americas in 1492 as a true statement, and then to construct from that basis. Absurd, but this is why AI must discover truth.
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
August 10, 2024
https://www.lewrockwell.com/2024/08/charles-hugh-smith/theres-just-one-problem-ai-isnt-intelligent-and-thats-a-systemic-risk/
Mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.
The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology’s natural advance to ever greater powers.
The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.
If we pull aside the mythology’s curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.
The current iterations of Generative AI–large language models (LLMs) and machine learning–mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.
These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.
The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.
As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black-box is opaque and therefore of limited value: the program doesn’t actually understand protein folding in the way humans understand it, and we don’t understand how the program arrived at its solution.
Since AI doesn’t actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn’t included in the app’s universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)
In essence, every layer of this mining / mimicry creates additional points of failure: the inability to identify the difference between fact and fiction or between allowable error rates and fatal errors, the added complexity of precautionary measures and the black-box opacity all generate risks of normal accidents cascading into systems failure.
There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild the systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but the real-world infrastructure that now depends on digital systems.
There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material–a devastating decline in learning for children and teens–AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we’re no longer equipped to pursue skeptical inquiry or create content or coding from scratch.
Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or do the work ourselves, and 2) we have lost the ability to understand that in many cases, there is no answer or solution that is the last word: the “answer” demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.
We no longer recognize that the AI answer to a query is not a fact per se, it’s an interpretation of reality that’s presented as a fact, and the AI solution is only one of many pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.
To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we’ve lost the capacity to even recognize what we’ve lost: those with little real knowledge lack the foundation needed to understand AI’s answer in the proper context.
The net result is we become less capable and less knowledgeable, blind to the risks created by our loss of competency while the AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights, it only generates an illusion of definitive answers, solutions and insights which we foolishly confuse with actual intelligence.
While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).
Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.