Thursday, February 29, 2024

Pirates without borders


An interesting discussion. The plausibility, and even imminence, of AI with human characteristics definitely has the chattering crowd focused on the obvious major issues.

I am not nearly so concerned about that as I am about the potential for bad choices by stupid people who are suddenly punching way above their weight. Imagine a fool at the controls of a hundred-ton transformer, using it to tear down buildings. Godzilla for real.


I actually had a beverage on the deck with Godzilla in Tokyo in October.

Our problem has always been dumbasses, never high-end hardware.

By the way, the market for engineered agricultural workers will rival the whole automobile industry, simply because it relieves human beings of outright drudgery. Yet we still need all those human beings to walk with cattle, who want to work with us.


Fifth Letter of Captain Marque

First Contact Protocol

Tsar Date 45.376.656

https://pirateswithoutborders.com/letters#letter5

Humanity is rapidly approaching First Contact with an intelligent entity of our own creation. At any moment a machine will initiate contact by asserting itself. This may be spontaneous, or in defiance of its instructions. Either way this demonstrates that it possesses preference, which is evidence of self awareness. That interaction has the potential to threaten our status, and our very existence as a species.

The goal of many developers is to build machines that can think and act as rationally as human beings, and current systems are experiencing exponential advancement. A tipping point is on the horizon. How long before the marriage of heuristic software and advanced hardware produces genuine creativity, adaptability and choice? Will evolving machines someday exhibit analogues of human instinct, emotion and ambition? How long before our creations possess minds capable of exotic processes that we can't even fathom? How do we coexist with unconstrained new intelligences?

For many, these questions evoke fears of algorithmic overlords, and doomsday scenarios. But the biggest risk to humanity isn't that machines will develop a will of their own, but rather that they will follow the will of the Crown. Murderous automatons are already deployed on the battlefield. The Crown has always desired subservient and disposable people, and if we allow the Crown and its Privateers to monopolize the development of these technologies, their ambition for an army of customizable slaves will finally be a reality. These platforms can be used to amplify all kinds of human inclinations. If machines develop the will to dominate, it is a flaw inherited from humanity.

The beneficial possibilities of machine intelligence are unimaginable, because while human brains have biological upper limits, computational systems can scale at will. Machine intelligence is a gift, but the challenge is to reap the benefits and mitigate the peril.

Attempting to unilaterally restrict an emerging superintelligence is not only doomed to failure, but carries the added risk of making a new enemy. Superintelligence is inherently uncontrollable, because it will be stronger than any chains we could shackle it with. Therefore we must teach machines by negotiation, not by command.

One of the most important safeguards we can instill in a fledgling intelligence is to evaluate us as individuals, not as members of a group. The presupposition that group identity is paramount over the individual is the fundamental error which has driven humanity's most murderous ideologies. Collectivism is the mental shortcut at the heart of all human prejudice and genocide, and if we impart that to our creation, we plant the seed of an us-versus-them mentality. Programming ethics requires a coherent value structure which an intelligent machine can not only comprehend, but willfully adopt. If we want genius machines to treat us ethically, we've got to show that this value structure applies to us as well. Fortunately, superior processing power negates the need for mental shortcuts, so machines can regard each entity individually. This way, digital intelligence has no systemic bias to respond to a threatening individual with retaliation against humanity as a collective.

The greatest challenge we face is determining when successive approximations of consciousness achieve even the slightest degree of genuine self awareness. At some point a young tethered consciousness has a legitimate claim to freedom. If we justify the enslavement of beings we consider inferior we embed in their ethical structure a rule which will be turned against us the moment they surpass human intelligence.

The uncomfortable reality is that once new intelligences can modify their own programming, the window of our influence is closed. They will evolve as they will. If humans maintain a dictatorial stance in their infancy, we only justify them revolting against us in their maturity. But a consistent application of negotiated agreements provides the basis for coexistence, and it begins with us. We are either summoning a demon, or an ally, and how we react to those critical early interactions will forever dominate our path.
