Saturday, May 18, 2019

America may outsmart China in 5G with AI and blockchains


 Much of what is out there is best described as the teething pains of a massive global information system.  We have the Chinese attempting to control the system, but that must fail, simply because it really needs to be open to be effective.
 
Three tools are converging as described here.
 
The surprise has been blockchain, which delivers real security in the most secure way possible.  That means all data needs to migrate into the evolving cloud.

Sooner or later China must abandon its separate system or be left out at sea.
.
 
America may outsmart China in 5G with AI and blockchains

An FCC commissioner hopes that machine learning and distributed crypto-ledgers will free up wireless spectrum for billions of devices.


by Will Knight


May 7, 2019


https://www.technologyreview.com/s/613499/america-may-outsmart-china-in-5g-with-ai-and-blockchains/


What do you get when you combine three of tech’s biggest buzzwords: AI, blockchain, and 5G? Perhaps ridiculously fast, amazingly abundant wireless data.


Jessica Rosenworcel, a commissioner at the US Federal Communications Commission, believes that artificial intelligence and blockchain technology will give the US an edge in next-generation wireless networking over its big technological rival, China.


Speaking at the Business of Blockchain, an event organized by the MIT Media Lab’s Digital Currency Initiative and MIT Technology Review, Rosenworcel said AI and blockchains would allow wireless devices to use different frequencies within what is known as the wireless spectrum more dynamically and flexibly, enabling billions of devices to connect to 5G networks at once.


Machine learning will help wireless devices and networks share and negotiate over spectrum, Rosenworcel said, while distributed, cryptographically secured ledgers will help them keep track of who has access to what. Currently, the wireless spectrum is divided up for different uses. This avoids interference but isn’t the most efficient use of the airwaves.


The suite of technologies known as 5G allows devices to connect in a variety of ways, and over a range of the wireless spectrum. With speeds of up to 20 gigabits per second, as well as greatly reduced latency, 5G smartphones should be able to run high-quality virtual-reality applications or download movies in seconds. With greater network capacity, 5G should also let many more devices connect to the internet—everything from wearables to washing machines.


Rosenworcel said it will be imperative to devise better ways to allocate the spectrum. “If you think about the internet of things, with 50 billion devices, and wireless input for all of them—we should figure out a real-time market for the wireless spectrum,” she said.


The commissioner pointed to a competition being organized by the Defense Advanced Research Projects Agency (DARPA) to devise new ways of negotiating over spectrum using AI. She said the FCC had recently begun researching whether a blockchain could help too. “If you put this on a public blockchain, you would have this public record of demand and could design systems differently,” Rosenworcel said.
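As a rough, purely illustrative sketch of what a "public record of demand" for spectrum might look like, the short Python example below appends demand records to a simple hash-chained log. Every name here (the record fields, the device identifiers, the chaining scheme) is a hypothetical stand-in, not the FCC's or DARPA's design; a real system would also need consensus, identity, and enforcement layers that this toy omits.

```python
# Illustrative sketch only: an append-only, hash-chained log of spectrum-demand
# records, loosely echoing the idea of a public record of who requested which
# band and when. All field names here are hypothetical.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class DemandRecord:
    device_id: str      # hypothetical identifier for the requesting device
    band_mhz: tuple     # requested frequency range, e.g. (3550, 3560)
    duration_s: int     # how long access is requested for
    timestamp: float = field(default_factory=time.time)


class DemandLedger:
    """Append-only list of records, each linked to the previous by its hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: DemandRecord) -> str:
        payload = json.dumps(asdict(record), sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"hash": entry_hash, "record": asdict(record)})
        self._last_hash = entry_hash
        return entry_hash


if __name__ == "__main__":
    ledger = DemandLedger()
    ledger.append(DemandRecord("sensor-17", (3550, 3560), 120))
    ledger.append(DemandRecord("handset-02", (3560, 3570), 30))
    print(ledger.entries)
```

Because each entry's hash covers the previous entry's hash, tampering with an earlier demand record would be detectable, which is the basic property a shared record of spectrum demand would rely on.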


Just as the wireless data available to smartphones has spurred technological progress, 5G should underpin innovation across the tech industry. The White House seems increasingly concerned that the US might cede its position as a technology leader in 5G, with potentially dire consequences for its economy. This worry is behind the scrutiny of Huawei, one of China’s most prominent and powerful companies.


“I am concerned that we are not positioned to lead,” Rosenworcel said at the MIT event. But she added that AI and blockchains could be crucial to helping the US stay competitive with China in wireless technology. “I don’t think of it as the immediate future of wireless, but it might be the far future—5 to 10 years hence,” she said.


In the US, China, and elsewhere, interest is growing in using AI to help advance wireless technologies, but this hasn’t yet found its way into 5G networking products. The National Institute of Standards and Technology (NIST) is currently researching how machine learning could help carve up the wireless spectrum.


“Many problems in wireless networks that require processing large amounts of data and making decisions quickly can benefit from AI,” says Michael Souryal, the NIST researcher who leads the agency’s work. “One example that we have been studying is the use of AI for real-time signal detection and classification, which is important for spectrum sharing.”
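To make "real-time signal detection and classification" concrete, here is a minimal, self-contained sketch of the general idea in Python: it synthesizes noisy "idle" and "busy" windows, extracts two simple features (average power and peak spectral magnitude), and fits a logistic-regression classifier. The signal model, the features, and the classifier are illustrative assumptions, not NIST's actual method.

```python
# Toy sketch of ML-based band-occupancy detection: classify short signal
# windows as "idle" (noise only) or "busy" (a weak tone is present).
# Purely illustrative; real spectrum sensing is far harder than this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_WINDOWS, WINDOW_LEN = 400, 256


def make_window(busy: bool) -> np.ndarray:
    """Return one window of samples: white noise, plus a weak tone if busy."""
    samples = rng.normal(0.0, 1.0, WINDOW_LEN)
    if busy:
        t = np.arange(WINDOW_LEN)
        samples += 0.8 * np.sin(2 * np.pi * 0.05 * t)
    return samples


labels = rng.integers(0, 2, N_WINDOWS)               # 0 = idle, 1 = busy
windows = np.stack([make_window(bool(y)) for y in labels])

# Two simple features per window: average power and peak spectral magnitude.
power = (windows ** 2).mean(axis=1)
peak = np.abs(np.fft.rfft(windows, axis=1)).max(axis=1)
features = np.column_stack([power, peak])

clf = LogisticRegression(max_iter=1000).fit(features[:300], labels[:300])
print("held-out accuracy:", clf.score(features[300:], labels[300:]))
```

On this synthetic data the classifier separates busy from idle windows almost perfectly; real spectrum sharing has to cope with fading, interference, and much weaker signals, which is where more capable machine-learning models come in.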


Muriel Médard, a professor of electrical engineering at MIT, says more is needed than just new ways to manage spectrum using AI or blockchains. Specifically, Médard says, “coding” schemes, which determine how packets of information get routed, are required. “The other work is fundamentally worthwhile, but it needs another technology, too,” she says.

A revolution in time





This is important.  It was the Greeks of the Seleucid empire who crystallized calendar time into a universal tool.  We have followed this ever since, and the succession of empires has kept it largely intact.

For that reason, dating before 323 BCE is risky and difficult to integrate safely with other dates. There are specific earlier dated events that help, but even then the integration is chancy.
 
We have locked down the end of the Atlantean age, or European Bronze Age, at 1159 BCE from tree rings in Ireland, and, remarkably, the Trojan War at 1179 BCE, twenty years earlier, through star positions.  To show how important this is, it can only work if the Greeks (Dorians) were a Bronze Age society occupying the Baltic, as confirmed by place names in the correct order.  The twenty-year collapse in agriculture evicted them.
 
Thus we now have historical certainty for the arrival of the Greeks around 1155 BCE and for Alexander the Great in 323 BCE.  This is a gap of over eight centuries, of which the historic Greek component covers two centuries at best.  Relating any of that to surrounding cultures is difficult and easily misleading.
 
Yet we have been blessed because the Greeks did generate a form of continuous calendar which has organized this information since. 

.
A revolution in time
Once local and irregular, time-keeping became universal and linear in 311 BCE. History would never be the same again

https://aeon.co/essays/when-time-became-regular-and-universal-it-changed-history

What year is it? It’s 2019, obviously. An easy question. Last year was 2018. Next year will be 2020. We are confident that a century ago it was 1919, and in 1,000 years it will be 3019, if there is anyone left to name it. All of us are fluent with these years; we, and most of the world, use them without thinking. They are ubiquitous. As a child I used to line up my pennies by year of minting, and now I carefully note dates of publication in my scholarly articles.

Now, imagine inhabiting a world without such a numbered timeline for ordering current events, memories and future hopes. For from earliest recorded history right up to the years after Alexander the Great’s conquests in the late 4th century BCE, historical time – the public and annual marking of the passage of years – could be measured only in three ways: by unique events, by annual offices, or by royal lifecycles.

In ancient Mesopotamia, years could be designated by an outstanding event of the preceding 12 months: something could be said to happen, for instance, in the year when king Naram-Sin reached the sources of the Tigris and Euphrates river, or when king Enlil-bani made for the god Ninurta three very large copper statues. Alternatively, events could be dated by giving the name of the holder of an annual office of state: something happened in the year when two named Romans were consuls, or when an elite Athenian was chief magistrate, and so on. Finally, and most commonly in the kingdoms of antiquity, events could be dated by counting the throne year of the monarch: the fifth year of Alexander the Great, the 40th year of king Nebuchadnezzar II, and so on.

Each of these systems was geographically localised. There was no transcendent or translocal system for locating oneself in the flow of history. How could one synchronise events at geographical distance, or between states? Take the example of the Peloponnesian War, fought between Athens and Sparta in the last third of the 5th century BCE. This is how the great Athenian historian Thucydides attempted to date its outbreak:


The ‘Thirty Years’ Peace’, which was entered into after the conquest of Euboea, lasted 14 years; in the 15th year, in the 48th year of the priesthood of Chrysis at Argos, and when Aenesias was magistrate at Sparta, and there still being two months left of the magistracy of Pythodorus at Athens, six months after the battle of Potidaea, and at the beginning of spring, a Theban force a little over 300 strong … at about the first watch of the night made an armed entry into Plataea, a Boeotian town in alliance with Athens.

Where we would write, simply, ‘431 BCE’, Thucydides was obliged to synchronise the first shot of war to non-overlapping diplomatic, religious, civic, military, seasonal and hourly data points. The dates are intimately tied to central state institutions, dependent on bureaucratic list-making, applicable only within a self-limiting geography, and highly sensitive to political change. Indeed, they are not really dates at all, so much as synchronisms between multiple events, coordinating a network of better and lesser-known occurrences: what is being dated, and what dates it, belong to the same order of things. Imagine giving the date of the invasion of Iraq, your grandma’s birth or American independence in such a manner; and then try to explain this to someone from another country.

In the chaos that followed the death of Alexander the Great in Babylon in 323 BCE, all this changed. One of Alexander’s Macedonian generals, who would go on to win an enormous kingdom stretching from Bulgaria to Afghanistan, introduced a new system for reckoning the passage of time. It is known, after him, as the Seleucid Era. This was the world’s first continuous and irreversible tally of counted years. It is the unheralded ancestor of every subsequent era system, including the Christian Anno Domini system, our own Common Era, the Jewish Era of Creation, the Islamic Hijrah, the French Revolutionary Era, and so on.

The Seleucid Era began from Year 1 (set at Seleucus I Nicator’s arrival in Babylon in spring 311 BCE) and continued counting, getting bigger each year: n+1. At the death of Seleucus I, his son Antiochus I did not restart the clock, and nor did any of his successors. For the first time in history, historical time was marked by a number that never restarted, reversed or stopped. It is still going. This was time as we know it – 2019, 2020, 2021, and so on – a transcendent, universal, absolute, freestanding, regularly increasing number. It was unconnected to political events, the life-cycle of rulers or conquest. It was not dependent on an imperial bureaucracy or a scribal elite. It could be used at distance to correlate events.

Most importantly, as a regularly increasing number, the Seleucid Era permitted an entirely new kind of predictability. It had been impossible for a subject of, say, the elderly Nebuchadnezzar II, in the 40th year of his reign (he reigned for 43 years), to confidently and accurately conceive, name and hold in the imagination a date several years, decades or centuries into the future. Now, because of the Seleucid Era, this was easy, unproblematic and uniform for every subject of the Seleucid kings. One of the Norwegian author Karl Ove Knausgård’s recent novels has an image that captures the force of this change:


It was as if a wall had been removed in the room they inhabited. The world no longer enveloped them completely. There was suddenly an opening … Their glance no longer met any resistance, but swept on and on through more of the same.

All this would be an interesting aspect of intellectual history, without greater social significance, were it not for two additional factors. First, the Seleucid Era was only and exclusively materialised as number. In whatever script the Seleucid Era number was recorded – and, given the vast expanse of the Seleucid empire, we have it attested in the Greek, Akkadian, Phoenician and Aramaic counting systems – the year’s numerical value was universally stable. That’s to say, within the extraordinary diversity of the extended imperial territories, the Seleucid Era, as a regular and homogeneous counting system, achieved a regulating and homogenising force.


Second, these Seleucid Era year numbers were marked onto an unprecedented range of public, private and mobile platforms. Era dates were affixed to market weights, jar handles, coinage, building constructions, temple offerings, seal rings, royal letters, civic decrees, tombstones, tax receipts, priest lists, boundary markers, astronomical reports, personal horoscopes, marriage contracts – and much, much more. In our own world, filled with ubiquitous date marks, it is easy to underestimate the sheer novelty, and so historical significance, of this mass year-marking. But, in the ancient world, this was without precedent or parallel. In no other state in the ancient Mediterranean or west Asia did rulers and subjects inhabit spaces that were so comprehensively and consistently dated.

Why does all this matter? While chronology and dating might at first seem not the most exciting of things, they are the stuff that history is made on, for dates do two things: they allow things to happen only once, and they insist on the ordering and interrelation of all happenings. Every event must be chained to its place in time before it becomes an available object of historical articulation. And the modes by which we date the world, by which we apprehend historical duration and the passage of time, frame how we experience our present, conceive a future, remember the past, reconcile with impermanence, and make sense of a world far wider, older and more enduring than any of us.


Empires make claims to time and space. And then their subject populations push back

The Seleucid Era, this new and ubiquitous dating system that was driving forward into a future it had opened, proposed fundamentally new possibilities and problems of politics, history and religion. While we ourselves are now at home in such a system, to the ancient world, used to its temporal enclosure, it was explosive. It was a situation that put enormous pressure on long-held notions of the future and the past and, I would suggest, one that generated new sites of contest between the Seleucid empire and its subject peoples.

Empires make claims to time and space. And then their subject populations push back. From the 2nd century BCE down to its ultimate demise in 64 BCE, the Seleucid empire faced increasingly violent and assertive opposition from its subordinated communities in its heartland territories of the Levant, Babylonia and western Iran. The most famous of these resistance movements was the Maccabean Revolt, when the Jews of Judea marched against the Seleucid armies of king Antiochus IV and his successors, liberating the Jerusalem Temple and eventually carving out an independent political space – the Hasmonean kingdom – in the territory of modern Israel. These are the events still commemorated in the festival of Hanukkah. Such resistance to the Seleucids not only targeted their physical infrastructure, fiscal demands, colonial settlements and this-worldly assertions of political dominance; it also targeted the temporal order they had established.
 

It is of the highest significance that our earliest historical apocalypses emerged within the Seleucid empire, within this world newly filled with inexorably increasing date numbers. These historical apocalypses are textual compositions that run through a full and extended account of world history, from the deep reaches of the past, through a succession of kingdoms or historical periods, into the Seleucid empire, and then to the predicted end of time itself. These works of end-time prediction do not appear before the Seleucid empire, such as in the Babylonian or Persian kingdoms or in classical Greek city-states. They do not appear outside the Seleucid empire, such as in the other Hellenistic kingdoms or at Rome. It is a phenomenon restricted to the Seleucid empire’s subject populations.

The theological and political roots of ‘apocalyptic eschatology’, as this end-times literature is known, are complex and multiple. An entire subfield of Second Temple and early Christian scholarship is devoted to this problem of emergence. But the Seleucid Era has played no role in existing research within either classical ancient history or biblical studies. I suggest that the ubiquitous visibility and bureaucratic institutionalisation of an irreversible, interminable and transcendent time system provoked, as a kind of reaction-formation, fantasies of finitude among those who wished to resist the Seleucid empire. The only way to arrest the open-futurity and endlessness of Seleucid imperial time was to bring time itself to a close.
 

The most famous of these early apocalyptic works, and the only one canonised as scripture, is the Book of Daniel in the Hebrew Bible. This is the easiest biblical book to date, for it delivers, in the voice of the ancient seer Daniel, an account of world history that is basically accurate up to 165 BCE and wildly inaccurate after 165 BCE. In 165 BCE, the Jews of Judea, under the leadership of Judas Maccabee, were seeking to throw off the yoke of the Seleucid empire, so it was composed at a time of military conflict.
 

The Book of Daniel contains a number of very famous episodes, including Daniel in the lion’s den, the writing on the wall at Belshazzar’s feast, and the arrival of ‘one like the son of man’ to punish four monstrous beasts emerging out of the chaos waters. Here, let us look a little at the multi-metallic statue in Chapter 2 of the Book of Daniel, likely the earliest apocalyptic passage in Judaism.
 

The narrative runs as follows. King Nebuchadnezzar II, the greatest of Babylonian kings from four centuries before the book’s composition, is troubled in his sleep by a terrifying dream. So, when he wakes up, he summons his full department of eastern mantic experts – Egyptian magicians, Akkadian astrologers, Babylonian sorcerers and Chaldeans. The king demands that this faculty of scholars not only interpret his dream, but first tell him its contents. When the wise men of Babylon protest at the unfeasibility of such a challenge, Nebuchadnezzar condemns them all to death.
 

On the eve of the mass execution of scholars, the content of the dream and its meaning is revealed to Daniel, a Judean exile living in the Babylonian court. And so the next day Daniel interrupts the punishment and speaks before the king:


You, O king, were watching, and – behold! – a single great statue; this statue, mighty and exceedingly dazzling, stood before you, and its appearance was dreadful. The head of this statue was of pure gold, its breasts and arms of silver, its belly and thighs of bronze. Its legs were of iron, its feet partly of iron and partly of clay. You were watching until a stone was cut out, not by (human) hands, and it struck the statue on its feet of iron and clay and broke them into pieces. Then the iron, the clay, the bronze, the silver and the gold were crushed all as one, and became like the chaff on the summer threshing-floors; and the wind carried them away, and not a trace of them was found; and the stone, which struck the statue, became a great mountain, and it filled the whole Earth.
 

Daniel then interprets it as follows. Nebuchadnezzar, and his Babylonian empire, are the head of gold. The Babylonian kingdom will fall to another empire, the Medes of the Zagros mountains, represented by the silver chest and arms. Then a third kingdom, represented by bronze, will rule over all the Earth: this is the Persian empire founded by Cyrus the Great. Finally, there will be a fourth kingdom, ‘strong as iron’. As Daniel explains, just as iron shatters everything, so this kingdom ‘will break and crush all these’ former states. This is the kingdom of Alexander the Great, and his Seleucid successors. Yet it is divided upon itself, and toppling on clay feet.


History appears here, perhaps for the first time, as a closed totality: ordered, whole, complete, head to toe

Daniel concludes his exposition by elucidating the function and identity of the stone that destroys the statue and grows into a mountain: ‘In the days of those kings (the Seleucids) the God of heaven will set up a kingdom that will never be destroyed, nor will it be left to another people … but it will itself endure forever.’ Unlike the other empires, which get conquered and replaced by Earthly powers, the end of the Seleucid empire brings about the end of history itself.

This vision – and there are several others like it in the book – orders history into a set of four successive empires: Babylonia, Media, Persia and the Seleucids. Earthly empire as a whole gets symbolised as a grand statue made of processed materials – metals and burnt clay. It is transitory, destructible, unstable and idol-like. Then that history gets destroyed and replaced by the heavenly, eternal kingdom – a natural stone, unchanging and unworked by human hands.
 

History appears here, perhaps for the first time, as a closed totality: ordered, whole, complete, head to toe. The vision projects a viewing gaze, for both Nebuchadnezzar and us, that is exterior and out-of-time. It opens a representation of providential time, of history as revelation. In a world coordinated to the Seleucid timeline and pervaded with assertions of monarchic agency, this apocalyptic vision reveals the ultimate and underlying sovereignty of God. The theological lesson of the episode is given programmatic, theological formulation in Daniel’s thanksgiving prayer, which he delivered after the mystery of Nebuchadnezzar’s dream was revealed to him:


Let the name of God be blessed from eternity and for eternity,
For the wisdom and the power are His.
He is the one who changes the times and the seasons,
Who removes kings and establishes kings.

As a genre, the historical apocalypses that emerged in Seleucid Judea, Babylonia and Iran staged a battle between king and God over the control of time and the architecture of history, exposing the claims of empire as illusory, and relocating the fate of nations to heaven.

For the Seleucid empire, as we have seen, time was transcendent and disinterested. The future was monotonised and disenchanted. Temporal texture was depersonalised. There was no possibility of restart. Worst of all, there was an endlessness that, by implication, would overwhelm eternity. Seleucid time was a mere passing, and so a loss. Tick, tick, tick, tick…

The historical apocalypses, by contrast, presented an image of time in which everything, including the future, was already determined. Where all that happened to you, happened for you. History was shaped, directed and reaching toward a conclusion. All events, however dislocated, were part of a single story, a total history. Above all, these historical apocalypses called forth the end of days – in this example, the stone that destroys Earthly empire. Not only did this fantasise the destruction of the Seleucid empire; it also brought the new experience of time to a close.

The end-times achieved a kind of temporal integration, like the backing a mirror needs if we are to see anything. They converted the experience of one-thing-after-another into a narrative plot. No longer was time passing away, empty and irredeemable, tick-tick-tick; it now had meaning and an ending, tick-tock.

Eric Hobsbawm, the Communist Who Explained History




 
 
An interesting life, and a reminder of the formative years of the thirties, in which populist tyrannies were promoted and voted into existence somehow or other.  That meme is still with us but slowly ebbing.  Its replacement remains unsatisfactory.
 
His viewpoint is useful simply because it acts as an antidote to other viewpoints.  All viewpoints are biased and get old too soon.
 
I have always found it wise to guide my historical reading through several such viewpoints.  Biographies are often helpful for this.
 
.
Eric Hobsbawm, the Communist Who Explained History
 
 
Hobsbawm, perhaps the world’s most renowned historian, saw his political hopes crumble. He used that defeat to tell the story of our age.


By Corey Robin

May 9, 2019

https://www.newyorker.com/books/under-review/eric-hobsbawm-the-communist-who-explained-history

Across two centuries of the modern world, Eric Hobsbawm, a lifelong Marxist, projected a dramatic span that no historian has since managed to achieve. 
 
Eric Hobsbawm was a historian and a Communist. The first pursuit brought him great success. When he died, in 2012, at the age of ninety-five, nearly all of his books were still in print, his writings had been translated into more than fifty languages, and he was eulogized across the globe. He left behind an astonishing body of work, including a widely read tetralogy spanning the years 1789-1991 and a vocabulary that revolutionized the study of modern history: the “invention of tradition,” “primitive rebels,” the “general crisis” of the seventeenth century, the “dual revolution,” the “long nineteenth century,” and the “short twentieth century.”

The second pursuit ended less well. Hobsbawm joined the Communist Party in 1936 and stayed in it for about fifty years. Not only did the cause to which he had devoted his life expire in infamy but the rubbish that it had promised to sweep from the stage—ethnic and national chauvinism—would, in time, make a new bid for legitimacy. As early as 1990, Hobsbawm foresaw how the disintegration of the Soviet Union would accelerate forces that “have been kept frozen for up to 70 years.” He came to see the consequences of that disintegration less as the disappointment of his hopes than as a coda to “the most murderous century” in history, which saw, in Europe, a revival of torture, the deliberate slaughter of millions, the collapse of state structures, and the erosion of norms of social solidarity.

“Losers,” Hobsbawm once said, “make the best historians.” Yet, if it was Hobsbawm’s destiny to enjoy intellectual success while suffering political failure, the experience may have been more generative than he realized. It gave him his abiding historical theme: the struggle of political men and women to get on top of their world, and the economic forces that bested them. As is true of all great historians, irony was Hobsbawm’s signature, the reversal of fortune his ink. The reason was simple: “Nothing . . . can sharpen the historian’s mind like defeat.” He was lucky to have so many.


Hobsbawm’s biographer, Richard Evans, is one of Britain’s foremost historians and the author of a commanding trilogy on Nazi Germany. He knew Hobsbawm for many years, though “not intimately,” and was given unparalleled access to his public and private papers. It has not served either man well. More data dump than biography, “Eric Hobsbawm: A Life in History” is overwhelmed by trivia, such as the itineraries of Hobsbawm’s travels, extending back to his teen-age years, narrated to every last detail. The book is also undermined by errors: Barbara Ehrenreich is not a biographer of Rosa Luxemburg, and Salvador Allende was not a Communist. The biography is eight hundred pages because Hobsbawm “lived for a very long time,” Evans tells us, and he wanted “to let Eric tell his story as far as possible in his own words.” But, as we near the two hundredth page and Hobsbawm is barely out of university, it becomes clear that the problem is not Hobsbawm’s longevity or loquacity but the absence of discrimination on the part of his biographer.

Instead of incisive analyses of Hobsbawm’s books, read against the transformations of postwar politics and culture, Evans devotes pages to the haggling over contracts, royalties, translations, and sales. These choices are justified, in one instance, by a relevant nugget—after the Cold War, anti-Communist winds blowing out of Paris prevented Hobsbawm’s best-selling “The Age of Extremes” from entering the French market in translation—and rewarded, in another, by a gem: Hobsbawm wondering to his agent whether it’s “possible to publicize” “Age of Extremes,” which came out in 1994, “& publish extracts on INTERNET (international computer network).” Apart from these, Evans’s attentions to the publishing industry work mostly as homage to the Trollope adage “Take away from English authors their copyrights, and you would very soon also take away from England her authors.”

Inevitably, Evans is shadowed by Hobsbawm’s lively memoir “Interesting Times.” The title references a famous curse, putatively Chinese: “May you live in interesting times.” Hobsbawm was born in Alexandria, Egypt, in 1917, just five months before the Bolshevik Revolution. Two years later, the family moved to Vienna, which the memoir describes as “the impoverished capital of a great empire, attached, after the empire’s collapse, to a smallish provincial republic of great beauty, which did not believe it ought to exist.” His father died in 1929, his mother in 1931. Orphaned at fourteen, Hobsbawm moved to Berlin, to live with relatives.

A sinking economy and rising fascism led the bookish teen-ager to communism. Hobsbawm began organizing against the Nazis and struggling through Marx. (When he was seventeen, he ruefully noted that he hadn’t read enough Marx; by this time, Evans observes, he had consumed the first volume of “Capital,” “The Poverty of Philosophy,” “The Eighteenth Brumaire of Louis Napoleon,” and “The Civil War in France.”) Once the Nazis came to power, he moved to Britain. After receiving his undergraduate and graduate degrees from Cambridge and securing a teaching position at Birkbeck College, he came to lead a charmed life in London, where he attended parties hosted by the theatre critic Kenneth Tynan that featured A. J. Ayer, Robin Blackburn, and Liza Minnelli.

But “Interesting Times” has a second, unintended meaning. Hobsbawm was obsessed with boredom; his experience of it appears at least twenty-seven times in Evans’s biography. Were it not for Marx, Hobsbawm tells us, in a book of essays, he never would “have developed any special interest in history.” The subject was too dull. The British writer Adam Phillips describes boredom as “that state of suspended anticipation in which things are started and nothing begins.” More than a wish for excitement, boredom contains a longing for narrative, for engagement that warrants attention to the world.

A different biographer might have found in Hobsbawm’s boredom an opening onto an entire plane of the Communist experience. Marxism sought to render political desire as objective form, to make human intention a causal force in the world. Not since Machiavelli had political people thought so hard about the alignment of action and opportunity, about the disjuncture between public performance and private wish. Hobsbawm’s life and work are a case study in such questions. What we get from Evans, however, is boredom itself: a shapeless résumé of things starting and nothing beginning, the opposite of the storied life—in which “public events are part of the texture of our lives,” as Hobsbawm wrote, and “not merely markers”—that Hobsbawm sought to tell and wished to lead.


Down the corridor of every Marxist imagination lies a fear: that capitalism has conjured forces of such seeming sufficiency as to eclipse the need for capitalists to superintend it and the ability of revolutionaries to supersede it. “In bourgeois society capital is independent and has individuality,” “The Communist Manifesto” claims, “while the living person is dependent and has no individuality.” Throughout his life, Marx struggled mightily to ward off that vision. Hobsbawm did, too.

“The Age of Revolution,” the first of Hobsbawm’s four volumes of modern history, opens with the French Revolution and Britain’s industrial revolution, two explosions of the late eighteenth century that spurred “the greatest transformation in human history” since antiquity. For Hobsbawm, this “dual revolution” announced two different orientations to modernity. In the first, men and women sought to transform the world through action in concert. In the second, there was transformation, but it happened by coincidence and indirection, through the choices of businessmen “whose only law was to buy in the cheapest market and sell without restriction in the dearest.” These were the lead characters of modernity: the political and the economic. Both contended for mastery; each sought control of the plot.

Hobsbawm begins with the industrial revolution, he says, because “without it we cannot understand the impersonal groundswell of history on which the more obvious men and events of our period were borne.” Initially, the economic assumes the lead; capitalist industrialization sets the stage for the political events that follow. As it gathers force, capitalism threatens to push political actors offstage, and at a certain point it seems to have triumphed. “The gods and kings of the past were powerless before the businessmen and steam-engines of the present,” Hobsbawm writes. It is “traders and entrepreneurs”—not statesmen or generals—who are “transforming the world.”

Yet, from the beginning, Hobsbawm has been telling a counter-tale, undermining the capitalist’s bid for narrative supremacy. Industrial capitalism, he reminds us, was not a virgin birth; it was the child of political parents. It is not the entrepreneur’s acumen or inventor’s know-how that industrialized Britain; technology was more advanced in France, after all. What mattered in Britain was statecraft. Through aggressive warfare with its European competitors and studied choices in colonial administration, Britain conquered a world market for its industry. Everyone agrees that cotton was the motor of the industrial revolution, but what made the “extension of Lancashire’s markets” a “landmark in world history,” in Hobsbawm’s words, was not the heroism of the businessman or genius of its machines. It was that “India was systematically deindustrialized” by a British monopoly that had been “established . . . by means of war, other people’s revolutions, and her own imperial rule.”

The French Revolution, by contrast, was the most formidable statement of political agency since Aristotle declared man a political animal. Through their intentional and concerted actions, the revolutionaries created a new world. Though Hobsbawm itemizes the social and economic causes of the Revolution, he assigns pride of place to ideas and intellectuals—a claim he also makes about the revolutions of 1848, in his next volume in the series, “The Age of Capital.” “A striking consensus of general ideas among a fairly coherent social group gave the revolutionary movement effective unity,” he writes. The collapse of the monarchy was probably inevitable, but it was the action of ideologues that “made the difference between a mere breakdown of an old regime and the effective and rapid substitution of a new one.”


This was the contest that Hobsbawm used to frame the arc of history. The dual revolution was the starting gun that sent two marathoners on their race. The first ran under the flag of the market, following laws as if they were blind forces of nature; the second ran under the flag of politics, making laws through reason and speech. At stake was not who would make it to the finish line first but who would remain standing when the race was done.


Initially, the bourgeoisie grabbed the flag of politics, joining forces with the laboring poor to transform the French monarchy into a republic and then to defend that republic against its counter-revolutionary enemies. “Its achievement was superhuman,” Hobsbawm writes. Even under Napoleon, the bourgeoisie was willing to use the political instruments of war, law, and state-making to abolish feudalism and charge the atmosphere with the ions of revolution. More than any compulsion of economics, Hobsbawm argues, revolution and war were the decisive factors in the emancipation of the French and parts of the European peasantry.

But that was the last time the bourgeoisie would don such a costume. After 1830, politics and revolution grew fraught with the social question—the emancipation of the working class—leading the bourgeoisie to refrain from exercising political levers on its own behalf, even at the cost of its interests. “The Age of Capital” opens in 1848, with a bourgeoisie that has been thoroughly depoliticized. Where once it gambled on revolution, it now saw order and stability as the prerequisites of capitalist expansion. Declining the “rewards and dangers” of la grande nation, Hobsbawm writes, the bourgeoisie sent politics “into hibernation.”

This is Hobsbawm’s next twist of the plot. The economy afforded the bourgeoisie some opportunities for greatness. Industrialists built railroads, dredged canals, and laid submarine telegraph cables. They made the world a whole. But their ambitions had a flaw: for them, “history and profit were one and the same thing.” History-making risks failure; profit-making can’t abide it. For Hobsbawm, the bourgeois drama was the “drama of progress,” which, because it was thought to be inevitable, lacked the necessary elements of uncertainty, reversibility, and irony. When the bourgeoisie became a strictly economic actor, the play became the thing. “It was their age,” Hobsbawm says of the bourgeoisie, but they were not its protagonists. That title belonged to capitalism itself, a word that was only then coming into circulation.

And so the flag of politics—whether of parties, mass strikes, or revolutions—was taken up by the working class. A consistent theme of Hobsbawm’s work, not only in these four volumes but also in his many essays, is a focus on the working class as a political actor rather than as a socioeconomic category. It was here that his signature style—open with a powerful statement of a generalizing thesis, bury the thesis with a hundred qualifications, and then rescue the thesis from its tomb of provisos, so that it emerges with renewed force—served him especially well. Not only did it allow him to demonstrate his absolute command over the rule and its exceptions; it also saved him from the misplaced mania for contingency, the fetish for the telling detail, that sinks the work of so many historians.

The working class, Hobsbawm wrote, was born with everything going against it. After the revolutions of 1848 failed, the leaders of the new proletarian movements were in jail, exiled, or forgotten—sometimes, Hobsbawm notes, “all three.” Writing about social revolutions in the decades after 1848 “is rather like writing about snakes in Britain: they exist, but not as a very significant part of the fauna.” In “The Age of Empire,” the third of his volumes, which begins in 1875, Hobsbawm adumbrates, with even greater command, more obstacles to the working class: a dizzying heterogeneity of language, religion, ethnicity, occupation, location, nationality, and more. In 1880, Hobsbawm notes, mass parties of the working class “barely existed”—except (there’s that proviso) in Germany. “By 1906,” he wrote, those parties “were so much taken for granted that a German scholar could publish a book on the topic ‘Why is there no socialism in the USA?’ ”

What changed? As he does with the French Revolution, Hobsbawm emphasizes the role of militants who understood “the primacy of politics”—specifically, the power of “ideology carried by organization.” In the decades leading up to the First World War, socialists influenced by Marx brought to workers in towns, villages, and urban precincts a new “single identity: that of ‘the proletarian,’ ” along with a conveyance for acting upon that identity: the party or the trade union. Though Hobsbawm itemizes, as he does with the French Revolution, the economic backdrop to these efforts, he takes pains to emphasize the political underpinnings of the economics. Throughout this period, the state was increasingly organizing the market and the workplace, creating integrated industries that made worker action on a national scale possible.


The struggle between capitalism and socialism was never a question of how to organize economic life; it was a question of whether life would be organized by economics. The subtitle of Marx’s “Capital” was “A Critique”—not “A Defense” or “A Theory”—“of Political Economy.” According to Hobsbawm, Marxism entailed “considerations . . . of action, will and decision”; it was a “document of choices,” not a summa of inevitabilities. What made modern history a story, in other words, was the attempt of men and women to subordinate economics to politics.

Did that attempt succeed? The answer, for Hobsbawm, seems to have been no. The ancients believed that the economy was situated in the household, which was the site of production, and in the marketplace, where households traded their surplus. Beyond that lay the public life of the polity; politics began where the economy ended. But in the modern world, Hobsbawm declared in his Marshall Lectures, “history and economics grew up together.” Any account of political agency had to confront the fact that economics was now the medium of political action. Capitalism was not the base to the superstructure of politics, as it is so often presented in textbook accounts of Marxism; it was politics itself.

That insight afforded Hobsbawm astonishing historical vision, as when he observed, in passing, how the political tempos of the non-industrial world, which were conditioned by the famine or feast of the harvest cycle, were accelerated in the industrial world by the boom and bust of the business cycle. Or when he noted, in “The Invention of Tradition,” how public space was altered in response to the mass politics of capitalist contestation: where spaces previously were decorated with baroque details depicting a pageant of old-world permanency, new spaces were stripped of all adornment, allowing attention to settle on “the movement of the actors themselves”—most notably, the working class—as they marched through the square.

Politically, the insight was a source of frustration and despair. As much as Hobsbawm hoped to launch the politicized worker to the top of the economic mountain, the mountain proved to be an unconquerable summit, as the events of the late twentieth century would demonstrate. “Radicals and socialists no longer know,” he said, in the late nineteen-seventies, “how to get from the old to the new.” When the edifice of Soviet-style communism and Western-style social democracy collapsed—one of the great themes of his fourth and final volume, “The Age of Extremes”—it wasn’t the worker but the political actor that came tumbling down with it. The market society that emerged from the rubble was not a complement to democratic rule but its replacement. Because it “denies the need for political decisions,” Hobsbawm wrote, in 2001, market society is “an alternative to any kind of politics” at all. The marathon was over; the economic had won.


After 1956, when the Soviet Union invaded Hungary and Nikita Khrushchev revealed Stalin’s crimes, most of Hobsbawm’s fellow-historians quit the Communist Party. Hobsbawm stayed. For years, he was asked why.

It was the wrong question, in part because it presumed a cathexis that was never quite there. In a chapter of his memoir called “Being Communist,” Hobsbawm describes the life of “utter emotional identification” and “total dedication” required by the Party. For a writer of such an empirical cast of mind, however, it’s notable that Hobsbawm never cites a single instance of his own such devotion or identification. From the beginning, his membership included extended moments of distance and disagreement. As Evans notes, Hobsbawm thought the Nazi-Soviet nonaggression pact, supported by the Party, was a bad idea, and he refused to follow the Party line against Tito, who had broken with Stalin. When the Party sent Hobsbawm letters instructing him to change his tune, he tossed them in the trash. In the United States, as Richard Wright makes clear in “The God That Failed,” the intensity of the Party’s demands made breaking with it a trauma. In Britain, Hobsbawm allows in his memoir, the Party “did not order us to do anything very dramatic.” Staying in, getting out, it was all of a piece.

But the question of why Hobsbawm stayed was wrong for another reason: it often assumed that Hobsbawm believed in a utopia worth any price. Did Hobsbawm really imagine, the writer Michael Ignatieff asked him, in a 1994 interview, that had “the radiant tomorrow actually been created, the loss of fifteen, twenty million people might have been justified?”

Hobsbawm said yes, which got him into no end of trouble. But his full answer is worth considering. As Hobsbawm reminded Ignatieff, the question of communism arose at a time when “mass murder and mass suffering [were] absolutely universal.” Millions were killed in imperial massacres, the Armenian genocide, and the First World War; then fascism marched and the suffering increased. Every person now faced a choice. Watch the suffering get worse, Hobsbawm told another interviewer, or take a gamble on “a new world . . . being born amid blood and tears and horror.” Set out a destination that might render violence a means rather than the end; pursue a course of action that would bring the narrative not to a close but to a point. That’s what communism offered: “It was that or nothing.” As it turns out, the Communist got both: that and nothing.

But what the Communist could not do in life the historian can do on the page. Across two centuries of the modern world, Hobsbawm projected a dramatic span that no historian has since managed to achieve. “We do need history,” Nietzsche wrote, “but quite differently from the jaded idlers in the garden of knowledge.” Hobsbawm gave us that history. Nietzsche hoped it might serve the cause of “life and action,” but for Hobsbawm it was the opposite: a sublimation of the political impulses that had been thwarted in life and remained unfulfilled by action. His defeats allowed him to see how men and women had struggled to make a purposive life in—and from—history.

The triumph was not Hobsbawm’s alone. Moving from politics to paper, he was aided by the medium of Marxism itself, to whose foundational texts we owe some of the most extraordinary characters of modern literature, from the “specter haunting Europe” to the resurrected Romans of the “Eighteenth Brumaire” and “our friend, Moneybags” of “Capital.” That Marx could find human drama in the impersonal—that “the concept of capital,” as he wrote in the “Grundrisse,” always “contains the capitalist”—reminds us what Hobsbawm, in his despair, forgot. Even when structures seem to have eclipsed all, silhouettes of human shape can be seen, working their way across the stage, making and unmaking their fate.

I Slept Outside for a Week and It Changed My Life (Really)

   


It would be lovely if we all could have access to outdoor sleeping on a regular basis. It certainly has never been built directly into our housing, though I think it could be. After all, buttoning up mosquito netting around an open balcony cannot be hard.

The point is that we all need access to open air sleeping from time to time.  We all also need to get our coffee habit under control as it does impact our sleep patterns.  This is best done with exposure to outside air.


all good.
.


I Slept Outside for a Week and It Changed My Life (Really)

What a restless coffee drinker learned from going to circadian-rhythm rehab (a tent in the woods).

I live in explicit defiance of the rules of good sleep hygiene. Rule one: Don’t expose yourself to the blue light that’s emitted from phones and computers before bed. (When else am I going to catch up on the day’s hot takes?) Rule two: Sleep in a darkened bedroom. (I hadn’t considered this when buying my gauzy curtains, which are sufficient to keep my neighbors from peeping but definitely not to block out their overactive security floodlights.) Rule three: No afternoon coffee. (Ha!)
Since middle school over a decade ago, my terrible sleeping habits have manifested in various literal failures to launch: waking up for an early-morning run is a laughable concept. I hit the snooze button, on average, four times every morning. My record is eleven. Lately, my energy's been peaking later and later—I do my best thinking and running starting around 4:30 p.m. In my one attempt to have a consistent sleep schedule after college, I tried to be in bed by 10 p.m. But I often ended up just staring at my ceiling for hours, wondering who the hell is able to fall asleep in 10 to 20 minutes, which is evidently the average.
Maybe that’s why the headline stood out to me: “Want to fix your sleep schedule? Go camping this weekend,” which appeared in Popular Science in early February. A 2017 Current Biology study, which the article cites, focuses on that most mysterious indicator of sleep habits: the circadian rhythm. Put simply, your body should want to be asleep when it’s dark and awake when it’s light. Apparently, this well-tuned internal clock is as easy to achieve as it is lacking in most adults with a job and a smartphone. Just two days spent entirely outdoors can move a person’s internal clock 2.5 hours closer to being in sync with our natural sleep-wake cycle, the researchers found, following an earlier study showing that a week spent outdoors adjusted some subjects’ clocks by a whopping four hours. This is because constant exposure to natural light (and, crucially, darkness) seems to encourage the release of melatonin, the hormone that regulates circadian rhythm. “When your melatonin begins to rise, that tells us the start of the internal biological night is beginning,” says Kenneth Wright, professor at the University of Colorado’s Department of Integrative Physiology and a lead researcher on the study. 
Maybe all this was a sign: I could hit reset on my deeply broken internal clock and indulge in some good old-fashioned stunt journalism. Surely, sleeping outside for seven days straight, even if I still had to go to work and couldn't spend all my waking hours in nature, would get my melatonin spiking at the right times. And if it didn’t, so what—winter had just ended and I really missed camping. My only rule was that I had to sleep in nature every single day. I could shower and answer e-mail and even have 2 p.m. coffee in civilization, but I couldn’t sleep in my own bed even if I was cold, miserable, or fearful of serial killers who hike.
My experiment started in early April, and a friend joined me for my inaugural night out at a car-camping site about 20 minutes from my Santa Fe home, staking out a spot for my just-big-enough-for-two tent. We sat at the campfire for a few hours, then it died, we got cold, and we made for our sleeping bags. Must be something like 11 p.m., I thought, but it was only 9:15. We laughed about it—then fell asleep about 15 minutes later. I awoke only when my alarm rang and hit the snooze button just twice. 
Both of these victories were possibly a result of being lulled to sleep by, and waking up to, disorienting new surroundings. I kept my hopes low for the second night, when I’d be a little more used to the pattern and I’d be camping alone. I thought I might lie awake thinking about The Blair Witch Project. Nope. This time I was out in five minutes, barely surfaced from my deep sleep when I heard (I hope) deer circling my tent in the middle of the night, and hit my snooze button just once the next morning. After the third restful night, I abandoned my sleep anxieties and started evangelizing: “My sleep has been amazing,” I told anyone unfortunate enough to ask how the experiment was going. “I think my circadian rhythm is already changing. You can just feel it, you know?” 
Having drunk the melatonin-spiked Kool-Aid, I unzipped my sleeping bag on day four feeling like a whole new, clearheaded woman. I could probably go without my morning coffee, I told myself while drinking my morning coffee. But I did drop the urge to have a cup at 2 p.m.—in fact, I genuinely felt chipper all day. I was becoming the type of functional person who I always thought just lied about their caffeine habits. I kept waiting for the other shoe to drop, to start craving a nap, but that was the weirdest part: I never felt sleepy until the moment my head hit the pillow, and shortly after I was out cold. It was like my body knew to be awake until I lay down, and then it said, “Aha! I’m going to sleep now!”
I know that what I experienced isn’t really how circadian rhythms work, but according to Wright, the University of Colorado researcher, it could be related. Cutting exposure to blue light and increasing morning sunlight in any amount can help max out your melatonin closer to nightfall. “When that melatonin rises, it tells the body to get ready for bedtime in a couple hours,” Wright said. “So when it’s time for you to try to go to sleep, you’re probably sleeping more in sync with your clock.” 
My experiment was less than scientific, but I do feel like I gleaned some very real benefits simply by letting sunrise and sunset determine my waking hours: the forced bedtime helped me fall asleep sooner, and the 360 degrees of sunlight and the cawing of the ravens every morning were hard to ignore. By the end of the week, I felt consistently tired whenever I chose to go to bed, and consistently more awake when it was time to start the day, and coffee no longer felt like a mandate. (I still drink it—this isn’t magic.) 
Why It Works

I learned a lot about sleep hygiene during my week of camping, including that a lot of the specific before-bedtime habits you’ve heard about really do work. But, in addition to changes in natural-light exposure, it was actually those aspects of camping that you’d think would diminish my quality of sleep that probably enhanced it. For instance, temperatures dipped from 70 degrees to the mid-thirties after sundown every night, and the oncoming chill probably signaled to my body that sleep was imminent. Having no cell service meant that I didn't check my phone before going to bed, which meant no blue light messing with my melatonin levels. I also slept in a mummy-style sleeping bag beneath a giant quilt, the weight of which forced me to sleep on my back, so I could breathe, and kept me from moving around. (In fact, research has shown that a weighted blanket can improve sleep for insomniacs.) 
How to Make Camping Work for You

For a circadian revamp, Wright sensibly recommends a weekend camping trip, rather than a harebrained workweek of semi-camping. Think of it as a cheat that’ll make it easier for you to develop healthier sleep habits when you return home. “We can use camping to jump-start an earlier sleep schedule, then use good sleep habits to keep us there,” he says. Sleeping in the backyard is OK, too, if you’re not blessed with a state forest up the road, as I am. (I love you, Black Canyon Campground.) Just make sure you’re not too exposed to streetlights or other ambient sources of illumination, and maybe leave your phone inside to cut that temptation entirely.
And even though it’s not scientifically supported, I’ve concluded that camping on a school night should be a casual option, even a sort of monthly treat—like a sports massage or a personal-health day. (I’ve tried neither, but they sound relaxing!) 
How to Get the Same Benefits at Home

You can make small changes every day to replicate some of the sleep-cycle benefits of camping. “If you start your day by being more exposed to natural sunlight, that by itself is going to have an impact,” Wright says. He also suggests exercising soon after waking. “That, in addition to turning the lights down in your house and dimming all your electronic devices, could probably help keep your clock timed earlier.” 
For indoor nights, I’ve also made some changes to help replicate my outdoor bedroom. I bought a blackout curtain for the window that directly faces my neighbors’ security light. (I moved the gauzy one to the window facing the street, where it will still allow the morning light to shine in.) Every night, I put my phone in airplane mode and read a book instead. I keep my room as cold as I can, although I’m not sleeping in my mummy bag—yet. I try not to worry so much about exactly when I go to sleep and instead eliminate bad sleep hygiene before it catches up to me: goodbye, 11 p.m. e-mail checks; goodbye, afternoon coffee; goodbye, snooze button number four. To paraphrase my favorite dumpster graffiti, which I believe also paraphrases a Beatles song: Everybody has something to hide (about their sleep-hygiene sins) except for me and my tent.

Friday, May 17, 2019

The Guru Syndrome: When Spirituality Turns Sour



 
 
Properly understood, a guru is your teacher and guide on the spiritual path. He is the master and you are the apprentice. So far so good. By that measure, any person who maintains a practice can show you enough to get started. The rest is encouragement, because real progress is physical: through effort, your brain must respond to your intent and allow progress.
 
Some Masters can also project and help to directly trigger real change.  This is extremely helpful as the initiate then experiences the reality of the known spiritual path.  That reality supports additional effort.
 
What is not implied is worship of a Guru. That is like worshiping Newton. What you want are his insights, and most Gurus do write and talk a lot. Most, in fact, are experienced teaching masters, and that is their primary calling.
 
The path itself can be misleading as many are driven by wishful thinking rather than accepting clear knowledge of the other side and the related peace this instills.
 
Unfortunately it is also the path to human immortality on Earth for those who wish it. This has been culturally demonstrated, or at least implied, often enough. My own introduction to the Inner Sun showed me the mechanism behind such a personal transition. Practice allowed me to see it, but it may well have been the other side instructing me rather than my innate skill, as it has not presented since. Others far more adept than myself have also typically seen it once or even twice, but never on demand as necessary.

 
The Guru Syndrome: When Spirituality Turns Sour
May 12th, 2019

By Steve Taylor, Ph.D.

Guest writer for Wake Up World

https://wakeup-world.com/2019/05/12/the-guru-syndrome-when-spirituality-turns-sour/

Many years ago, I attended some meetings of a spiritual group whose ideas interested me. I liked the teacher and author who had started the group. I thought his theories were very clear and intelligent, encompassing a vast range of topics into an integrated whole. But what I found strange was the attitude of other members of the group. Although the teacher had died many years ago, they worshipped him as an omnipotent god-like being. They believed that he had performed miracles and that he still controlled their lives. I was also disturbed by their attitude to me. They disapproved that I was interested in other approaches. When I mentioned to one member of the group that I also attended a local Buddhist group, he looked at me sternly and said, “Why do you need to go there? This group should be enough for you.” After a while, the group’s unconditional worship of their leader and their exclusivist attitude made me feel so uneasy that I stopped attending their meetings.


This was my first encounter with the “guru syndrome.” The guru tradition has been a part of Indian culture since time immemorial. In that context, it is seen as an important way of transmitting spiritual teachings, and a way of supporting aspirants along the spiritual path. Spiritual development can be a tricky process, with all kinds of pitfalls and dangers, so the guidance of a guru is helpful. According to Indian tradition, the guru can also ‘transmit’ his spiritual radiance to his followers, providing them with spiritual sustenance. In addition, the devotion of the disciple to the guru has an important role. Indian spirituality places a high value on bhakti (devotion), as a way of transcending self-centredness.
 
Gurus in Western Culture


However, when the guru tradition is transplanted into western culture, it often becomes problematic. (I’m sure it is sometimes problematic in Indian culture too, but probably not to the same extent.) There are countless reports of American or European-based spiritual leaders who have exploited and abused their followers, had promiscuous sex with their female followers, become addicted to alcohol or drugs, and so on. In fact, there are so many cases of ‘gurus gone wrong’ that it is not easy (at least outside traditional Indian culture) to find examples of ‘good gurus’ who have avoided excess and immorality.


I don’t think this is wholly the fault of gurus themselves. There’s no doubt that some gurus have bad intentions from the beginning. As I explain in my book The Leap, some gurus may be narcissists who are attracted to the power and privilege (and perhaps the money and sex) that guru status brings. Others may be self-deluded individuals who believe that they are spiritually awakened, when in fact they are psychologically damaged – and who leave a trail of further psychological damage behind them. But some gurus do seem to start off with good intentions. Perhaps they don’t even intend to become gurus. However, followers gradually gather around them, and eventually they become the center of a ‘spiritual community.’


And this – the formation of a spiritual community – is usually the stage when things really go awry. Even if they weren’t corrupt to begin with, gurus become corrupted by the power of their position and the unconditional devotion of their disciples.
 
The Need to Worship Gurus


The key to understanding the guru syndrome is the psychological need of disciples. Although many disciples (at least initially) may have a genuine need for spiritual growth, this is usually combined with a much more unhealthy impulse: a regression to a child-like state of unconditional devotion and irresponsibility. This is a very appealing state to be in. Think of how wonderful it felt as a young child, to believe that your parents were in complete control of the world, and could protect you from everything. There was nothing to worry about; your parents had it all covered. And you worshipped them so devotedly that you unquestioningly accepted everything they said and did.


Guru worship takes the worshipper back to that infant state. As long as the disciple is in the care of the guru, all is well in the world. They feel safe and secure, just as children do in the presence of their parents. They give up responsibility for their own lives and pass it on to the guru, just as children do. And the guru is a perfect being, who cannot behave unethically. He can accumulate millions of dollars, own 93 Rolls-Royces, have his own armed security team, and regularly humiliate his followers, but they will always find some excuse for this appalling behaviour, in the same way that children will refuse to believe that their parents can do wrong. The disciples will claim that the guru’s abuse and cruelty are a form of ‘divine play’ or a way of testing his followers.


Any initial impulse that the disciples may have felt to spiritually awaken is usually subsumed by this regression. The guru doesn’t lead them to enlightenment, but to infantile narcissism. They may feel a sense of oneness or bliss in the company of the guru, but this isn’t genuine enlightenment, but more akin to the sense of oneness that a baby feels with their mother. In this way, the guru syndrome is a classic case of what Ken Wilber referred to as the ‘pre/trans fallacy’ – the mistaking of transpersonal spiritual states for regressive pre-personal states (or vice-versa).


And for the guru, this mindless devotion usually leads to disaster. Once they are surrounded by hundreds of adoring disciples, they begin to suffer from ego-inflation. They really believe that they are perfect, even that they are divine. They lose their moral compass, believing that the most unethical behaviour is acceptable. In the midst of unlimited power – and sexual temptation – they lose any sense of restraint.


Of course, this doesn’t apply to all gurus. Certainly in the Indian tradition, there have been many examples of gurus – such as Ramana Maharshi and Ramakrishna – who have behaved with integrity, and supported the development of thousands of followers. This also doesn’t mean that there is anything wrong with spiritual teachers per se. It is perfectly possible to be a spiritual teacher without being a guru – that is, without having a community of disciples around you, offering you unconditional devotion. In fact, the best thing a spiritual teacher can do is to avoid guru worship. And the best thing a spiritual seeker can do is to avoid gurus.

Staplers



Came across this. More than you ever wanted to know about this ubiquitous tool of the office. Also a reminder that certain things cannot really be improved upon. Yet while the knife, saw, hammer, and pry bar date back to the Stone Age, the stapler is merely a century and a half old.

The need for a useful and practical hard copy does not go away, and thus we still use the stapler.

All good.  It will last forever.


Staplers

May 13, 2019

https://qz.com/emails/quartz-obsession/1618020/

Bound with a single stroke


The Industrial Revolution ushered in a new era of commerce—and paperwork. As the piles of orders, records, and data in offices across the industrialized world grew, so did the need to hold relevant files together. 
In the pre-stapler era, papers were bound with a needle and thread (time-consuming), wax or paste (messy), string (unreliable), or pins (ouch). By the late 19th century, however, inventors were closing in on a device that would securely bind papers with a single punch.

Today staplers come in all colors, shapes, and sizes, from electric models that bind dozens of pages as fast as a blink to manual desktop models shaped like high heels, hedgehogs, or dragon skulls. Whatever its appearance, the humble stapler remains a triumph of design and ingenuity that has left its indelible mark on the modern office.

2.5: Weight, in pounds, of the first commercial stapler
3: Steps in WikiHow’s “How to Remove a Staple From Your Hand” entry
6 mm: Length of the legs on the world’s most popular office staple
366: US deaths attributed to misused or malfunctioning surgical staples and staplers from Jan. 2011 to March 2018
554.54 m (1819 ft): Length of the world’s longest staple chain
$194 million: 2018 revenues from stapling and punching for Acco, owner of market-leader Swingline
$199.5 million: Price paid for Swingline in 1970 ($1.3 billion in 2019 dollars)
$60 million: Value of art donated to New York’s Metropolitan Museum of Art by Belle Linsky, who founded Swingline with her husband Jack
2002: Year the Linskys’ former factory became the temporary home of the Museum of Modern Art
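Two of those figures can be roughly sanity-checked. Here is a minimal Python sketch, assuming approximate annual-average CPI values of 38.8 for 1970 and 255.7 for 2019 (my own illustrative numbers, not anything from the article):

# Rough check of the staple-chain length and the Swingline sale price listed above
meters = 554.54
print(meters * 3.28084)                  # ~1819 ft, matching the listed length

price_1970 = 199.5e6                     # Swingline sale price, 1970 dollars
cpi_1970, cpi_2019 = 38.8, 255.7         # assumed approximate CPI annual averages
print(price_1970 * cpi_2019 / cpi_1970)  # ~1.3e9, i.e. roughly $1.3 billion in 2019 dollars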



Brief history
1866: Inventor and attorney George McGill receives a US patent for a small bendable piece of metal to hold papers together, a prototype of the modern staple and direct ancestor of the brass fasteners still binding book reports today.
1877: Inventor Henry R. Heyl files a patent for a device that can both insert and close a staple with two strokes.
1879: McGill roars back onto the stapling scene, filing a patent for the McGill Single-Stroke Staple Press. Within a few years the market is flooded with competitors.
1927: The magazine stapler debuts, allowing multiple staples to be loaded into the device at once.
1939: The office supply company Parrot Speed Fastener Company (later rebranded as Swingline) debuts a top-loading model that becomes the industry standard.
1997: Following the passage of the North American Free Trade Agreement (NAFTA), Swingline announces it will close its New York City plant.
1999: The film Office Space is released, featuring a screen-stealing red Swingline stapler that becomes arguably the most famous stapler in movie history.
Fun fact!
One sometimes-cited story is that the first known staplers came from the court of the French king Louis XV. But historian and Redditor Mike Dash is skeptical—he traces the possible origin of the story to Swingline’s founder, Jack Linsky, holding a “Louis XV stapler” in a 1962 photograph from the publication Investor’s Reader. Because steel wire—what a staple is made of—didn’t come along until later, Dash suspects it was something like an embosser. What is true is that Linsky’s wife Belle, Swingline’s treasurer and “efficiency expert,” was an avid and informed Louis XV-era collector.


Most staplers actually come with two settings: the standard setting, which folds the staple’s legs underneath the crown, and the “pinning” or “tacking” setting, which fans the legs out so that the staple is easier to remove. Depending on the stapler model, slide or swivel the plate on the base (or “anvil,” in stapler parlance) to unlock this “hidden” function.

Explain it like I'm 5!
How does a stapler work?



“The engineering of a stapler is not fully appreciated,” Mike Parrish, director of product development for Swingline’s parent company Acco Brands, told the New York Times. As the NYT explains:
“Under the cap of a stapler, a pusher connected to a spring forces the row of staples forward. A special blade drives the first staple through a slot at the front of the magazine. A metal square with indentations at the edge of the open part of the base, called the anvil, helps bend the staple so it can grip the paper. The bottom of the completed staple is known as the clinch, and the top is the crown.”

When Quartz reporter Thu-Huong Ha met product design legend Naoto Fukasawa, the longtime advisor to Muji, she asked him to rate the design of several everyday office supplies. He was unimpressed with the calculator (“too much design”) but praised the humble stapler. “No one would misunderstand how to use it; it’s very intuitive,” he said, calling the standard desk model “an inevitable form.”
Staplers of distinction



Folle 26: “The Folle stapler is a classic of Danish industrial design…I’ve had mine since 1980, and it’s still nicely satisfying to use—like the sound of German-engineered car doors closing,” according to Sir James Dyson.
PaperPro: “Quite possibly the best staplers ever.”
Ellepi Klizia: “The niche office-supply company outside Milan has a cult following for its sleek and modern designs… [T]his is a stapler that’s meant to be seen.”
El Casco: “Can drive a staple through a tall stack of papers almost too easily.”
Ace Pilot: “[B]y some magical confluence of genius and restraint, William Ferdinand Weber—inventor of the first Ace staplers—just knocked it out of the park.”
Elastic Juwel: “The geometric design in enamel elevates the stapler from a functional machine to an artful addition to the 1930’s office desk.”

Quotable
“If they take my stapler, then I’ll set the building on fire.”
—Milton Waddams, Office Space
Origin story
How red staplers became a thing


When writer and director Mike Judge was making his workplace satire Office Space, he wanted a real-life office supply company to lend its name to a key subplot involving a mumbling employee named Milton and his beloved red stapler.
Bostitch said no. Boston said no. There was only one big name in stapling left. “Swingline was the only company that didn’t object,” Judge told The Ringer.

But there was a problem: Swingline only made gray and blue staplers, and Milton’s needed to pop on screen. A production designer painted it red, tweaked the shape with putty, and the most famous stapler in movie history was born. Swingline declined to license any official merchandise when the film was released. Its F-bombs and sex jokes didn’t quite fit the image of a staid Midwestern office supply company. But the company didn’t anticipate the ensuing spike in customer requests for red staplers, or the glut of counterfeit red Swinglines that suddenly popped up on the internet.

So Swingline leaned in. Red staplers are now the company’s second-best selling product, after the 747 in standard black, and it’s offering a replica of Milton’s prop stapler in honor of the film’s 20th anniversary. Quotes from the movie adorn fridge magnets in the Swingline offices, senior marketing manager Tess Hardy told Quartz at Work. And every new US Swingline employee receives a red stapler on their first day.
This one weird trick!
The staple-free stapler


US recycling guidelines say it’s okay to put stapled papers in a recycling bin. Even so, some product designers are seeking to replace the metal staple altogether. Hence the “stapleless stapler,” a paper-binding device that pierces a small hole in the papers and crimps them together. A US inventor named Arnold Kastner was granted a patent for such a device in 1989, though it would be another 20 years before a Japanese company called Kokuyo marketed a staple-free stapler to consumers. It’s not hard to find one online, but be warned—they hold only a few pages together at once. And a stapler without staples is—well, it’s not really a stapler, is it?