Tuesday, January 22, 2013

Warp Speed: What Hyperspace Would Really Look Like






 All good fun and likely all nonsense. I suspect that FTL travel is possible only using worm hole concepts, and that should be safe enough provided one moves to low-curvature points to avoid instantaneous acceleration being applied. I am sure there will be other tricky issues as well once the core technology is mastered.

In that scenario, the view would simply change completely. It is likely also that actual movement will be made in short hops to keep line of sight to any close-by objects, and it may be more energy efficient.

Warp drive remains a concept unlikely to be competitive in the end with an effective worm hole strategy, which I suspect will prove practical as more confirming evidence washes up.

Warp Speed: What Hyperspace Would Really Look Like

by Clara Moskowitz, SPACE.com Assistant Managing Editor

Date: 15 January 2013 Time: 11:54 AM ET



The science fiction vision of stars flashing by as streaks when spaceships travel faster than light isn't what the scene would actually look like, a team of physics students says.

Instead, the view out the windows of a vehicle traveling through hyperspace would be more like a centralized bright glow, calculations show.

The finding contradicts the familiar images of stretched out starlight streaking past the windows of the Millennium Falcon in "Star Wars" and the Starship Enterprise in "Star Trek." In those films and television series, as spaceships engage warp drive or hyperdrive and approach the speed of light, stars morph from points of light to long streaks that stretch out past the ship.

But passengers on the Millennium Falcon or the Enterprise actually wouldn't be able to see stars at all when traveling that fast, found a group of physics master's students at England's University of Leicester. Rather, a phenomenon called the Doppler Effect, which affects the wavelength of radiation from moving sources, would cause stars' light to shift out of the visible spectrum and into the X-ray range, where human eyes wouldn't be able to see it, the students found.

"The resultant effects we worked out were based on Einstein's theory of Special Relativity, so while we may not be used to them in our daily lives, Han Solo and his crew should certainly understand its implications," Leicester student Joshua Argyle said in a statement.

The Doppler Effect is the reason why an ambulance's siren sounds higher pitched when it's coming at you compared to when it's moving away — the sound's frequency becomes higher, making its wavelength shorter, and changing its pitch.

The same thing would happen to the light of stars when a spaceship began to move toward them at significant speed. And other light, such as the pervasive glow of the universe called the cosmic microwave background radiation, which is left over from the Big Bang, would be shifted out of the microwave range and into the visible spectrum, the students found.
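
To get a rough feel for the sizes involved, here is a minimal sketch of the relativistic Doppler formula for a source dead ahead. The sample wavelengths and speeds are illustrative assumptions, not figures from the Leicester paper.

```python
import math

def blueshift_factor(beta):
    """Relativistic Doppler factor for a source directly ahead:
    observed wavelength = emitted wavelength * sqrt((1 - beta) / (1 + beta))."""
    return math.sqrt((1.0 - beta) / (1.0 + beta))

# Illustrative wavelengths (assumed, not taken from the students' paper):
STARLIGHT_NM = 500.0    # green visible starlight, in nanometres
CMB_PEAK_NM = 1.06e6    # cosmic microwave background peak, roughly 1.06 mm

for beta in (0.9, 0.999, 0.999999):
    f = blueshift_factor(beta)
    print(f"v = {beta}c: 500 nm starlight -> {STARLIGHT_NM * f:.2f} nm, "
          f"CMB peak -> {CMB_PEAK_NM * f:,.0f} nm")

# At speeds very close to c the starlight drops below ~10 nm (X-rays, invisible
# to the eye) while the microwave background climbs toward the visible band.
```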

"If the Millennium Falcon existed and really could travel that fast, sunglasses would certainly be advisable," said research team member Riley Connors. "On top of this, the ship would need something to protect the crew from harmful X-ray radiation."

The increased X-ray radiation from shifted starlight would even push back on a spaceship traveling in hyperdrive, the team found, slowing down the vehicle with a pressure similar to the force felt at the bottom of the Pacific Ocean. In fact, such a spacecraft would need to carry extra energy reserves to counter this pressure and press ahead.

Whether the scientific reality of these effects will be taken into consideration in future Star Wars films is still an open question.

"Perhaps Disney should take the physical implications of such high speed travel into account in their forthcoming films," said team member Katie Dexter.

Connors, Dexter, Argyle, and fourth team member Cameron Scoular published their findings in this year's issue of the University of Leicester's Journal of Physics Special Topics.

Editor's Note: This article was updated to correct the following error: As an ambulance moves closer to an observer, its wavelength becomes shorter, not longer.

Largest Structure in Universe Discovered?




Let us try something else. Quasars are way closer to us than we have imagined and may actually be in our own galaxy. I have good reason to think this anyway, as the present theory depends on the extreme red shifts exhibited, for which present-day astronomy accepts only one explanation.
Suppose we do that. The group then becomes a simple star cluster of much the same age whose members have entered the quasar stage close together. This obviously changes the whole picture and demands an independent explanation for the spectrum and the red shift. I can actually do this easily but will leave it for the time being. It flows naturally from understanding the implications of the higher-order metrics I introduced in 2010 in my paper in AIP's Physics Essays and the derivative Cloud Cosmology, as yet unpublished.
The only other large groups out there are in fact local star clusters, so it is completely reasonable to assert that they are the same thing.


Largest Structure in Universe Discovered

By Mike Wall | SPACE.com – Fri, 11 Jan, 2013

Astronomers have discovered the largest known structure in the universe, a clump of active galactic cores that stretches 4 billion light-years from end to end.

The structure is a large quasar group (LQG), a collection of extremely luminous galactic nuclei powered by supermassive central black holes. This particular group is so large that it challenges modern cosmological theory, researchers said.

"While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe," lead author Roger Clowes, of the University of Central Lancashire in England, said in a statement. "This is hugely exciting, not least because it runs counter to our current understanding of the scale of the universe."

Quasars are the brightest objects in the universe. For decades, astronomers have known that they tend to assemble in huge groups, some of which are more than 600 million light-years wide.

But the record-breaking quasar group, which Clowes and his team spotted in data gathered by the Sloan Digital Sky Survey, is on another scale altogether. The newfound LQG is composed of 73 quasars and spans about 1.6 billion light-years in most directions, though it is 4 billion light-years across at its widest point.

To put that mind-boggling size into perspective, the disk of the Milky Way galaxy — home of Earth's solar system — is about 100,000 light-years wide. And the Milky Way is separated from its nearest galactic neighbor, Andromeda, by about 2.5 million light-years.

The newly discovered LQG is so enormous, in fact, that theory predicts it shouldn't exist, researchers said. The quasar group appears to violate a widely accepted assumption known as the cosmological principle, which holds that the universe is essentially homogeneous when viewed at a sufficiently large scale.

Calculations suggest that structures larger than about 1.2 billion light-years should not exist, researchers said.
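
Putting the article's own figures side by side shows how far outside the expected range this object sits; a quick back-of-the-envelope comparison:

```python
# Scale comparison using the figures quoted in the article above (light-years).
milky_way_diameter = 1.0e5   # ~100,000 ly
andromeda_distance = 2.5e6   # ~2.5 million ly to our nearest large neighbour
homogeneity_limit  = 1.2e9   # ~1.2 billion ly, largest structure theory expects
lqg_longest_axis   = 4.0e9   # ~4 billion ly, the newfound quasar group

print(f"LQG vs. homogeneity limit:       {lqg_longest_axis / homogeneity_limit:.1f}x")
print(f"LQG vs. Milky Way-Andromeda gap: {lqg_longest_axis / andromeda_distance:,.0f}x")
print(f"LQG vs. Milky Way diameter:      {lqg_longest_axis / milky_way_diameter:,.0f}x")
```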

"Our team has been looking at similar cases which add further weight to this challenge, and we will be continuing to investigate these fascinating phenomena," Clowes said.

The new study was published today (Jan. 11) in the Monthly Notices of the Royal Astronomical Society.

Capitalism Imbalance and Resolutions




What is destroying capitalism is its penchant for gaming monopolies and stifling competition and innovation. The expedient of setting the tax regime to heavily tax away the cost-of-capital advantage of the capital-rich could well end this abuse.

Recall that large capital accesses cheaper money than a small, dynamic, growing business. It is no trick to establish the cost of capital and then to tax away the advantage. It is not particularly onerous, but assembling capital for the sake of assembling it becomes unattractive and will then be driven only by specific business rationalization.

Such a formula would quickly see the too big to fail enterprises unwind in a hurry when their smaller competitors have a few hundred extra basis points to work with.

The real problem of capital has been the simple access to capital which has been rationed by the King to his buddies. Micro banking has shown us that there are alternatives that work and empower.

If everyone is a natural member of an economic cooperative of no more than 200 individuals, with access to sufficient capital to prosper and with lending managed by a group of internal peers, then it is reasonable to expect that all these issues will resolve in time, simply because we have a working balance.

Is There an Alternative for Capitalist Economics and Politics? Richard Wolff Says Yes

Tuesday, 08 January 2013 09:18


"Imagine a country where the majority of the population reaps the majority of the benefits for their hard work, creative ingenuity and collaborative efforts. Imagine a country where corporate losses aren't socialized, while gains are captured by an exclusive minority. Imagine a country run as a democracy, from the bottom up, not a plutocracy from the top down. Richard Wolff not only imagines it, but in his compelling, captivating and stunningly reasoned new book, Democracy at Work, he details how we get there from here - and why we absolutely must."

-- Nomi Prins, Author of It Takes a Pillage and Black Tuesday

Few are better equipped than economist Richard Wolff, professor emeritus at the University of Massachusetts, to address the massive failings and inequalities of capitalism as he does in his latest book, Democracy at Work: A Cure for Capitalism.

He also describes Workers' Self-Directed Enterprises (WSDEs) as an alternative to the capitalism that broke the US economy and has resulted in massive economic redistribution to the ruling elites.

Mark Karlin: In your book, what is the distinction between capitalism and welfare state capitalism?

Richard Wolff: Capitalism, like all other economic systems, displays a variety of forms. There are, for example, largely private, laissez-faire kinds of capitalism that differ in many ways from forms of capitalism in which the state plays more significant roles, such as market regulator or social welfare guarantor (as in "state welfare capitalism"), or as a close partner of capitalists as in fascism. What remains the same across all such forms - why they all deserve the label "capitalist" - is the exclusion of the mass of workers that produces the output and generates the profits from receiving and distributing that profit, and from generally participating democratically in enterprise decisions. Capitalism excludes workers from deciding what is produced, how it is produced, where it is produced and how profits are to be used and distributed. Democracy at Work is a critique and alternative aimed at changing that exclusion shared by all these forms of capitalism.

Mark Karlin: In that regard, what do you think about the contention that FDR was not at all an opponent of capitalism, but simply saw that some government intervention was necessary in the US economy in order to save capitalism during the depression of the '30s?

Richard Wolff: What FDR saw was the political might of the coalition of unionists (galvanized by the CIO in the middle 1930's), socialist and communist parties demanding that government not only bail out the banks and corporations, but also directly help the mass of people suffering the Great Depression. Elements within that coalition threatened that Washington's failure to do so would turn many millions of US citizens against capitalism. FDR got the message and crafted a deal in response. The government would both tax and borrow from corporations and the rich to fund the new Social Security system, national unemployment insurance, and a vast program of federal hiring. In return, the coalition would downplay its anti-capitalism and celebrate instead the achievement of a welfare state type of capitalism. The coalition mostly accepted this New Deal. FDR went on to win four consecutive presidential elections, making him the most popular president in US history. The New Deal saved the capitalist system by changing its form from a relatively more laissez-faire [form] to a welfare-type state.

Mark Karlin: Before the recent crash, what was the capitalist crisis from above and below that you describe in the book?

Richard Wolff: The crisis from above refers to the speculative mania indulged by the small minority of people (major holders of corporate securities, boards of directors, their professional staffs, etc.) who gathered increasing profits into their hands as wages stagnated after the mid-1970s. Financial enterprises competed for the funds accumulating in this minority's hands by taking ever-greater risks with old and new (e.g. asset-backed securities, credit default swaps, etc.) financial instruments. Another in the long history of capitalist speculative manias built a bubble on the back of the rising debt of the US working class. When the latter's debt burden could no longer be serviced, the bubble burst, adding the crisis from above to that built from below by the lethal mixture of stagnant wages and rising debts.

Mark Karlin: How does the distribution of surpluses in revenue (profits) in business enterprises affect the economic structure of a society?

Richard Wolff: The surplus generated by enterprises - the excess of revenue from commodity sales over the direct costs of producing those commodities - is what capitalists receive and control in capitalist economies. They then distribute those surpluses as they see fit to reproduce the system in which they occupy such exalted positions. Thus, for example, they distribute some of the surplus to top corporate officials (shaping the distribution of income and wealth in capitalist societies), some to moving production abroad if, when and where that might generate larger surpluses (producing unemployment at home and growth abroad), some to donations to politicians and parties to shape and control political decisions to serve their needs, and so on. The distribution of the surplus is thus a major shaper of how our society works, how we all live.

Mark Karlin: During the last few years, particularly during and after the Occupy movement, many of the masters of the universe on Wall Street trumpeted their alleged intellectual capital, as if capitalism was equal to being the smartest guys on the block. In this bragging rights boasting, it can be inferred that workers are interchangeable parts of a machine and should be grateful to those with "intellectual capital." How do you respond to that claim?

Richard Wolff: Intellectual capital is just the latest name for an old idea that has long been recognized as a crucial part of production. In the past, other names included "know how" and "technology" and "expertise." The basic idea was that in addition to the tools, equipment, machines and raw materials that go into production, and in addition to the muscles and energy people contribute to production, there is the mental capacity to think, to adjust behavior, to invent new things and new ways of working - that is also crucial to production. To build that "intellectual capital" is one purpose of schooling. Of course, everyone in the production process can bring his or her intellectual capital into the production process if that process is organized to welcome, recognize, reward and stimulate that. When people suggest that only executives or financiers have or apply "intellectual capital," that is one sure way to discourage and reduce the application of workers' intellectual capital to production.

Mark Karlin: Refreshingly, you offer a key alternative to capitalism in decline. You promote Workers' Self-Directed Enterprises (WSDE) in Part III of your book. What would be a succinct description of a WSDE?

Richard Wolff: Quite simply, a WSDE entails the workers who make whatever a corporation sells also functioning - collectively and democratically - as their own board of directors. WSDEs thereby abolish the capitalist differentiation and opposition of surplus producers versus surplus appropriators. Instead, the workers themselves cooperatively run their own enterprise, thereby bringing democracy inside the enterprise where capitalism had long excluded it.

Mark Karlin: In your sixth chapter, you contrast WSDEs with worker-owned enterprises, worker-managed enterprises and cooperatives. What are the primary differences?

Richard Wolff: Workers have a long history of multiple kinds of cooperatives. That is, workers can cooperatively own (e.g. their pension fund holds shares in the company that employs them), buy (e.g. the many food coops around the country), sell (e.g. grape growers who combine to market their outputs), and manage (e.g. workers take turns supervising themselves). All such cooperatives can and often do co-exist with a capitalist organization of production in the precise sense of workers being excluded from the decisions of what, how and where to produce and what to do with the profits. What makes WSDEs unique is precisely that they are about cooperative production, about ending the capitalist division of producers from appropriators of the surplus, and replacing it with democratic cooperative decisions governing production and the social use of its fruits.

Mark Karlin: Where does the much-celebrated (and world's largest) Mondragon cooperative model fit in with your vision of WSDEs?

Richard Wolff: Mondragon is the world's largest and perhaps most successful example of WSDEs' successful growth in competition with conventional capitalist enterprises. Begun in 1956 with six workers organized into a cooperative enterprise by a Spanish priest, the Mondragon Cooperative Corporation (MCC) now employs over 100,000 workers, is the largest corporation in the Basque part of Spain and the tenth largest corporation in all of Spain. It has extensive research and development labs generating new ways to produce new products and maintains its own university to train its workers and interested others in all the ways of running and building democratically cooperative enterprises. MCC is thus a remarkable testimony to the contemporary viability and strength of non-capitalist production systems.

Mark Karlin: I recently asked this question in another interview on labor and economics and received an answer that amounted to a sigh. Although there is definitely a growing cooperative movement in the United States, it is still struggling. What will be the tipping point that will persuade US workers that WSDEs are preferable to the current managerial capitalist system? So many workers in the US have been brainwashed that any alternative to capitalism is satanic and communist. How does an idea like WSDEs change from an intellectual concept to a grassroots labor movement?

Richard Wolff: As has happened often in human history, what provokes change is less any clear vision of where we go next and more the intolerability of where we are. Capitalism is no longer "delivering the goods" for most people. The circle of its beneficiaries grows smaller and richer and more out of touch with the mass of people than ever. In the US, this is particularly problematic because the rationale of US capitalism has long been its creation and sustenance of a vast "middle class." As capitalism's evolution destroys that middle class, it opens the space in minds and hearts to inquire after alternatives to an increasingly unacceptable system. WSDEs offer precisely that. Nothing better illustrates that growing interest than the fact that Democracy at Work is going into a second printing three months after it was first published.

Mark Karlin: Republicans and Democrats both tout the alleged benefits of free trade agreements, despite their lack of adequate support for labor rights and worker remuneration. One thing that free trade advocates claim is that by moving to lower-cost labor, products will be cheaper in the US. While this may be true in some cases, this hardly appears to be the case in name brand products (particularly clothes) and trendy hi-tech products such as Apple. For instance, I went to a retail store and looked at items made by Calvin Klein, Nautica, and IZOD. Not one of the items, not one, was made in the United States. Most were made in China and Southeast Asia. Supposing we assume a worker who gets a few dollars a day produces a Nautica polo shirt for $1. Add the costs of material and equipment and maybe we get to $3. Add management and shipping and maybe we get to $5 per shirt, maybe. But the retail price on upper end brand name polo shirts could be as much as $70. So the shirt is not less expensive; the company is just making a greater profit off of exploited labor overseas. Is that correct?

Richard Wolff: When US corporations producing for the US market move existing (or open new) production facilities overseas, their usual goal is more profits. They relocate to exploit cheaper labor, lax environmental rules, lower taxes, etc. If they lowered their prices, then the cheaper labor, lax rules, and lower taxes would raise their profits less or not at all. So they rarely drop prices much when they move and then only temporarily to gain market share (thereby pressuring competitors to similarly relocate). Of course, relocating corporations could choose to lower their prices, but profit considerations usually render that a last resort. Finally, corporations in lower-cost overseas locations can usually more easily manage competition among themselves than they do in the US (because local rules against monopoly are less effective and relatively low-cost bribes are more effective).

Monday, January 21, 2013

Atlantean Mining Technology Rediscovered and Modernized





You do want to go to the original on this one in order to work through the excellent pictures. Way more important, however, is that this clears up a major problem that I have had with Bronze Age, or Atlantean, mining technology between about 2500 BC and 1159 BC.

The Bronze Age assumption we have all accepted has involved the use of an open fire on the ore face to induce spalling to collect ore. It is a really bad idea, simply because control of the airflow is lost almost immediately. Carbon monoxide makes it impossible and the heat delivery is terribly low. On top of all that, any manpower calculation requires a huge input for every pound collected. Yet deep mines, where air supply was certainly a problem, are amply in evidence in Cornwall.

Prior work using hydrogen attempted to induce rock spalling, but never got much past the hydrogen production problem.

What this modern technological solution does is use diesel fuel and a strong air flow to ignite a burn front that becomes a hot zone against the working face, operating at an astounding 1,800 C. This easily shatters the face and allows the material to fall away in small particles, ideal even for blowing back out of the hole.

The key point that I want to make, though, is that I could easily set up this system using Bronze Age technology. The Chinese had continuous bellows, and even ordinary bellows will create the necessary head pressure. For fuel we can now use any suitable liquid oil, including animal fats. The torch head will be very hot and the oils will vaporize coming through the head to create a continuous ignition front.

The economics are easily superior to those of blasting even in large underground veins, and clearly superb for small vein structures, simply because the method naturally minimizes dilution with waste rock. I now understand the nature of some of those old workings. This is actually a huge breakthrough for underground mining that allows us to go after ore that was once chased with hand steel.

Now that we know what to look for, the evidence will start to show up. We already have a Mayan glyph demonstrating the process at work. Even better, a friend of mine told me some thirty years ago that a decade earlier he had worked underground in Scotland driving a passage for a dam penstock. They encountered a borehole two feet across, in host rock of no volcanic nature, that was clearly man-made. The hole itself had naturally infilled with calcite and was sealed tight. Yet there was no mistaking what it was.

This demonstration clearly tells us just how such a bore hole came about. Note that there was no technology then known to do this. Even now, it would still take centuries for calcite to fill such a hole.

The Atlantean Age had an insatiable appetite for metal, as it was hoarded by a globally expanding economy. Its traces are found all over. Yet one need only read ancient texts closely to understand the centrality of metal in that economy.

THERMAL FRAGMENTATION: REDUCING MINING WIDTH WHEN EXTRACTING
NARROW PRECIOUS METAL VEINS

Donald Brisebois and Jean-Philippe Brisebois
Rocmec Mining, Canada



ABSTRACT

The mining of high-grade, narrow vein deposits is an important field of activity in the precious metal mining sector. The principal factor that has undermined the profitability and effectiveness of mining such ore zones is the substantial dilution that occurs when blasting with explosives during extraction.

In order to minimise dilution, the Thermal Fragmentation Mining Method enables the operator to extract a narrow mineralised corridor, 50 cm to 1 metre wide (according to the width of the vein), between two sub-level drifts. By inserting  a strong burner powered by diesel fuel and compressed air into a pilot hole previously drilled directly into the vein, a thermal reaction is created, spalling the rock and enlarging the hole to 80 cm in diameter. The remaining ore between the thermal holes is broken loose using low powered explosives, leaving the waste walls intact. This patented method produces highly concentrated ore, resulting in 400% - 500% less dilution when compared to conventional mining methods.

The mining method reduces the environmental effects of mining operations since much smaller quantities of rock are displaced, stockpiled, and treated using chemical agents. The fully mechanised equipment operated by a 2-person team (1 thermal fragmentation operator, 1 drilling operator) maximises the effectiveness of skilled personnel, and increases productivity and safety.
The Thermal Fragmentation Mining Method is currently employed in 3 mining operations in North America.

INTRODUCTION

The mining of high-grade, narrow vein deposits is a predominant field of activity in the precious metal sector. These types of deposits are located throughout the globe and have a significant presence in mining operations. The principal factor that has undermined the profitability and effectiveness of mining such ore zones is the substantial dilution that occurs when blasting with explosives during extraction and the low productivity associated with today's common extraction methods. The Thermal Fragmentation Mining Method has been conceived to mine a narrow mineralised corridor in a productive and cost efficient manner in order to solve these particular challenges. The following describes this mining method in depth and outlines its successes in improving the extraction process of such ore bodies.

DESCRIPTION OF TECHNOLOGY

A strong burner powered by diesel fuel is inserted into a 152 mm pilot hole drilled into the vein (Figure 1) using a conventional longhole drill. The burner spalls the rock quickly, increasing the diameter of the hole to 30 - 80 cm (Figure 2) producing rock fragments 0 - 13 mm in size. The leftover rock between fragmented holes is broken loose using soft explosives and a narrow mining corridor with widths of 30 cm to 1 metre is thus extracted (Figure 3).  Since the waste walls are left intact, the dilution factor and the inefficiencies associated with traditional mining methods are greatly reduced.

THE BURNER

The burner (Figure 4), powered by diesel fuel and compressed air, creates a thermal cushion of hot air in the pilot hole, which produces a thermal stress when coming in contact with the rock. The temperature difference between the heat cushion and the mass of rock causes the rock to shatter in a similar manner to putting a cold glass in hot water. A spalling effect occurs (Calaman and Rolseth, 1968), and the rock is scaled off the hole walls and broken loose by the compressed air.
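
A rough order-of-magnitude calculation shows why a hot surface layer spalls. The sketch below uses the standard constrained thermal stress relation with generic hard-rock properties; every number is an assumption for illustration, not a value from the Rocmec paper.

```python
# Order-of-magnitude sketch of the thermal stress that drives spalling.
# All material properties are assumed, generic hard-rock values.
E     = 50e9    # Young's modulus, Pa
alpha = 8e-6    # linear thermal expansion coefficient, 1/K
nu    = 0.25    # Poisson's ratio
dT    = 500.0   # temperature rise of the thin heated skin, K (illustrative)

# Biaxial stress in a heated surface layer constrained by the cool rock behind it
sigma = E * alpha * dT / (1.0 - nu)   # Pa

print(f"Thermal stress ~ {sigma / 1e6:.0f} MPa")
# Granite-type rock typically fails in unconfined compression around 100-250 MPa,
# so even a modest skin temperature rise is enough to crush and flake material
# off the hole wall.
```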

THE FRAGMENTED ROCK

The process of fragmenting the rock is optimal in hard, dense rock. The spalling process produces rock fragments 0 - 13 mm in size. Figure 5 illustrates the size of the fragmented ore. The finely fragmented ore requires no crushing before entering the milling circuit and can be more efficiently transported since it consumes less space than ore in larger pieces. 

TONNAGE COMPARISON WITH ALTERNATIVE METHOD 

The method produces highly concentrated ore, resulting in 400% - 500% less dilution when compared to conventional mining methods. Table 1 below compares the quantity of rock extracted when mining a 50 cm-wide vein using the thermal fragmentation mining method as opposed to a shrinkage mining method.


The table above shows approximately 4 times less rock needs to be mined for the equivalent mineralised content. This method of extraction allows mine operators to solely extract mineralised zones, thus significantly reducing dilution factors and optimising mine operations as a result. The technology enables the operator to mine ounces and not tonnes.
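
A back-of-the-envelope version of that comparison can be built from the excavation widths quoted later in the paper (a 0.5 m thermal corridor versus roughly 2 m for a shrinkage stope); the stope dimensions and rock density below are round numbers assumed purely for illustration.

```python
# Rough version of the Table 1 comparison. Widths (0.5 m vs. 2 m) follow the
# paper; the stope length, height and rock density are assumed round numbers.
density      = 2.7     # t/m^3, typical hard rock (assumed)
stope_length = 30.0    # m (assumed)
stope_height = 30.0    # m (assumed)

def tonnes_mined(width_m):
    return width_m * stope_length * stope_height * density

thermal   = tonnes_mined(0.5)   # thermal fragmentation corridor
shrinkage = tonnes_mined(2.0)   # conventional shrinkage stope

print(f"Thermal fragmentation: {thermal:,.0f} t")
print(f"Shrinkage stope:       {shrinkage:,.0f} t")
print(f"Rock mined for the same contained metal: {shrinkage / thermal:.0f}x more")
```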


OTHER APPLICATIONS - DROP RAISING

The thermal fragmentation equipment is also used to create the centre cut in traditional drop raising. The burner can enlarge a 152 mm pre-drilled pilot hole into an 80 cm cut on a 20 meter distance in approximately 4 hours total. By creating this large centre cut quickly and efficiently, larger sections can be blasted with minimal vibrations (Figure 8), thus avoiding damage to the surrounding rock (Figure 9). The number of blast holes needed and explosives are reduced and the risk of freezing the raise is minimised.

ENVIRONMENTAL IMPACT ANALYSIS

There is a growing need to develop sustainable mining methods that minimise the environmental footprint left behind by mining operations. While developing the Thermal Fragmentation Mining Method, important efforts were made to address and reduce the environmental effects that mine operations have on the surrounding areas. Using the method, mine development is performed directly into ore, resulting in less waste rock being extracted and displaced to the surface. By solely extracting the mineralised zone, only the necessary excavations are made. As shown in Table 1 above, 4 times less rock needs to be mined for the equivalent mineral content. 

As a result of less rock being mined, fewer tonnes need to be processed at the mill to extract the precious metals. The quantity of chemical agents needed in the process is greatly reduced and the quantity of energy needed to process the ore is also greatly diminished. The reduced quantity of energy for hauling and processing the ore results in fewer greenhouse gases being emitted. The mining residue that remains once the precious metal contents are removed is 4 times less abundant, using the example above, meaning much smaller tailing areas need to be constructed, maintained, and rehabilitated once mining operations have ceased. The space needed to host the mine site is greatly reduced, the alterations to the landscape are significantly diminished, and the result is a cleaner and more responsible approach to mine operations.

PRODUCTIVITY AND SAFETY 

The shortage of skilled personnel in the mining community has made it essential to find ways to increase productivity per worker while improving working conditions in order to attract and retain skilled miners. 

PRODUCTIVITY

The work group required to operate 1 thermal fragmentation unit consists of a 2 person team (1 thermal fragmentation operator, 1 drilling operator). Table 2 shows the time needed to extract an ore block using the thermal fragmentation mining method in comparison to using a shrinkage mining method.

The table above demonstrates that for the equivalent amount of mineral content, it takes approximately half the time to mine the ore zone using the thermal fragmentation mining method than when using a shrinkage mining method. Furthermore, since less rock needs to be mucked and hauled from the stope, fewer personnel are needed for handling the ore.

MECHANISATION AND EMPLOYEE SAFETY 

Each unit is completely mechanised, reducing the risk of injuries and strain caused by manual manipulation of heavy equipment. The operator stands at a safe distance from the stope, virtually eliminating the risk of flying debris and falling loose rock from the waste walls. Furthermore, unlike shrinkage mining methods, smaller excavations are made (0.5 m compared to 2 m) so the occurrence of falling loose rock is greatly diminished.

ECONOMIC ANALYSIS

By making a greater number of narrow mineralised zones economical to extract, the mining method has the potential to convert a substantial portion of the mineral resources of an operating company into mineral reserves. A large number of mines currently in operation today contain narrow, precious metal veins throughout the ore body, but unless these veins are of significant width (usually 1 m or greater) or very high grade they are often overlooked. As the mine operator develops the zones to be extracted, high grade, narrow ore bodies are often uncovered, but not extracted, since it is uneconomical to mine such ore bodies using conventional mining methods (shrinkage, long hole, room and pillar, etc.). Table 3 below demonstrates the cost savings per ounce of using the thermal fragmentation mining method in comparison to the long-hole method. The study was done by the Canadian Institute of Mining using 2001 exchange rate figures.

As the analysis above shows, it is approximately 45% less costly to mine a narrow vein ore body using the thermal fragmentation mining method than using a conventional mining method. Overall profitability of mine operations is increased since more precious metals can be economically mined for the same level of development expenditures.

CONCLUSION

Many variations and adjustments have been made to conventional methods of mining narrow precious metal veins, but the serious shortfalls brought on by dilution remain. The Thermal Fragmentation Mining Method is a new and innovative way of mining narrow vein ore bodies and a foremost solution to the problem of ore dilution, reducing it by a factor of 4 to 1. It uses a unique tool, a powerful burner, to mine a narrow mineralised corridor with precision in an effective and productive manner. The technology is positioned to meet the growing challenges of skilled labour shortages, tougher environmental guidelines, and the depletion of traditional large scale ore deposits mined using conventional methods. As the technology continues to develop and spread through the mining community, the objective remains to optimise the productivity and profitability of mining narrow, high-grade, precious metal ore bodies and to make a substantial, lasting contribution to this sector of activity.

REFERENCES
Canadian Institute of Mining (2003). Thermal rock fragmentation – Applications in narrow vein extraction. CIM Bulletin, Vol. 96, No. 1071, Canada, pp. 66-71.

Calaman, J.J., and Rolseth, H.C. (1968). Surface Mining, First Edition, Chapter 6.4. Society for Mining, Metallurgy and Exploration Inc., Colorado, USA, pp. 325-3

Oldest Star in Universe





 At some point, a flawed paradigm will start cranking out absurd results. In this instance, our paradigm for measuring age in the universe is telling us that a nearby star that is still in the hydrogen burn stage is older than the universe. This is nonsense of course.

What it tells us is that our models for stellar evolution are incomplete at the least, and that our assumptions on age and distance are deeply flawed, as we have long suspected.

I think about the universe using two conjectures: it is steady state in appearance, and relative velocity is negligible. The red shift then varies directly with transit time, and thus with the age of the universe and its content at the time of emission. More simply, the photon arrives from a past in which the content of the universe was smaller. This induces our red shift.

In fact, that nearby star is going to turn out to be quite young, and our underlying assumptions are just wrong and must be revisited.

Oldest star in the universe is right in our stellar neighborhood

By Scott Sutherland



Nothing is older than the universe, right? Well, don't be so sure about that. Astronomers are reporting that a nearby star could be older than the Big Bang by almost a billion years!

For nearly 100 years, astronomers have been studying a bright subgiant star named HD 140283 and struggling to pin down its age.

HD 140283 is remarkable for the fact that it is almost entirely composed of hydrogen and helium. That may seem like a strange thing for a star to be remarkable for, but whereas most other stars are mainly hydrogen and helium, they also contain heavier elements, such as oxygen, carbon, neon and iron, equal to at least a few per cent of their mass. These heavier elements were all created in the cores of dying stars, and spread out in the universe when those dying stars exploded. This means that HD 140283 is almost certainly one of the first stars that appeared in the universe, before these heavier elements were generated.

The astronomers were able to figure out its age by determining its exact distance from us (a mere 190 light years) and taking very precise measurements of its brightness. Since the star is now in the process of exhausting the hydrogen at its core and switching to burning helium — a process which will cause it to dim in a very specific way — the exact distance, the brightness and how much that brightness dims reveal the star's age.

The result they got was baffling, as their calculations showed that the star is 13.9 billion years old! For reference, our best estimate for the age of the universe is 13.7 billion years!

Now, there's a 700-million-year margin of error in their calculations, which may seem really big, but these are huge numbers we're dealing with, so the error is still small (about 5%) compared to the result. That means that HD 140283 is somewhere between 13.2 billion years old (roughly the age of our galaxy) and 14.6 billion years old (nearly one billion years older than the universe).
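
Restating the article's numbers makes the point about the error bar explicit; a minimal check:

```python
# The article's figures: the error bar matters more than the headline age.
star_age, sigma = 13.9, 0.7    # billion years, HD 140283 estimate and uncertainty
universe_age    = 13.7         # billion years, the article's quoted estimate

low, high = star_age - sigma, star_age + sigma
print(f"HD 140283: {low:.1f} - {high:.1f} Gyr (relative error ~{sigma / star_age:.0%})")
print(f"Range includes the {universe_age} Gyr age of the universe: {low <= universe_age <= high}")
```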

Personally, I'm guessing that the true age of this star is going to be somewhere towards the younger end of that range. However, if HD 140283 truly is as old as they're saying, does it rewrite what we know about the age of the universe, or perhaps what we know about stellar evolution?

Even if HD 140283 doesn't pre-date The Big Bang, and thus rewrite our understanding of how the universe formed, its extreme age could still lead to some incredible discoveries.

Was it a lonely wanderer in the universe, until it got caught up in our galaxy's gravitational influence and settled into our stellar neighborhood? Perhaps it was a relatively small companion to the star that collapsed to create the supermassive black hole at the centre of our galaxy. Has it been circling around the galactic disk for all these billions of years, as the other stars in the Milky Way formed around it?

We may never know the answers to questions like these, but this star is simply another example of the incredible wonders contained in our universe.

Solar Variability and Terrestrial Climate




The sun is the one thing big enough and significant enough to affect climate at all. That is a little detail assiduously ignored by most in the climate game, since it looks a lot like a constant. Yet minor variations in spectrum and flux can plausibly trigger significant accelerators whose impact is wild enough to disturb our world.

We learn here that EUV goes crazy during a sunspot maximum and that the long term trend is negative. Thus we are leaning toward a direct link between solar activity, or the lack thereof, and the sudden cooling that has been experienced in the past.

I am also reminded that what we really need is a century of good satellite data.

Solar Variability and Terrestrial Climate



Jan. 8, 2013:  In the galactic scheme of things, the Sun is a remarkably constant star.  While some stars exhibit dramatic pulsations, wildly yo-yoing in size and brightness, and sometimes even exploding, the luminosity of our own sun varies a measly 0.1% over the course of the 11-year solar cycle. 

There is, however, a dawning realization among researchers that even these apparently tiny variations can have a significant effect on terrestrial climate. A new report issued by the National Research Council (NRC), "The Effects of Solar Variability on Earth's Climate," lays out some of the surprisingly complex ways that solar activity can make itself felt on our planet.

These six extreme UV images of the sun, taken by NASA's Solar Dynamics Observatory, track the rising level of solar activity as the sun ascends toward the peak of the latest 11-year sunspot cycle.

Understanding the sun-climate connection requires a breadth of expertise in fields such as plasma physics, solar activity, atmospheric chemistry and fluid dynamics, energetic particle physics, and even terrestrial history. No single researcher has the full range of knowledge required to solve the problem. 

 To make progress, the NRC had to assemble dozens of experts from many fields at a single workshop.  The report summarizes their combined efforts to frame the problem in a truly multi-disciplinary context.

One of the participants, Greg Kopp of the Laboratory for Atmospheric and Space Physics at the University of Colorado, pointed out that while the variations in luminosity over the 11-year solar cycle amount to only a tenth of a percent of the sun's total output, such a small fraction is still important.  "Even typical short term variations of 0.1% in incident irradiance exceed all other energy sources (such as natural radioactivity in Earth's core) combined," he says.[ please note this particularly – Arclein ]
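
To put rough numbers on Kopp's remark: the figures below are approximate textbook values, not taken from the NRC report, but they show the scale of the comparison.

```python
# Approximate values (not from the NRC report) behind Kopp's comparison.
TSI             = 1361.0             # W/m^2, total solar irradiance at Earth
cycle_swing     = 0.001 * TSI / 4.0  # ~0.1% solar-cycle swing, averaged over the globe
geothermal_flux = 0.09               # W/m^2, mean heat flow from Earth's interior

print(f"0.1% solar-cycle swing (global mean): {cycle_swing:.2f} W/m^2")
print(f"Geothermal heat flow (global mean):   {geothermal_flux:.2f} W/m^2")
print(f"Ratio: ~{cycle_swing / geothermal_flux:.1f}x")
```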

Of particular importance is the sun's extreme ultraviolet (EUV) radiation, which peaks during the years around solar maximum.  Within the relatively narrow band of EUV wavelengths, the sun’s output varies not by a minuscule 0.1%, but by whopping factors of 10 or more.  This can strongly affect the chemistry and thermal structure of the upper atmosphere.





Space-borne measurements of the total solar irradiance (TSI) show ~0.1 percent variations with solar activity on 11-year and shorter timescales. These data have been corrected for calibration offsets between the various instruments used to measure TSI. SOURCE: Courtesy of Greg Kopp, University of Colorado.

Several researchers discussed how changes in the upper atmosphere can trickle down to Earth's surface.  There are many "top-down" pathways for the sun's influence.  For instance, Charles Jackman of the Goddard Space Flight Center described how nitrogen oxides (NOx) created by solar energetic particles and cosmic rays in the stratosphere could reduce ozone levels by a few percent.  Because ozone absorbs UV radiation, less ozone means that more UV rays from the sun would reach Earth's surface.

Isaac Held of NOAA took this one step further.  He described how loss of ozone in the stratosphere could alter the dynamics of the atmosphere below it.  "The cooling of the polar stratosphere associated with loss of ozone increases the horizontal temperature gradient near the tropopause,” he explains. “This alters the flux of angular momentum by mid-latitude eddies.  [Angular momentum is important because] the angular momentum budget of the troposphere controls the surface westerlies."  In other words, solar activity felt in the upper atmosphere can, through a complicated series of influences, push surface storm tracks off course.





How incoming galactic cosmic rays and solar protons penetrate the atmosphere. SOURCE: C. Jackman, NASA Goddard Space Flight Center, “The Impact of Energetic Particle Precipitation on the Atmosphere,” presentation to the Workshop on the Effects of Solar Variability on Earth’s Climate, September 9, 2011.

Many of the mechanisms proposed at the workshop had a Rube Goldberg-like quality. They relied on multi-step interactions between multiple layers of atmosphere and ocean, some relying on chemistry to get their work done, others leaning on thermodynamics or fluid physics.  But just because something is complicated doesn't mean it's not real.

Indeed, Gerald Meehl of the National Center for Atmospheric Research (NCAR) presented persuasive evidence that solar variability is leaving an imprint on climate, especially in the Pacific. According to the report, when researchers look at sea surface temperature data during sunspot peak years, the tropical Pacific shows a pronounced La Niña-like pattern, with a cooling of almost 1° C in the equatorial eastern Pacific. In addition, "there are signs of enhanced precipitation in the Pacific ITCZ (Inter-Tropical Convergence Zone) and SPCZ (South Pacific Convergence Zone) as well as above-normal sea-level pressure in the mid-latitude North and South Pacific," correlated with peaks in the sunspot cycle.

The solar cycle signals are so strong in the Pacific that Meehl and colleagues have begun to wonder if something in the Pacific climate system is acting to amplify them. "One of the mysteries regarding Earth's climate system ... is how the relatively small fluctuations of the 11-year solar cycle can produce the magnitude of the observed climate signals in the tropical Pacific."  Using supercomputer models of climate, they show that not only "top-down" but also "bottom-up" mechanisms involving atmosphere-ocean interactions are required to amplify solar forcing at the surface of the Pacific.




Composite averages for December-January-February for peak solar years. SOURCE: G.A. Meehl, J.M. Arblaster, K. Matthes, F. Sassi, and H. van Loon, Amplifying the Pacific climate system response to a small 11 year solar cycle forcing, Science 325:1114-1118, 2009; reprinted with permission from AAAS.

In recent years, researchers have considered the possibility that the sun plays a role in global warming. After all, the sun is the main source of heat for our planet. The NRC report suggests, however, that the influence of solar variability is more regional than global.  The Pacific region is only one example. 

Caspar Amman of NCAR noted in the report that "When Earth's radiative balance is altered, as in the case of a change in solar cycle forcing, not all locations are affected equally.  The equatorial central Pacific is generally cooler, the runoff from rivers in Peru is reduced, and drier conditions affect the western USA." 

Raymond Bradley of UMass, who has studied historical records of solar activity imprinted by radioisotopes in tree rings and ice cores, says that regional rainfall seems to be more affected than temperature.  "If there is indeed a solar effect on climate, it is manifested by changes in general circulation rather than in a direct temperature signal."  This fits in with the conclusion of the IPCC and previous NRC reports that solar variability is NOT the cause of global warming over the last 50 years.

Much has been made of the probable connection between the Maunder Minimum, a 70-year deficit of sunspots in the late 17th-early 18th century, and the coldest part of the Little Ice Age, during which Europe and North America were subjected to bitterly cold winters.  The mechanism for that regional cooling could have been a drop in the sun's EUV output; this is, however, speculative. [Do I need to remind all that we lack a good explanation? I have been playing with a long cycle change in ocean currents but also suspect that it needs help - Arclein]



The yearly averaged sunspot number for a period of 400 years (1610-2010). SOURCE: Courtesy of NASA Marshall Space Flight Center.

Dan Lubin of the Scripps Institution of Oceanography pointed out the value of looking at sun-like stars elsewhere in the Milky Way to determine the frequency of similar grand minima. “Early estimates of grand minimum frequency in solar-type stars ranged from 10% to 30%, implying the sun’s influence could be overpowering.  More recent studies using data from Hipparcos (a European Space Agency astrometry satellite) and properly accounting for the metallicity of the stars, place the estimate in the range of less than 3%.”   This is not a large number, but it is significant. 

Indeed, the sun could be on the threshold of a mini-Maunder event right now.  Ongoing Solar Cycle 24 is the weakest in more than 50 years.  Moreover, there is (controversial) evidence of a long-term weakening trend in the magnetic field strength of sunspots. Matt Penn and William Livingston of the National Solar Observatory predict that by the time Solar Cycle 25 arrives, magnetic fields on the sun will be so weak that few if any sunspots will be formed. Independent lines of research involving helioseismology and surface polar fields tend to support their conclusion. (Note: Penn and Livingston were not participants at the NRC workshop.)

“If the sun really is entering an unfamiliar phase of the solar cycle, then we must redouble our efforts to understand the sun-climate link,” notes Lika Guhathakurta of NASA’s Living with a Star Program, which helped fund the NRC study. “The report offers some good ideas for how to get started.”

In a concluding panel discussion, the researchers identified a number of possible next steps.  Foremost among them was the deployment of a radiometric imager.  Devices currently used to measure total solar irradiance (TSI) reduce the entire sun to a single number:  the total luminosity summed over all latitudes, longitudes, and wavelengths.  This integrated value becomes a solitary point in a time series tracking the sun’s output.

In fact, as Peter Foukal of Heliophysics, Inc., pointed out, the situation is more complex.  The sun is not a featureless ball of uniform luminosity.  Instead, the solar disk is dotted by the dark cores of sunspots and splashed with bright magnetic froth known as faculae.  Radiometric imaging would, essentially, map the surface of the sun and reveal the contributions of each to the sun’s luminosity.  Of particular interest are the faculae.  While dark sunspots tend to vanish during solar minima, the bright faculae do not.  This may be why paleoclimate records of sun-sensitive isotopes C-14 and Be-10 show a faint 11-year cycle at work even during the Maunder Minimum.  A radiometric imager, deployed on some future space observatory, would allow researchers to develop the understanding they need to project the sun-climate link into a future of prolonged spotlessness.

Some attendees stressed the need to put sun-climate data in standard formats and make them widely available for multidisciplinary study.  Because the mechanisms for the sun's influence on climate are complicated, researchers from many fields will have to work together to successfully model them and compare competing results.  Continued and improved collaboration between NASA, NOAA and the NSF is key to this process.

Hal Maring, a climate scientist at NASA headquarters who has studied the report, notes that “lots of interesting possibilities were suggested by the panelists.  However, few, if any, have been quantified to the point that we can definitively assess their impact on climate.” Hardening the possibilities into concrete, physically-complete models is a key challenge for the researchers.

Finally, many participants noted the difficulty in deciphering the sun-climate link from paleoclimate records such as tree rings and ice cores.  Variations in Earth’s magnetic field and atmospheric circulation can affect the deposition of radioisotopes far more than actual solar activity.  A better long-term record of the sun’s irradiance might be encoded in the rocks and sediments of the Moon or Mars.   Studying other worlds might hold the key to our own.

The full report, “The Effects of Solar Variability on Earth’s Climate,” is available from the National Academies Press at http://www.nap.edu/catalog.php?record_id=13519

Accepted Memory Formation Model Refuted





This is a neat bit of science that clearly shows us that the working hypothesis simply does not work. Implied in all that is that repetition is what effects memory formation, when that would be quite an inefficient protocol. Why not infer instead that the mind makes a specific decision to retain a memory point, which can then be overridden by further work? Do not infer that this process is passive and automatic.

That certainly explains the plasticity of memory. In my own case I have observed my mind balking at actual memorization of any kind whenever I tried to retain apparent nonsense. Yet if I choose to note something and much later determine a need for that item, my memory becomes clear as glass. It really impresses folks when you reference an item read or a lecture comment from decades ago.

I have also noted that my mind balks at learning a mathematical proof that is flawed. A rather handy trick when working on new ideas or mushy old ones. Think about this and then test your own mind to see if it works for you.

By the bye, a perfect memory allows you to know and believe every error produced by mankind while lacking sufficient skill to recognize this. The process of remembering these errors naturally embeds them firmly in your mind, making it almost impossible to change your thinking. Thus the majority of scholars, who naturally depend on powerful memories, are usually trapped by their own training.

Study refutes accepted model of memory formation

by Staff Writers

Baltimore MD (SPX) Jan 04, 2013



A study by Johns Hopkins researchers has shown that a widely accepted model of long-term memory formation - that it hinges on a single enzyme in the brain - is flawed. The new study, published in Nature, found that mice lacking the enzyme that purportedly builds memory were in fact still able to form long-term memories as well as normal mice could.

"The prevailing theory is that when you learn something, you strengthen connections between your brain cells called synapses," explains Richard Huganir, Ph.D., a professor and director of the Johns Hopkins University School of Medicine's Department of Neuroscience.

"The question is, how exactly does this strengthening happen?"

A research group at SUNY Downstate, led by Todd Sacktor, Ph.D., has suggested that key to the process is an enzyme they discovered, known as PKM-zeta. In 2006, Sacktor's group made waves when it created a molecule that seemed to block the action of PKM-zeta - and only PKM-zeta.

When the molecule, dubbed ZIP, was given to mice, it erased existing long-term memories. The molecule caught the attention of reporters and bloggers, who mused on the social and ethical implications of memory erasure.

But for researchers, ZIP was exciting primarily as a means for studying PKM-zeta. "Since 2006, many papers have been published on PKM-zeta and ZIP, but no one knew what PKM-zeta was acting on," says Lenora Volk, Ph.D., a member of Huganir's team. "We thought that learning the enzyme's target could tell us a lot about how memories are stored and maintained."

For the current study, Volk and fellow team member Julia Bachman made mice that lacked working PKM-zeta, so-called genetic "knockouts." The goal was to compare the synapses of the modified mice with those of normal mice, and find clues about how the enzyme works.

But, says Volk, "what we got was not at all what we expected. We thought the strengthening capacity of the synapses would be impaired, but it wasn't." The brains of the mice without PKM-zeta were indistinguishable from those of other mice, she says.

Additionally, the synapses of the PKM-zeta-less mice responded to the memory-erasing ZIP molecule just as the synapses of normal mice do.

The team then considered whether, in the absence of PKM-zeta, the mouse brains had honed a substitute synapse-building pathway, much in the way that a blind person learns to glean more information from her other senses.

So the researchers made mice whose PKM-zeta genes functioned normally until they were given a drug that would suddenly shut the gene down.

This allowed them to study PKM-zeta-less adult mice that had had no opportunity to develop a way around the loss of the gene. Still, the synapses of the so-called conditional knockout mice responded to stimuli just as synapses in normal mice did.

What this means, the researchers say, is that PKM-zeta is not the key long-term memory molecule previous studies had suggested, although it may have some role in memory.

"We don't know what this ZIP peptide is really acting on," says Volk. "Finding out what its target is will be quite important, because then we can begin to understand at the molecular level how synapses strengthen and how memories form in response to stimuli."

Other authors on the paper are Richard Johnson and Yilin Yu, both of the Johns Hopkins University School of Medicine.

This study was supported with funds from the National Institute of Neurological Disorders and Stroke (grant number NS36715), the National Institute of Mental Health (grant number T32MH15330) and the Howard Hughes Medical Institute.