Thursday, August 21, 2008

Collision Earth

I recently came across an interesting article that tackles a number of anomalies in the climatic record. We have already associated the 1159 BCE event with a volcanic event at Hekla, and the 12900 BCE impact event with Northern Canada. This article uses cultural referents to isolate these dates:

7640 BCE, 3195 BCE, 2354 BCE, 1628 BCE, 1159 BCE, 207 BCE, 44 BCE, and 540 CE.

We have Thera to apply to the 1628 BCE event. As a caution, the apparently exact nature of this date and of the later dates is controversial at the least, but these dates are typically supported by carbon dates and, in the case of 1628 BCE, by an independent Chinese record for 1618 BCE. I have associated 1159 BCE with the inundation of Atlantis and the resulting collapse of its seaborne mercantile civilization. We surmise that Thera drove the collapse of Minoan civilization and that this event was the foundation of the tale of the Exodus. The tale itself could well have already been legend at the time of the actual historic biblical event, which could clarify a two-hundred-year disparity. I have found that these ancient records never fail to include a good story even if the actual linkage is a stretch. And why not? This was their only way to transmit cultural history.

You will have followed my recent pursuit of the possibility that the Little Ice Age was triggered by a major volcanic event in Alaska. It certainly looks promising and explains the recurrence of cooling in the Arctic over the past two to three millennia without having to call upon other, even less provable, sources such as solar variation. Alaska is also one of the nastiest places on earth for this type of volcanic activity, with no lack of candidates.

To this we now should add cosmic events. We should also get much more serious about their potential for damage. Science has understated and actually misled us all on the potential for damage from this source. Perhaps we need to respect ignorance instead.

First, a sea-based impact has never been studied; we do not understand the possibilities. All the energy will surely be absorbed by the water, just as all the energy of stony meteorites up to a fairly large size is typically absorbed in the atmosphere. So although a fair range of small to mid-sized objects pack huge amounts of energy, those two blankets will discharge the energy fairly well.

I add to this the 12900 BCE impact on the Canadian ice sheet, which hurled ice into the Carolinas and likely the Atlantic. It also delivered entrained material, recently identified, into the Ohio Valley. The bulk of the energy was still absorbed by the crust and surely left a crater now flooded with water. Happy hunting.

To affect climatic temperature, the event has to hit land and send a vast amount of dust into the atmosphere, or itself be a massive source of dust. Tunguska shows us how this could be. That means that a huge scar must exist that would be discernible even today if the event took place in the last 10,000 years.

Recall that the big volcanic events threw twenty to fifty cubic miles of rock into the atmosphere. An asteroid needs to be that large, or at least a reasonable fraction thereof, to have the same impact. Again, the atmosphere will break it up on the way in. The fact is that we lack observational evidence to make proper predictions except by analogy.

What this article does bring home is that the energy is out there and has certainly been felt. Whether it applies to this sequence of climatic anomalies is only prospective when we have the alternatives of the Indonesian volcanoes and those of Alaska. And even for the protracted Little Ice Age, I am more inclined to chase volcanoes than a major cosmic event whose effect should have dissipated very quickly, if only because of the lack of chemical aerosols.

I have every reason to think that as our dating of the eruptive periods of all the world’s volcanoes improves so will the correlation with global climate. We only need to remember that an Arctic chill affects the northern portion of the northern hemisphere, while a much larger chill at the equator hits us all.


Collision Earth:

The Threat From Outer Space (2004)

BY JASON JEFFREY

Over a century ago Ignatius Donnelly summed up our precarious existence: We are but vitalized specks filled with a fraction of God’s delegated intelligence, crawling over an egg-shell filled with fire, whirling madly through infinite space, a target for the bombs of the universe.

By bombs Donnelly meant the untold number of asteroids and comets that fill the heavens around us which on perhaps not a few occasions have smashed into Earth itself, and may do so again.

Through revolutionary new techniques in observation, detection and photography, modern astronomers and astrophysicists have now identified two new classes of celestial objects which could pose a real danger to our planet within the foreseeable future: NEAs (Near Earth Asteroids) and ECCs (Earth-Crossing Comets).

On September 29, 2004, asteroid “4179 Toutatis” passed within 1.6 million kilometres of Earth. Its approach was the closest in this century of any known asteroid the size of Toutatis, which measures around 4.6 kilometres in length. If it had struck the Earth, we could have faced what scientists have dubbed “a mass extinction event.”

Scientists believe the asteroid poses no risk at least through 2562, when Toutatis will pass within 400,000 kilometers of Earth – but astronomers admit there are forces in the solar system that can alter an asteroid’s orbit and put it on a collision course with Earth.

Earlier this year, on March 31, an asteroid skimmed past the Earth at a distance of just 6,500 kilometres above the ground. Object “2004 FU162”, which is 5-10 metres across, would have burned up as a fireball ending with a smaller explosion had it ventured into the Earth’s atmosphere. The problem was that astronomers did not discover it until after its passing. Scientists have since calculated the asteroid’s orbit was shifted by a whopping 20 degrees because of the Earth’s gravity.

The previous record for the closest asteroid approach to Earth was set on 18 March by an object called “2004 FH” which missed the Earth by about 40,000 kilometres. That was a much larger object, around 30 metres in diameter, big enough to produce a one-megaton explosion in the atmosphere.

NASA calculates objects in the 100-200 metre range hit Earth about once every 700-1,000 years. Such an object did hit the Earth in 1908, over Tunguska in Siberia.
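
Treating that recurrence interval as the mean of a Poisson process gives a rough sense of the odds over a human lifetime. A minimal sketch: only the 700-1,000 year figures come from the article; the 80-year window and the Poisson assumption are mine.

```python
import math

# Chance of at least one Tunguska-class (100-200 m) impact in a given
# window, modelling arrivals as a Poisson process whose mean recurrence
# interval is the article's quoted 700-1,000 years.
def impact_probability(recurrence_years, window_years):
    rate = 1.0 / recurrence_years  # expected impacts per year
    return 1.0 - math.exp(-rate * window_years)

for recurrence in (700, 1000):
    p = impact_probability(recurrence, 80)  # an 80-year human lifetime
    print(f"Recurrence {recurrence} yr: roughly {p:.0%} chance in 80 years")
```

On these assumptions the lifetime odds land in the neighbourhood of eight to eleven percent, which is why such events are treated as a real, if slow-motion, hazard.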

In the ECC (Earth-Crossing Comet) category, a very serious future candidate for an Earth grazing is comet Finlay, due to pass on October 27, 2060 – perhaps as close as 150,000 kilometres.

In 1993, astrophysicist Brian Marsden announced that comet Swift-Tuttle could possibly strike Earth in the 22nd century. It is scheduled to pass the Sun incoming from deep space on July 11, 2126, and on August 14 will come very close to our world. Should the slightest irregularity occur in its long periodic path during the intervening one and a half centuries, it could hit the planet dead-centre, and with a force equivalent to 100 million megatons of TNT.

Over the past few years we have often heard about the discovery of new asteroids and comets. This is the result of NASA’s 25-year survey of the sky to find objects wider than a kilometre that could have a devastating impact if they collided with Earth.

Fortunately, nothing of a dangerous size has been spotted heading our way for at least a century – or so they tell us. According to a US government advisor, secrecy would be the best option if scientists discovered a giant asteroid was on course to collide with Earth. Speaking to a meeting of the American Association for the Advancement of Science, Geoffrey Sommer, of the Rand Corporation, said:

“If an extinction-type impact is inevitable, then ignorance for the populace is bliss. As a matter of common sense, if you can’t intercept it and you can’t move people out of the way in time, there’s nothing you can do in terms of reducing the costs of the potential impact.”
Deep Impact

For one week in July 1994, astronomers watched a planetary body under attack, when two dozen pieces of the disintegrated comet Shoemaker-Levy 9 plunged into Jupiter with explosive results, equivalent to 40 million megatons of TNT going off in a chain reaction. As several scientists warned, this was Earth’s wake-up call for a similar event to happen to us.

Recent computer simulations reveal that if a comet or asteroid hit the Earth on one side, the seismic waves generated would be transmitted through the planetary interior. By being focused on account of the Earth’s curvature, the waves would meet together at the location directly on the opposite side where the impact took place, and the high stress energy released could disrupt the surface area, causing a tremendous outpouring of volcanic activity.

The air blast resulting from an impact would lead to large-scale and worldwide pressure shock waves oscillating the entire atmosphere and ionosphere, creating winds greater than the most powerful hurricanes ever recorded.

Fragments of the asteroid and earth hurled into space by the impact would rain down all over the planet, setting forest fires. The resulting smoke would further darken the atmosphere, plunging the world into permanent night. The temperature would plummet.

Calculating the amount of dust, water vapour and smoke injected into the sky from a kilometre wide object hitting the Earth, scientists estimate a drop of world temperatures by about 15 degrees Celsius lasting for about 15 days.

By far the worst-case scenario is an asteroid or comet striking one of the world’s deep oceans. Some researchers worry the sudden displacement of such large volumes of water across thousands of kilometres of ocean would affect the axis spin and polar stability of the Earth, like adding an off-balancing weight to a spinning gyroscope. Even more disastrous would be a celestial object furrowing into the ocean at a more oblique angle. In this case the energy of the mass dissipates by pushing a titanic amount of water over a large surface area, creating a tsunami wave so high and large as to defy imagination.

As a tsunami wave nears a coast with a shallower continental shelf, its speed slows, but its height is increased by a factor of 10 to 40. Thus a deep ocean wave of 100 metres might break ashore with a height of 1,000 to 4,000 metres.
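
The quoted amplification reduces to a one-line calculation. A minimal sketch, using only the article's own factor of 10 to 40 (real run-up depends on bathymetry and wave period, which this deliberately ignores):

```python
# Shore run-up from a deep-ocean tsunami wave, using the article's
# quoted shoaling amplification factor of 10 to 40.
def runup_height(deep_water_height_m, amplification):
    """Approximate shore height given deep-water wave height."""
    return deep_water_height_m * amplification

deep = 100.0  # metres: the article's example deep-ocean wave
low = runup_height(deep, 10)
high = runup_height(deep, 40)
print(f"A {deep:.0f} m deep-ocean wave could break ashore at {low:.0f}-{high:.0f} m")
```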

A major earthquake triggered off the coast of Chile in May 1960 generated waves in the deep water of the Pacific travelling a full 150 degrees around the globe, or more than 16,000 kilometres distance, landing ashore in Japan at a height of up to 4.5 metres, and killing over 200 people. Earlier, in 1946, a similar event took place when a tsunami originating in the Aleutians killed a handful of people along the nearby Alaskan shores, yet also went on to take the lives of 150 people in Hawaii 8,000 kilometres away. Computer projections indicate that a 9-metre asteroid impacting the ocean between Australia and New Zealand would produce tsunamis breaking on the southern Japanese coastline at 38 to 50 metres high.

That large asteroids have hit the Pacific before is evident from geological remains on the islands within its perimeter. Deposits of unconsolidated corals have been found almost a thousand feet above the present coasts on Lanai, Hawaii, Oahu, Molokai and Maui, indicating they were washed up to that height by a tremendous wave of water in the distant past. Ordinary tsunamis generated by earthquakes along the Ring of Fire do not produce waves of that magnitude – only a major displacement of ocean waters from an impact event would fit the findings.

The Atlantic Ocean is also in danger. Estimates are that an impact anywhere in the Atlantic by an asteroid 365 metres wide would devastate coasts on either side with tsunami waves 60 metres high. Major cities either on the coast or with river, bay or harbour access, such as New York, Boston, Washington, London, Amsterdam and Copenhagen, are in danger of being completely obliterated.

A computer simulation of an asteroid impact tsunami developed by scientists at the University of California shows waves as high as 120 metres sweeping onto the Atlantic Coast of the United States.
The researchers based their simulation on a real asteroid known to be on course for a close encounter with Earth eight centuries from now.

March 16, 2880, is the day the asteroid known as “1950 DA”, a huge rock 1.2 kilometres in diameter, is due to swing so close to Earth it could slam into the Atlantic Ocean at 60,000 kilometres per hour.
“From a geologic perspective, events like this have happened many times in the past. Asteroids the size of 1950 DA have probably struck the Earth about 600 times since the age of the dinosaurs,” warns researcher Steven Ward.

Impact Events Linked to Evolution of Life on Earth

It is known the Earth was pummelled by asteroids, comets and other massive heavenly bodies in the early days of its formation – over 3 billion years ago. But, until recently, most scientists thought this was an event limited to Earth’s distant past. They also believed the ancient celestial pounding eventually gave way to billions of years of gradual, non-catastrophic evolution.

In the 1950s, astronomer Gene Shoemaker sent shock waves through the scientific community by suggesting various craters on our planet (and the Moon) were formed by asteroids or comets, rather than volcanic eruptions, which was what most scientists believed at the time.

There doesn’t appear to be one square kilometre of the lunar surface that is not pockmarked with impact craters. While some craters are undoubtedly very ancient, they also contain within their rims a myriad of newer craters from more recent impacts.

The reason why craters do not remain visible on Earth is due to their swift erosion by rain, snow, and wind, whereas on the Moon they remain for eons until a new projectile strikes the scar zone.

Using the Moon’s potholed surface as a reference point, Shoemaker tried to determine how often celestial objects smashed into the Moon and, by extension, struck the Earth. With the help of modern satellite and aerial surveillance, Shoemaker and other scientists soon identified over 200 impact sites around the planet.

In 1980 scientists Luis and Walter Alvarez claimed they had found evidence of a huge impact event 65 million years ago. This age corresponded with the demise of the dinosaurs at the end of the Cretaceous Period. The evidence included a worldwide layer of clay with high levels of the rare element iridium, usually the signature of an impact.

In 1990, the buried remains of a 180-kilometre-diameter crater were discovered near the town of Chicxulub on the Yucatan Peninsula in Mexico. A crater this size would have been blasted out by a 16-kilometre-wide comet or asteroid colliding with the Earth at some 80,000 kph.

Some scientists now believe this crater is the long sought-after “smoking gun” responsible for the demise of the dinosaurs and more than 70 percent of Earth’s living species 65 million years ago.

In June 2003 Science published a report about a team of scientists who believe a massive object from space smashed into what is now the Moroccan desert 380 million years ago. Dates for the impact coincide with the “Kacak/otomari” extinction, when up to 40% of all animals living in the sea perished. Fossils found in rock layers just above the impact layer suggest many new species appeared after the disaster.

And in November 2003, another team of scientists reported on evidence for a massive asteroid colliding with the Earth 251 million years ago which may have killed 90 per cent of all life.

The study, based on meteorite fragments found in Antarctica, suggests the Permian-Triassic event, perhaps the greatest extinction in the planet’s history, may have been triggered by a mountain-sized space rock that smashed into a southern land mass.

“It appears to us that the two largest mass extinctions in Earth history... were both caused by catastrophic collisions” with asteroids, the researchers say in their study in Science.

The evidence indicates asteroid impacts are the key factors in the development of life on this planet. In wiping out a large proportion of life on the planet periodically, the asteroids have played a more important role in evolutionary development than previously thought.

More pertinent is the question of cosmic impacts on the rise and fall of mankind’s ancient civilisations. Is there any evidence backing up the stories of ancient apocalypse and hell fire from the sky that are preserved in mythology and some of the world’s religions?

Collapse of Civilisation

...and the seven judges of hell ... raised their torches, lighting the land with their livid flame. A stupor of despair went up to heaven when the god of the storm turned daylight into darkness, when he smashed the land like a cup.

– An account of the Deluge from the Epic of Gilgamesh, circa 2200 BCE

Biblical stories, apocalyptic visions, ancient art and scientific data all seem to intersect at around 2350 BCE, when one or more catastrophic events wiped out several advanced societies in Europe, Asia and Africa.

Archaeological findings show that in the space of a few centuries, many sophisticated civilisations disappeared. The Old Kingdom in Egypt fell into ruin. The Akkadian culture of Iraq, thought to be the world’s first empire, collapsed.

Around the same time apocalyptic writings appeared. The Epic of Gilgamesh describes the fire, brimstone and flood of possibly real, not mythical, events. Omens predicting the Akkadian collapse preserve a record that “many stars were falling from the sky.” The “Curse of Akkad,” dated to about 2200 BCE, speaks of “flaming potsherds raining from the sky.”

In 1650, the Irish Archbishop James Ussher mapped out the chronology of the Bible – a feat that included stringing together all the “begats” to count generations – and put Noah’s great flood at 2349 BCE.

All coincidence? A number of scientists don’t think so.

Mounting hard evidence collected from tree rings, soil layers and even dust that long ago settled to the ocean floor indicates there were widespread environmental nightmares in the Near East during this period: Abrupt cooling of the climate, sudden floods and surges from the seas, huge earthquakes.

In 1999 geologist Dr. Sharad Master spotted a 3-kilometre-wide crater in southern Iraq after studying satellite images. Scientists now believe this circular depression bears all the hallmarks of an impact crater, one that caused devastating fires and flooding. They are now attempting to date the time of the impact, with some of the main researchers estimating an age of around 6,000 years – placing it in the close vicinity of the sudden decline in Middle East civilisation around 2300 BCE.

Mike Baillie, professor of palaeoecology at Queens University in Belfast and author of Exodus to Arthur: Catastrophic Encounters with Comets, figures it would have taken just a few bad years to destroy societies.

Even a single comet impact large enough to have created the Iraqi crater, “would have caused a mini nuclear winter with failed harvests and famine, bringing down any agriculture based populations which can survive only as long as their stored food reserves,” Baillie says. “So any environmental downturn lasting longer than about three years tends to bring down civilisations.”

Professor Mike Baillie is an authority on dendrochronology, the science of studying tree growth rings. His decades long collaborative effort with many scientists has developed a worldwide record of climate modulated, annual tree growth as recorded in tree growth rings. That effort has produced a reliable timeline from the present back to several thousand years BCE.

Occasionally environmental conditions are so extreme that trees all over the world are affected. Certain of these patterns imply weather conditions leading to local or worldwide catastrophes, including crop failures, famine and flooding.

As described in Exodus to Arthur, the dates linked to extreme events are: 3195 BCE, 2354 BCE, 1628 BCE, 1159 BCE, 207 BCE, 44 BCE, and 540 CE.

The significance of the date 2354 BCE has been noted. The other date to stand out is 540 CE, with the extreme weather events actually starting in 536 CE.

Until recently, historians had little notion dramatic climatic events had occurred. The accounts left by contemporary observers were poorly understood and overshadowed by later historical events. In fact, those later events, it turns out, may have been caused, directly or indirectly, by the weather of the time.

The Praetorian Prefect Magnus Aurelius Cassiodorus Senator, who lived between 490 and 585 CE, wrote a letter documenting the conditions. “All of us are observing, as it were, a blue coloured sun; we marvel at bodies which cast no mid-day shadow, and at that strength of intensest heat reaching extreme and dull tepidity... So we have had a winter without storms, spring without mildness, summer without heat... The seasons have changed by failing to change; and what used to be achieved by mingled rains cannot be gained from dryness only.”

In the wake of this inexplicable darkness, crops failed and famine struck. Then a new disease swept across the entire continent of Eurasia: bubonic plague. It ravaged Europe over the course of the next century, reducing the population of the Roman empire by a third, killing four-fifths of the citizens of Constantinople, reaching as far east as China and as far northwest as Great Britain.

Other reports about the weather conditions from Byzantium and Constantinople record the same environmental phenomena: dry fog, darkness, cold, drought, and famine.

In 1984, Mike Baillie proposed that the climatic event of 536 CE (and by extension, all six of the others) could have been caused by “an asteroid, a comet, cometary fragment(s), or cosmic swarms.”

Perhaps one of the most fascinating and well researched theories is offered by authors Christopher Knight and Robert Lomas in their book Uriel's Machine: The Prehistoric Technology That Survived The Flood.

They present recent geological evidence showing that in 7640 BCE Earth was hit by seven comet fragments causing gigantic tidal waves. These findings are derived from the work of Austrian geologists Alexander and Edith Tollmann of Vienna University's Geological Institute.

By combining evidence from various disciplines (including the global distribution of tektites and a study of worldwide myths and legends), the Tollmanns propose that a comet approached the Earth from the south-east and fragmented into seven pieces which fell subsequently into the oceans causing mass destruction on all continents. One piece is believed to have landed in the North Atlantic, while another is considered to have fallen into “the Central Atlantic south of the Azores” creating a direct hit on “Atlantis”.

According to the authors of Uriel's Machine, there is a Masonic tradition that the biblical character Enoch constructed a machine to predict comets on an Earth collision course. They believe the ancient Book of Enoch describes how this machine should be constructed, and how this secret technology has been preserved since ancient times in Freemasonic lore.

Conclusion

The fall of ancient civilisations may now come to be viewed not as a failure of social engineering or political might but rather the product of climate change and, possibly, heavenly happenstance.

The Bible and other ancient texts have kept alive the memory of ancient catastrophes whose scientific analysis and understanding might now be vital for the protection of our own civilisations from future impacts.

These concerns are probably why the European Space Agency’s chief scientist wants a “Noah’s Ark” on the Moon, in case life on Earth is wiped out by an asteroid or nuclear holocaust. “If there were a catastrophic collision on Earth or a nuclear war, you could place some samples of Earth’s biosphere, including humans, [on the Moon],” said Dr. Bernard Foing. “You could repopulate the Earth afterwards, like a Noah’s Ark.”

At this point, only two things are certain: The Earth could be hit at any moment by a roving asteroid or comet, and we will be hit, again, unless something is done to prevent it.

Jason Jeffrey holds an interest in a wide range of subjects including geopolitics, the "New World Order", Big Brother, suppressed technology, psychic/spiritual development, ancient civilisations and esotericism.
He can be contacted at:

jasonjeffrey33@yahoo.com.au

© Copyright New Dawn Magazine, www.newdawnmagazine.com. Permission granted to freely distribute this article for non-commercial purposes if unedited and copied in full, including this notice.

Wednesday, August 20, 2008

Field Biochar Manufacture

This posting by A. Karve at the terra preta/biochar forum brings fresh practical insight to the task of producing biochar in the field. As noted I have posted on an earthen kiln protocol that can be used by farmers without access to metal. This posting allows me to refine my thinking for the modern subsistence farmer and even well beyond that level.

Start with no more than a steel drum whose top and bottom have been removed. Lay down inch-thick branches as a packed floor for the kiln, and stand the drum on end on this floor. Air will be able to pass under the edge of the drum through the packed branches at a moderate speed.

Pack the drum with chipped wood or chopped biomass. Do not create tight layering that could cut off air flow entirely, but get the packing level up to fifty percent. Once the drum is filled, place a charge of dry wood on top of this load to act as a starter. At this stage you may also place six inches of soil around the edge to reduce the open centre diameter to a quarter of the total. Fire it and let the starter layer mostly burn down so that the load is fully engaged.

At this point, smother the top of the fire by throwing six inches of dirt over the center. Alternatively, place a metal lid fitted with a six-inch chimney pipe and a damper for fine control.

What we have done here is very familiar to those of us who experienced the methods of the nineteenth century: we have banked the fire. The surface cannot flare up, losing both heat and fuel, and only a limited amount of fuel is burning at any one time, mostly in the form of volatiles in the early going.

Over several hours, the burn front will migrate down to the ground and burn out the floor of this kiln, allowing the edge of the kiln to settle onto the earth and eventually cutting off the air flow. In practice, I consider this to be more of a fail-safe to prevent a total burn-out of the fuel charge, as eventually happens in a banked stove.

I like the dirt layer idea, with or without a metal lid. It acts like a filter for the escaping gases and likely maximizes their combustion. In addition, it will end up being blended with the end product to produce a dry, easy-to-handle mixture, provided the fire is not quenched with water, though quenching is likely necessary.

I suspect that it will simply be better practice to water the fire before it fully engages the floor. Once that point is reached we are very close to running out of volatiles and the charcoal then becomes the primary fuel.

Thus, where naturally packable materials such as corn stover and bagasse are unavailable for building an earthen kiln, we have a simple metal kiln design that is easily expandable and able to work on the modern farm. There, a square-set metallic box can be set up in the same manner and material loaded in and packed. This is all rough and ready and certainly will not achieve the optimal thirty percent yield, but it will produce twenty percent or better quite handily.
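
To put that yield range in concrete terms, here is a back-of-envelope estimate for a single drum load. Only the fifty percent packing level and the twenty to thirty percent yields come from the discussion above; the drum volume and the packed-biomass density are my own illustrative assumptions:

```python
# Rough charcoal output from one drum charge at the yields discussed above.
DRUM_VOLUME_L = 208              # assumed: a standard 55-gallon steel drum
PACKING_FRACTION = 0.5           # the fifty percent packing level above
BIOMASS_DENSITY_KG_PER_L = 0.25  # assumed: loosely packed chipped dry wood

def charcoal_out(yield_fraction):
    """Kilograms of char from one drum charge at the given yield."""
    biomass_kg = DRUM_VOLUME_L * PACKING_FRACTION * BIOMASS_DENSITY_KG_PER_L
    return biomass_kg * yield_fraction

charge_kg = DRUM_VOLUME_L * PACKING_FRACTION * BIOMASS_DENSITY_KG_PER_L
print(f"Charge: {charge_kg:.0f} kg of biomass")
print(f"At 20% yield: {charcoal_out(0.20):.1f} kg of char")
print(f"At 30% yield: {charcoal_out(0.30):.1f} kg of char")
```

On these assumptions each burn yields on the order of five kilograms of char, so field quantities clearly take either many burns or the larger square-set box.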

The key idea is to have the bottom edge set on a layer of branches, or any other material able to sustain a fifty percent air flow in through the bottom. The top layer of dirt might be dispensed with if there is no layer to hold it up; the Indians had palm fronds to work with. A metal sheet with a chimney closing it all off will do the rest.

My most important point is that this is easy to assemble in some form or other anywhere, regardless of the local economy. Old rusty galvanized sheet steel is very suitable. You may even get away with using rope on the outside to hold it together: the heat will be concentrated mostly in the core, with the walls staying around 300 to 400 degrees. Hot spots on the wall would need an unusual source of air, which really means a fully engaged fire; if that is happening, you have plenty of other problems and the kiln is not working at all.

Dear Martin,

I really do not know how much char is to be applied per hectare. But I can tell you how to make char out of your burnable organic waste. The simplest device is a top-lit updraft kiln. It consists of a vertical cylinder, having relatively small holes near its base for primary air. You fill the cylindrical body of the kiln with the material to be charred and then light it from the top. Once the fire gets going, you place a lid on the cylinder. There is a chimney built into the lid. The lid does not sit flush on the kiln, but there is a gap between the lid and the kiln. The draft created by the chimney sucks secondary air into the chimney, where it gets mixed with the pyrolysis gas to burn it. The biomass burns downwards, leaving a layer of charcoal on top. As the primary air comes upwards, it meets the burning front which traverses downwards. The burning biomass utilises all the oxygen in the primary air, so that the air going up through the layer of char has only carbon dioxide, carbon monoxide, nitrogen and the pyrolysis gas left in it. As there is no oxygen left in the updraft air, it cannot burn the char that has formed above the burning biomass. The pyrolysis gas and carbon monoxide burn in the chimney, because of the secondary air that is sucked in through the gap between the chimney and the kiln.

You have to find out by trial and error how long it takes to char the material loaded in the kiln. After that much time is over, you remove the lid and extinguish the fire by sprinkling water over the burning material. This particular device is portable and manually operated. There are larger charring kilns, based on the oven and retort process. Prof. Yuri Yudkevich, a Russian scientist, has made them for charring useless material generated by the timber industry in Russia. We are already using both types of kilns under field conditions in India for charring agricultural waste as well as urban waste.

We have a video CD that describes the kilns, and you can fabricate them by watching the video CD. I have not used Prof. Antal's kiln and have absolutely no idea how it operates. Our web site www.arti-india.org would show you how to get our CDs by paying us through PayPal.

Tuesday, August 19, 2008

Camelina

This recent item has introduced me to camelina, a flax-like crop that has been around for at least five thousand years, though not as a viable source for human consumption. For that it may well have to go through the same product conversion as rapeseed oil into canola. It does shape up to be a very promising agricultural plant with a few modern tweaks.

This is a crop that prospers on ten inches of rain and little more. There are plenty of prospective lands that can work this crop, and little else, very successfully. It will be a great transition crop for lands where folks are being squeezed out of farming by Mother Nature. Think in particular of the buffalo commons of the Great Plains, which should never have been broken and are now reaching the end of the aquifers.

At this point the oil is useful only as a fuel source. At least we now have a market for it in the form it is in. I suspect that it will take little to convert it into high quality edible oil that will be easily marketed. It has not been done yet and will take years. Canola had the same problem.

More promising is the immediate meal market for the remaining product. This means that there are minimal waste materials, although little is said of the straw except that it appears minimal from the pictures. I do not have a per acre yield figure as yet either, but assume it approaches that of flax and rapeseed.

In practice, the amounts produced will help the fuel situation but it is unlikely to be more than a fraction of the supply system. I would be happy if it just displaced agricultural usage of fossil fuels as a good first step.

Without question, we are transitioning over to sustainable transportation fuels. It is also obvious that brewing up sugars using algae is the easiest and cheapest way to get there. Biodiesel promises to be an important fuel as well, because it can be produced cheaply as a byproduct of algae production and integrated directly into the transportation system. Camelina looks like a good feedstock for this industry now and, ultimately, as a food product at a later stage. It is certainly a better choice than canola and soy and grows on lands that are poor choices for either.

I am also assuming that haulage using pure electrical systems will remain, for a long time, practical only for short haul applications. But do not count on it!


Camelina looks to be best crop for biodiesel production

By DALE HILDEBRANT, Farm & Ranch Guide

Friday, August 15, 2008 11:05 AM CDT

GRAND FORKS, N.D. - When considering biodiesel production, camelina appears to be the Cinderella crop, according to information presented at the recent Bio-Mass '08 Technical Workshop in Grand Forks.
In recent months biodiesel production has decreased in the U.S. because of high prices for soybean and canola oil, the two main oils currently used in biodiesel processing, since the oil from both of these seeds is in high demand in the food industry.

At the present time, about 90 percent of the oil used in biodiesel is soy oil and the other 10 percent is canola oil. But the biodiesel production capacity of the U.S., which is 2.5 billion gallons per year, isn't being fully utilized with production last year of only 500 million gallons.
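The unused capacity described above is easy to quantify; a quick sketch using the article's figures:

```python
# U.S. biodiesel capacity vs. actual production (figures from the article).
capacity_gal = 2.5e9   # gallons per year of production capacity
produced_gal = 5.0e8   # gallons actually produced last year

utilization = produced_gal / capacity_gal
print(f"{utilization:.0%}")  # -> 20%
```

So the industry was running at only a fifth of its installed capacity.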

However, Duane Johnson, the vice president for agricultural development at Great Plains Oil and Exploration in Big Fork, Mont., thinks camelina, which is sometimes called “false flax,” could return profit to the biodiesel industry and thus spur further growth.

For example, at the current market prices, soybean oil feedstock costs $5.25 a gallon and the feedstock price is about 80 percent of the final product cost, making the final cost of a gallon of biodiesel approximately $6.60, which is a figure well above the current price of diesel fuel.
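The cost arithmetic quoted above follows from dividing the feedstock cost by its share of the final product cost:

```python
# Final biodiesel cost implied by the article's figures.
feedstock_cost = 5.25   # $/gal for soybean oil feedstock
feedstock_share = 0.80  # feedstock as a share of final product cost

final_cost = feedstock_cost / feedstock_share
print(round(final_cost, 2))  # -> 6.56, close to the quoted $6.60
```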

Johnson also noted that converting good grade vegetable oil such as soybean and canola oil adds to the backlash over food versus fuel, a debate currently taking place world-wide. Since camelina oil is an industrial oil, not a food grade oil, using it as a feedstock for biodiesel would lessen that argument.

Using figures prepared by various agencies back in 2003, Johnson provided a comparison of oil crops grown in North Dakota for biodiesel. Even though the growing costs per acre and the cost per gallon of the oil are considerably higher now, the data provided a good comparison between the various oil crops with regard to biodiesel production.

Raising camelina could also be an economic plus for farmers in the more arid areas of the northern Great Plains.

Alice Pilgeram has been working with camelina research for the past several years at Montana State University and claims the crop can provide growers with a high value crop with relatively low input costs. Production acreage in Montana has increased from just 450 acres in 2004 to between 20,000 to 40,000 acres planted this year.

Several other states, including North Dakota, are currently raising camelina and looking at expanding acreage in the future.

When it comes to fuel production, biodiesel is the most efficient form of alternative fuels, according to Johnson. In terms of gasoline and diesel fuel production, for each calorie expended in the extraction and manufacture of these products we recover 0.8 calories of energy. Ethanol production returns 1.1 calories for each calorie expended, but for biodiesel, for each calorie expended 3.5 to 5.2 calories of energy are recovered.
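Johnson's energy-return figures can be set side by side; a small sketch using the numbers above:

```python
# Calories of energy recovered per calorie expended, per Johnson's figures.
energy_return = {
    "gasoline/diesel": 0.8,
    "ethanol": 1.1,
    "biodiesel (low)": 3.5,
    "biodiesel (high)": 5.2,
}

for fuel, ratio in energy_return.items():
    print(f"{fuel}: {ratio} calories returned per calorie expended")

best = max(energy_return, key=energy_return.get)
print(best)  # -> biodiesel (high)
```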

And camelina is a superior oil when it comes to biodiesel. The oil contains a high amount of linolenic fatty acid, which usually leads to a short shelf life before the oil turns rancid. However, camelina oil also contains a high level of vitamin E, which serves as an anti-oxidant and extends the oil's shelf life.

The high linolenic content is important to biodiesel production, since it gives the product a pour point of around -15 degrees Fahrenheit, which is considerably lower than the other oils offer and is important for users in this region of the country.

Pilgeram also noted that at least five biodiesel companies in Montana will be utilizing camelina oil in 2008.

Agronomically, camelina is an ideal crop for this region, since it produces well with about 10 inches of rain and requires a low rate of fertilization and pesticide use, and does well on marginal land, Johnson explained.

“We can get maximum yield with up to 10 inches of rainfall,” he said. “After that we start having disease problems.”

Johnson claims the biodiesel industry needs to look to a new generation of feedstocks if it is going to be successful.

“The future of biodiesel is going to be what happens in the next generation,” he said. “Right now all of the oilseeds that we use to make biodiesel, whether it be soybeans, sunflower, canola or mustard, are competing against a world food market. We need to start looking at non-food crops, or the next generation of crops, for biodiesel production.”

These next generation crops should be lower in cost, because they aren't competing for food use. These sources include using algae, where the technology is five to 10 years away, the tropical plant jatropha, which is three to seven years away, and camelina, where the technology is here now.

Camelina has one more advantage - a meal by-product that can be successfully used in beef, dairy, poultry and fish rations. Cold-pressed camelina meal contains a residual oil of 8 to 11 percent and this oil contains 34 to 38 percent omega 3 fatty acids and very high levels of vitamin E.

The meal is also an excellent source of protein and is very low in ash content.

Beef feeding trials currently underway at Montana State University show that feedlot daily rates of gain were higher with a ration containing 3.5 percent camelina meal than with rations containing 3.5 and 7.0 percent soybean meal.

It may have been dubbed “false flax” in the past, but many feel there is nothing false about the future of camelina as one of the new sources for biodiesel production.


From Wikipedia we have:

Camelina sativa, usually known in English as gold-of-pleasure or false flax, also occasionally wild flax, linseed dodder, camelina, German sesame, and Siberian oilseed, is a flowering plant in the family Brassicaceae, which includes mustard, cabbage, rapeseed, broccoli, cauliflower, kale and brussels sprouts. It is native to Northern Europe and to Central Asian areas, but has been introduced to North America, possibly as a weed in flax.

It has been traditionally cultivated as an oilseed crop to produce vegetable oil and animal feed. There is ample archeological evidence to show it has been grown in Europe for at least 3,000 years. The earliest find sites include the Neolithic levels at Auvernier, Switzerland (dated to the second millennium BC), the Chalcolithic level at Pefkakia in Greece (dated to the third millennium BC), and Sucidava-Celei, Romania (circa 2200 BC).[1] During the Bronze Age and Iron Age it was an important agricultural crop in northern Greece beyond the current range of the olive.[2][3] It apparently continued to be grown at the time of the Roman Empire, although its Greek and Latin names are not known.[4] According to Zohary and Hopf, C. sativa was an important oil crop in eastern and central Europe until the 1940s, and it continues to be cultivated in a few parts of Europe for its seed,[1] which was used, for example, in oil lamps (until the modern harnessing of natural gas, propane and electricity) and as an edible oil.

The crop is now being researched due to its exceptionally high levels (up to 45%) of omega-3 fatty acids, which is uncommon in vegetable sources. Over 50% of the fatty acids in cold pressed camelina oil are polyunsaturated. The major components are alpha-linolenic acid (C18:3, an omega-3 fatty acid, approx. 35-45%) and linoleic acid (C18:2, an omega-6 fatty acid, approx. 15-20%). The oil is also very rich in natural antioxidants, such as tocopherols, making this highly stable oil very resistant to oxidation and rancidity. It has 1-3% erucic acid. The vitamin E content of camelina oil is approximately 110 mg/100 g. It is well suited for use as a cooking oil, with an almond-like flavor and aroma, and may become an important food oil in the future.

Because of its apparent health benefits and its technical stability, gold-of-pleasure and camelina oil are being added to the growing list of foods considered functional foods. Gold-of-pleasure is also of interest for its very low requirements for tillage and weed control. This could potentially allow vegetable oil to be produced more cheaply than from traditional oil crops, which would be particularly attractive to biodiesel producers looking for a feedstock cheap enough to allow them to compete with petroleum diesel and gasoline. Great Plains - The Camelina Company began research efforts with camelina over 10 years ago. They are currently contracting with growers throughout the U.S. and Canada to grow camelina for biodiesel production. A company in Seattle, Targeted Growth, is also developing camelina.[5]

The subspecies C. sativa subsp. linicola is considered a weed in flax fields. In fact, attempts to separate its seed from flax seeds with a winnowing machine over the years have selected for seeds which are similar in size to flax seeds, an example of Vavilovian mimicry.

Monday, August 18, 2008

Ice Age Denouement

As my long time readers are aware, I have taken a particular interest in the climatic history of the Holocene, generally in order to establish historic drivers of the observed climatic variation. The millennia-long Bronze Age climatic optimum has been of particular interest. During this period the Sahara was created through the destruction of a well-vegetated ecosystem, which coincides with a similar decline throughout the Middle East.

I pointed out that this was a heat capture system that explains the warmth of the Northern Hemisphere during this time and its loss explains the less optimal climate since.

It is also noted that the principal rise in sea levels from melt waters ended about seven thousand years ago, having lasted a total of perhaps seven thousand years. Other sources had suggested a much shorter melting time frame, so perhaps we should be a bit wary as yet in accepting the source that I am appending here. Those curves are way too smooth.

What I have not yet paid attention to is the last several thousand years, simply because it appears to be a minor variation compared to the previous collapse. The fact is that there was plenty of land based ice up to perhaps four thousand years ago, and this was reflected in a rate of sea level increase that was still substantial compared to today - about twice the current rate.

This suggests that a lot of the original ice sheet lingered on the ground near sea level into the late Bronze Age. This surely kept the Arctic cold and possibly a large part of the present boreal forest at bay. In other words, present historical conditions in the high arctic are actually very new and only properly established in the past three thousand years or so.

This also suggests that the Bronze Age had an extra cooling engine in the arctic to offset the warmth coming out of the south. All this is much closer in time in terms of effect than I had anticipated and negates facile comparisons to those climatic conditions. It appears way more robust than anticipated.

It would be useful to do carbon 14 dating on bog trapped wood throughout the boreal forest to get an idea of just how long the forests have been present. I see no evidence that anyone has launched such a program anywhere, let alone in the more northerly regions. It would give us a time line on where and for how long surface ice lingered, representing necessary input for the generation of climate models.

A derivative of this result is that during the climatic optimum of the Bronze Age, the Greenland ice sheet would have been far less vulnerable to change and that the sea ice would have remained largely intact. The medieval optimum was possibly the first real look at an ice free summer Arctic and that lasted for many decades. We can never prove it of course.

http://www.globalwarmingart.com/wiki/Image:Holocene_Sea_Level_png

Description


Sea level rise since the last glacial episode


Sea level rise from direct measurements during the last 120 years

This figure shows changes in sea level during the Holocene, the time following the end of the most recent glacial period, based on data from Fleming et al. 1998, Fleming 2000, & Milne et al. 2005. These papers collected data from various reports and adjusted them for subsequent vertical geologic motions, primarily those associated with post-glacial continental and hydroisostatic rebound. The first refers to deformations caused by the weight of continental ice sheets pressing down on the land, the latter refers to uplift in coastal areas resulting from the increased weight of water associated with rising sea levels. It should be noted that because of the latter effect and associated uplift, many islands, especially in the Pacific, exhibited higher local sea levels in the mid Holocene than they do today. Uncertainty about the magnitude of these corrections is the dominant uncertainty in many measurements of Holocene scale sea level change.

The black curve is based on minimizing the sum of squares error weighted distance between this curve and the plotted data. It was constructed by adjusting a number of specified tie points, typically placed every 1 kyr and forced to go to 0 at the modern day. A small number of extreme outliers were dropped. It should be noted that some authors propose the existence of significant short-term fluctuations in sea level such that the sea level curve might oscillate up and down about this ~1 kyr mean state. Others dispute this and argue that sea level change has been a smooth and gradual process for essentially the entire length of the Holocene. Regardless of such putative fluctuations, evidence such as presented by Morhange et al. (2001) suggests that in the last 10 kyr sea level has never been higher than it is at present.

Copyright This figure was prepared by Robert A. Rohde from published data.



References

Fleming, Kevin, Paul Johnston, Dan Zwartz, Yusuke Yokoyama, Kurt Lambeck and John Chappell (1998). "Refining the eustatic sea-level curve since the Last Glacial Maximum using far- and intermediate-field sites". Earth and Planetary Science Letters 163 (1-4): 327-342.

Fleming, Kevin Michael (2000). Glacial Rebound and Sea-level Change Constraints on the Greenland Ice Sheet. PhD Thesis, Australian National University.

Milne, Glenn A., Antony J. Long and Sophie E. Bassett (2005). "Modelling Holocene relative sea-level observations from the Caribbean and South America". Quaternary Science Reviews 24 (10-11): 1183-1202.

Morhange, C., J. Laborel, and A. Hesnard (2001). "Changes of relative sea level during the past 5000 years in the ancient harbor of Marseilles, Southern France". Palaeogeography, Palaeoclimatology, Palaeoecology 166: 319-329.

Friday, August 15, 2008

Oil Age Ends

This article is yet another encouraging eye opener. Far too much of our knowledge, taught to us with good intentions, is often wrong. Here we expand our understanding of the nature of cellulose and also learn of a fantastic production strategy that literally mocks all the other methods being pursued. Those methods would have been stillborn had this possibility been understood or even guessed at.

This protocol allows a fermenting process like that of alcohol to produce sugars and free cellulose that can also be easily converted to glucose. The production fluid is an obvious feedstock for the production of ethanol.

This is obviously conducive to industrial manufacturing and eliminates most of the problem of utilizing the byproduct of spent algae. These are also early days, and the whole process lends itself to major optimization that will hugely lower the footprint. Their first calculations are back of the envelope worst case scenarios that can be safely ignored. They will get much better.

Most encouraging is the suitability of saline water for the process. There are surely additional ways of optimizing the system by drawing sea borne organics into the mix. Perhaps while we are at it we can design in a fresh water byproduct system that can support local agriculture directly on arid coastlines. It all takes a bit of imagination, but the desert coastlines are locales in which the beginning of a living productive ecosystem is necessary for further movement inland.

We can now expect one step continuous production of a charged fluid that can then be pumped into a fermenter to produce ethanol as a second step. The main inputs will be sunlight and CO2. The output will be ethanol with little wastage, and the fluids are all easily recycled. I do not think that it will be possible to make transportation fuel any cheaper, particularly if they can also add the nitrogen fixing gene to the bug. We can have a runaway sugar and cellulose factory working for us on the beach on sea water and sunshine. The other nutrients would come out of the sea water.

We have looked at a lot of promising technology for replacing the fossil fuel business. We now have nanosolar for static power, we have this as the ultimate supply for transportation fuel, and just maybe we have an efficient way to store energy by splitting out hydrogen. These are surely the three cheapest ways to get there. Nanosolar claims to be already there. The other two will still need a couple of intense years to look commercially viable.

However we look at it, and whatever time it takes to ramp up production (it will not be much), the oil age has really ended with these discoveries.

New Source for Biofuels Discovered by Researchers At The University of Texas at Austin
April 23, 2008

AUSTIN, Texas — A newly created microbe produces cellulose that can be turned into ethanol and other biofuels, report scientists from The University of Texas at Austin who say the microbe could provide a significant portion of the nation's transportation fuel if production can be scaled up.

Along with cellulose, the cyanobacteria developed by Professor
R. Malcolm Brown Jr. and Dr. David Nobles Jr. secrete glucose and sucrose. These simple sugars are the major sources used to produce ethanol.

"The cyanobacterium is potentially a very inexpensive source for sugars to use for ethanol and designer fuels," says Nobles, a research associate in the
Section of Molecular Genetics and Microbiology.

Brown and Nobles say their cyanobacteria can be grown in production facilities on non-agricultural lands using salty water unsuitable for human consumption or crops.

Other key findings include:

The new cyanobacteria use sunlight as an energy source to produce and excrete sugars and cellulose

Glucose, cellulose and sucrose can be continually harvested without harming or destroying the cyanobacteria (harvesting cellulose and sugars from true algae or crops, like corn and sugarcane, requires killing the organisms and using enzymes and mechanical methods to extract the sugars)

Cyanobacteria that can fix atmospheric nitrogen can be grown without petroleum-based fertilizer input

They recently published their research in the journal Cellulose.

Nobles made the new cyanobacteria (also known as blue-green algae) by giving them a set of cellulose-making genes from a non-photosynthetic "vinegar" bacterium, Acetobacter xylinum, well known as a prolific cellulose producer.

The new cyanobacteria produce a relatively pure, gel-like form of cellulose that can be broken down easily into glucose.

"The problem with cellulose harvested from plants is that it's difficult to break down because it's highly crystalline and mixed with lignins [for structure] and other compounds," Nobles says.

He was surprised to discover that the cyanobacteria also secrete large amounts of glucose or sucrose, sugars that can be directly harvested from the organisms.

"The huge expense in making cellulosic ethanol and biofuels is in using enzymes and mechanical methods to break cellulose down," says Nobles. "Using the cyanobacteria escapes these expensive processes."

Sources being used or considered for ethanol production in the United States include switchgrass and wood (cellulose), corn (glucose) and sugarcane (sucrose). True algae are also being developed for biodiesel production.

Brown sees a major benefit in using cyanobacteria to produce ethanol is a reduction in the amount of arable land turned over to fuel production and decreased pressure on forests.

"The pressure is on all these corn farmers to produce corn for non-food sources," says Brown, the Johnson & Johnson Centennial Chair in Plant Cell Biology. "That same demand, for sucrose, is now being put on Brazil to open up more of the Amazon rainforest to produce more sugarcane for our growing energy needs. We don't want to do that. You'll never get the forests back."

Brown and Nobles calculate that the approximate area needed to produce ethanol with corn to fuel all U.S. transportation needs is around 820,000 square miles, an area almost the size of the entire Midwest.

They hypothesize they could produce an equal amount of ethanol using an area half that size with the cyanobacteria based on current levels of productivity in the lab, but they caution that there is a lot of work ahead before cyanobacteria can provide such fuel in the field. Work with laboratory scale photobioreactors has shown the potential for a 17-fold increase in productivity. If this can be achieved in the field and on a large scale, only 3.5 percent of the area growing corn could be used for cyanobacterial biofuels.
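As a rough check, the land areas quoted above follow from the article's own numbers. The 3.5 percent claim uses a base area the article does not state, so it is not reproduced here:

```python
# Land areas implied by the article's figures, in square miles.
corn_ethanol_area = 820_000             # corn ethanol for all U.S. transport fuel
cyano_area_lab = corn_ethanol_area / 2  # cyanobacteria at current lab productivity
cyano_area_17x = cyano_area_lab / 17    # with the 17-fold photobioreactor gain

print(round(cyano_area_lab))   # -> 410000
print(round(cyano_area_17x))   # -> 24118
```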

Cyanobacteria are just one of many potential solutions for renewable energy, says Brown.

"There will be many avenues to become completely energy independent, and we want to be part of the overall effort," Brown says. "Petroleum is a precious commodity. We should be using it to make useful products, not just burning it and turning it into carbon dioxide."

Brown and Nobles are now researching the best methods to scale up efficient and cost-effective production of cyanobacteria. Two patent applications, 20080085520 and 20080085536, were recently published in the United States Patent and Trade Office.

For more information, contact:
Lee Clippard, College of Natural Sciences, 512-232-0675; Dr. R. Malcolm Brown Jr., 512-471-3364; Dr. David Nobles, 512-471-3364.

Thursday, August 14, 2008

Ice Collapse is Terminal for 2012

Unexpectedly, the arctic sea ice is now racing ahead to catch up to last year's melt. It makes sense. Last year's old ice was greatly weakened and only replaced by the new winter ice now being eliminated. A satellite view of the arctic shows mostly well broken up ice floes that are certainly at the ambient sea temperature right now and are being attacked by the solar energy collected by the sea. They are all in a melt environment with dropping support from adjacent ice.

It is important to understand that the overall ice mass is continuing to drop season by season, and the last report stated that we have lost fifty percent in the last five years. That was after we lost sixty percent between 1959 and 2000. (We have only the two data points, and the loss likely took place in the late eighties and the nineties.)

So at the current melt rate, now even less impeded and accelerated by expanded open water, it will all be gone in five years at the most. Again, for those keeping score, that means 2012. It now appears possible that the end will come even a bit earlier.
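Compounding the two reported losses gives a sense of how little ice mass remains; a sketch using the figures above:

```python
# Fraction of the 1959 Arctic ice mass remaining after the two reported losses.
mass = 1.0          # take the 1959 ice mass as the unit
mass *= 1 - 0.60    # ~60% lost between 1959 and 2000
mass *= 1 - 0.50    # ~50% of the remainder lost in the five years to 2008

print(round(mass, 2))  # -> 0.2, i.e. only a fifth of the 1959 mass remains
```

Another halving on the same five-year pace leaves very little indeed, which is the arithmetic behind the 2012 call.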

At least everyone is watching this year. Last year at this time they were all keeping their heads in the sand. The main point is that this is the terminal collapse of the perennial Arctic sea ice sheet that we are now watching. The only question I had two and three years ago was when the collapse would kick off. It kicked off last year, continues in full swing today, and can only end with an ice free summer sailing season by 2012.

The cold needed to reverse this situation is actually beyond what has been experienced in the Arctic for decades.
This also makes me revisit the subject of the Little Ice Age and the Bronze Age. It is now very apparent to me that the natural state of the Northern Hemisphere is warm. This clearly means no summer time sea ice left in the Arctic. In fact, with a restored Sahara, we can expect it to be warmer still and very resistant to volcanic cooling.

It is clear that a major agent of cooling brought the temperature down two full degrees at the beginning of the Little Ice Age and held it there long enough to produce a very thick ice sheet in the Arctic. This likely took place over a full century.

We have two options. The first is the Maunder Minimum of which I am a bit skeptical and unconvinced. The second is a very active eruption sequence associated with the Aleutians and Kamchatka Peninsula. For that we need eruption and ash volume histories of all these volcanoes. We also need to differentiate those that are rich in volatiles.

Recall that the impact on temperature is magnified in the Arctic. A Mount Pinatubo in the Arctic could be expected to lower temperatures by several degrees, as compared to the degree or so experienced at the equator.

So we really need only an increase in normal volcanism to achieve the temperature changes and preferably in the form of a major eruption followed by several others over a number of years. This is an apt description of the region's behavior.

Arctic Sea Ice Decline Accelerates, Amundsen’s Northwest Passage Opens

12 August 2008

Just several weeks ago it seemed as though the loss of Arctic sea ice would not be as extreme as last year’s, which shattered previous records. (Earlier post.) However, the rate of sea ice loss has accelerated during the past ten days, triggered by a series of strong storms that broke up thin ice in the Beaufort and Chukchi Seas and brought warm southerly winds into the region, according to the National Snow and Ice Data Center (NSIDC).

Amundsen’s historic Northwest Passage is once again opening up; the wider and deeper route through Parry Channel is currently still clogged with ice. This route opened in mid-August last year; it may still open up before the end of this year’s melt season, according to NSIDC.

Arctic sea ice extent on 10 August was 6.54 million square kilometers (2.52 million square miles), according to NSIDC data, a decline of 1 million square kilometers (390,000 square miles) since the beginning of the month. Extent is now within 780,000 square kilometers (300,000 square miles) of last year’s value on the same date and is 1.50 million square kilometers (580,000 square miles) below the 1979 to 2000 average.

Ice extent has begun to decline sharply. The decline rate surged to -113,000 square kilometers per day on August 7 and as of August 10 was -103,000 square kilometers per day. This compares to the long-term average decline of -76,000 square kilometers per day for this time of year. Normally, the peak decline rate is in early July.

Many of the areas now seeing a rapid retreat saw an early melt onset (see July 2, 2008); this helped set the stage for rapid retreat (July 17 and April 7). However, the more fundamental issue is that these regions started the melt season covered with thin first-year ice, which is especially vulnerable to melting out completely. Thin ice is also vulnerable to breakup by winds; the last ten days have seen a windy, stormy pattern that has accelerated the ice loss.

—NSIDC

Wednesday, August 13, 2008

Energy Storage Breakthrough at MIT

This is the third very important discovery announcement made over the past three months. This one permits the use of electrolysis as an energy storage protocol. The energy cost is not quantified at all but is certainly being cheered in this report. And since the source is hardly uninformed, even a smile is significant. They have done it.

We now have an efficient energy storage protocol to go with our efficient thin film wide temperature band refrigeration discovery and the emerging printed nanosolar film technology. They are all slotted for commercialization over the next four years.

Of course everyone is being coy as to actual efficiencies at this time, but they would not be talking unless they were expecting a vast improvement over past alternatives. We know that the nanosolar crowd has started their pricing at $1.00 per watt. This will displace all the competition and also indicates that a further order of magnitude price decline is a possibility. Making the separation of hydrogen and oxygen efficient suggests that the loss between storage and utilization is perhaps dropping under twenty percent, which is sufficient to make hydrogen the energy storage medium of choice.
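A sub-twenty-percent round-trip loss implies component efficiencies on the order of ninety percent each. The numbers below are purely hypothetical, since the report gives no measured efficiency for the new catalyst:

```python
# Hypothetical round-trip efficiency of an electrolysis + fuel cell store.
def round_trip_efficiency(electrolysis_eff, fuel_cell_eff):
    """Fraction of stored electrical energy recovered after one full cycle."""
    return electrolysis_eff * fuel_cell_eff

# 90% electrolysis and 90% fuel cell would give ~81% round trip,
# i.e. just under a twenty percent loss. Illustrative inputs only.
print(round(round_trip_efficiency(0.90, 0.90), 2))  # -> 0.81
```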

Future solar energy conversion efficiencies can be expected to approach and surpass the thirty percent mark by harvesting infrared energy as well. The efficiency of the refrigeration material is driven by the available temperature range. This was dogged until recently by a ten to twenty degree spread, but that has now opened up to one hundred degrees. A lot of things have just become possible, and not just cheaper.

Arthur C. Clarke missed seeing these announcements by mere months. He predicted these developments forty years ago, in particular the access to solar energy at costs under $1.00 per installed watt.

You must understand. This all means that a home can be clad in a nanosolar skin that supplies all power while storing surplus for the nighttime and will also cool the inside with another interior skin. The house will be a net exporter of energy, perhaps sufficient to supply a hydrogen based vehicle or two. Fuel cells just became important again.

There are still a couple of years of development to grind through, but after seeing how quickly nanosolar has been launched, I am being conservative.

MIT develops way to bank solar energy at home

CAMBRIDGE, Massachusetts (Reuters) - A U.S. scientist has developed a new way of powering fuel cells that could make it practical for home owners to store solar energy and produce electricity to run lights and appliances at night.

A new catalyst produces the oxygen and hydrogen that fuel cells use to generate electricity, while using far less energy than current methods.

With this catalyst, users could rely on electricity produced by photovoltaic solar cells to power the process that produces the fuel, said the Massachusetts Institute of Technology professor who developed the new material.

"If you can only have energy when the sun is shining, you're in deep trouble. And that's why, in my opinion, photovoltaics haven't penetrated the market," Daniel Nocera, an MIT professor of energy, said in an interview at his Cambridge, Massachusetts, office. "If I could provide a storage mechanism, then I make energy 24/7 and then we can start talking about solar."

Solar has been growing as a power source in the United States -- last year the nation's solar capacity rose 45 percent to 750 megawatts. But it is still a tiny power source, producing enough energy to meet the needs of about 600,000 typical homes, and only while the sun is shining, according to data from the Solar Energy Industries Association.

Most U.S. homes with solar panels feed electricity into the power grid during the day, but have to draw back from the grid at night. Nocera said his development would allow homeowners to bank solar energy as hydrogen and oxygen, which a fuel cell could use to produce electricity when the sun was not shining.

"I can turn sunlight into a chemical fuel, now I can use photovoltaics at night," said Nocera, who explained the discovery in a paper written with Matthew Kanan published on Thursday in the journal Science.

Companies including United Technologies Corp produce fuel cells for use in industrial sites and on buses.
Automakers including General Motors Corp and Honda Motor Co are testing small fleets of fuel-cell powered vehicles.

POTENTIAL FOR CLEAN ENERGY

Fuel cells are appealing because they produce electricity without generating the greenhouse gases associated with global climate change. But producing the hydrogen and oxygen they run on typically requires burning fossil fuels.

That has prompted researchers to look into cleaner ways of powering fuel cells. Another researcher working at Princeton University last year developed a way of using bacteria that feed on vinegar and waste water to generate hydrogen, with minimal electrical input.

James Barber, a biochemistry professor at London's Imperial College, said in a statement Nocera's work "opens up the door for developing new technologies for energy production, thus reducing our dependence on fossil fuels and addressing the global climate change problem."

Nocera's catalyst is made from cobalt, phosphate and an electrode that produces oxygen from water by using 90 percent less electricity than current methods, which use the costly metal platinum.

The system still relies on platinum to produce hydrogen -- the other element that makes up water.

"On the hydrogen side, platinum works well," Nocera said. "On the oxygen side ... it doesn't work well and you have to put way more energy in than needed to get the (oxygen) out."

Current methods of producing hydrogen and oxygen for fuel cells operate in a highly corrosive environment, Nocera said, meaning the entire reaction must be carried out in an expensive highly-engineered container.

But at MIT this week, the reaction was going on in an open glass container about the size of two shot glasses that researchers manipulated with their bare hands, with no heavy safety gloves or goggles.

"It's cheap, it's efficient, it's highly manufacturable, it's incredibly tolerant of impurity and it's from earth-abundant stuff," Nocera explained.

Nocera has not tried to construct a full-sized version of the system, but suggested that the technologies to bring this into a typical home could be ready in less than a decade.

The idea, which he has been working on for 25 years, came from reflecting on the way plants store the sun's energy.

"For the last six months, driving home, I've been looking at leaves, and saying, 'I own you guys now,'" Nocera said.

(Editing by Vicki Allen)

Tuesday, August 12, 2008

Desert Revelation

There are times when Mother Nature throws you a curve ball that is breathtaking. This article once again represents early days in determining the gross effect of the deserts on CO2 uptake. What has happened is that the previous assumption of zero contribution has just turned out to be dead wrong. In fact, the first blush, admittedly under the conditions most likely to be productive, gives us an absorption level equivalent to woodlands. I certainly do not expect this to hold up, except that we have all been dead wrong so far.

The initial tentative explanations are just that and should be ignored for now while a few hundred researchers stomp into the deserts and replicate the method in as many conditions as possible. We need data rather than a slew of possible explanations and unnecessary controversy.

The energy flux has always been there and it is largely dumped back into space because of the lack of plant life to absorb this energy. That it could drive a mechanism that possibly accretes CO2 was not imagined. Are we looking at a protocol that uses organic means to store solar energy and send it deeper into the soils for the use of other biological agents? Is the ultimate repository even calcium carbonate? We just woke up to the fact that no one seems to have cottoned on to the idea of studying these biological pathways as yet.

We have only fairly recently recognized the existence of biological life in rock itself. Surely the energy for that biota must come from some form of heat conversion rather than direct sunlight or access to higher energy content biota. It is obvious that the one place such a specialized biota could exist is in arid surface material that is just as hostile as deeper volcanic warmed plumbing systems that have already been shown to be populated.

Of course, the research tried to eliminate the biological factor with steam sterilization, but that may simply not be effective against biota able to function in hostile environments. More importantly, biota provide an active transportation mechanism able to shift produced resources around. This is absolutely necessary if a system is not going to run down to a static equilibrium.

It is noteworthy that this contribution, if it holds up, nicely fills a gaping hole in the Global CO2 budget.

I reasonably expect that the variation will be much greater than these first samples are indicating and that the real net contribution will work out to be a significant fraction of that of forest lands rather than the same. Locale choice was pretty well optimized for local reactivity and I do not think it will stand up as a standard. On the other hand, we are coming off an expectation of zero contribution, and the biological drivers, if such exist, may be very energetic.

So we get to wait for a lot more data.

As an aside, I am getting sick and tired of every paper or article today making an obligatory nod to the new orthodoxy of global warming no matter how stretched. A fair bit is actually becoming ludicrous and is not a positive reflection on the writer’s intelligence. Or perhaps we need to blame the editors for this fashion.

Science 13 June 2008: Vol. 320, No. 5882, pp. 1409-1410
DOI: 10.1126/science.320.5882.1409

News of the Week: ECOSYSTEMS

Have Desert Researchers Discovered a Hidden Loop in the Carbon Cycle?

Richard Stone

URUMQI, CHINA--When Li Yan began measuring carbon dioxide (CO2) in Western China's Gubantonggut Desert in 2005, he thought his equipment had malfunctioned. Li, a plant ecophysiologist with the Chinese Academy of Sciences' Xinjiang Institute of Ecology and Geography in Urumqi, discovered that his plot was soaking up CO2 at night.

His team ruled out the sparse vegetation as the CO2 sink. Li came to a surprising conclusion: The alkaline soil of Gubantonggut is socking away large quantities of CO2 in an inorganic form. A CO2-gulping desert in a remote corner of China may not be an isolated phenomenon.

Halfway around the world, researchers have found that Nevada's Mojave Desert, square meter for square meter, absorbs about the same amount of CO2 as some temperate forests. The two sets of findings suggest that deserts are unsung players in the global carbon cycle.

"Deserts are a larger sink for carbon dioxide than had previously been assumed," says Lynn Fenstermaker, a remote sensing ecologist at the Desert Research Institute (DRI) in Las Vegas, Nevada, and a coauthor of a paper on the Mojave findings published online last April in Global Change Biology.

The effect could be huge: About 35% of Earth's land surface, or 5.2 billion hectares, is desert and semiarid ecosystems. If the Mojave readings represent an average CO2 uptake, then deserts and semiarid regions may be absorbing up to 5.2 billion tons of carbon a year--roughly half the amount emitted globally by burning fossil fuels, says John "Jay" Arnone, an ecologist in DRI's Reno lab and a co-author of the Mojave paper. But others point out that CO2 fluxes are notoriously difficult to measure and that it is necessary to take readings in other arid and semiarid regions to determine whether the Mojave and Gubantonggut findings are representative or anomalous.
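The headline figure is simply the quoted desert area multiplied by the Mojave uptake rate; using only the numbers in the article, the arithmetic checks out:

```python
# All inputs are taken from the article text above.
DESERT_AREA_HA = 5.2e9    # hectares of desert and semiarid ecosystems
UPTAKE_G_PER_M2 = 100.0   # g of carbon per square meter per year (Mojave)
M2_PER_HA = 1.0e4

total_grams = DESERT_AREA_HA * M2_PER_HA * UPTAKE_G_PER_M2
billion_tons = total_grams / 1.0e15  # grams -> billions of metric tons
print(f"{billion_tons:.1f} billion tons of carbon per year")
```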

For now, some experts doubt that the world's most barren ecosystems are the long-sought missing carbon sink. "I'd be hugely surprised if this were the missing sink. If deserts are taking up a lot of carbon, it ought to be obvious," says William Schlesinger, a biogeochemist at the Cary Institute of Ecosystem Studies in Millbrook, New York, who in the 1980s was among the first to examine carbon flux in deserts. Nevertheless, he says, both sets of findings are intriguing and "must be followed up."

Scientists have long struggled to balance Earth's carbon books. While atmospheric CO2 levels are rising rapidly, our planet absorbs more CO2 than can be accounted for. Researchers have searched high and low for this missing sink. It doesn't appear to be the oceans or forests--although the capacity of boreal forests to absorb CO2 was long underestimated. Deserts might be the least likely candidate. "You would think that seemingly lifeless places must be carbon neutral, or carbon sources," says Mojave coauthor Georg Wohlfahrt, an ecologist at the University of Innsbruck in Austria.

About 20 kilometers north of Urumqi, clusters of shanties are huddled next to fields of hops, cotton, and grapes. Soon after the Communist victory over the Nationalists in 1949, soldiers released from active duty were dispatched across rural China, including vast Xinjiang Province, to farm the land. At the edge of the sprawling "222" soldier farm, which is home to hundreds of families, oasis fields end where the Gubantonggut begins. The Fukang Station of Desert Ecology, which Li directs, is situated at this transition between ecosystems. In recent years, average precipitation has increased in the Gubantonggut, and the dominant Tamarix shrubs are thriving.

Li set out to measure the difference in CO2 absorption between oasis and desert soil. An automated flux chamber measured CO2 depletion a few centimeters above the soil in 24-hour intervals on select days in the growing season (from May to October) in 2005 and in 2006. The desert readings ranged from 62 to 622 grams of carbon per square meter per year. Li assumed that Tamarix and a biotic crust of lichen, moss, and cyanobacteria up to 5 centimeters thick are responsible for part of the uptake. To rule out an organic process in the soil, Li's team put several kilograms in a pressure steam chamber to kill off any life forms and enzymes. CO2 absorption held steady, according to their report, posted online earlier this year in Environmental Geology.

"The sterilization treatment was impressive," says biogeochemist Pieter Tans, a climate change expert with the U.S. National Oceanic and Atmospheric Administration in Boulder, Colorado. "They may have found a significant effect, previously neglected, but I would like to see more evidence." Indeed, the high end of the Urumqi CO2 flux estimates is off the charts. "That's more carbon uptake than our fastest growing southern forests. It's a huge number. I find it extremely hard to believe," says Schlesinger, who nonetheless says the Chinese team's methodology looks sound.

At first, Li was flummoxed. Then, he says, he realized that deserts are "like a dry ocean." The pH of oceans is falling gradually as they absorb CO2, forming carbonic acid. "I thought, 'Why wouldn't this also happen in the soil?'"

Whereas the ocean has a single surface for gas exchange, Li says, soil is a porous medium with a huge reactive surface area. One question, Tans notes, is why the desert soils would remain alkaline as they absorb CO2. Li suggests that ongoing salinization drives pH in the opposite direction, allowing for continual CO2 absorption. But where the carbon goes--whether it is stowed largely as calcium carbonate or other salts--is unknown, Li says.

Schlesinger too is stumped: "It takes a long time for carbonate to build up in the soil," he says. At the apparent rate of absorption in China, he says, "we'd be up to our ankles in carbon." One possibility, DRI soil chemist Giles Marion speculates, is that at night, CO2 reacts with moisture in the soil and perhaps with dew to form carbonic acid, which dissolves calcium carbonate--a reaction that warmer temperatures would drive in reverse, releasing the CO2 again during the day. (Unlike most minerals, carbonates become more soluble at lower temperatures.) In that case, Marion says, Li's nighttime absorption would tell only half the story: "I would expect that over a year, there would be no significant increase in soil storage due to this process," he says, as the dynamic of carbon sequestration in the soil would vary from season to season. Li agrees that this scenario is plausible but notes that his daytime measurements of CO2 flux did not negate the nighttime uptake.

In any case, other researchers say, absorption alone cannot explain the substantial uptake in the Mojave. Wohlfahrt and his colleagues measured CO2 flux above the loamy sands of the Nevada Test Site, where the United States once tested its nuclear arsenal. From March 2005 to February 2007, the desert biome absorbed on average roughly 100 grams of carbon per square meter per year--comparable to temperate forests and grassland ecosystems--the team reported in its Global Change Biology paper.

Three processes are probably involved in CO2 absorption, Wohlfahrt says: biotic crusts, alkaline soils, and expanded shrub cover due to increased average precipitation. "We currently do not have the data to say where exactly the carbon is going," he says. Like the Urumqi team, Wohlfahrt and his colleagues observed CO2 absorption at night that cannot be attributed to photosynthesis. "I hope we can corroborate the Chinese findings in the Mojave," he says. Arnone and others, however, believe that carbon storage in soil is minimal.

Wohlfahrt suspects biotic crusts play a key role. "People have almost completely neglected what's going on with the crusts," he says. Others are not so sure. "I'm mystified by the Mojave work. There is no way that all the CO2 absorption observed in these studies is due to biological crusts, as there are not enough of them active long enough to account for such a large sink," says Jayne Belnap of the U.S. Geological Survey's Canyonlands Research Station in Moab, Utah. She and her colleagues have studied carbon uptake in the southern Utah desert, which has similar crust species. "We do not see any such results," she says.

Provided the surprising CO2 sink in the deserts is not a mirage, it may yet prove ephemeral. "We don't want to say that these ecosystems will continue to gain carbon at this rate forever," Wohlfahrt says. The unexpected CO2 absorption may be due to a recent uptick in precipitation in many deserts that has fueled a visible surge in vegetation.

If average annual rainfall levels in those deserts were to abate, that could release the stored carbon and lead to a more rapid buildup of atmospheric CO2--and possibly accelerate global warming.

Science. ISSN 0036-8075 (print), 1095-9203 (online)

Monday, August 11, 2008

Eating Kangaroo to Combat Climate Change

This small item out of Australia revisits one of my hobby horses: that sound animal husbandry needs to be applied throughout the world and the oceans in order to optimize the ecological footprint. This does not happen naturally without human input. What happens naturally is a predator-prey boom and bust cycle that usually ends with major damage to the sustaining environment.

It then starts all over again from a seriously weakened base. This is presently being demonstrated by the global collapse of fisheries due to unmanaged human predation. Both the folly and the cure are obvious, but the predators never unite to preserve their futures until extinction is faced, and even then they have the option of merely exiting the business and abandoning the wreckage.

We have already visited the clear benefits of a restoration of bison to both its historical range and also to European and Asian ranges in which their counterparts were hunted to extinction. I have also posted on the rising need to commence direct management of the wild deer herds everywhere through ownership and culling. In North America, the herd sizes are becoming visible and thus about to become far too large to be left alone. And in spite of the occasional comments by very silly people, we do not want a rebuilding of the wolf and grizzly populations. Primate children are even easier morsels for these predators.

This article shows we have the same style of problem with the kangaroo in Australia. Prolific herbivores create huge populations in difficult environments because they are adapted to it. They have to be managed and certainly systematically harvested. It is not even a particularly difficult problem because the actual weights of most of these animals are within our normal handling range. The only thing extra that we will likely want to do is to find a way to corral them and fatten them for a short time to reduce any toughness and gaminess in the meat. Again this is all within our ability. The fact that we are easily taming the buffalo bodes very well for the future of this endeavor.

We have learned that the beef animal that we rely on has been pushed into many ecological niches to which it is less suited. We have done this with all our domesticates and it is actually unnecessary. It is not too obvious, but the taste differences between different meat varieties are not all that significant and will even diminish as husbandry takes charge and eliminates the causes of gaminess and other off flavors. We all forget that we are terribly spoiled today when it comes to the meats we eat. Does anyone recall the flavor of old hen and mutton?

North America can possibly produce venison at the same magnitude as it today produces beef without much ecological overlap. The same may partially hold true for kangaroo meat and is certainly true for a huge number of herbivores worldwide. We just have to get into the business of individually owning the animals and then managing their productivity.

As is so well highlighted in this article, the environmental dividend is always positive because we are managing for optimization rather than either replacement or elimination.

Eating kangaroos to combat climate change?

Fri, 08/08/2008 - 1:04pm


If going green isn't cool anymore in today's economic climate, this recent batch of news isn't going to help. According to a recent study published in the journal Conservation Letters, farming and eating kangaroos instead of cattle and sheep would make a dent in Australia's greenhouse gas emissions.

Unlike sheep and cattle, kangaroos emit little methane, which accounts for 11 percent of Australia's greenhouse gas emissions. The study suggests that increasing the kangaroo population to 175 million while simultaneously decreasing the number of other livestock would lower emissions by 3 percent over the next 12 years. The plan would have added benefits for soil conservation, drought response, and water quality as a result of reducing the number of hard-hoofed livestock.

Still, there's the small issue of kangaroos being a national icon and all:

"The change will require large cultural and social adjustments and reinvestment. One of the impediments to change is protective legislation and the status of kangaroos as a national icon," [the study] said.

Friday, August 8, 2008

Volcanic Climate Forcing

Anyone who has followed my investigations regarding the climatic temperature ranges experienced during the Holocene or since the end of the Ice Age should be aware of two things. The first is that the case for apparent solar variation as a forcing mechanism is weak. Part of the reason for this is that we have one apparent correlation in the Maunder minimum and no other really convincing data that cannot be attributed to noise.

My major concern though is that a one degree shift in Global temperature is perhaps measuring a tenth of a percent of the actual influx of energy. That means that the temperature should be way more responsive to solar variation if it is truly responsive. Instead we have a sea of energy flowing in that the atmosphere sheds handily to maintain an equilibrium that varies one degree per century over a two degree range. This is pretty amazing. It is a good bet that the attempts to link climate variation to solar variation may be a lot weaker than thought. Never underestimate researchers’ ability to cherry pick data.

The other side of this argument is that the capacity of the earth to offset climate variation is also grossly underestimated. It is as if we have protective barriers that cost more and more effort to overcome. They are not linear at all. Even CO2 runs into a wall of diminishing returns as the percentage rises.

The second very clear fact is that every major temperature drop that we have been able to identify save the Little Ice Age has been associated with known major volcanic activity. This is the one certain way in which to lower global temperatures. In most cases, this effect lasts for a couple of years. Hekla gave us a generation of foul weather in the northern hemisphere after 1159 BCE. Thera was even bigger and caused a general collapse in Mesopotamia centuries earlier.

We have already commented on the loss of global heat as a result of the conversion of the Sahara into desert during the Bronze Age. The wrecking job was completed with the 1159 BCE blast and our northern climate has since varied between the Bronze Age optimum and the various temperature declines induced by volcanism since. We have seen that it takes about two centuries to recover to the optimum from a major low. Obviously restoring the Sahara would likely shorten this recovery time.

That returns us to the Little Ice Age. All the evidence to date argues forcefully that the only engine capable of lowering global temperatures is volcanism. It also argues that the engine for a cold northern climate is northern volcanism. It is obvious that a major injection of dust and sulphur dioxide into the polar air mass would be several times more effective than Mount Pelée on the equator. In fact, a major volcano that performed for a century would be able to keep the Arctic several degrees colder for decades and cause a resultant buildup of polar ice, to say nothing of providing Europe with a much colder climate.

Hekla has already shown us such a result in 1159 BCE, and in a much smaller way in the late eighteenth century.

The only remaining question is: where are the volcanoes? Here we have no problem whatsoever. The volcanic belt in the Aleutians and Kamchatka is home to the scariest set of explosive volcanoes outside of Indonesia. There are forty in Alaska, and they are active as hell. There is one going off every couple of years, and we are likely in a quiet period.

Even more importantly, they are well positioned to inject gas and dust into the polar air mass and surprise! It has been a hundred years since we have had a major eruption up there that blew away twenty cubic kilometers of dust. Perhaps our northern warming trend reflects the moderate level of eruptions over the past one hundred years.

The point that I want to make is that while no Europeans even knew these volcanoes existed, they were quite capable of producing all the climatic effects experienced in Europe while not causing a measurable effect over the rest of the globe. Maybe we even have a Thera out there unrecognized and undated.

Thursday, August 7, 2008

Northwest Passage may Open Again

Last year everyone was caught totally by surprise when the ice really started to disappear. This year, if this story is any indication, they are all watching. Right now it is about fifty-fifty for clear shipping through the Northwest Passage. It is open at the western end as of the first of August, but a wind change can ruin this picture overnight. The rest of the passage should clear by melting in the next two weeks.

More interestingly, a third of the western sea ice is below forty percent coverage and is surely experiencing maximum melting. I think there remains a chance that the final areal extent will even approach last year's low, while the gross ice out there will surely be less than last year's, if we only had a way to measure it.

It is amusing to watch the mathematically impaired experts tiptoe toward admitting that sea ice coverage is now actually crashing. And I note that no one is brave enough yet to say that it will be mostly gone by 2012, which is a mere four years from now. It will actually happen on their watch, and they will have to explain how seventy years became five years, particularly when it is not terribly warmer by any apparent measure.

The Northwest Passage itself is still very vulnerable to being shut down by a simple shift in wind bringing a nearby floe into Lancaster Sound. Remember the early enthusiasm for open water at the pole itself? That is why I am saying fifty-fifty for clear sailing.

However, as this trend continues, it will become possible to run shipping through for a two week period at the least every year. Maintaining an icebreaker on standby would be enough to provide a measure of safety and facilitation if winds become contrary.


Frozen Northwest Passage expected to open up

TU THANH HA

From Wednesday's Globe and Mail

Even though this summer's ice melt hasn't approached last year's record conditions, the once-frozen Northwest Passage through Canada's Arctic is expected to open again soon, for only the second time in recorded history.

Already, a shallower, more southern route has freed up, according to high-resolution sea ice charts extracted from satellite microwave imagery by German researchers at the University of Bremen's Institute of Environmental Physics.

The more traditional Northwest Passage route, further north, is still clogged but it could be ice-free later this month, said Mark Serreze, a senior researcher at the University of Colorado's National Snow and Ice Data Center.

"Our view is that it may well open in the next few weeks," he said in an interview yesterday.
Ice remains from west of Cornwallis Island in Nunavut to east of Banks Island in the Northwest Territories, he said. "There is only a fairly small plug in there right now and it's showing signs it's melting away."

Beyond questions of trade, ship navigation and Arctic sovereignty raised by the repeated unlocking of the fabled waterway, Dr. Serreze said, the phenomenon is further proof that humankind might witness an ice-free Arctic Ocean within decades, with resulting unpredictable weather patterns.

"It really is just one more indication how quickly we're losing the sea ice cover. ... The sea ice cover is in a downward spiral, it's essentially in a death spiral right now."

The polar ice cap is now so thin that, even though this summer has been somewhat cooler than the previous one, large portions will disappear. "It really almost doesn't matter any more. We know we'll get a big loss of ice this year simply because we have so much thin ice," Dr. Serreze said.

Scientists are still unclear how the disappearance of the Arctic ice will influence weather elsewhere in the world. Some studies show that the western part of North America will suffer extended drought. Others suggest there will be changes to storm tracks and precipitation patterns over Western Europe.

"It's the sort of thing that we're just starting to get a handle on. It's a new area of research because we weren't thinking we would lose sea ice this quickly. Compared to what our climate models said, we're 20, 30 years ahead of schedule in terms of ice loss. This kind of caught us by surprise and the researchers are just starting to catch up."