Friday, August 29, 2008

Bill Quigley's Post on Katrina

This sad posting is circulating on the internet and is self-explanatory. It paints an ongoing picture of a classic failure of political leadership, one that has now unfolded for three full years and has yet to be arrested and corrected by the federal government.

Without question Katrina overwhelmed and initially swamped the available standby resources. And the errors that made New Orleans so vulnerable were known and largely ignored. None of that matters.

When a major disaster strikes, it is the responsibility of the federal government to get on a war footing immediately and move heaven and earth to resolve the disaster quickly. It is the insurance of last resort. And resolving the disaster means immediate action.

The simple matter of clearing condemned buildings should have been an immediate response. In fact, major controlled burns should have been conducted to reduce the volume of debris to be removed and to partially sterilize the soils. It is not pleasant, but it is feasible while the city is still fully abandoned and before any significant population returns. The alternative was to convert the bulk of the city into an open, uninhabitable garbage dump, perpetuating an unimaginable environmental disaster.

This was also the time to recognize that the city needed to abandon all land below sea level for residential occupation and to throw it open to commercial usage or parkland. In the case of New Orleans, that left little of the city to work with but still salvaged the old city.

Residential districts could then be established miles away on high ground and linked by rapid transit to the old city, as has been done by a growing number of cities that have recognized the virtue of such multiple city centers. Transferring ownership rights from the city to such new centers is easy and final.

This all took leadership. There was none, and there was certainly no sign of a contingency plan for this completely predictable event. It is obvious that a proper evacuation would not even have been possible had the decision been made in time.

What gives me pause is that I see little evidence of preparedness anywhere for such a disaster. Imagine a ten-foot tsunami breaking on the waterfront of Los Angeles. Or a magnitude eight earthquake in the Mississippi Valley to match the New Madrid quakes of 1811-1812. Or Mount Rainier or Mount Baker erupting and putting several cubic miles of ash into the air. How about a ten-foot tsunami rolling up the Hudson? The point is that these are all rather unlikely but entirely possible. There are surely many more possibilities for which we have absolutely no evidence, let alone a warning.

It took the War of 1812 for the USA to discover the value of professional soldiers and to quit appointing political friends to handle life-and-death situations. We need to broaden the mandate of the military to include direct management of a disaster zone. They alone have immediate call on the needed resources and the needed culture of planning and preparation. It is foolish duplication to create an alternative organization. After all, war is a managed disaster, preferably inflicted on your enemy.

Katrina Pain Index: New Orleans Three Years Later
by: Bill Quigley, t r u t h o u t Perspective
Truthout Original - Tuesday 26 August 2008
http://www.truthout.org/article/katrina-pain-index-new-orleans-three-years-later

Katrina hit New Orleans and the Gulf Coast three years ago this week. The president promised to do whatever it took to rebuild. But the nation is trying to fight wars in several countries and is dealing with an economic crisis. The attention of the president wandered away. As a result, this is what New Orleans looks like today.

0. Number of renters in Louisiana who have received financial assistance from the $10 billion federal post-Katrina rebuilding program Road Home Community Development Block Grant - compared to 116,708 homeowners.

0. Number of apartments currently being built to replace the 963 public housing apartments formerly occupied and now demolished at the St. Bernard Housing Development.

0. Amount of data available to evaluate performance of publicly financed, privately run charter schools in New Orleans in 2005-2006 and 2006-2007 school years.

0.8. Percentage of rental homes that were supposed to be repaired and occupied by August 2008 which were actually completed and occupied - a total of 82 finished out of 10,000 projected.

1. Rank of New Orleans among US cities in percentage of housing vacant or ruined.

1. Rank of New Orleans among US cities in murders per capita for 2006 and 2007.

4. Number of the 13 City of New Orleans Planning Districts that are at the same risk of flooding as they were before Katrina.

10. Number of apartments being rehabbed so far to replace the 896 apartments formerly occupied and now demolished at the Lafitte Housing Development.

11. Percent of families who have returned to live in the Lower Ninth Ward.

17. Percentage increase in wages in the hotel and food industry since before Katrina.

20-25. Years that experts estimate it will take to rebuild the City of New Orleans at current pace.

25. Percent fewer hospitals in metro New Orleans than before Katrina.

32. Percent of the city's neighborhoods that have less than half as many households as before Katrina.

36. Percent fewer tons of cargo that move through Port of New Orleans since Katrina.

38. Percent fewer hospital beds in New Orleans since Katrina.

40. Percentage fewer special education students attending publicly funded, privately run charter schools than traditional public schools.

41. Number of publicly funded, privately run public charter schools in New Orleans out of total of 79 public schools in the city.

43. Percentage of child care available in New Orleans compared to before Katrina.

46. Percentage increase in rents in New Orleans since Katrina.

56. Percentage fewer inpatient psychiatric beds compared to before Katrina.

80. Percentage fewer public transportation buses now than pre-Katrina.

81. Percentage of homeowners in New Orleans who received insufficient funds to cover the complete costs to repair their homes.

300. Number of National Guard troops still in City of New Orleans.

1,080. Days National Guard troops have remained in City of New Orleans.

1,250. Number of publicly financed vouchers for children to attend private schools in New Orleans in program's first year.

6,982. Number of families still living in FEMA trailers in metro New Orleans area.

8,000. Fewer publicly assisted rental apartments planned for New Orleans by federal government.

10,000. Houses demolished in New Orleans since Katrina.

12,000. Number of homeless in New Orleans even after camps of people living under the bridges have been resettled - double the pre-Katrina number.

14,000. Number of displaced families in New Orleans area whose hurricane rental assistance expires in March 2009.

32,000. Number of children who have not returned to public school in New Orleans, leaving the public school population less than half what it was pre-Katrina.

39,000. Number of Louisiana homeowners who have applied for federal assistance in repair and rebuilding who still have not received any money.

45,000. Fewer children enrolled in Medicaid public healthcare in New Orleans than pre-Katrina.

46,000. Fewer African-American voters in New Orleans in 2007 gubernatorial election than in 2003 gubernatorial election.

55,000. Fewer houses receiving mail than before Katrina.

62,000. Fewer people in New Orleans enrolled in Medicaid public healthcare than pre-Katrina.

71,657. Vacant, ruined, unoccupied houses in New Orleans today.

124,000. Fewer people working in metropolitan New Orleans than pre-Katrina.

132,000. Fewer people in New Orleans than before Katrina, according to the City of New Orleans current population estimate of 321,000 in New Orleans.

214,000. Fewer people in New Orleans than before Katrina, according to the US Census Bureau current population estimate of 239,000 in New Orleans.

453,726. Population of New Orleans before Katrina.

320 million. Number of trees destroyed in Louisiana and Mississippi by Katrina.

368 million. Dollar losses of five major metro New Orleans hospitals from Katrina through 2007. In 2008, these hospitals expect another $103 million in losses.

1.9 billion. FEMA dollars scheduled to be available to metro New Orleans for Katrina damages that have not yet been delivered.

2.6 billion. FEMA dollars scheduled to be available to State of Louisiana for Katrina damages that have not yet been delivered.

[Bill is a human rights lawyer, a law professor at Loyola University New Orleans and author of the forthcoming book, "STORMS STILL RAGING: Katrina, New Orleans and Social Justice." A version with all sources included is available. Bill's email is
quigley77@gmail.com.

For more information see the Greater New Orleans Community Data Center and Policy Link.
http://www.gnocdc.org/index.html]

Thursday, August 28, 2008

Shifting Economic Winds

We are heading into the last four months of the year, a time which usually sees an increase in investment activity and generally improving economic strength. That means we can expect a rebirth in investor optimism to offset the barrage of negative press we have been subjected to this past year. It is truly necessary this time around.

The subprime disaster has shrunk the capital base of our banking system both here and globally. The huge amount of excess liquidity pumped into the economy has been sponged up through direct losses. We now have a chastened financial sector that has perhaps caught the religion of financial prudence.

That leaves one pending problem. A massive wave of bad paper has worked its way through the system, almost choking it. Various newsletters have reported that a much greater wave of refinance paper will be coming due over the next eighteen months. Accepting this as true, we face the most serious financial crisis since the Great Depression, one that could handily reduce the value of assets to dimes on the dollar and collapse the money supply. If true, the only escape will be my prescription of refinancing by a mark-to-market strategy. And I doubt if anyone is listening.

The real question is: how true is this? I am skeptical. The fact is that Cleveland and those developer paradises in the west were the sweet spot for debt promotion. They loaded up fast and rather quickly ran out of participants. Those chickens came home to roost and have now been handled the hard way.

A lot of asset debt was then put out to folks who had a credible plan for paying it back, as is still happening. That is actually business as usual. The only difficulty is that their assets are now priced at a level that forces them to pay off those loans the old-fashioned way, and most will.

The equity markets have declined by twenty percent over the past year while this scary news was absorbed. The market is now absorbing the impact of expensive energy, which will take most of the next twelve months. This may squeeze another ten percent out of the market.

That will then be followed by an explosive bull market in equities driven by the rapid conversion of industry to low cost alternative energy regimes. The solutions already exist and the tooling up has begun.

For those who like predictions, I expect static power to soon drop below $1.00 per watt, and I expect us to vacate the oil trade, causing that price regime to drop well below $50.00 a barrel. In ten years I expect oil to be under $10.00 a barrel, because we will have quit using it as a fuel, and static power to be at the price equivalent of pennies per watt. That is where we are going.

We just have a little turmoil to go through in lieu of good planning. The conversion is totally feasible now and direct action can make it all very quick. The problem, if any, is the efforts of special interests to push their doubtful solution into the regulatory environment. This is the history of the corn ethanol mess. It never made any sense, but that never stopped anyone.

As I have discovered, wetland cattail starch production can bury us in ethanol at a rate that is likely ten times more productive than any dry-land crop. And we have unlimited wetlands to work with that actually need the attention. Then we can enter the boreal forest if we ever need more land. If ethanol can be produced from corn at $1.00 to $2.00 per liter, it is a cinch to produce it a lot cheaper from cattail starch while producing unlimited supplies of cattle fodder from the non-starch component.
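
To put the claimed productivity gap in rough numbers, here is a back-of-envelope sketch. The corn baseline below is an approximate published figure, and the tenfold multiple is my conjecture from above, not a measured cattail yield:

    # Back-of-envelope ethanol yield comparison (Python).
    # The corn baseline is approximate; the 10x multiple is conjecture, not data.
    corn_ethanol_l_per_ha = 3700          # roughly 400 gallons per acre from corn starch
    cattail_multiple = 10                 # the claimed wetland productivity advantage
    cattail_ethanol_l_per_ha = corn_ethanol_l_per_ha * cattail_multiple
    print(f"corn:    {corn_ethanol_l_per_ha:,} L/ha")
    print(f"cattail: {cattail_ethanol_l_per_ha:,} L/ha (if the tenfold claim holds)")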

And then we have our modified alga that just cranks out sugar and easily convertible cellulose.

The point is that we can already bury the world in ethanol without using any food production land and do it at a low cost with modern farm technology and equipment.

The global conversion to the use of ethanol can soon be in full swing.

Wednesday, August 27, 2008

Arctic Calm

My favorite sea ice maps came up again after a three-week absence. The winds have not done what they did last year, and the sea ice is more broadly distributed this year. Therefore, it looks like any movement in the Northwest Passage is problematic this year. There is plenty of ice at various points that are usually clear by now. It is not tightly packed, but it is certainly a navigation hazard. It will take good luck to move anything large this year, although small vessels may have no problem.

More interestingly, only a negligible amount of the sea ice is showing one hundred percent coverage. That means all that ice has warmed up to ambient ranges for ice, and little of it retains the steel-like cold of winter that large blocks might be expected to hold. I see no evidence that the annual loss of net ice mass has abated at all. The downward spiral is continuing. We unfortunately do not have a reliable proxy for ice mass, but breaking the trend line now will need a very dramatic increase in the thickness of winter ice followed by a cool summer that retains a lot of that ice. In short, we need a volcano to blow up.

In the meantime, I see little evidence that the atmospheric heat discharged between 2005 and 2007 is being replenished very fast, if at all. The sunspot crowd would certainly argue against any replenishment whatsoever. In fact, it was reported a couple of months back that global temperatures dropped three-quarters of a degree. Whatever that meant, it has certainly silenced a lot of the runaway global warming crowd.

What is becoming more evident to me is that the Earth’s heat engine is operating on far longer cycles than anyone gives it credit for. The reason for that conjecture is the measurable lag between the heating spell of the nineties and the heat discharge event of 2005 to 2007. Certainly the long warm spell has been followed by a protracted warming of the Arctic. This could be simply the result of a transfer mechanism that is not overly robust except in extremis.

Without question our atmosphere is very good at correcting local heat disturbances through mechanisms such as hurricanes. We should have anticipated a long period of low hurricane activity after the blowout of 2005. That was the historic record. And it all shows us that the resolution of our climate models is still hopeless.

In any event, we did not have a very warm summer. I wonder if the winter will be as surprising as last year’s.

The Arctic has had almost a hundred years free from major volcanic activity. The last such event was Novarupta/Katmai in Alaska in 1912. It was during this time that the Peace River area of Alberta was opened up to settlers, and I have it on good authority that the winters were unusually long and awful. The point is that there has been no forced cooling of the Arctic since. So perhaps it is not surprising that we now have enough surplus heat in the Arctic to maintain pressure on the sea ice every year.

As my readers are aware, I think that there is ample indication that the primary cooling mechanism for the Arctic outside of the normal seasonal cycle is the occasional injection of volcanic gas and dust directly into the polar zone. We certainly have a convincing culprit standing by.


In the meantime, this news story waxes somewhat more enthusiastic than I can justify from the areal coverage maps of the fifteenth. Here's hoping that a nifty algorithm is at work and this is not simply journalistic license. Otherwise it is a good update on current coverage, and we have plenty of eyeballs this year.

U.S. scientists sound alarm over Arctic ice as Harper poised for visit

Randy Boswell , Canwest News Service

Published: Monday, August 25, 2008

With an election-primed Stephen Harper poised to touch down Tuesday in Inuvik to begin a three-day visit to northern Canada, scientists tracking the ongoing Arctic meltdown are sounding new warnings about the state of the polar environment in an era of evidently rapid climate change.

The latest satellite analysis of this summer's sea-ice retreat, released Monday by the U.S. National Snow and Ice Data Center, showed a decline close to matching last year's record-setting thaw, and experts at the Colorado-based centre noted that key Arctic shipping routes have now opened in both the Canadian Arctic archipelago and in Russia's northern waters.

"Sea ice extent is declining at a fairly brisk and steady pace," the NSIDC said, reporting a total retreat to about 5.5 million square kilometres with up to three weeks of melting left to go.

Last year's retreat reached an all-time low of about 4.3 million square kilometres by mid-September, a melt that has stoked unprecedented international interest in Arctic shipping, tourism and oil and gas development.
"Amundsen's Northwest Passage is now navigable," the centre said, referring to the southerly route near the Canadian mainland first traversed by Norwegian explorer Roald Amundsen in 1906. "The wider, deeper Northwest Passage through Parry Channel may also open in a matter of days. The Northern Sea Route along the Eurasian coast is clear."

That news follows a series of reports in recent days highlighting the impact of rising temperatures across the world's northern latitudes - a newly discovered crack threatening a Greenland glacier; eroding shorelines in communities across the Canadian Arctic; and polar bears swimming in dangerously open waters of the Chukchi Sea north of Alaska, far from the safe harbour of any land or ice floe.

"There were some years when some bears may have had to swim as far as 100 miles," Steven Amstrup, the senior polar bear scientist with the United States Geological Survey in Alaska, told the New York Times this week. "Now the ice is much farther offshore, more consistently and for longer. So the possibility of long distances between land and sea ice is much greater."

Meanwhile, a U.S. study published Sunday in the British journal Nature Geoscience suggests thawing permafrost in polar regions will unlock up to 60 per cent more carbon dioxide than previously believed, potentially amplifying the greenhouse effect already widely blamed for the current Arctic warming.

Tuesday, August 26, 2008

Biochar Review and A.D.Karve Postings

A.D. Karve is an active contributor to the terra preta list and is a botanist by training. His observations and experiments are well worth reviewing. I have extracted a number of his postings on the subject of biochar.

It may be too early to suggest that a consensus currently exists, but it is fair to say that opinion is converging on several key points.

1 Biochar, and by inference terra preta, is typically produced at moderate temperatures (around 350 degrees Celsius, give or take). Production at higher temperatures also works, but with less residual char. It is produced primarily from non-woody plant waste in order to provide a fine carbon powder with maximum yield of the all-critical surface area. Wood charcoal is just as useful after crushing, but it normally has a fuel market and is diverted there.

2 The powdered charcoal acts as a catalytic sponge for free ions in the soil. The word catalytic is a bit loose, since all we expect is that the receptor sites in the charcoal will grab a free ion and hold it until a biological agent removes it. However, it gets the idea across, and I am hardly the first to overuse the word. This mechanism retains nutrients in the working soil and prevents nutrient loss through leaching.

3 The evidence to date suggests that this goes far beyond mere nutrient retention. It appears to facilitate the rapid reconstruction of a high-quality soil base even in wasted lands and hostile soils with little remaining organic content. This was unexpected, but it appears that we are going there. It is now possible to suggest that a rich, fertile soil many inches deep could be constructed, starting in the middle of the desert, in a time span of perhaps twenty years. This is an apparently wild claim, but everything I have seen, combined with the limited knowledge earned to date, supports the conjecture.

4 This actually makes total sense. The retention of nutrients, particularly nitrogen, allows organic material to be broken down with limited loss into the atmosphere as CO2. The soil can then be manufactured swiftly.

5 To date, every problem soil this has been tried on has eventually generated positive results, including land ruined by excess salinity. That is the most important problem where irrigation has wrecked the soils over thousands of years. In fairness, we are still in early days; in fact, the work cited here is as good as it gets to date. However, we are approaching the point where hundreds and then thousands of people will start working with these precepts.

6 The char is easily produced by either an earthen kiln, not unlike that used for indigenous charcoal making with waste wood, or the simple expedient of a sheet-metal drum set on a bed of sticks to provide limited air flow, with a lid to control the fire started on top of the charge. None of this is elegant, but it will produce a satisfactory yield while disposing of all the farm waste at little new cost.

7 It is very easy to wax enthusiastic on this subject when a five-thousand-year field trial conducted by the Indios in Brazil supported a civilization of millions on the worst tropical soils ever. The reason it never found its way into other areas was simply that those areas never produced enough plant waste to make a noticeable difference. Today that is easily solvable. I have posted on corn stover and bagasse as feedstocks, and the wood chipper also produces a viable feedstock for the satisfactory production of biochar. Modern equipment will allow us to use our ingenuity to reduce all agricultural and woodland waste to biochar without excessive expense.

8 It is a reasonable conjecture that the application of powdered charcoal to soils will eliminate the majority of fertilizer wastage now producing oceanic dead zones. It will also quickly reduce the need for fertilizer to vastly lower levels.

9 Vast tracts of well-watered tropical and semi-tropical lands are very suitable for this technology, as are those lands already being exploited for agriculture. Thus, before any effort is expended on more arid lands, we can expect a massive increase in agriculture in these areas. For starters, the multi-year slash-and-burn cycle will disappear forever.

10 I have accepted a long soil gestation cycle as a reasonable assumption. In fact, there is no evidence that such a long cycle is required. The first application of biochar should establish good production, if not immediately, then certainly by the next season as the soil responds. Ten to twenty years of continuous cropping and biochar application should produce a thick, rich soil that then requires no further biochar. Field trials may shorten this process considerably. The remote fields of the Indios were named terra mulata because the charcoal content, while present and still significant, was visibly lower. I do not have a grade yet, but since one initial season of corn culture can produce respectable carbon content (one to two tons per acre), it is very possible that the direct manufacture of a remote field was a one-time effort that paid off for years.

The one point we should recognize is that all other soils will need extensive field testing before the local advisory agencies can get fully behind universal implementation. It is not that we do not already know the answers (we do); it is just that a field test establishes best local practice and flags any noteworthy anomalies. Even after all that is said, every farmer will want to run his own test plot, both to see the results on his own ground and to learn the methodology. The good news is that we are now approaching this threshold of activity.
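
As a rough illustration of the arithmetic involved, here is a minimal sketch. The target carbon stock and the annual addition rate are assumptions chosen only to echo the one-to-two-tons-per-acre and ten-to-twenty-year figures above, not field data:

    # Rough biochar accumulation estimate (Python); all inputs are assumptions.
    ACRES_PER_HECTARE = 2.471

    annual_char_tons_per_acre = 1.5      # mid-range of the one to two tons per acre above
    annual_char_tons_per_ha = annual_char_tons_per_acre * ACRES_PER_HECTARE

    target_carbon_tons_per_ha = 50.0     # assumed terra preta-like carbon stock

    years_needed = target_carbon_tons_per_ha / annual_char_tons_per_ha
    print(f"annual addition: {annual_char_tons_per_ha:.1f} t/ha")
    print(f"years to reach {target_carbon_tons_per_ha:.0f} t/ha: {years_needed:.0f}")

On these assumed numbers the buildup takes a bit over a dozen years, which is at least consistent with the ten-to-twenty-year window suggested in point 10.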

Dear List,
a former colleague of mine conducted a study of the slash and burn agriculture in the Western Ghats mountain range in India. The farmers generally cultivate a plot for about 5 years. Every year the yield is lower than in the previous year. The plot is abandoned after 5 years becasue the yield is down to unacceptably low level. Weeds, wild herbs and grasses take over the ababdoned land. Some woody plants also establish themselves in this plot of land. After a fallow period of about 10 years, the vegetation on the land is again destroyed by slashing and burning and the land is again brought under cultivation. My colleague conducted soil analysis before and after every crop, and he found that the soil analysis did not change over the five year period of cultivation, and yet the yield dropped every year. He explained this phenomenon by the fact that it was not the soil fertility that diminished over the years, but that the soil was washed away by heavy rains and also because the land sloped. Thus, at the end of the fifth year, hardly a couple of inches of soil was left in the field.
Yours


Dear List,
soil micro-organisms need the same elements as green plants. In soils that are phosphate deficient, the phosphate solubilizing bacteria have a distinct advantage over others because they have the ability to get phosphorus out of phosphatic compounds that are normally insoluble and therefore not available to organisms in the soil. Whenever one applies an organic nutrient compound to the soil, the soil micro-organisms multiply by feeding on the organic nutrient, which primarily provides them with carbon. The mineral ions and molecules are obtained by them from the soil solution. But if the soil solution is deficient in phosphorus, application of an organic nutrient to the soil would automatically lead to a selective increase in the population of phosphate solubilizing bacteria, because only the PSB have the ability to multiply in such soils. Two of my students are currently conducting experiments to test if this hypothesis is correct.
Yours

Dear Mr. Astrupgaard,
when I used the word carbon source, I meant food containing carbon. Please note that nobody can use charcoal as food. The green plants use carbon dioxide as their carbon source. The non-photosynthetic organisms use digestible organic substances like carbohydrates, organic acids etc. as their carbon source. So rotting vegetation and compost also form a part of their food. The nitrogen fixing bacteria need energy to fix nitrogen, to conduct their own metabolism and also to multiply. This energy comes from the carbon in the food that they consume. The carbon gets converted into carbon dioxide in this process. That is why they all, including all animals, need a carbon source in the form of an easily digestible organic compound. As long as they live, the N-fixing organisms do not give the nitrogen fixed by them to any other organism, but use it in their own metabolism and reproduction. The molecules and ions (nitrogen, phosphorus, potash, iron, boron, etc.) in their cells become available to other organisms only when they die. Animals generally need ready made proteins, fats, vitamins etc. for survival. The micro-organisms generally need only a good source of carbon like sugar or a polysaccharide. They can synthesize their own proteins, vitamins etc. using inorganic salts containing the essential minerals.
Yours

Dear Sean,
the azotobacter are free-living bacteria, and as long as they have a carbon source available to them, they go on multiplying and utilizing the fixed nitrogen for their own metabolism and reproduction. They die when the carbohydrates and other sources of carbon available to them are exhausted. In fact that is the basis of my application of 25 kg sugar per ha to the soil once every three months. The sugar increases the number of micro-organisms in the soil, and when the sugar is exhausted, they die. The nutrients released from the dead cells become available to the green plants. The nitrogen fixing microbes do not provide nitrogen to others as long as they are living. The case of rhizobium is altogether different. They are held captive in the root nodules and work like a part of the plant itself. They are fed by the green plants and the green plants extract amino acids from them. In the case of cyanobacteria, the nitrogenous compounds are stored in special perennating organs called heterocysts. Even when the cyanobacteria die, the heterocysts survive in the dry soil as propagules, from which the next generation of cyanobacteria arises the next year. I am not saying that phytohormones can substitute for nitrogenous fertilizers. I was only trying to explain the 10 to 15% higher yield that is recorded whenever the cyanobacteria are applied to rice fields, and I also gave my interpretation of the ecological significance of why the cyanobacteria promote the growth of rice. There are enough reports in the literature of 10 to 15% yield increases caused by substances like triacontanol (a C30 alcohol), organophosphatic insecticides, etc., which have a growth promoting effect. Even urea sprayed as a 2% solution gives a similar effect. It is not caused by the nitrogen in the urea but is due to the growth stimulating effect of urea.
Yours


Dear List,
there is a school of thought that believes that the free living nitrogen fixing organisms do not give any nitrogen to other organisms. Fixing atmospheric nitrogen requires a huge expenditure of energy (e.g. look at the Haber-Bosch process). When an organism spends that much energy on fixing atmospheric nitrogen, why should it give it to other organisms? In India, cyanobacteria are recommended to be applied to rice fields. There are enough data to show that this treatment causes about a 10 to 15% yield increase in rice. Assuming that the cyanobacteria do not give nitrogen to rice, but that they promote growth of rice through plant growth promoting substances, I conducted experiments in which I germinated seeds of barley in a culture filtrate of cyanobacteria and demonstrated that such a filtrate did actually have plant growth promoting properties. The plant growth promoting property of cyanobacteria was demonstrated by us even in the case of kidney beans and wheat. Most of the growth promoting substances work at concentrations of 5 to 10 p.p.m. Therefore, plant growth promoting substances are used in quantities that can be measured in grams per hectare, whereas nitrogen, being a fertilizer chemical, is required in kilogram quantities. So, if the soil micro-organisms want the green plants to grow more vigorously, it makes sense for them to exude phytohormones into their environment rather than lose to the environment the nitrogen fixed by them so laboriously. It costs them much less energy to produce phytohormones. The question now arises as to why the microbes should promote the growth of green plants. As far as the cyanobacteria in rice paddies are concerned, if the rice plants developed a thick canopy, the growth of green algae would be restricted, because the photosynthetically active radiation would be absorbed by the leaves of rice. Thus, by promoting the growth of rice, the cyanobacteria eliminate the competition from green algae. In the case of other plants, the bacteria may be getting more sugar or more root exudates if the green plants grew more vigorously.
Yours

Dear Martin,
I really do not know, how much char is to be applied per hectar. But I can tell you how to make char out of your burnable organic waste. The simplest device is a top-lit updraft kiln. It consists of a vertical cylinder, having relatively small holes near its base for primary air. You fill the cylindrical body of the kiln with the material to be charred and then light it from the top. Once the fire gets going, you place a lid on the cylinder. There is a chimney built into the lid. The lid does not sit flush on the kiln, but there is a gap between the lid and the kiln. The draft created by the chimney sucks secondary air into the chimney, where it gets mixed with the pyrolysis gas to burn it. The biomass burns downwards, leaving a layer of charcoal on top. As the primary air comes upwards, it meets the burning front which traverses downwards. The burning biomass utilises all the oxygen in the primary air, so that the air going up through the layer of char has only carbon dioxide, carbon monoxide, nitrogen and the pyrolysis gas left in it. As there is no oxygen left in the updraft air, it cannot burn the char that has formed above the burning biomass.The pyrolysis gas and carbon monoxide burn in the chimney, because of the secondary air that is sucked in through the gap between the chimney and the kiln. You have to find out by trial and error, how long it takes to char the material loaded in the kiln. After that much time is over, you remove the lid, and extinguish the fire by sprinkling water over the burning material. This particular device is portable and manually operated. There are larger charring kilns, based on the oven and retort process. Prof. Yuri Yudkevich, a Russian scientist, has made them for charring useless material generated by the timber industry in Russia. We are already using both types of kilns under field conditions in India for charring agricultural waste as also urban waste. We have a video CD that describes the kilns and you can fabricate them by watching the video CD. I have not used Prof. Antal's kiln and have absolutely no idea how it operates. Our web site
www.arti-india. org would show you how to get our CDs by paying us through Pay Pal.

Molasses does have some minerals in it, but the idea that I am propagating is that one provides the soil microbes only with a carbon source and that they take up the rest of the minerals from the soil solution. I had mentioned in a previous communication that the water of guttation of many plants contains sugar (e.g. sorghum) or organic acids (e.g. chickpea). Water of guttation is the water oozing out from the leaves during the night. I had already mentioned that the amount of minerals dissolved in the soil solution has a constant value depending upon the solubility of the concerned mineral. Therefore, when the micro-organisms remove the mineral molecules and ions from the soil solution, they are replaced by more of the molecules and ions getting dissolved in the soil solution in order to maintain the equilibrium. When the carbon source has been exhausted, the micro-organisms die, releasing the minerals sequestered in their cells. The green plants and the microbes need the same mineral elements. Therefore when the micro-organisms die, the minerals released from their cells become available to the plants. This symbiosis between the soil microbes and green plants evolved when the green plants came out of the sea and occupied land. Aphids seem to be a part of this symbiosis, because they suck out sugar from the green plants and exude it out of their bodies. The water of guttation washes off this sugar and drops it on the ground. The fact that plants drop their leaves and flower petals on the ground can also be looked upon as a part of this symbiotic relationship, because these organs feed the soil micro-organisms. It is a known fact that most of the useful minerals are retracted by the plants from the leaves before they are shed. I am trying to mimic the behaviour of the plants in order to develop techniques of growing crops without using chemical fertilizers.
Yours

Dear Greg,
Most of the reactions on externally applied organic matter take place in the top layer of the soil, and they are therefore aerobic. Alcohol is formed under anaerobic conditions. Sugar is directly ingested as food by most micro-organisms and is used by them as a carbon source. In the case of plants, and also most micro-organisms in the soil, almost 95 per cent of the weight is constituted by carbon, hydrogen, oxygen and nitrogen, all of which are obtained from air. Only 5% comes from minerals in the soil. These minerals are absorbed from the soil solution. Whenever an organic substance with high nutritive value is applied to the soil, it causes the number of micro-organisms in the soil to increase. When the carbon source has been exhausted, the microbes die, releasing the sequestered mineral ions and molecules back into the soil solution, making them available to the plants. This is of course just a hypothesis on which I am working. Literally thousands of farmers are today applying unrefined raw sugar to their fields at the rate of 10 kg per acre or 25 kg per ha, once every 3 months. They are getting good yields from their crops. I am only trying to find out the scientific reason behind this phenomenon.
Yours

Dear Mr. Haard,
ploughing in green plants is called green manuring. It provides soil micro-organisms with high-calorie nutrition. In the normal green manuring practice, the green crop is grown on the entire field and ploughed in after about 45 days. Because of the availability of a carbon source in such abundance, the microbes multiply very fast and take up and bind all the minerals in the soil solution in their own cells. Then you wait for at least a month before planting your crop, because otherwise your crop would not get any mineral nutrients from the soil. After a month, a part of the microbes are dead and have released the mineral molecules back into the soil. You therefore lose about 45 days in growing the green cover and another month in allowing it to rot in the soil. Green manuring is therefore not popular with farmers, because they lose a complete season. Under rainfed cropping in India, it means losing the entire year. That is why I recommend applying just 125 kg green leaves per ha along with the seed. While the seedlings are growing, the microbes multiply their numbers by eating the leaves, but because the leaves have been applied in just a small quantity, the nutrition is exhausted very fast by the soil microbes and they start to die, releasing the nutrients sequestered in their cells. By this time, the crop plants have developed their own root system and they are ready to absorb these nutrients. This is just a hypothesis. All that I have observed is that I get high yield whenever I apply about 125 kg green leaves per ha to my crop, right at the beginning of the season. I am trying to find out how and why this practice works so beneficially.
Yours


Dear Mr. Haard,
this refers to your request about my reaction to the observations of Dr. Makoto Ogawa. I am a botanist who used to work as the Research Director of a seed company in India. I worked mainly in the fields of plant physiology and plant breeding. I am now 72 and I head a voluntary organization founded by me for rural development through application of science and technology. I was made aware of the topic of Terra Preta by Ron Larson and Tom Miles and so I became a member of the Terra Preta discussion group. I developed interest in this topic because I had developed some theories of my own about plant nutrition, and agriculture without the use of chemical fertilizers. In the course of my research I found that by feeding the soil bacteria with high calorie, non-composted organic matter such as sugar, starch or cellulose, one not only increased the number of the soil microbes but also the yield of the crops. Just to test my hunch, I applied just 125 kg green leaves to a hectare of land owned by me, and got higher yield from this land than I used to get by applying chemical fertilizers. Now I have started a series of pot experiments in which the pots containing 1 kg soil each received 500 mg sugar, no sugar and a dose of chemical fertilizers. The pots are kept in a randomized complete block design, so that the data can be statistically analysed. After I started talking to my colleagues about charcoal being added to soil, some of them applied char made from sugarcane leaves to plants raised in pots and they reported that the plants in pots with char grew better than the ones not receiving this treatment. These experiments were not conducted very scientifically and they should be treated as anecdotal evidence.

Realising, that I did not know anything about soil science, I recently purchased a book on this subject and have started reading it. Although this book makes reference neither to Terra Preta nor to plant nutrition, the knowledge about soil minerals, their genesis and their metamorphosis under different climatic conditions is helping me greatly in understanding many aspects of plant nutrition. I feel that this knowledge would eventually be useful to me also in understanding Terra Preta. When I gain an insight into this topic, I shall certainly share it with this group.
Yours

Monday, August 25, 2008

IPCC Analysis Mathematically Flawed

You know, folks, this article gives me and every other commentator a problem. I always found the IPCC position seriously suspect. This work shows that the IPCC analysis is not just suspect but surely manipulated by chaps lacking talent yet determined to generate a result that conforms to their thesis. I am not going to call it fraud, but Enron has nothing on this nonsense.

The climate got warmer up to ten or so years ago. That may or may not be associated with more solar output. It has been cooling off slightly since. I surmise that the accumulated heat does not dissipate as quickly as we have assumed. I think it is first collected in the oceans and then slowly transferred into the atmosphere for transport into the north for eventual final disposition. It is a slow and imperceptible process. Recall that the North Pole has a conveyor that moves heat from the tropics in the form of the Gulf Stream, and the atmosphere is inclined the same way.

Let me add another core assumption to this mix. The atmosphere is almost static with respect to heat retention in the tropics. It is already maxed out and cannot pick up the slack generated by solar variation. Any surplus heat must be absorbed by water or reflected into space. The water, or the increased humidity, must then be transported out of the tropics. This takes time. This is why heat is still washing into the Arctic from the previous decade and the Arctic still has not cooled down very much.

In fact, it is likely that the incoming solar energy is still much higher than the historic average, although the recent abrupt drop, if sustained, could change all this. Don't you wish any of this were settled?
This article has shown the IPCC model to be rubbish.

As my readers know, I have been a strong advocate of the removal of CO2 from our waste streams. In fact my mandate for this Blog was to promote ways and means and we have gone a long way down that road successfully. The global warming linkage conjecture was controversial and is coming a cropper. It was never relevant to the core problem of managing our environment.

Disproof of Global Warming Hype Published

R. F. Gay / F. William Engdahl

A mathematical proof that there is no "climate crisis" has been published in the debate on global warming in Physics and Society, a scientific publication of the 46,000-strong American Physical Society.

Christopher Monckton, who once advised Margaret Thatcher, demonstrates via 30 equations that computer models used by the UN’s climate panel (IPCC) were pre-programmed with overstated values for the three variables whose product is “climate sensitivity” (temperature increase in response to greenhouse-gas increase), resulting in a 500-2000% overstatement of CO2’s effect on temperature in the IPCC’s latest climate assessment report, published in 2007.
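
The overstatement claim is multiplicative: if sensitivity is the product of three factors, the total error is the product of the per-factor errors. A toy illustration in code (the inflation factors below are invented for arithmetic only and are not values from the paper or the IPCC):

    # If climate sensitivity = f1 * f2 * f3, per-factor overstatements compound.
    # The factors below are placeholders, not actual published values.
    overstatement_per_factor = [1.7, 1.8, 2.0]   # each factor inflated 70-100%

    total = 1.0
    for factor in overstatement_per_factor:
        total *= factor
    print(f"combined overstatement: {total:.2f}x, i.e. about {(total - 1) * 100:.0f}%")
    # 1.7 * 1.8 * 2.0 = 6.12x, roughly the low end of the quoted 500-2000% range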

The article, entitled Climate Sensitivity Reconsidered, demonstrates that later this century a doubling of the concentration of CO2 compared with pre-industrial levels will increase global mean surface temperature not by the 6 °F predicted by the IPCC but, harmlessly, by little more than 1 °F. Lord Monckton concludes –

“… Perhaps real-world climate sensitivity is very much below the IPCC’s estimates. Perhaps, therefore, there is no ‘climate crisis’ at all. … The correct policy approach to a non-problem is to have the courage to do nothing.”

Larry Gould, Professor of Physics at the University of Hartford and Chair (2004) of the New England Section of the American Physical Society (APS), has been studying climate-change science for four years.
He said:

“I was impressed by an hour-long academic lecture which criticized claims about ‘global warming’ and explained the implications of the physics of radiative transfer for climate change. I was pleased that the audience responded to the informative presentation with a prolonged, standing ovation. That is what happened when, at the invitation of the President of our University, Christopher Monckton lectured here in Hartford this spring. I am delighted that Physics and Society, an APS journal, has published his detailed paper refining and reporting his important and revealing results.

“To me the value of this paper lies in its dispassionate but ruthlessly clear exposition – or, rather, exposé – of the IPCC’s method of evaluating climate sensitivity. The detailed arguments in this paper, and, indeed, in a large number of other scientific papers, point up extensive errors, including numerous projection errors of climate models, as well as misleading statements by the IPCC. Consequently, there are no rational grounds for believing either the IPCC or any other claims of dangerous anthropogenic ‘global warming’.”

Lord Monckton’s paper reveals that –
► The IPCC’s 2007 climate summary overstated CO2’s impact on temperature by 500-2000%;
► CO2 enrichment will add little more than 1 °F (0.6 °C) to global mean surface temperature by 2100;
► Not one of the three key variables whose product is climate sensitivity can be measured directly;
► The IPCC’s values for these key variables are taken from only four published papers, not 2,500;
► The IPCC’s values for each of the three variables, and hence for climate sensitivity, are overstated;
► “Global warming” halted ten years ago, and surface temperature has been falling for seven years;
► Not one of the computer models relied upon by the IPCC predicted so long and rapid a cooling;
► The IPCC inserted a table into the scientists’ draft, overstating the effect of ice-melt by 1000%;
► It was proved 50 years ago that predicting climate more than two weeks ahead is impossible;
► Mars, Jupiter, Neptune’s largest moon, and Pluto warmed at the same time as Earth warmed;

► In the past 70 years the Sun was more active than at almost any other time in the past 11,400 years.

Friday, August 22, 2008

Wind Power For the American Yeoman

I grabbed this off Jerry Pournelle's site, and the link provided gives us a good update on what is occurring at the small-operator level. It is all quite encouraging. This will never be a major component of the energy equation, but it can be very important on almost any farm operation.

The farm industry has a long history of using windmills to operate water wells in particular. When I grew up, they were ubiquitous.

Technology has now advanced to make a number of strategies profitable. A simple, efficient offset to grid power is a good start. It does not have to be running perfectly; it just needs to take over the farm load as often as possible. Because it is linked to the farm load directly, the incentive is in place to manage it closely.

The advent of grid acceptance of surplus power could eliminate the need for battery support, although it is simply not that easy. The industry is going there though. Ideally the farm will have a matching load like water pumping that makes it all easy.

It is clear that many designs are being experimented with and that modern fabrication methods are being deployed. Small vanes built with foam-core technology are many times stronger and lighter than those made by historic methods. Combined with electronic control systems, it becomes possible to have a very efficient system that is robust and easily repaired.

This technology is not pushing the limits of material strength as the major systems are.

I am sure that there is now a market for fabricated wind vanes by themselves. The hardware and generator could also be almost off the shelf. All of this can be quite cheap and Mark is quite right. The Chinese have a huge internal market to feed and cannot be far behind on this.

This requires a low capital cost basis for it to be broadly adopted and likely the Chinese can meet that. After all, the water pumps were displaced by rural electrification.

At least this time around we are producing electric power. That also opens the market for many other applications. Fifty windmills mounted on the roof of a factory would be a good idea if the cost is within a range similar to grid power. The space is clearly available, and the load is also available to maximize efficiency.

The fact that so many amateurs are now doing it tells me that it is only a matter of time until such operations spring up almost everywhere that a wind can be found.

The farm that I grew up on was in midwestern Ontario. As the full heat of summer hit, we got a steady, persistent wind blowing from the west that was likely clipping along at fifteen miles an hour. It was the only reason it was possible to do field work in the afternoon at the time of maximum heat. Using that particular energy source to carry the local residential air conditioning load would be a very good idea: a case of the best source of temporary energy surplus being paired with temporary maximum demand.

This must be largely true in the whole Midwest and just about any other area of continental weather.

This suggests that towns need to take advantage of local diurnal wind conditions in order to offset diurnal peak loads associated with air conditioning in particular. A city that practically goes off grid when the temperature is warmest is never going to have to apologize for their air conditioning load and the place will become very attractive to builders and new owners. It is also something that a town can implement.


Wind Power For The American Yeoman

Dear Jerry,

Here's a starting point for those too impatient to wait for Sam's Club to stock Chinese-made import wind turbines:

There's a wealth of information about it and examples of small'uns and middlin' size ones up to 5 kW, all built with widely available materials and construction equipment.

Want something more aerodynamically efficient than hand carved wood blades, and more durable than injection molded plastic? Foam core composites aren't just for Stealth bombers any more. Burt Rutan pioneered the use of do it yourself composites in the early 1970s with his homebuilt experimental aircraft designs. Time marched on, techniques improved and costs dropped. Scaled Composites itself graduated to using multi-axis CNC machines to cut their foam.

The determined American Yeoman can follow Burt's lead in both places with suitable low cost equipment:

Best Wishes,

Mark


Wind power can be an excellent complement to a solar power system. Here in Colorado, when the sun isn't shining, the wind is usually blowing. Wind power is especially helpful here in the winter to capture both the ferocious and gentle mountain winds during the times of least sunlight and highest power use. In most locations (including here) wind is not suitable as the ONLY source of power--it simply fills in the gaps left by solar power quite nicely.

OPTIONS FOR GETTING STARTED IN WIND POWER

Build your own!

Building a wind generator from scratch is not THAT difficult of a project. You will need a shop with basic power and hand tools, and some degree of dedication. Large wind generators of 2000 Watts and up are a major project needing very strong construction, but smaller ones in the 700-1000 Watt, 8-11 foot range can be built fairly easily! In fact, we highly recommend that you tackle a smaller wind turbine before even thinking about building a large one. You'll need to be able to cut and weld steel, and a metal lathe can be handy (though you could hire a machine shop that turns brake rotors to do some small steps for you).

In most locations, GENTLE winds (5-15 mph) are the most common, and strong winds are much more rare. As you'll see by examining our latest machines, our philosophy about designing wind turbines is to make large, sturdy machines that produce good power in low wind speeds, and are able to survive high wind events while still producing maximum power. The power available in the wind goes up by a factor of 8 as the windspeed doubles.

Other critical factors are rotor size and tower height. The power a wind turbine can harvest goes up by at least a factor of 4 as you double the rotor size. And making a tower higher gets you above turbulence for better performance and substantially increased power output. Putting a wind turbine on a short tower is like mounting solar panels in the shade!
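
Both scaling rules quoted above fall out of the standard wind power relation P = 1/2 * rho * A * v^3 * Cp. Here is a minimal sketch; the air density, rotor sizes and efficiency are illustrative assumptions, not numbers for any particular machine:

    import math

    def wind_power_watts(rotor_diameter_m, wind_speed_ms, cp=0.35, air_density=1.225):
        """Power captured by a rotor: P = 0.5 * rho * A * v^3 * Cp."""
        swept_area = math.pi * (rotor_diameter_m / 2) ** 2
        return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

    base = wind_power_watts(3.0, 5.0)           # 3 m rotor in a 5 m/s breeze
    print(wind_power_watts(3.0, 10.0) / base)   # double the wind speed -> 8x power
    print(wind_power_watts(6.0, 5.0) / base)    # double the rotor diameter -> 4x power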

Before you jump into building your own wind turbine or buying a commercial one, do your homework!
There are certain things that work and certain things that don't, and you can save hours and dollars by learning from other people's successes and mistakes. Some recommended reading:

DanF's series on Small Wind Turbine Basics, published in the
Energy Self Sufficiency Newsletter:

Part 1 -- How wind turbines work, power available in the wind, swept area, average wind speed and what it really means. The basic essentials!

Part 2 -- High wind survival mechanisms, wind turbine types, drag vs. lift machines, HAWTs vs. VAWTs, tip speed ratio, blade design, and lots of cool pictures and diagrams.

Part 3 -- Choosing a site, good and bad site examples, anemometers, tower types, lightning protection, power regulation, birds and bats.

Our article
The Bottom Line About Wind Turbines is an essential introduction to wind power. It covers the basics of how wind comes to us, how much power different size wind turbines can make in different wind regimes, and has a very handy section on detecting wind turbine scams.

Otherpower.com's
Wind Turbine User's Manual should also be considered essential reading, especially BEFORE you take the plunge and buy or build a wind turbine. It will fill you in on exactly what you are getting yourself into with wind power, including towers, installation, controllers, and troubleshooting. It can be downloaded for free from that page, and is available in printed form through our Online Store.

Wind power information from homebrew wind power guru
Hugh Piggott's website. We've learned a BUNCH from Hugh.

Hugh Piggott's book
Windpower Workshop is an indispensable reference for anyone that's thinking about building a wind turbine. His Axial Flux Alternator Windmill Plans are very detailed and highly recommended.

Homebrew wind power information from Ed Lenz's
Windstuffnow.com, a highly informative website.

Read the
Renewable energy FAQs on the Otherpower discussion board, and search the Otherpower.com discussion board. It's highly active and populated by windpower experts and hobbyists worldwide. If you still can't find an answer, by all means please join the board and ask your question there!
Join the
AWEA mailing list for more discussion with wind power experts worldwide.
Explore other wind power websites from worldwide on our
Links page.

Thursday, August 21, 2008

Collision Earth

I recently came across this interesting article, which tackles a number of anomalies in the climatic record. We have already associated the 1159 BCE event with the Hekla volcanic eruption and the 12900 BCE impact event with Northern Canada. This article isolates the following dates from cultural referents:

7640 BCE, 3195 BCE, 2354 BCE, 1628 BCE, 1159 BCE, 207 BCE, 44 BCE, and 540 CE.

We have Thera to apply to the 1628 BCE event. As a warning, the apparent exactness of this date and the later dates is controversial at the least, but they are typically associated with carbon dates and an independent Chinese record for 1618 BCE. I have associated 1159 BCE with the inundation of Atlantis and the resulting collapse of their seaborne mercantile civilization. We surmise that Thera drove the collapse of Minoan civilization and that this event was the foundation of the tale of the Exodus. The tale itself could well already have been legend at the time of the actual historic biblical events, which could clarify a two-hundred-year disparity. I have found that these ancient records never fail to include a good story, even if the actual linkage is a stretch. And why not? This was their only way to transmit cultural history.

You will have followed my recent pursuit of the possibility that the Little Ice Age was triggered by a major volcanic event in Alaska. It certainly looks promising and explains the recurrence of cooling in the Arctic over the past two to three millennia without having to call upon other, even less provable, sources such as solar variation. Alaska is also one of the nastiest places on earth for this type of volcanic activity, with no lack of candidates.

To this we should now add cosmic events. We should also get much more serious about their potential for damage. Science has understated and actually misled us all on the potential for damage from this source. Perhaps we need to respect our ignorance instead.

Firstly, a sea-based impact has never been studied, and we do not understand the possibilities. All the energy will surely be absorbed by the water, just as all the energy of stony meteorites up to a fairly large size is typically absorbed in the atmosphere. So although a fair range of small to mid-sized objects pack huge amounts of energy, those two blankets will discharge the energy fairly well.

I add to this the 12900 BCE impact on the Canadian ice sheet, which hurled ice into the Carolinas and likely the Atlantic. It also delivered entrained material, recently identified, into the Ohio Valley. The bulk of the energy was still absorbed by the crust and surely left a crater, now flooded with water. Happy hunting.

To affect climatic temperature, the event has to hit land and send a vast amount of dust into the atmosphere, or itself be a massive source of dust; Tunguska shows us how this could be. That means that a huge scar must exist that would be discernible even today if the event took place in the last 10,000 years.

Recall that the big volcanic events threw twenty to fifty cubic miles of rock into the atmosphere. An asteroid needs to be that large, or at least a reasonable fraction thereof, to have the same impact. Again, the atmosphere will break it up on the way in. The fact is that we lack observational evidence to make proper predictions except by analogy.
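
To put rough numbers on that comparison, here is a minimal Python sketch of my own devising. It simply converts an asteroid's diameter into cubic miles of material for comparison with the twenty to fifty cubic miles of volcanic ejecta; a real impact also excavates crustal rock, so treat it as an order-of-magnitude check only.

    import math

    CUBIC_MILE_IN_KM3 = 4.168  # one cubic mile is roughly 4.168 cubic kilometres

    def asteroid_volume_cubic_miles(diameter_km):
        # Volume of a spherical body, converted from km^3 to cubic miles.
        radius = diameter_km / 2
        return (4 / 3) * math.pi * radius ** 3 / CUBIC_MILE_IN_KM3

    # Compare against the 20-50 cubic miles thrown out by the big eruptions.
    # A 2 km body is only about one cubic mile of material, which is why
    # the rock excavated on impact must do most of the work.
    for d_km in (1, 2, 4, 6):
        print(f"{d_km} km body ~ {asteroid_volume_cubic_miles(d_km):.1f} cubic miles")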

What this article does bring home is that the energy is out there and has certainly been felt. Whether it applies to this sequence of climatic anomalies is only prospective when we have the alternatives of the Indonesian and Alaskan volcanoes. And even for the protracted Little Ice Age, I am more inclined to chase volcanoes than a major cosmic event, whose effect should have dissipated very quickly, if only because of the lack of chemical aerosols.

I have every reason to think that as our dating of the eruptive periods of all the world’s volcanoes improves so will the correlation with global climate. We only need to remember that an Arctic chill affects the northern portion of the northern hemisphere, while a much larger chill at the equator hits us all.


Collision Earth:

The Threat From Outer Space (2004)

BY JASON JEFFREY

Over a century ago Ignatius Donnelly summed up our precarious existence: “We are but vitalized specks filled with a fraction of God’s delegated intelligence, crawling over an egg-shell filled with fire, whirling madly through infinite space, a target for the bombs of the universe.”

By bombs Donnelly meant the untold number of asteroids and comets that fill the heavens around us which on perhaps not a few occasions have smashed into Earth itself, and may do so again.

Through revolutionary new techniques in observation, detection and photography, modern astronomers and astrophysicists have now identified two new classes of celestial objects which could pose a real danger to our planet within the foreseeable future: NEAs (Near Earth Asteroids) and ECCs (Earth-Crossing Comets).

On September 29, asteroid “4179 Toutatis” passed within 1.6 million kilometres of Earth. Its approach was the closest in this century of any known asteroid the size of Toutatis, which measured around 4.6 kilometres in length. If it had struck the Earth, we could have faced what scientists have dubbed “a mass extinction event.”

Scientists believe the asteroid poses no risk at least through 2562, when Toutatis will pass within 400,000 kilometres of Earth – but astronomers admit there are forces in the solar system that can alter an asteroid’s orbit and put it on a collision course with Earth.

Earlier this year, on March 31, an asteroid skimmed past the Earth at a distance of just 6500 kilometres above the ground. Object “2004 FU162”, which spans 5-10 metres across, would have burned up as a fireball ending with a smaller explosion, had it ventured into the Earth’s atmosphere. The problem was astronomers did not discover it until after its passing. Scientists have since calculated the asteroid’s orbit was shifted by a whopping 20 degrees because of the Earth’s gravity.

The previous record for the closest asteroid approach to Earth was set on 18 March by an object called “2004 FH” which missed the Earth by about 40,000 kilometres. That was a much larger object, around 30 metres in diameter, big enough to produce a one-megaton explosion in the atmosphere.

NASA calculates objects in the 100-200 metre range hit Earth about once every 700-1,000 years. Such an object did hit the Earth in 1908, over Tunguska in Siberia.

In the ECC (Earth-Crossing Comet) category, a very serious future candidate for an Earth grazing is comet Finlay, due to pass on October 27, 2060 – perhaps as close as 150,000 kilometres.

In 1993, astrophysicist Brian Marsden announced that comet Swift-Tuttle could possibly strike Earth in the 22nd century. It is scheduled to pass the Sun incoming from deep space on July 11, 2126, and on August 14 will come very close to our world. Should the slightest irregularity occur in its long periodic path during the intervening one and a half centuries, it could hit the planet dead-centre, and with a force equivalent to 100 million megatons of TNT.

Over the past few years we have often heard about the discovery of new asteroids and comets. This is the result of NASA’s 25-year survey of the sky to find objects wider than a kilometre that could have a devastating impact if they collided with Earth.

Fortunately, nothing of a dangerous size has been spotted heading our way for at least a century – or so they tell us. According to a US government advisor, secrecy would be the best option if scientists discovered a giant asteroid was on course to collide with Earth.

Speaking to a meeting of the American Association for the Advancement of Science, Geoffrey Sommer, of the Rand Corporation, said:

“If an extinction-type impact is inevitable, then ignorance for the populace is bliss. As a matter of common sense, if you can’t intercept it and you can’t move people out of the way in time, there’s nothing you can do in terms of reducing the costs of the potential impact.”
Deep Impact

For one week in July 1994, astronomers watched a planetary body under attack, when two dozen pieces of the disintegrated comet Shoemaker-Levy 9 plunged into Jupiter with explosive results, equivalent to 40 million megatons of TNT going off in a chain reaction. As several scientists warned, this was Earth’s wake-up call for a similar event to happen to us.

Recent computer simulations reveal that if a comet or asteroid hit the Earth on one side, the seismic waves generated would be transmitted through the planetary interior. Focused by the Earth’s curvature, the waves would converge at the location directly opposite the impact point, and the high stress energy released there could disrupt the surface, causing a tremendous outpouring of volcanic activity.

The air blast resulting from an impact would lead to large-scale and worldwide pressure shock waves oscillating the entire atmosphere and ionosphere, creating winds greater than the most powerful hurricanes ever recorded.

Fragments of the asteroid and earth hurled into space by the impact would rain down all over the planet, setting forest fires. The resulting smoke would further darken the atmosphere, plunging the world into permanent night. The temperature would plummet.

Calculating the amount of dust, water vapour and smoke injected into the sky from a kilometre wide object hitting the Earth, scientists estimate a drop of world temperatures by about 15 degrees Celsius lasting for about 15 days.

By far the worst-case scenario is an asteroid or comet striking one of the world’s deep oceans. Some researchers worry the sudden displacement of such large volumes of water across thousands of kilometres of ocean would affect the axis spin and polar stability of the Earth, like adding an off-balancing weight to a spinning gyroscope. Even more disastrous would be a celestial object furrowed into the ocean at a more oblique angle. In this case the energy of the mass dissipates by pushing a titanic amount of water over a large surface area, creating a tsunami wave so high and large in size as to defy imagination.

As a tsunami wave approaches a coast with a shallower continental shelf, its speed slows down, but its height is increased by a factor of 10 to 40. Thus a deep ocean wave of 100 metres might break ashore with a height of 1,000 to 4,000 metres.

A major earthquake off the coast of Chile in May 1960 generated waves in the deep water of the Pacific travelling a full 150 degrees around the globe, more than 16,000 kilometres, coming ashore in Japan at a height of up to 4.5 metres and killing over 200 people. Earlier, in 1946, a similar event took place when a tsunami originating in the Aleutians killed a handful of people along the nearby Alaskan shores, yet also went on to take the lives of 150 people in Hawaii, 8,000 kilometres away. Computer projections indicate that a 9-metre asteroid impacting the ocean between Australia and New Zealand would produce tsunamis breaking on the southern Japanese coastline at 38 to 50 metres high.
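
Both the travel distances and the amplification factor above can be checked with simple formulas. The minimal sketch below assumes a 4,000-metre average ocean depth, my number rather than the article's, and uses the standard shallow-water wave speed c = sqrt(g x depth) together with the quoted 10-to-40-fold shoaling amplification.

    import math

    G = 9.81  # m/s^2

    def tsunami_speed_kmh(ocean_depth_m):
        # Shallow-water wave theory: c = sqrt(g * d). A tsunami's wavelength
        # is so long that even the deep ocean counts as "shallow" for it.
        return math.sqrt(G * ocean_depth_m) * 3.6

    def shore_height_m(deep_water_height_m, amplification):
        # Shoaling amplification of 10x to 40x, per the figures above.
        return deep_water_height_m * amplification

    speed = tsunami_speed_kmh(4000)  # assumed average Pacific depth
    print(f"open-ocean speed: ~{speed:.0f} km/h")
    print(f"16,000 km (Chile to Japan): ~{16000 / speed:.0f} hours")
    print(f"100 m deep-water wave ashore: {shore_height_m(100, 10):.0f} to {shore_height_m(100, 40):.0f} m")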

That large asteroids have hit the Pacific before is evident from geological remains on the islands within its perimeter. Deposits of unconsolidated corals have been found almost a thousand feet above the present coasts on Lanai, Hawaii, Oahu, Molokai and Maui, indicating they were washed up to that height by a tremendous wave of water in the distant past. Ordinary tsunamis generated by earthquakes along the Ring of Fire do not produce waves of that magnitude – only a major displacement of ocean waters from an impact event would fit the findings.

The Atlantic Ocean is also in danger. Estimates are that an impact anywhere in the Atlantic by an asteroid 365 metres wide would devastate coasts on either side with tsunami waves 60 metres high. Major cities either on the coast or with river, bay or harbor accesses, such as New York, Boston, Washington, London, Amsterdam and Copenhagen, are in danger of being completely obliterated.

A computer simulation of an asteroid impact tsunami developed by scientists at the University of California shows waves as high as 120 metres sweeping onto the Atlantic Coast of the United States. The researchers based their simulation on a real asteroid known to be on course for a close encounter with Earth eight centuries from now.

March 16, 2880, is the day the asteroid known as “1950 DA”, a huge rock 1.2 kilometres in diameter, is due to swing so close to Earth it could slam into the Atlantic Ocean at 60,000 kilometres per hour.

“From a geologic perspective, events like this have happened many times in the past. Asteroids the size of 1950 DA have probably struck the Earth about 600 times since the age of the dinosaurs,” warns researcher Steven Ward.

Impact Events Linked to Evolution of Life on Earth

It is known the Earth was pummelled by asteroids, comets and other massive heavenly bodies in the early days of its formation – over 3 billion years ago. But, until recently, most scientists thought this was an event limited to Earth’s distant past. They also believed the ancient celestial pounding eventually gave way to billions of years of gradual, non-catastrophic evolution.

In the 1950s, astronomer Gene Shoemaker sent shock waves through the scientific community by suggesting various craters on our planet (and the Moon) were formed by asteroids or comets, rather than volcanic eruptions, which was what most scientists believed at the time.

There doesn’t appear to be one square kilometre of the lunar surface that is not pockmarked with impact craters. While some craters are undoubtedly very ancient, they also contain within their rims a myriad of newer craters from more recent impacts.

Craters do not remain visible on Earth because they are swiftly eroded by rain, snow and wind, whereas on the Moon they remain for eons until a new projectile strikes the scar zone.

Using the Moon’s potholed surface as a reference point, Shoemaker tried to determine how often celestial objects smashed into the Moon and, by extension, struck the Earth. With the help of modern satellite and aerial surveillance, Shoemaker and other scientists soon identified over 200 impact sites around the planet.

In 1980 scientists Luis and Walter Alvarez claimed they had found evidence of a huge impact event 65 million years ago. This age corresponded with the demise of the dinosaurs at the end of the Cretaceous Period. The evidence included a worldwide layer of clay with high levels of the rare element iridium, usually the signature of an impact.

In 1990, the buried remains of a 180-kilometre-diameter crater were discovered near the town of Chicxulub on the Yucatan Peninsula in Mexico. A crater this size would have been blasted out by a 16-kilometre-wide comet or asteroid colliding with the Earth at some 80,000 kph.

Some scientists now believe this crater is the long-sought “smoking gun” responsible for the demise of the dinosaurs and more than 70 percent of Earth’s living species 65 million years ago.

In June 2003 Science published a report about a team of scientists who believe a massive object from space smashed into what is now the Moroccan desert 380 million years ago. Dates for the impact coincide with the “Kacak/otomari” extinction, when up to 40% of all animals living in the sea perished. Fossils found in rock layers just above the impact layer suggest many new species appeared after the disaster.

And in November 2003, another team of scientists reported on evidence for a massive asteroid colliding with the Earth 251 million years ago which may have killed 90 per cent of all life.

The study, based on meteorite fragments found in Antarctica, suggests the Permian-Triassic event, perhaps the greatest extinction in the planet’s history, may have been triggered by a mountain-sized space rock that smashed into a southern land mass.

“It appears to us that the two largest mass extinctions in Earth history... were both caused by catastrophic collisions” with asteroids, the researchers say in their study in Science.

The evidence indicates asteroid impacts are the key factors in the development of life on this planet. In wiping out a large proportion of life on the planet periodically, the asteroids have played a more important role in evolutionary development than previously thought.

More pertinent is the question of cosmic impacts on the rise and fall of mankind’s ancient civilisations. Is there any evidence backing up the stories of ancient apocalypse and hell fire from the sky that are preserved in mythology and some of the world’s religions?

Collapse of Civilisation

...and the seven judges of hell ... raised their torches, lighting the land with their livid flame. A stupor of despair went up to heaven when the god of the storm turned daylight into darkness, when he smashed the land like a cup.

– An account of the Deluge from the Epic of Gilgamesh, circa 2200 BCE

Biblical stories, apocalyptic visions, ancient art and scientific data all seem to intersect at around 2350 BCE, when one or more catastrophic events wiped out several advanced societies in Europe, Asia and Africa.

Archaeological findings show that in the space of a few centuries, many sophisticated civilisations disappeared. The Old Kingdom in Egypt fell into ruin. The Akkadian culture of Iraq, thought to be the world’s first empire, collapsed.

Around the same time apocalyptic writings appeared. The Epic of Gilgamesh describes the fire, brimstone and flood of possibly real, not mythical, events. Omens predicting the Akkadian collapse preserve a record that “many stars were falling from the sky.” The “Curse of Akkad,” dated to about 2200 BCE, speaks of “flaming potsherds raining from the sky.”

In 1650, the Irish Archbishop James Ussher mapped out the chronology of the Bible – a feat that included stringing together all the “begats” to count generations – and put Noah’s great flood at 2349 BCE.

All coincidence? A number of scientists don’t think so.

Mounting hard evidence collected from tree rings, soil layers and even dust that long ago settled to the ocean floor indicates there were widespread environmental nightmares in the Near East during this period: Abrupt cooling of the climate, sudden floods and surges from the seas, huge earthquakes.

In 1999 geologist Dr. Sharad Master spotted a 3-kilometre-wide crater in southern Iraq after studying satellite images. Scientists now believe this circular depression bears all the hallmarks of an impact crater, one that caused devastating fires and flooding. They are now attempting to date the time of the impact, with some of the main researchers estimating an age of around 6,000 years – placing it in the close vicinity of the sudden decline in Middle East civilisation around 2300 BCE.

Mike Baillie, professor of palaeoecology at Queens University in Belfast and author of Exodus to Arthur: Catastrophic Encounters with Comets, figures it would have taken just a few bad years to destroy societies.

Even a single comet impact large enough to have created the Iraqi crater, “would have caused a mini nuclear winter with failed harvests and famine, bringing down any agriculture based populations which can survive only as long as their stored food reserves,” Baillie says. “So any environmental downturn lasting longer than about three years tends to bring down civilisations.”

Professor Mike Baillie is an authority on dendrochronology, the science of studying tree growth rings. His decades-long collaborative effort with many scientists has developed a worldwide record of climate-modulated annual tree growth as recorded in tree growth rings. That effort has produced a reliable timeline from the present back to several thousand years BCE.

Occasionally environmental conditions are so extreme that trees all over the world are affected. Certain of these patterns imply weather conditions leading to local or worldwide catastrophes, including crop failures, famine and flooding.

As described in Exodus to Arthur, the dates linked to extreme events are: 3195 BCE, 2354 BCE, 1628 BCE, 1159 BCE, 207 BCE, 44 BCE, and 540 CE.

The significance of the date 2354 BCE has been noted. The other date to stand out is 540 CE, with the extreme weather events actually starting in 536 CE.

Until recently, historians had little notion dramatic climatic events had occurred. The accounts left by contemporary observers were poorly understood and overshadowed by later historical events. In fact, those later events, it turns out, may have been caused, directly or indirectly, by the weather of the time.

The Praetorian Prefect Magnus Aurelius Cassiodorus Senator, who lived between 490 and 585 CE, wrote a letter documenting the conditions. “All of us are observing, as it were, a blue coloured sun; we marvel at bodies which cast no mid-day shadow, and at that strength of intensest heat reaching extreme and dull tepidity... So we have had a winter without storms, spring without mildness, summer without heat... The seasons have changed by failing to change; and what used to be achieved by mingled rains cannot be gained from dryness only.”

In the wake of this inexplicable darkness, crops failed and famine struck. Then a new disease swept across the entire continent of Eurasia: bubonic plague. It ravaged Europe over the course of the next century, reducing the population of the Roman empire by a third, killing four-fifths of the citizens of Constantinople, reaching as far east as China and as far northwest as Great Britain.

Other reports about the weather conditions from Byzantium and Constantinople record the same environmental phenomena, such as dry fog, darkness, cold, drought, and famine.

In 1984, Mike Baillie proposed that the climatic event of 536 CE (and by extension, all six of the others) could have been caused by “an asteroid, a comet, cometary fragment(s), or cosmic swarms.”

Perhaps one of the most fascinating and well researched theories is offered by authors Christopher Knight and Robert Lomas in their book Uriel's Machine: The Prehistoric Technology That Survived The Flood.

They present recent geological evidence showing that in 7640 BCE Earth was hit by seven comet fragments causing gigantic tidal waves. These findings are derived from the work of Austrian geologists Alexander and Edith Tollmann of Vienna University's Geological Institute.

By combining evidence from various disciplines (including the global distribution of tektites and a study of worldwide myths and legends), the Tollmanns propose that a comet approached the Earth from the south-east and fragmented into seven pieces which fell subsequently into the oceans causing mass destruction on all continents. One piece is believed to have landed in the North Atlantic, while another is considered to have fallen into “the Central Atlantic south of the Azores” creating a direct hit on “Atlantis”.

According to the authors of Uriel's Machine, there is a Masonic tradition that the biblical character Enoch constructed a machine to predict comets on an Earth collision course. They believe the ancient Book of Enoch describes how this machine should be constructed, and how this secret technology has been preserved since ancient times in Freemasonic lore.

Conclusion

The fall of ancient civilisations may now come to be viewed not as a failure of social engineering or political might but rather the product of climate change and, possibly, heavenly happenstance.

The Bible and other ancient texts have kept alive the memory of ancient catastrophes whose scientific analysis and understanding might now be vital for the protection of our own civilisations from future impacts.

These concerns are probably why the European Space Agency’s chief scientist wants a “Noah’s Ark” on the Moon, in case life on Earth is wiped out by an asteroid or nuclear holocaust.

“If there were a catastrophic collision on Earth or a nuclear war, you could place some samples of Earth’s biosphere, including humans, [on the Moon],” said Dr. Bernard Foing. “You could repopulate the Earth afterwards, like a Noah’s Ark.”

At this point, only two things are certain: The Earth could be hit at any moment by a roving asteroid or comet, and we will be hit, again, unless something is done to prevent it.

Jason Jeffrey holds an interest in a wide range of subjects including geopolitics, the "New World Order", Big Brother, suppressed technology, psychic/spiritual development, ancient civilisations and esotericism.
He can be contacted at:

jasonjeffrey33@yahoo.com.au

© Copyright New Dawn Magazine, www.newdawnmagazine.com. Permission granted to freely distribute this article for non-commercial purposes if unedited and copied in full, including this notice.

Wednesday, August 20, 2008

Field Biochar Manufacture

This posting by A. Karve at the terra preta/biochar forum brings fresh practical insight to the task of producing biochar in the field. As noted, I have posted on an earthen kiln protocol that can be used by farmers without access to metal. This posting allows me to refine my thinking for the modern subsistence farmer and even well beyond that level.

Start with no more than a steel drum whose top and bottom have been removed. Place a layer of inch-thick branches down as a packed floor for the kiln, and stand the drum on end on this floor. Air will be able to pass under the edge of the drum through the packed branches at a moderate speed.

Pack the drum with chipped wood or chopped biomass. Do not create tight layering that could cut off air flow entirely, yet get the packing level up to fifty percent. Once the drum is filled, place a charge of dry wood on top of this load to act as a starter. At this stage you may also place six inches of soil around the edge to reduce the open center diameter to a quarter of the total. Fire it and let most of the wood layer burn so that the fire is fully engaged.

At this point smother the top of the fire by throwing six inches of dirt over the center. Alternatively, place a metal lid fitted with a six-inch chimney pipe and a damper for fine control.

What we have done here is very familiar to those of us who experienced the methods of the nineteenth century: we have banked the fire. The surface cannot flare up, losing both heat and fuel, and only a limited amount of fuel is burning at any one time, mostly in the form of the volatiles in the early going.

Over several hours, the burn front will migrate down to the ground and burn out the floor of this kiln, allowing the edge of the kiln to settle onto the earth and eventually cutting off the air flow. In practice, I consider this to be more of a fail-safe to prevent a total burn-out of the fuel charge, as eventually happens in a banked stove.

I like the dirt layer idea, with or without a metal lid. It acts like a filter for the escaping gases and likely maximizes their combustion. In addition, it will end up being blended with the end product to produce a dry, easy-to-handle mixture, provided the fire is not quenched with water, though quenching is likely necessary.

I suspect that it will simply be better practice to water the fire before it fully engages the floor. Once that point is reached we are very close to running out of volatiles and the charcoal then becomes the primary fuel.

Thus, where naturally packable materials such as corn stover and bagasse are not available to build an earthen kiln, we have a simple metal kiln design that is easily expandable and able to work on the modern farm. There a square-set metal box can be set up in the same manner and material loaded in and packed. This is all rough and ready and certainly will not achieve the optimal thirty percent yield, but it will produce twenty percent or better quite handily.
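
To give a feel for what one drum of this design yields, here is a minimal sketch. The 200-litre drum and fifty percent packing match the description above; the wood chip bulk density is an assumption of mine for illustration.

    DRUM_VOLUME_M3 = 0.208     # a standard 200-litre (55-gallon) drum
    PACKING_FRACTION = 0.5     # fifty percent packing, as described above
    CHIP_DENSITY_KG_M3 = 250   # assumed bulk density of dry wood chips

    def char_per_load_kg(yield_fraction):
        # Biomass in the drum, times the charcoal yield by dry mass.
        biomass_kg = DRUM_VOLUME_M3 * PACKING_FRACTION * CHIP_DENSITY_KG_M3
        return biomass_kg * yield_fraction

    # Roughly 26 kg of biomass per load under these assumptions:
    print(f"at 20% yield: {char_per_load_kg(0.20):.1f} kg of char")
    print(f"at 30% yield: {char_per_load_kg(0.30):.1f} kg of char")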

The key idea is to have the bottom edge set on a layer of branches or any other material able to sustain roughly fifty percent air flow in through the bottom. The top layer of dirt might be dispensed with if there is nothing to hold it; the Indians had palm fronds to work with. A metal sheet with a chimney closing it all off will do the rest.

My most important point is that this is easy to assemble in some form or other anywhere, regardless of the local economy. Old rusty galvanized sheet steel is very suitable. You may even get away with using rope on the outside to hold it together. After all, the heat will be mostly in the core, at around 300 to 400 degrees. Hot spots on the wall would need an unusual source of air, and that really means a fully engaged fire; if that is happening, you have plenty of other problems and the kiln is not working at all.

Dear Martin,

I really do not know how much char is to be applied per hectare. But I can tell you how to make char out of your burnable organic waste. The simplest device is a top-lit updraft kiln. It consists of a vertical cylinder, having relatively small holes near its base for primary air. You fill the cylindrical body of the kiln with the material to be charred and then light it from the top. Once the fire gets going, you place a lid on the cylinder. There is a chimney built into the lid. The lid does not sit flush on the kiln, but there is a gap between the lid and the kiln. The draft created by the chimney sucks secondary air into the chimney, where it gets mixed with the pyrolysis gas to burn it.

The biomass burns downwards, leaving a layer of charcoal on top. As the primary air comes upwards, it meets the burning front which traverses downwards. The burning biomass utilises all the oxygen in the primary air, so that the air going up through the layer of char has only carbon dioxide, carbon monoxide, nitrogen and the pyrolysis gas left in it. As there is no oxygen left in the updraft air, it cannot burn the char that has formed above the burning biomass. The pyrolysis gas and carbon monoxide burn in the chimney, because of the secondary air that is sucked in through the gap between the chimney and the kiln.

You have to find out by trial and error how long it takes to char the material loaded in the kiln. After that much time is over, you remove the lid and extinguish the fire by sprinkling water over the burning material. This particular device is portable and manually operated. There are larger charring kilns, based on the oven and retort process. Prof. Yuri Yudkevich, a Russian scientist, has made them for charring useless material generated by the timber industry in Russia. We are already using both types of kilns under field conditions in India for charring agricultural waste as well as urban waste. We have a video CD that describes the kilns and you can fabricate them by watching the video CD. I have not used Prof. Antal's kiln and have absolutely no idea how it operates. Our web site
www.arti-india.org would show you how to get our CDs by paying us through PayPal.

Tuesday, August 19, 2008

Camelina

This recent item has introduced me to camelina, a flax-like crop that has been around for at least five thousand years, though not as a viable source for human consumption. For that it may well have to go through the same product conversion that turned rapeseed oil into canola. It does shape up to be a very promising agricultural plant with a few modern tweaks.

This is a crop that prospers on ten inches of rain and little more. There are plenty of prospective lands that can work this crop very successfully and little else. It will be a great transition crop for lands where folks are being squeezed out of farming by Mother Nature. Think in particular of the buffalo commons of the Great Plains, which should never have been broken and are now reaching the end of their aquifers.

At this point the oil is useful only as a fuel source. At least we now have a market for it in the form it is in. I suspect that it will take little to convert it into high quality edible oil that will be easily marketed. It has not been done yet and will take years. Canola had the same problem.

More promising is the immediate meal market for the remaining product. This means that there are minimal waste materials, although little is said of the straw except to note that, from the pictures, it is clearly minimal. I do not have a per-acre yield figure as yet either, but assume it approaches that of flax and rape.

In practice, the amounts produced will help the fuel situation, but they are unlikely to be more than a fraction of the supply system. I would be happy if it just displaced agricultural usage of fossil fuels as a good first step.

Without question, we are transitioning over to sustainable transportation fuels. It is also obvious that brewing up sugars using algae is the easiest and cheapest way to get there. Biodiesel promises to be an important fuel as well, because it can be produced cheaply as a byproduct of algae production and integrated directly into the transportation system. Camelina looks like a good feedstock for this industry now and ultimately as a food product at a later stage. It is certainly a better choice than canola and soy and grows on lands that are poor choices for either.

I am also assuming that haulage using pure electrical systems will remain, for a long time yet, practical only for short-haul applications. But do not count on it!


Camelina looks to be best crop for biodiesel production

By DALE HILDEBRANT, Farm & Ranch Guide

Friday, August 15, 2008 11:05 AM CDT

GRAND FORKS, N.D. - When considering biodiesel production, camelina appears to be the Cinderella crop, according to information presented at the recent Bio-Mass '08 Technical Workshop in Grand Forks.

In recent months biodiesel production has decreased in the U.S. because of high prices for soybean and canola oil, the two main oils currently used in biodiesel processing, since the oil from both of these seeds is in high demand in the food industry.

At the present time, about 90 percent of the oil used in biodiesel is soy oil and the other 10 percent is canola oil. But the biodiesel production capacity of the U.S., which is 2.5 billion gallons per year, isn't being fully utilized with production last year of only 500 million gallons.

However, Duane Johnson, the vice president for agricultural development at Great Plains Oil and Exploration in Big Fork, Mont., thinks camelina, which is sometimes called “false flax,” could return profit to the biodiesel industry and thus spur further growth.

For example, at the current market prices, soybean oil feedstock costs $5.25 a gallon and the feedstock price is about 80 percent of the final product cost, making the final cost of a gallon of biodiesel approximately $6.60, which is a figure well above the current price of diesel fuel.
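
That $6.60 figure is simple arithmetic: if feedstock is 80 percent of the final cost, divide the oil price by 0.8. A minimal sketch of the calculation follows; the $2.50 comparison feedstock is a hypothetical of mine, not a figure from the article.

    def biodiesel_cost_per_gallon(feedstock_cost, feedstock_share=0.80):
        # If feedstock is 80% of the final cost, total = feedstock / 0.80.
        return feedstock_cost / feedstock_share

    # Soy oil at $5.25/gal gives ~$6.56, the article's "approximately $6.60".
    print(f"soy feedstock: ${biodiesel_cost_per_gallon(5.25):.2f}/gal")

    # A hypothetical cheaper industrial oil, to show the leverage of
    # feedstock price (this number is illustrative, not from the article):
    print(f"$2.50 feedstock: ${biodiesel_cost_per_gallon(2.50):.2f}/gal")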

Johnson also noted that converting food-grade vegetable oil such as soybean or canola oil is adding to the backlash over food versus fuel, a debate that is currently taking place worldwide. Since camelina oil is an industrial oil, not a food-grade oil, using it as a feedstock for biodiesel would lessen that argument.

Using figures prepared by various agencies back in 2003, Johnson provided a comparison of oil crops grown in North Dakota for biodiesel. Even though the growing costs per acre and the cost per gallon of the oil are considerably higher now, the data provides a good comparison between the various oil crops with regard to biodiesel production.

Raising camelina could also be an economic plus for farmers in the more arid areas of the northern Great Plains.

Alice Pilgeram has been working on camelina research for the past several years at Montana State University and claims the crop can provide growers high value with relatively low input costs. Production acreage in Montana has increased from just 450 acres in 2004 to between 20,000 and 40,000 acres planted this year.

Several other states, including North Dakota, are currently raising camelina and looking at expanding acreage in the future.

When it comes to fuel production, biodiesel is the most efficient of the alternative fuels, according to Johnson. In terms of gasoline and diesel fuel production, for each calorie expended in the extraction and manufacture of these products we recover 0.8 calories of energy. Ethanol production returns 1.1 calories for each calorie expended, but for biodiesel, 3.5 to 5.2 calories of energy are recovered for each calorie expended.
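
Laid side by side as energy-return ratios, Johnson's figures look like this. A minimal sketch using only the numbers quoted above:

    # Calories of energy recovered per calorie expended, per Johnson.
    energy_return = {
        "gasoline/diesel": (0.8, 0.8),
        "ethanol": (1.1, 1.1),
        "biodiesel": (3.5, 5.2),
    }

    for fuel, (low, high) in energy_return.items():
        label = f"{low}" if low == high else f"{low} to {high}"
        print(f"{fuel}: {label} calories returned per calorie expended")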

And camelina is a superior oil when it comes to biodiesel. The oil contains a high amount of linolenic fatty acid, which usually leads to a short oil life before it turns rancid. However, camelina oil also contains a high level of vitamin E that serves as an anti-oxidant and extends the oil's shelf life.

The high linolenic content is important to biodiesel production, since it gives the product a pour point of around -15 degrees Fahrenheit, considerably lower than the other oils offer, which is important for users in this region of the country.

Pilgeram also noted that at least five biodiesel companies in Montana will be utilizing camelina oil in 2008.

Agronomically, camelina is an ideal crop for this region, since it produces well with about 10 inches of rain and requires a low rate of fertilization and pesticide use, and does well on marginal land, Johnson explained.

“We can get maximum yield with up to 10 inches of rainfall,” he said. “After that we start having disease problems.”

Johnson claims the biodiesel industry needs to look to a new generation of feedstocks if it is going to be successful.

“The future of biodiesel is going to be what happens in the next generation,” he said. “Right now all of the oilseeds that we use to make biodiesel, whether it be soybeans, sunflower, canola or mustard, are competing against a world food market. We need to start looking at non-food crops, or the next generation of crops, for biodiesel production.”

These next generation crops should be lower in cost, because they aren't competing for food use. These sources include using algae, where the technology is five to 10 years away, the tropical plant jatropha, which is three to seven years away, and camelina, where the technology is here now.

Camelina has one more advantage - a meal by-product that can be successfully used in beef, dairy, poultry and fish rations. Cold-pressed camelina meal contains a residual oil of 8 to 11 percent and this oil contains 34 to 38 percent omega 3 fatty acids and very high levels of vitamin E.

The meal is also an excellent source of protein and is very low in ash content.

Beef feeding trials currently underway at Montana State University show that feedlot daily rates of gain were higher with a ration containing 3.5 percent camelina meal than with rations containing 3.5 or 7.0 percent soybean meal.

It may have been dubbed “false flax” in the past, but many feel there is nothing false about the future of camelina as one of the new sources for biodiesel production.


From Wikipedia we have:

Camelina sativa, usually known in English as gold-of-pleasure or false flax, also occasionally wild flax, linseed dodder, camelina, German sesame, and Siberian oilseed, is a flowering plant in the family Brassicaceae, which includes mustard, cabbage, rapeseed, broccoli, cauliflower, kale and Brussels sprouts. It is native to Northern Europe and to Central Asian areas, but has been introduced to North America, possibly as a weed in flax.

It has been traditionally cultivated as an oilseed crop to produce vegetable oil and animal feed. There is ample archeological evidence to show it has been grown in Europe for at least 3,000 years. The earliest find sites include the Neolithic levels at Auvernier, Switzerland (dated to the second millennium BC), the Chalcolithic level at Pefkakia in Greece (dated to the third millennium BC), and Sucidava-Celei, Romania (circa 2200 BC).[1] During the Bronze Age and Iron Age it was an important agricultural crop in northern Greece beyond the current range of the olive.[2][3] It apparently continued to be grown at the time of the Roman Empire, although its Greek and Latin names are not known.[4] According to Zohary and Hopf, until the 1940s C. sativa was an important oil crop in eastern and central Europe, and it has continued to be cultivated in a few parts of Europe for its seed,[1] which was used, for example, in oil lamps (until the modern harnessing of natural gas, propane and electricity) and as an edible oil.

The crop is now being researched due to its exceptionally high levels (up to 45%) of omega-3 fatty acids, which is uncommon in vegetable sources. Over 50% of the fatty acids in cold-pressed camelina oil are polyunsaturated. The major components are alpha-linolenic acid - C18:3 (omega-3 fatty acid, approx 35-45%) and linoleic acid - C18:2 (omega-6 fatty acid, approx 15-20%). The oil is also very rich in natural antioxidants, such as tocopherols, making this highly stable oil very resistant to oxidation and rancidity. It has 1-3% erucic acid. The vitamin E content of camelina oil is approximately 110 mg/100 g. It is well suited for use as a cooking oil. It has an almond-like flavor and aroma. It may become more commonly known and become an important food oil for the future.

Because of its apparent health benefits and its technical stability, gold-of-pleasure and camelina oil are being added to the growing list of foods considered functional foods. Gold-of-pleasure is also of interest for its very low requirements for tillage and weed control. This could potentially allow vegetable oil to be produced more cheaply than from traditional oil crops, which would be particularly attractive to biodiesel producers looking for a feedstock cheap enough to allow them to compete with petroleum diesel and gasoline. Great Plains - The Camelina Company began research efforts with camelina over 10 years ago. They are currently contracting with growers throughout the U.S. and Canada to grow camelina for biodiesel production. A company in Seattle, Targeted Growth, is also developing camelina.[5]

The subspecies C. sativa subsp. linicola is considered a weed in flax fields. In fact, attempts to separate its seed from flax seeds with a winnowing machine over the years have selected for seeds which are similar in size to flax seeds, an example of Vavilovian mimicry.