Wednesday, August 6, 2008

The First Dark Age

I recently caught up on some of my reading by working through the archived discussion led by Jerry Pournelle on the subject of Velikovsky and his enthusiasms. Central to this is an active reconstruction of Bronze Age society and its abrupt cessation. In earlier posts we dated that ending to 1159 BCE, driven by the massive eruption of Hekla. The attached item is still quite fuzzy as to time and place.

I had proposed the existence of a fully developed palace-based society distributed throughout the ancient world, including a subsidiary presence in the Americas. A large seaborne trade existed to support this society. Writing and record keeping were complex and restricted to a handful of specialists. This was true even in Mesopotamia.
The evidence strongly supports the above proposition and also leads to the very natural conclusion that what has been found to date represents but a fraction of what will be found. It was way too easy and way too appealing to establish a new trade and husbandry center in any open valley to which sea access was available.

The millennia old civilization led by the city state of Atlantis and trading with the states of Egypt and the Middle East and Brazil and Maya land and the Mississippi was palace controlled. This was necessary in light of the need to organize access to copper and tin through the sea lanes. Bronze was money in the Bronze Age and most certainly not lost willingly, let alone buried for us to find. We get a taste of the political and economic strength of this civilization through reading the remaining ancient sources such as Homer and the Bible.

The 1159 BCE collapse did not end the Bronze Age which continued on as the preferred economic system in China and elsewhere. It did however leave a huge surplus of bronze above ground that was not in immediate need of replenishment. This cutting off of fresh supply certainly encouraged the adoption of low quality iron at a local level to produce plows and hoes. Long distance sources of copper disappeared from the market with the collapse of the sea trade at this time.

Since bronze was the currency of the palace system, its economic basis collapsed at this time and never really recovered in the same form. The Iron Age needed access only to the local bog to produce a viable product. It was pretty lousy metal, but it improved over time and it was produced at the village level rather than at the state treasury. Even when better ores became available, they were plentiful, unlike high quality copper ores.

What is often forgotten is that the archaeological evidence supports a very long lived Bronze Age world of at least two thousand years, built on organized agriculture that was immensely stable. It was also preceded by an assortment of equally long lived stone-based agricultural societies. Political stability was the norm in this world. Dynasties might last only a few generations, but for most people their families and villages survived for centuries.

The principal reason for this is the limited mobility of the warfare practiced in that day. That only changed in the last two thousand years with the advent of the riding horse.

THE FIRST DARK AGE

At some point I will do a short introductory essay; the important point is that sometime in the Bronze Ages, a thriving civilization with writing and the ability to build large walled cities and the beginnings of a market economy -- there were traders who were not merely raiders -- collapsed so thoroughly that it became legendary. The walls of Tiryns were so large and imposing that the people who lived in the region thought they were built by giants: by the Cyclopes, and they were called Cyclopean Walls by people who probably counted the actual builders among their ancestors.

Writing was lost and had to be reinvented. Much technology was lost.

It is a time that bequeaths us many legends, from the Trojan War to the legends of the House of Atreus, and Pelops, and Theseus, and Minos, Achilles and Odysseus, Talos and the stone god who rose from the sea, Jason and the Argonauts, all of which seem to reflect real events, embellished, of course, but real all the same. It was a time when the Maryannu and the Battle Ax people roamed the land, and the Peoples of the Sea invaded Egypt and came to Palestine where, as Philistines, they gave the region its name and passed into history as giants whose champion was a bronze armored hero named Goliath.

In the Bible it is an age in which there was no king in Israel, and each man did as he thought right in his own heart. And so it was through the world.

But that Dark Age came after a rich civilization with writing and commerce and technology: what killed that civilization? Theories run from barbarian invasions (the return of the Dorians) to earthquakes, to astronomical disasters, to volcanoes. It may have been all of these. If the issue is settled once and for all, that has happened very recently indeed: it certainly was no more than speculation last year...

Tuesday, August 5, 2008

Micron Carbon History Note

There has been a lot of conversation over carbon and the effect of particle size. It is clear that most people have no appreciation of the importance of available surface area to the efficiency of a chemical process. We are all taught our little chemistry with soluble reagents, omitting surface effects entirely.

Most people can get their minds around the idea that increased porosity means more surface area. That is a good start. What is poorly understood is that powdered carbon derived from charring plant material can present surface area orders of magnitude greater than that of wood char that retains its structural integrity.
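As a rough illustration only, assuming idealized spherical particles and a char density of about 1500 kg/m^3 (my assumption), the surface area per kilogram scales inversely with particle diameter:

def specific_surface_area(diameter_m, density_kg_m3=1500.0):
    """Surface area per kilogram for uniform spheres, in m^2/kg."""
    # For a sphere, area/volume = 6/d, so area/mass = 6/(rho * d).
    return 6.0 / (density_kg_m3 * diameter_m)

for label, d in [("1 mm char granule", 1e-3),
                 ("10 micron powder", 10e-6),
                 ("1 micron suspension", 1e-6)]:
    print(f"{label:20s} {specific_surface_area(d):8,.0f} m^2/kg")
# The micron suspension exposes a thousand times the surface area of
# the millimeter granule, before counting any internal porosity.

Internal porosity only adds to these figures, but the geometric scaling alone explains the orders-of-magnitude gap.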

My own experience with this comes from work done in 1993 with some businessmen who had contracted with a physicist who had worked in the area of artificial blood. They created a stable suspension of carbon by heating wood at a high temperature and running the resulting gases over a water surface. Gases and powdered carbon were absorbed into the water. A waiting period allowed the light fraction to float to the top while the heavies settled out. The active and stable center portion was then drawn off. The particle size ran to a micron or less and the suspension likely consisted predominantly of nano-scale carbon particles. This made me conscious of the real potential of working with various products whose particle size approached this distribution.

One of the first things we learned was that a drop of this fluid nicely converted raw whiskey into a fine, well aged quality whiskey. When sprayed on raw fish, it also added ten days of refrigerator shelf life. In our case we were using it to create micro droplets of hyaluronic acid. The carbon was potent enough to grab the large acid molecules and wrap them up to form a droplet. At least that is what I think happened. We generated a stable suspension of micron sized droplets that achieved superior delivery to the skin surface. Methods such as this are very much in evidence today.

In this brave new world, size is all important. It is therefore no stretch at all to imagine powdered carbon collecting nutrients in a soil and holding them until a stronger biological agent comes along and removes them.

Therefore the best strategy for the manufacture of terra preta surely must involve the charcoaling of plant material lacking structural integrity. Corn, with its lack of woody material and annual nature and huge volume thus becomes an ideal feedstock.

It should also mean that it will take much less effort to create a productive soil if we are able to use such fine carbon powders produced without grinding.

Monday, August 4, 2008

2012 Arctic Sea Ice Minima

I have attached a copy of Friday's report from the National Snow and Ice Data Center on the current status of the annual sea ice melt in the Arctic.

This year we do not have the wind system driving the pack ice out of the western Arctic as occurred last season. We can assume this is because we had a much more conventional winter and there was no surplus heat to dispose of.

In any event, there is scant perennial ice left in the Arctic today. I expect the extent and volume of perennial sea ice to reach a stable minimum over the next five years. What I mean by this is that a natural cycle of creation and destruction will dominate, in which a finite amount of ice created close to the Arctic islands will travel for perhaps three years, accumulating more ice each season, until it is pushed into the Arctic Gyre and broken up.

It is apparent that the ice caught in the Gyre is losing more mass than it is gaining during the winter and will eventually reach a mass minima.

Much as I would love to see a month of clear sailing in the Arctic, it is certainly not necessary.

Today the areal extent is still large for lack of wind packing, but it is also very obvious that the actual coverage is likely around fifty percent, and the ice will keep melting for another six weeks.

As I have emphasized in the past, we are going to have a net loss of ice mass again this year. There has been no reversal of this very clear trend. The problem for observers has been that the apparent areal extent of the annual winter ice has totally obscured our ability to measure the actual sea ice mass.

That is why we woke up one morning in 2000 and discovered that sixty percent had disappeared over the previous forty years, the last time we had checked. I think that we are a little better at it now.

And as I pointed out last year, we are in the final collapse phase of this primary melt. The winds last year gave it all a good kick, but it was already primed. This is a normal year, so once again the losses are still obscured.

We are still very much on track for a minimum being established by 2012, as the collapse is continuing and shows no sign of even slowing down.





Arctic sea ice reflects sunlight, keeping the polar regions cool and moderating global climate. According to scientific measurements, Arctic sea ice has declined dramatically over at least the past thirty years, with the most extreme decline seen in the summer melt season.


August 1, 2008

Race between waning sunlight and thin ice


The Arctic sea ice is now at the peak of the melt season. Although ice extent is below average, it seems less likely that extent will approach last year’s record low.

The pace of summer decline is slower than last year’s record-shattering rate, and peak sunlight has passed with the summer solstice. However, at least six weeks of melt are left in the season and much of the remaining ice is thin and vulnerable to rapid loss. A race has developed between the waning sunlight and the weakened ice.

Note: Analysis updates, unless otherwise noted, now show a single-day extent value for Figure 1, as opposed to the standard monthly average. While monthly average extent images are more accurate in understanding long-term changes, the daily images are helpful in monitoring sea ice conditions in near-real time.


Figure 1. Daily Arctic sea ice extent for July 31, 2008 was 7.71 million square kilometers (2.98 million square miles). The orange line shows the 1979-2000 average extent for that day. The black cross indicates the geographic North Pole. Sea Ice Index data. About the data.

—Credit: National Snow and Ice Data Center


Overview of conditions

Arctic sea ice extent on July 31 stood at 7.71 million square kilometers (2.98 million square miles). While extent was below the 1979 to 2000 average of 8.88 million square kilometers (3.43 million square miles), it was 0.89 million square kilometers (0.35 million square miles) above the value for July 31, 2007. As is normal for this time of year, melt is occurring throughout the Arctic, even at the North Pole.


Figure 2. Daily sea ice extent; the blue line indicates 2008; the gray line indicates extent from 1979 to 2000; the dotted green line shows extent for 2007. Sea Ice Index data.
—Credit: National Snow and Ice Data Center


Conditions in context

Sea ice extent continues to decline, but we have not yet seen last July’s period of accelerated decline. Part of the explanation is that temperatures were cooler in the last two weeks of July, especially north of Alaska.

Because we are past the summer solstice, the amount of potential solar energy reaching the surface is waning. The rate of decline should soon start to slow, reducing the likelihood of breaking last year's record sea ice minimum.



Figure 3. Using average long-term decline rates is one way to project sea ice extent at the end of the 2008 season. The bottom dashed line shows decline rate one standard deviation faster than normal, the middle dashed line shows decline at average rates, and the top dashed line shows decline rate one standard deviation slower.

—Credit: National Snow and Ice Data Center


Slower decline than 2007

To estimate the range of possibilities, we have used average long-term daily decline rates to project ice extent during the rest of the season (dashed blue lines). The bottom dashed line shows decline rate one standard deviation faster than normal, the middle dashed line shows decline at average rates, and the top dashed line shows decline rate one standard deviation slower.

If the Arctic experiences a normal decline rate, the minimum extent will be between the second-lowest extent, which occurred in 2005, and the third-lowest extent, which occurred in 2002. Even at a rate one standard deviation faster than normal, the extent will not fall below last year’s minimum—so it appears unlikely that we will set a new record low.


Figure 4. Passive-microwave satellite data shows ice concentration on July 31, 2008. Widespread areas of low concentration ice exist, shown in yellows. NASA AMSR-E data.

—Credit: National Snow and Ice Data Center, courtesy University of Bremen


But a more vulnerable ice cover

Nevertheless, it is perhaps too soon to make a definitive pronouncement concerning this year’s probable extent at the summer minimum. The Arctic sea ice is in a condition we have not seen since satellites began taking measurements. As discussed in our April analysis, thin first-year ice dominated the Arctic early in the melt season. Thin ice is much more vulnerable to melting completely during the summer; it seems likely that we will see a faster-than-normal rate of decline through the rest of the summer.

Building on our July 17 analysis, the fragility of the current ice conditions is evident in the sea ice concentration fields produced at the University of Bremen using NASA Advanced Microwave Scanning Radiometer (AMSR-E) data. Widespread areas of reduced ice concentration exist, particularly in the Beaufort Sea. Even north of 85 degrees latitude, pockets of much-reduced ice cover appear. The passive microwave data used in Figure 4 tends to underestimate ice concentration during summer because melt water on the surface of the ice can be mistaken for open water. Nevertheless, such low concentrations indicate strong melt and a broken, thin ice cover that is potentially vulnerable to rapid melt.



Figure 5. Visible-band satellite imagery confirms the low-concentration ice cover seen in Figure 4. This view places NASA MODIS Aqua data in a perspective generated in Google Earth, simulating a view from far above Earth.


—Credit: From National Snow and Ice Data Center courtesy NASA


Friday, August 1, 2008

Berkeley Pit

This article on the Berkeley Pit in Montana is well worth the trouble. We are discovering new biological responses to extreme toxic conditions. An objective worth achieving is to encourage organisms that concentrate the metals and other free ions in the mine liquor. It may be necessary to actually feed them a lot of nitrates to accelerate the process.

I know of closed mines whose liquors were simply passed through a trough full of tin cans. This was sufficient to reduce out the copper. In fact, driving truckloads of tin cans down to a soaking pad and leaving them there for a few months may not be a stupid idea. If I recall correctly, a ton of iron will naturally precipitate a matching ton of copper. That is today an eightfold jump in dollar value. Hmm!
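As a rough check on that rule of thumb, the cementation reaction Fe + Cu2+ --> Fe2+ + Cu trades iron for copper nearly kilogram for kilogram. A minimal sketch of the stoichiometry, ignoring real-world losses:

FE_MOLAR = 55.845   # g/mol, iron
CU_MOLAR = 63.546   # g/mol, copper

def copper_cemented(iron_tons):
    """Tons of copper displaced per ton of scrap iron, mole for mole."""
    return iron_tons * CU_MOLAR / FE_MOLAR

print(round(copper_cemented(1.0), 2))  # ~1.14 tons of copper per ton of iron

In practice recovery is lower, but the ideal ratio is actually slightly better than one for one.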

There are many industrial waste sites that need resolution. Relying on biological agents is the only cost effective way of solving these headaches. It does take imagination and perhaps a prize for the best practical solution. At least with mine products, we are dealing with materials that are chemically active.

With the oil industry we are always dealing with some form of salty water which resists such elegant fixes.

What is not mentioned here is that once the Berkeley Pit fills up, the hydrostatic head will block further significant inflow of leachate. The pit fluids will stratify and we will have a top layer of much fresher water derived from rain and snow. This can likely be safely released into the local water table and drainage system. If there is something unusual about the local water table, then now is a good time to see if it is possible to do any preventative work.

The Pit of Life and Death

Written by Richard Solensky on July 1st, 2008 at 3:09 pm

From DamnInteresting.com

Just outside Butte, Montana lies a pit of greenish poison a mile and a half wide and over a third of a mile deep. It hasn't always been so - it was once a thriving copper mine appropriately dubbed “The Richest Hill in the World.” Over a billion tons of copper ore, silver, gold, and other metals were extracted from the rock of southwestern Montana, making the mining town of Butte one of the richest communities in the country, as well as feeding America’s industrial might for nearly a hundred years. By the middle of the twentieth century, the Anaconda Mining Company was in charge of virtually all the mining operations. When running underground mines became too costly in the 1950’s, Anaconda switched to the drastic but effective methods of “mountaintop removal” and open pit mining. Huge amounts of copper were needed to satisfy the growing demand for radios, televisions, telephones, automobiles, computers, and all the other equipment of America’s post-war boom. As more and more rock was excavated, groundwater began to seep into the pit, and pumps had to be installed to keep it from slowly flooding.

By 1983, the hill was so exhausted that the Anaconda Mining Company was no longer able to extract minerals in profitable amounts. They packed up all the equipment that they could move, shut down the water pumps, and moved on to more lucrative scraps of Earth. Without the pumps, rain and groundwater gradually began to collect in the pit, leaching out the metals and minerals in the surrounding rock. The water became as acidic as lemon juice, creating a toxic brew of heavy metal poisons including arsenic, lead, and zinc. No fish live there, and no plants line the shores. There aren’t even any insects buzzing about. The Berkeley Pit had become one of the deadliest places on earth, too toxic even for microorganisms. Or so it was thought.

In 1995, an analytic chemist named William Chatham saw something unusual in the allegedly lifeless lake: a small clump of green slime floating on the water's surface. He snagged a sample and brought it to biologist Grant Mitman at the nearby Montana Tech campus of the University of Montana, where Mitman found to his amazement that the goop was a mass of single-celled algae. He called in fellow Tech faculty Andrea and Don Stierle, experts in the biochemistry of microorganisms. The Stierles had recently been trekking about the northwest, looking for cancer-fighting compounds in local fungi with great success. Coincidentally, the Stierles’ funding had just run out, and they needed a new project. They leapt at the opportunity to study these bizarre organisms.

After examining the slime under a microscope, the researchers identified it as Euglena mutabilis, a protozoan which has the remarkable ability of being able to survive in the toxic waters of the Berkeley Pit by altering its local environment to something more hospitable. Through photosynthesis, it increases the oxygen level in the water, which causes dissolved metals to oxidize and precipitate out. In addition, it pulls iron out of the water and sequesters it inside of itself. This makes it a classic example of an extremophile. Extremophiles are organisms that can tolerate and even thrive in environments that will destroy most other living things. Some can even repair their own damaged DNA, a trait which makes them extremely interesting to cancer researchers. The Stierles reasoned that where there’s one extremophile, there may be others – most likely blown in by the wind. Given their previous successes with strange microorganisms, the researchers believed that the Berkeley Pit and its fledgling extremophile population could produce some medically useful chemicals.

The Stierles were so intrigued by the possibilities that they started work even before securing funding. A squadron of expert researchers was recruited from the undergrads at Montana Tech, and even from a local high school. They collected water samples, isolated microorganisms, and cultured them. The team eventually identified over 160 different species, but they lacked the equipment needed to isolate the interesting chemicals from the microorganisms. Shlepping around western Montana, the Stierles begged and borrowed time at other facilities while they doggedly processed the cultured organisms. Their tenacity led to the discovery of a number of promising chemicals. Three of these, berkeleydione, berkeleytrione, and Berkeley acid, came from species of the fungus Penicillium that had never been seen before, and were therefore named after the Berkeley Pit.

The next step was to see what effect these chemicals had, if any, on other living cells. Thanks to modern biochemical assay techniques, dozens of chemicals can be tested against one organism– or one chemical against dozens of organisms– in a single pass. For reasons that are not entirely clear, many compounds which attack cancer cells are also harmful to brine shrimp, therefore most modern assay tests include the brine shrimp lethality test as a standard procedure. The Stierles exposed swarms of tiny crustacean volunteers to the Berkeley Pit chemicals, and to their delight, five of the chemicals showed anti-cancer properties. Further tests revealed that berkeleydione helped slow the growth of a type of lung cancer cell, and Berkeley acid went after ovarian cancer cells. All five were passed along to the National Cancer Institute for further study.

Other researchers are looking into the Pit as well - not for cancer-fighters or other drugs, but simply for ways to help clean the place up. In 1995, a flock of migrating snow geese stopped at the massive pond for a rest, and at least 342 of them died there. Authorities now use firecrackers and loudspeakers to scare away migrating waterfowl, but there have been a few smaller die-offs nonetheless. Also, on certain mornings, a sinister mist creeps out of the Pit and wraps its tentacles around the streets of Butte. Citizens are understandably anxious about this potentially poisonous fog of doom. The water level is rising at a rate of several inches a month, and if unchecked it will spill over into the area’s groundwater in twenty years. That danger has earned the area the dubious distinction of being one of the EPA’s largest Superfund sites. Normally such water is treated by adding lime to the water to reduce the acidity and remove much of the metal, however the Berkeley Pit is so saturated with undesirables that this process would produce tons of toxic sludge every day. Other methods are safer, but are prohibitively expensive. Currently, the EPA's plan is to focus on containment.

Grant Mitman believes that the best way to clean up the Pit is to use the algae that already live there. E. Mutabilis, for one, tends to grow in clumps. These clumps clean up their neighborhoods enough for other extremophiles to move in. These organisms would collect the metals within their own cells, and upon dying they would sink to the bottom and drag the metals with them. To Mitman, it’s all a matter of finding the right mix of extremophiles for a self-sustaining algal colony. Once the right mix is found, there are many other mine-contaminated waters awaiting treatment that could use a similar biology-based cleanup.

With metals concentrated at the bottom, and cleaner water at the top, the Pit could conceivably be reopened. The bottom sludge could be collected and processed for its ever-more-valuable metal content, and the water could be used for industry or agriculture. While it might not be safe to drink, the water could still be worth a quarter million dollars a year in a water-hungry West. In the meantime, the Pit has become a popular tourist attraction. There's a small museum and gift shop located well above the water level. A number of National Historic Landmarks related to mining are in the area, which has prompted some people to call for the creation of a National Park centered on the Pit. With luck, what was once the Richest Hill in the World could eventually provide riches of a different sort.

Thursday, July 31, 2008

Terra Mulata

Reading through the recent postings I came across a mention of Terra Mulata for the first time. It is treated as a secondary soil to the now increasingly recognized Terra Preta soils. I do not assume that this is a particularly recent coinage, but it will surely now be used in conjunction with Terra Preta in scholarly papers. The attached abstract defines both terms and their apparent usage rather well.

This resolves an issue that was bothering me from the beginning and I think is now totally clarified.

It was obvious that a full blown Terra Preta soil developed over decades, if not centuries, and entailed a lot of ongoing effort every year. Yet it was also obvious that even one year’s effort gave you a productive soil to work with. Now we have a clear resolution of this problem.

Terra preta was solely associated with the settlement site itself where we could expect each family to sustain a hectare of land. The use of an ad hoc earthen kiln would convert a season’s supply of plant waste and cultural waste into a mound of soil and biochar. The clay shards were likely there only because of breakage. Human waste and the like were likely buried in the general waste tip. Every season, this tip was packed down and covered first with a layer of palm fronds to keep the final layer of soil from initially smothering the burn. This was then fired to produce a load of soil and char. This enhanced soil was then carried to the seed hills.

When we come to Terra Mulata we have a slightly different story. We are now dealing with exterior fields that were producing large crops likely to support the state itself (taxes and trade inventory). Corn is a great way to pay your taxes and a great way to store a surplus for much later expenditure and is thus a great example of what is possible.

In this case, they only put in enough char to preserve the soil’s fertility. There are also no cultural artifacts such as clay shards, eliminating the hypothesis that the shards had anything to do with the manufacture of Terra Preta.

The abstract shows that this agriculture was very well managed, with large field monocultures not unlike today’s. You must appreciate that this existed when the only source of energy was man himself. The rise of large field farming in Europe at least had oxen and horses to support it. The diversity of crops is also a surprise. Where were the markets?

We are slowly overcoming the centuries of scholarly dismissal of this culture’s wonderful achievements and are now learning from them.

An earthen corn kiln likely works just fine without the special use of wet clay. And it is very clear that all our tropical soils can be converted to Terra Mulata within a much shorter time frame than expected, and that the conversion does not really need to be repeated often once initially established.

Going through the numbers again for the corn kiln in particular, we have the following:
A given acre of land is cleared by fire. The three sisters are planted in seed hills occupying about twenty five percent of the land. The three sisters consist of corn, legumes for beans and nitrogen fixation, and the odd squash to provide soil cover, lowering weed problems and partially protecting the soil from rain erosion.

At the end of the season, the dehydrated corn stalks are pulled and used to build an earthen kiln. There are about ten tons of stover per acre. This will reduce to a little less than two tons of char and admixed soil. Once done, the soil and char mix is carried in baskets back to the seed hills.
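A quick sanity check on those numbers; the 18 percent char yield is my assumption, and simple earthen kilns will vary widely around it:

stover_tons_per_acre = 10.0   # from the text above
char_yield = 0.18             # assumed mass fraction surviving as char

char_tons = stover_tons_per_acre * char_yield
print(f"{char_tons:.1f} tons of char per acre")
# ~1.8 tons, a little under the two tons quoted, before counting
# the admixed soil carried along with the char.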

Eventually the build up of charcoal will naturally migrate into the surrounding soils through any tillage done for other crop types.



Terra Preta and Terra Mulata in Tapajônia: A Subsidy from Culture
Joseph M. McCann

Abstract. The large, multi-ethnic chieftaincy centered on Santarém, Brazil disappeared rapidly in the face of early European slave raids and subsequent missionization, but its physical legacy persists. Ornate ceramics, bamboo forests, relic crops, roads, wells, and manmade waterways in association with patches of anthropogenic dark earth corroborate 17th century chroniclers’ depictions of settled farmers. The evidence suggests the people of Tapajônia lived in permanent settlements and practiced intensive agriculture; they extended the ranges of useful plants, established plantations of fruit-bearing trees, and enhanced soil fertility through inputs of ash, organic mulch and household wastes, creating two kinds of persistently fertile dark earths, the classic terra preta formed under settlements and the somewhat lighter, less chemically enriched terra mulata of agricultural contexts.

Patches of terra preta and terra mulata range from less than 1 ha to more than 100 ha, and are abundantly distributed throughout the region. The spatial distribution and landscape orientation of dark earth patches do not appear to significantly favor varzea or other riverine locations including bluffs, and the largest expanses are located in interior settings. This pattern does not fit the expectations of existing models of Amazonian settlement and subsistence. Rather, it suggests a strategy for optimizing access to a wide range of important resources within a complex landscape mosaic. In addition to inherent ecological factors, the choice of settlement and field locations may have been influenced by access to trade and kinship networks, vulnerability to attack, seasonal transhumance strategies, and cumulative anthropogenic modifications of the landscape.

These modifications, including the enhancement of soil fertility and concentration of useful species and crop germplasm, continue to benefit today’s caboclos, in contrast to most recent development in Amazonia (e.g. mining, ranching, logging). As such, they are an important subsidy from an "extinct" culture. However, access to dark earth is limited by population growth and changing land tenure systems, and the techniques (such as applying mulch and ash, and inoculation with microbiota) which led to their creation are no longer practiced by today’s shifting cultivators. Furthermore, the cultivation of dark earths destroys archaeological artifacts and stratigraphic context that could shed much light on these practices. Clearly, further study of the processes of creation and persistence of Amazonian dark earths are warranted, so that they may serve as models for the development of high-yield, land intensive, yet sustainable land management strategies in the tropics.

J. McCann, Department of Social Sciences, New School University, New York, NY 10011. Email:
arapiuns1@msn.com; mccannj@newschool.edu

Wednesday, July 30, 2008

What Congress Can do about Oil Now

The USA is currently consuming 20,000,000 barrels of oil per day or about 25% of the global supply. No one is in a better position to reduce consumption on a scale that matters to everyone else. And where the USA leads the rest of the world will surely follow.

The auto industry is now desperately gearing up to transition away from oil based products. Pricing alone is driving this move, as consumers have been forced to forgo the pleasure of driving their mobile fuel hogs. No one seems prepared to buy a vehicle for the pleasure of parking it. I certainly expect the advent of a very inexpensive electric for short haul work. It makes eminent sense to use the SUV to haul the kids around while using the electric to commute a lot further away. This is a new version of the two car family that everyone was excited about in the fifties. Yes, I remember all that.

The only problem with that is that it can only be a transitional process and may actually have little net effect on the oil supply, even over several years. What is more, any attempt to legislate an outcome will be resented and paid for at the polls.

This again brings me back to the subject of diesel fuel. It represents about half of our consumption and it is primarily used by industry. It has never been very popular in the automotive industry.

It is thus possible for Congress to mandate a crash program to replace diesel with LNG, or liquefied natural gas. Apparently diesel engines can be converted easily over to LNG, and the manufacturers are already prepared to produce LNG engines directly. That means fleet conversion and industrial conversion are possible within a very short time line of between two to five years. I would expect actual completion by the end of the five year cycle.

I would expect that the first two years will be needed to establish the necessary infrastructure.

Natural gas is not in short supply, although a lot of the easiest supplies are again offshore. In any event, we have massive global supplies available to keep costs down for the trucking industry. It is also the cleanest possible hydrocarbon and will go a long way toward cleaning up the atmosphere.

California has seen the light and is already well down this road.

What Congress can do today is to mandate a swift conversion of the nation’s diesel consumption over to LNG ASAP.

This could release a possible ten million barrels of oil per day back into the global market. And our industry would even be using a cheaper fuel.


Gallons of Oil per Barrel
42

U.S. Crude Oil Production
5,102,000 barrels/day
Texas - 1,088,000 barrels/day
U.S. Crude Oil Imports
10,118,000 barrels/day

Top U.S. Crude Oil Supplier
Canada - 1,802,000 barrels/day

U.S. Petroleum Product Imports
3,589,000 barrels/day

U.S. Net Petroleum Imports
12,390,000 barrels/day

Top U.S. Total Petroleum Supplier
Canada - 2,353,000 barrels/day

U.S. Total Petroleum Exports
1,317,000 barrels/day

U.S. Petroleum Consumption
20,687,000 barrels/day

Crude Oil Domestic First Price (2007 wellhead price)
$66.52/barrel

Motor Gasoline Retail Prices (2007 U.S. City Average)
$2.85/gallon

Regular Grade Motor Gasoline Retail Prices (2007 U.S. City Average)
$2.80/gallon

Premium Motor Gasoline Retail Prices (2007 U.S. City Average)
$3.03/gallon

U.S. Motor Gasoline Consumption
9,253,000 barrels/day (388.6 million gallons/day)

U.S. Average Home Heating Oil Price
$2.37/gallon (excluding taxes)

U.S. Refineries Ranked by Capacity (1/1/2006)
#1 - Baytown, Texas (ExxonMobil) - 562,500 barrels/day

Top U.S. Petroleum Refining State
Texas - 4,337,026 barrels/day
U.S. Proved Reserves of Crude Oil as of December 31, 2006
20,972 million barrels

Top U.S. Oil Fields (2005)
Prudhoe Bay, AK

Top U.S. Producing Companies (2006)
BP - 827,000 barrels/day

Total World Oil Production (2005)
82,532,000 barrels/day

Total World Petroleum Consumption (2005)
83,607,000 barrels/day

Tuesday, July 29, 2008

Plasma Storms

This particular story has absolutely nothing to do with the global climate, but I suppose a bit to do with near earth space weather. However, I could not resist posting this story by way of NASA. Amazingly, another longstanding mystery is now well understood.

As always, the simple idea that the magnetic field protecting Earth collects energy until it hits an explosive release point is almost obvious after the fact. I am sure it was conjectured a time or two. Now we have the hard data.
This also suggests that the build up and discharge should follow a predictable cycle whose baseline can be established to a high level of precision, perhaps somewhat like the hurricane season.

If we can do this, then it may also be possible to correlate this with any changes taking place on the sun that may be having an effect on the climate, although I think we have many better proxies.

In any event it is all good stuff, and a mystery that had not been fully understood is now crystal clear to any school child.
Perhaps once again we are being reminded of the reality of a sea of magnetic energy impacting our planet's magnetic shielding.

Plasma Bullets Spark Northern Lights

07.24.2008 July 24, 2008: Duck! Plasma bullets are zinging past Earth.

That's the conclusion of researchers studying data from NASA's five THEMIS spacecraft. The gigantic bullets, they say, are launched by explosions 1/3rd of the way to the Moon and when they hit Earth—wow. The impacts spark colorful outbursts of Northern Lights called "substorms."

"We have discovered what makes the Northern Lights dance," declares UCLA physicist Vassilis Angelopoulos, principal investigator of the THEMIS mission. The findings appear online in the July 24 issue of Science Express and in print August 14 in the journal Science.

The THEMIS fleet was launched in February 2007 to unravel the mystery of substorms, which have long puzzled observers with their unpredictable eruptions of light and color. The spacecraft wouldn't merely observe substorms from afar; they would actually plunge into the tempest using onboard sensors to measure particles and fields. Mission scientists hoped this in situ approach would allow them to figure out what caused substorms--and they were right.
The discovery came on what began as a quiet day, Feb 26, 2008. Arctic skies were dark and Earth's magnetic field was still. High above the planet, the five THEMIS satellites had just arranged themselves in a line down the middle of Earth’s magnetotail—a million kilometer long tail of magnetism pulled into space by the action of the solar wind. That's when the explosion occurred.

A little more than midway up the THEMIS line, magnetic fields erupted, "releasing about 10^15 joules of energy," says Angelopoulos. "For comparison, that's about as much energy as a magnitude 5 earthquake."

Although the explosion happened inside Earth's magnetic field, it was actually a release of energy from the sun. When the solar wind stretches Earth's magnetic field, it stores energy there, in much the same way energy is stored in a rubber band when you stretch it between thumb and forefinger. Bend your forefinger and—crack!—the rubber band snaps back on your thumb. Something similar happened inside the magnetotail on Feb. 26, 2008. Over-stretched magnetic fields snapped back, producing a powerful explosion. This process is called "magnetic reconnection" and it is thought to be common in stellar and planetary magnetic fields.

The blast launched two "plasma bullets," gigantic clouds of protons and electrons, one toward Earth and one away from Earth. The Earth-directed cloud crashed into the planet below, sparking vivid auroras observed by some 20 THEMIS ground stations in Canada and Alaska. The opposite cloud shot harmlessly into space, and may still be going for all researchers know.

Above: An artist's concept of the THEMIS satellites lined up inside Earth's magnetotail with an explosion between the 4th and 5th satellites.

The THEMIS satellites were perfectly positioned to catch the shot.

"We had bulls-eyes on our solar panels," says THEMIS project scientist David Sibeck of NASA's Goddard Space Flight Center. "Four of the satellites were hit by the Earth-directed cloud, while the opposite cloud hit the fifth satellite." Simple geometry pinpointed the site of the blast between the 4th and 5th satellite or "about 1/3rd of the way to the Moon."

No damage was done to the satellites. Plasma bullets are vast, gossamer structures less dense than the gentlest wisp of Earth's upper atmosphere. They whoosh past, allowing THEMIS instruments to sample the cloud’s internal particles and fields without truly buffeting the satellite.

This peaceful encounter on the small scale of a spacecraft, however, belies the energy deposited on the large scale of a planet. The bullet-shaped clouds are half as wide as Earth and 10 times as long, traveling hundreds of km/s. When such a bullet strikes the planet, brilliant auroras and geomagnetic storms ensue.

Right: A collection of ground-based All-Sky Imagers (ASI) captures the aurora brightening caused by a substorm. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio.

"For the first time, THEMIS has shown us the whole process in action—from magnetic reconnection to aurora borealis," says Sibeck. "We are finally solving the puzzle of substorms."

The THEMIS mission is scheduled to continue for more than another year, and during that time Angelopoulos expects to catch lots more substorms--"dozens of them," he says. "This will give us a chance to study plasma bullets in greater detail and learn how they can help us predict space weather."

"THEMIS is not finished making discoveries," believes Sibeck. "The best may be yet to come."

Monday, July 28, 2008

New Model Farm Update


After a year of writing this column, it is timely to revisit and update the conceptualization of the new model farm. What have we added to our bag of tools and how can we deploy this throughout the globe?

1 Forest management and optimization will now be fully and economically integrated into farm management because cellulose and lignin harvesting is now becoming possible. Wood chips can be sold at the farm gate rather than presenting a disposal cost or simply being burned. This can be applied from the Arctic tree line to the Amazon jungle. The lignin is a direct source of gasoline and diesel fuels, while the cellulose is becoming a source of ethanol.

2 Shallow wetlands can again become growing pads for cattails over a range that includes at least the boreal forests and the southern jungles. This crop easily produces around thirty dry tons per acre (150 tons wet) of starch rich root material, which again is a good feedstock for ethanol as well as a source of product for human consumption.

3 The advent of the first two promotes the optimization of both tree based products and wetland products not easily exploited otherwise.

4 Many additional forms of animal and fish husbandry can also be easily implemented with this expansion of boots on the ground. Something as simple as bison in the eastern woodlands, combined with tight management of venison, becomes a very good agribusiness. And we have barely begun to find ways to produce fish in wetlands.

In fact the energy demands of agriculture alone will drive this agricultural revolution. That we can also supply the fuel for long haul travel is a natural outcome, and a secure route to sustainable supply.

I for one never imagined that the boreal forest would ever have any commercial value for agriculture. Now we can imagine cattails and wood chips and even a little moose and caribou husbandry on top of various fish production scenarios. And let us not forget beaver husbandry with which I was intimately involved back in the early eighties.

5 Biochar production from plant wastes, and from corn in particular because of its sheer volume, will be applied to soils globally. This will manage nutrient availability and sustain soil fertility, as well as reconstituting all soils as fertile terra pretas. The only remaining restraint will be the availability of water.

Again, I never imagined that it might be possible to manufacture soil. Yet this appears possible. You can take a patch of desert sand and create seed hills of sand mixed with biochar. You then plant corn and legumes and squash in the seed hill and add sufficient water. Use the corn stover to produce more biochar and repeat. Five years of this will establish the soil, and another generation will have a splendid bulked out humus rich soil inches thick. This can all be done without any tools other than a hoe, a shovel and a basket.

6 Cheap nanosolar power will permit atmospheric water harvesting. This opens the door for the recovery of deserts everywhere.

In time, the restoration of the deserts will increase the temperature of the northern hemisphere and produce a much more temperate climate. The arctic will be open for navigation. Global agricultural productivity will respond by rising as these global terraforming efforts go to completion.

In fact it is now possible to imagine every acre of land between the Arctic circles feeling the hand of human husbandry. Even wild woods set aside for conservation can benefit from the harvesting of debris to manage forest fire destruction. It should thus be clear that the earth can sustain and feed populations massively larger than we have ever thought possible. Ten billion is a gimme and only a hundred billion seems rather extreme.

But what is our footprint? With terra preta, most nutrients are readily recycled and even built up in the soils. The expansion of such soils is strictly a function of water production into every available nook and cranny possible. Energy is produced by solar conversion and sustainable harvesting methods. We can have all we want.

Our only limit is in fact the land surface of the earth and its maximum production capacity. With a gross land surface approaching 150,000,000 square kilometers, of which we can easily disregard a third, we have access to 100,000,000 square kilometers, or ten billion hectares. If we assume that a hectare can support a family, a population of around fifty billion is not too far fetched. By then we should be finding other limits.
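The arithmetic, made explicit (the five-person household is my assumption; the text says only "a family" per hectare):

gross_land_km2 = 150_000_000          # approximate global land surface
usable_km2 = gross_land_km2 * 2 / 3   # disregard roughly a third
hectares = usable_km2 * 100           # 1 km^2 = 100 hectares
family_size = 5                       # assumed persons per family

print(f"{hectares:,.0f} hectares")              # 10,000,000,000
print(f"{hectares * family_size:,.0f} people")  # ~50,000,000,000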

The possibility of a zero footprint is now very real. Therefore this population is possible, and space colonies become possible two seconds after we invent a continuous one G thruster allowing us to move freely. We now know how to live out there without importing anything once established.

And as I have posted earlier, the probability is no longer zero that the ancestral races of mankind already did this around fifteen thousand years ago. I now have way too much confirming evidence. We are merely the schmucks left with the nasty job of completing the terraforming process. I suspect that I will have to grind out a book on this subject to properly make the case. To me it is now rather obvious. I just need to organize a publisher.


Friday, July 25, 2008

Bussard Fusion

I have commented before on the late Robert Bussard’s efforts to harness fusion energy. This article is an accessible explanation of the technology. As I recently reported, the effort to replicate his earlier, aborted work is advancing splendidly, and this should soon result in news.

Their success would put us much closer to a thruster able to lift a large mass by applying and maintaining one G of gross thrust. Although there is obvious enthusiasm for long trips outside the solar system, the real value comes from being able to explore and develop the solar system itself.

Of course having unlimited grid power at a trivial cost is no small bonus also. And it is impossible to overstate the importance of huge amounts of cheap energy. All our difficult processes need huge amounts of energy, usually supplied by very expensive means. To start with, imagine having the energy to disassociate metals in a plasma arc, then blasting them through an electrostatic field to separate them, and then assembling them in a mold to shape a profoundly complex object. All this becomes possible.

Bussard’s work does appear to have a chance and should be watched closely.

Bussard Fusion Systems

I've frequently mentioned a type of electrostatic fusion reactor (and associated rocket systems), designed by Robert Bussard and detailed in the papers listed below. In it, the fusion fuel is confined by a spherical voltage potential well on the order of 100,000 volts. When the fuel reacts, the particles are ejected with energies on the order of 2 MeV, so they escape the potential well of the voltage confinement area. The standard reactor design discussed used p + 11B (hydrogen and boron-11) as fuel, since it fuses without releasing any of its energy as radiation or neutrons. All the energy of the reaction is carried in the kinetic energy of the released charged particles. If the fusion reaction is surrounded with voltage gradients or other systems to convert the kinetic energy of the high speed charged particles directly to electricity, virtually all their energy (about 98%) converts directly to electricity, making a ridiculously compact and simple electrical generator.

For example: a Bussard fusion reactor with a radius of about 2.5-3 meters burning p + 11B (hydrogen and boron-11) produces 4,500-8,000 megawatts (4.5-8 E9 watts) of electricity and weighs about 14 tons (.00174 kg per kWe, or 575 kWe/kg). That's enough electricity per reactor to power a couple of big cities, and it will be the standard power plant of most of our support craft. This isn't nearly enough power for the drive systems of one of the Explorer ships of course, but the system can be scaled up and retain its efficiency.
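Those power density figures are easy to check from the quoted mass and output; a quick sketch using only the numbers above:

mass_kg = 14_000.0                  # ~14 ton reactor, from the text
for power_mwe in (4500.0, 8000.0):  # quoted electrical output range
    kwe = power_mwe * 1_000.0
    print(f"{power_mwe:6.0f} MWe -> {mass_kg / kwe:.5f} kg/kWe "
          f"({kwe / mass_kg:,.0f} kWe/kg)")
# The high end gives ~0.00175 kg/kWe (~571 kWe/kg), matching the
# quoted .00174 kg/kWe (575 kWe/kg) within rounding.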

This design is Bussard's variation of the Farnsworth/Hirsch electrostatic confinement fusion technology. What's important to us is that it makes such a fantastically small, light power plant that it can be used as part of a high thrust-to-weight-ratio propulsion system for spacecraft and aircraft. Bussard and associates have designed propulsion versions with a specific impulse of between 1,500 and 6,000 seconds. (The best chemical specific impulse is 450.) I'll discuss the rocket applications below.

Comparison of Fuel Cycles
As I mentioned above, certain fusion fuels release all the energy of their fusion reaction in the kinetic energy of the released particles. The table below lists the various fuels with this characteristic, and the energy they release per kilogram of fuel.

Fuel --> Exhaust                    Power / Reaction   Nucleons (p + n)   Joules / kg
p + 11B --> 3 4He                   8.68 MeV           12                 6.926 E13
De + 3He --> 4He + p                18.3 MeV           5                  3.505 E14    *
p + 6Li --> 4He + 3He               4.00 MeV           7                  5.472 E13    **
3He + 6Li --> 2 4He + p             16.0 MeV           12                 1.277 E14    **
6Li + 6Li --> 3 4He (combined)      20.0 MeV           12                 1.596 E14    **
3He + 3He --> 4He + 2 p             12.9 MeV           6                  2.059 E14    ***

* De + 3He reactors would have at least some De + De reactions, which produce neutrons. Possibly 2% of the energy will be released this way. This neutron bombardment cannot be directed like the charged particles, so the energy would impact the rear of the ship. Neutron bombardment is a very toxic form of radiation for people and metal, and would cause a tremendous heat load on the rear of the ship.

To avoid shielding mass, we could put the reactor behind a shielding wall at the back of the ship, and make the reactor out of open mesh to limit its exposure.

** Since the p can be reused in the two stage p + 6Li --> 4He + 3He and 3He + 6Li --> 2 4He + p reaction, its mass isn't counted as consumed in the reaction. This two staged reaction effectively combines into 6Li + 6Li --> 3 4He with 20 MeV of total output.

*** 3He + 3He reactions are difficult to convert into electricity in a Bussard reactor, but can be used for direct plasma thrust.

Power is given in million electron volts (MeV). An electron volt is equal to 1.60219 E-19 joules, and a joule per second is a watt. Mass in this case is the number of protons and neutrons involved in the reaction; each weighs about 1.673 E-27 kg. So, for example, p + 11B has 12 protons and neutrons per 8.68 MeV reaction. (Note I'm ignoring electrons. Life's too short; they're too light.)

8.68 MeV per reaction
= 8.68 E6 eV per (12 * 1.673 E-27 kg)
= 4.324 E32 eV / kg
= 6.926 E13 J / kg    (1 eV = 1.60219 E-19 J, and 1 J/s = 1 watt)
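For readers who want to check the rest of the table, here is a minimal sketch of the same conversion (constants and reaction data are taken from the text above):

EV_TO_J = 1.60219e-19    # joules per electron volt
NUCLEON_KG = 1.673e-27   # approximate proton/neutron mass, kg

reactions = [
    ("p + 11B  --> 3 4He",      8.68e6, 12),
    ("De + 3He --> 4He + p",    18.3e6,  5),
    ("p + 6Li  --> 4He + 3He",  4.00e6,  7),
    ("3He + 3He --> 4He + 2 p", 12.9e6,  6),
]

for name, ev_released, nucleons in reactions:
    j_per_kg = ev_released * EV_TO_J / (nucleons * NUCLEON_KG)
    print(f"{name:26s} {j_per_kg:.3e} J/kg")
# p + 11B comes out at ~6.93 E13 J/kg, matching the table above.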

Since the fusion reactions listed above release all their energy in the kinetic energy of their exhaust particles, the most direct way to power a rocket is to use these particles directly as a fusion rocket exhaust stream. This can be done by placing an uncontained fusion reactor (without any electrical power conversion equipment) at the focus of a paraboloidal shell. The shell then reflects the fusion particles to the rear in a narrow beam (20-30 degrees wide), making a high efficiency plasma rocket.

The efficiency of such a rocket depends on the exhaust velocity of the fuel and the energy released per pound of fuel. This is described in the table below.

QED Direct Plasma Thrusters

Fuel --> Exhaust                    Power / Reaction   (f) Mass conversion fraction   Joules / kg   Exhaust speed
p + 11B --> 3 4He                   8.68 MeV           1287                           6.926 E13     11,800,000 m/s
p + 6Li --> 4He + 3He               4.00 MeV           1645                           5.472 E13     —
3He + 6Li --> 2 4He + p             16.0 MeV           704                            1.277 E14     —
6Li + 6Li --> 3 4He (combined)      20.0 MeV           564                            1.596 E14     17,800,000 m/s
3He + 3He --> 4He + 2 p             12.9 MeV           437                            2.059 E14     20,300,000 m/s
De + 3He --> 4He + p                18.3 MeV           257                            3.505 E14     26,500,000 m/s

Note that watts are joules per second.

f is the fraction of the fuel mass converted to energy; an f of 257 means 1/257th of the fuel mass is converted to energy.

For example: the fusion plasma from a p + 11B reaction has an exhaust velocity of 11,800,000 meters per second. This translates to a specific impulse of approximately a *million* seconds; i.e., a pound of fuel consumed would provide a million pounds of thrust for a second. For a higher thrust, though less efficient, rocket, we can use the fusion plasma to heat a working fluid. If you dilute the plasma with 100 times its weight of the following propellants (assuming I figured out the right equations), the direct QED plasma drive generates the following specific impulses:

Fluid              Specific Impulse (Isp) in seconds
H2  (hydrogen)     14,000
NH3 (ammonia)      3,500
H2O (water)        5,000

For an externally fueled reaction, or for a short range craft, the high thrust-to-weight reactions in the mixed plasma table would be desirable. But for the deceleration into the target star system, using only fuel and reaction mass stored in the ship, only the high efficiency (high specific impulse), low fuel mass, pure fusion plasma reactions would be usable. It is those reactions we will consider in our fuel load calculations below.

The classic rocket equation gives the fuel to ship mass ratio needed to achieve a given change in speed with a fuel that has a given exhaust velocity.

dV   = desired change (or delta) in the ship's speed
Vexh = exhaust velocity of the propellant
M    = fuel mass ratio = exp(dV / Vexh)

The specific impulse of a fuel (the pounds of thrust a pound of fuel will give when burned in a second) is determined by dividing the fuel's exhaust velocity by 9.8 m/s^2.
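As a sketch, here are both relations written out, reproducing the p + 11B figures in the table below (all numbers are from the tables above; nothing new is assumed):

from math import exp

G0 = 9.8  # m/s^2

def mass_ratio(delta_v_ms, v_exhaust_ms):
    """Fuel-to-ship mass ratio for a given speed change (rocket equation)."""
    return exp(delta_v_ms / v_exhaust_ms)

def specific_impulse_s(v_exhaust_ms):
    """Specific impulse in seconds: exhaust velocity divided by g."""
    return v_exhaust_ms / G0

v_p11b = 11.8e6  # m/s, p + 11B plasma exhaust from the table above
print(f"Isp ~ {specific_impulse_s(v_p11b):,.0f} s")       # ~1.2 million s
print(f"0.25c ratio ~ {mass_ratio(75e6, v_p11b):,.0f}")   # ~576
print(f"0.50c ratio ~ {mass_ratio(150e6, v_p11b):,.0f}")  # ~332,000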

The following table shows the fuel mass ratios necessary to get a ship fueled with various fuels up to the listed speeds. For example, to accelerate a ship up to one quarter the speed of light (.25 c, or 75,000,000 m/s), a ship fueled with hydrogen and boron-11 (p + 11B) would need to start out with 576 times its own weight in fuel. To get the same ship to half the speed of light, it would need to carry over 300,000 times its weight in fuel! Fuel the same ship with deuterium and helium-3 (De + 3He) and you'd only need 17 and 287 times the ship's weight in fuel.

Fuel --> Exhaust                    Vexh             .25c (75 E6 m/s)   1/3 c (100 E6 m/s)   .42c (125 E6 m/s)   .5c (150 E6 m/s)
p + 11B --> 3 4He                   11,800,000 m/s   576.0              4,790.0              39,900              332,000
6Li + 6Li --> 3 4He (combined)      17,800,000 m/s   67.6               275.0                1,120               4,570
3He + 3He --> 4He + 2 p             20,300,000 m/s   42.5               138.0                472                 1,620
De + 3He --> 4He + p                26,500,000 m/s   16.9               43.5                 112                 287

Note that the fuel ratios listed are only enough to accelerate the ship to, OR decelerate it from, the listed speed. Carrying enough fuel to do both would require squaring the ratio. In our case, that means the amount of fuel that could get you to 1/2 light speed would only be enough to get you up to, and back down from, 1/4 light speed.

One thing the table illustrates clearly is why engineers want high exhaust speeds and high-energy fuels. Doubling the exhaust speed can cut 70-80% out of the fuel mass needed to reach a given speed. It also, all other things being equal, allows a smaller, lighter engine system and therefore a smaller, lighter ship. This is an especially great advantage when the desired maximum speeds are several times greater than the exhaust speeds.

So historically, most starship design studies have gravitated toward the higher-exhaust-speed fuels like 3He + 3He or De + 3He. However, these fuels have a couple of serious practical problems not normally considered. First, helium-3 (3He) and deuterium (De) are extremely rare isotopes, virtually unknown on Earth. Starship design studies that used these fuels were forced to also assume massive mining operations scooping the materials out of Jupiter's atmosphere. These fuels are also extremely light gases.
Unless they are kept liquid at cryogenic temperatures in sealed tanks, they will boil away into space. In our case, where deceleration fuel will need to be stored for a decade or more, this is highly questionable at best. The tanks themselves are also very bulky, and therefore heavy; a tank could probably not carry much more than 20-30 times its own weight in fuel. In our case that would require a much bigger, heavier ship, and one that drops its empty tanks along the route. This becomes absurdly impractical.

An under-considered fuel is lithium-6 (6Li). As you can see from the table, it still has fairly good performance as a fuel, but it also has two very large practical advantages. At room temperature it is a solid structural metal, requiring half the volume of helium-3 and no tanks or structural supports. It could even be used as part of the engine, allowing the ship to consume part of its engine structure toward the end of the flight. Its second advantage is that it is a cheap, plentiful material on Earth. So we can get access to virtually any amount of it we want, cheaply, and we can carry it in space for years without any tank or leakage problems. This means we can easily carry far greater amounts of it on the ship. So even though it doesn't seem that efficient on paper, it has critical design advantages for a starship, and it is the baseline fuel chosen for our fusion-driven starship designs.

QED Fusion/Electric/Thermal rocket

Fusion/Electric/Thermal-rocket motor configuration.

The QED reactor can be used to produce electricity. That electric power can be efficiently converted to heat in a reaction mass such as hydrogen or water, which makes for a good drive system. This is the drive assumed for the secondary thrusters on the starship, and for all the primary rocket thrusters on the support ships and shuttles.

Again, a Bussard fusion reactor with a radius of about 2.5-3 meters burning p + 11B produces 4500-8000 megawatts (4.5-8 E9 watts) of electricity and weighs about 14 tons (.00174 kg per kWe, or 575 kWe/kg).
Bussard and his co-authors, in various papers, have designed rocket systems using these reactors to heat a reaction mass (usually with relativistic electron beams), giving specific impulses of between 1,500 and 6,000 seconds and thrust-to-weight ratios of 1.5-6. (The best chemical specific impulse is about 450 seconds.) For example, in the Support_craft web document, I have worked up two classes of support ships using these engines: a high-speed aerospace shuttle and a simpler vacuum rocket craft.
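
A quick check of the power-to-weight figures quoted above, taking 14 tons as 14,000 kg (just the arithmetic, sketched in Python):

reactor_watts = 8.0e9            # top of the quoted 4.5-8 E9 watt range
reactor_kg = 14000               # about 14 tons
kWe_per_kg = reactor_watts / 1000 / reactor_kg
print(kWe_per_kg)                # ~571 kWe/kg, near the quoted 575
print(1 / kWe_per_kg)            # ~0.00175 kg per kWe, near the quoted .00174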

See:

H. D. Froning, Jr. and R. W. Bussard, "Fusion-Electric Propulsion for Hypersonic Flight," AIAA paper 93-261.

R. W. Bussard and L. W. Jameson, "The QED Engine Spectrum: Fusion-Electric Propulsion for Air-Breathing to Interstellar Flight," AIAA paper 93-2006.

R. W. Bussard and L. W. Jameson, "Some Physics considerations of Magnetic Inertial-Electrostatic Confinement: A New Concept for Spherical Converging-Flow Fusion," Fusion Technology vol 19 (March 1991).
\
R. W. Bussard and L. W. Jameson, "Design Considerations for Clean QED Fusion Propulsion Systems," prepared for the Eleventh Symposium on Space Nuclear Power and Propulsion, Albuquerque, 9-13 Jan 94.

R. W. Bussard , "The QED Engine System: Direct Electric Fusion-Powered Systems for Aerospace Flight Propulsion" by Robert W. Bussard, EMC2-1190-03, available from Energy/Matter Conversion Corp., 9100 A. Center Street, Manassas, VA 22110.

(This is an introduction to the application of Bussard's version of the Farnsworth/Hirsch electrostatic confinement fusion technology to propulsion, which gives specific impulses of between 1,500 and 5,000 seconds. Farnsworth/Hirsch demonstrated a 10**10 neutron flux with their device back in 1969, but it was dropped when panic ensued over the surprising stability of the Soviet Tokamak. Hirsch, responsible for the panic, has recently recanted and is back working on QED. -- Jim Bowery)

Thursday, July 24, 2008

Lignin Breakthrough

Several months ago I had given up on any early success in the conversion of wood chips into fuel. Mother Nature, for obvious reasons, did not design wood for easy disassembly. The best promise was coming from the use of enzymes to break down cellulose, and we have seen some progress there.

The other portion of the wood stock is the lignins. As far as I could see, conversion was not even being tried because of their chemistry. This was altogether an unsatisfactory situation, with apparently limited promise of any early resolution.

This article announces an important breakthrough in the processing of the lignin.

Again, this is all in its infancy, but it appears to be tailor-made for the efficient processing of the wood chip feedstock. If one step can separate out the cellulose first, and a second stage can then process and separate the lignin component, what is left is a much smaller brew of complex chemicals amenable to further processing.

I have not sourced a chemical breakdown of wood, but suffice it to say that these two components are essentially it for wood. We can now envisage processing that easily consumes the feedstock while producing a modest amount of other chemicals.

Cellulose can be expected to produce ethanol after being broken down into glucose by enzymes. And this article shows us that we can get diesel and gasoline from the lignin. This is about as good as it gets in terms of likely outcomes.

This article shows us that we have unexpectedly solved the lignin problem. So far we are still struggling with the cellulose problem, but it just became much easier.

This also revives my earlier drive to bring proper forest management under agricultural control. Farmers are the natural operators, if a viable economic model can be created.

I had thought the conversion of wood to biochar a viable option, except that it does not compete with corn culture for that application. Corn biochar will naturally powder, while wood will need to be ground, or the soil will soon become almost gravelly in texture.

Now we have an economic model that supports solid forest husbandry. An annual cleanup of waste wood into the chipper will produce tons of immediately salable fiber that can be taken to the local fuel producer. There should be enough coin in this process to keep everyone very happy, particularly if the price for the wood is similar to that of straw.

Initially there is a lot of labour involved but that will soon be managed and improved on.

This type of program will not need long-term financial support, although that should still be part of any long-term forest management scheme. Only government is able to operate on century-long horizons.


Chemical breakthrough turns sawdust into biofuel

* 17:08 18 July 2008
* NewScientist.com news service
* Colin Barras

A wider range of plant material could be turned into biofuels, thanks to a breakthrough that converts plant molecules called lignin into liquid hydrocarbons.

The reaction reliably and efficiently turns the lignin in waste products such as sawdust into the chemical precursors of ethanol and biodiesel.

In recent years, the twin threats of global warming and oil shortages have led to growth in the production of biofuels for the transportation sector.

But as the human digestive system will attest, breaking down complex plant molecules such as cellulose and lignin is a tricky business.

Food crisis

The biofuels industry has relied instead on starchy food crops such as corn and sugar cane to provide the feedstock for their reactions. But that puts the industry into direct competition with hungry humans, and food prices have risen as a result.

A second generation of biofuels could relieve the pressure on crop production by breaking down larger plant molecules – hundreds of millions of dollars are currently being poured into research to lower the cost of producing ethanol from cellulose.

But cellulose makes up only about a third of all plant matter. Lignin, an essential component of wood, is another important component and converting this to liquid transport fuel would increase yields.

However, lignin is a complex molecule and, with current methods, breaks down in an unpredictable way into a wide range of products, only some of which can be used in biofuels.

Balancing act

Now Yuan Kou at Peking University in Beijing, China, and his team have come up with a lignin breakdown reaction that more reliably produces the alkanes and alcohols needed for biofuels.

Lignin contains carbon-oxygen-carbon bonds that link together smaller hydrocarbon chains. Breaking down those C-O-C bonds is key to unlocking the smaller hydrocarbons, which can then be further treated to produce alkanes and alcohol.

But there are also C-O-C bonds within the smaller hydrocarbons which are essential for alcohol production and must be kept intact. Breaking down the C-O-C bonds between chains, while leaving those within chains undamaged, is a difficult balancing act.

In hot water

Kou's team used their previous experience with selectively breaking C-O-C bonds to identify hot, pressurised water – known as near-critical water – as the best solvent for the reaction.

Water becomes near-critical when heated to around 250 to 300 °C and held at high pressures of around 7000 kilopascals. Under those conditions, and in the presence of a suitable catalyst and hydrogen gas, it reliably breaks down lignin into smaller hydrocarbon units called monomers and dimers.

The researchers experimented with different catalysts and organic additives to optimise the reaction. They found that the combination of a platinum-carbon catalyst and organic additives such as dioxane delivered high yields of both monomers and dimers.

Under ideal conditions, it is theoretically possible to produce monomers and dimers in yields of 44 to 56 weight % (wt%) and 28-29 wt% respectively. Weight % is the fraction of the solution's weight that is composed of either monomers or dimers.

Easy extraction

Impressively, the researchers' practical yields approached those theoretical ideals. They produced monomer yields of 45 wt% and dimer yields of 12 wt% – about twice what has previously been achieved.

Removing the hydrocarbons from the water solvent after the reaction is easy – simply by cooling the water again, the oily hydrocarbons automatically separate from the water.

It is then relatively simple to convert those monomers and dimers into useful products, says Ning Yan at the Ecole Polytechnique Fédérale de Lausanne, Switzerland, and a member of Kou's team.

That results in three components: alkanes with eight or nine carbon atoms suitable for gasoline, alkanes with 12 to 18 carbons for use in diesel, and methanol.

Efficient process

"For the first time, we have produced alkanes, the main component of gasoline and diesel, from lignin, and biomethanol becomes available," says Yan.

"A large percentage of the starting material is converted into useful products," he adds. "But this work is still in its infancy so other aspects related to economic issue will be evaluated in the near future."

John Ralph at the University of Wisconsin in Madison thinks the work is exciting. He points out that there have been previous attempts to convert lignin into liquid fuels. "That said, the yields of monomers [in the new reaction] are striking," he says.

Richard Murphy at Imperial College London, UK, is also impressed with Kou's work. "I believe that approaches such as this will go a considerable way to help us extract valuable molecules including fuels from all components of lignocellulose," he says.

Wednesday, July 23, 2008

T. Boone Pickens Goes LNG

You may have picked this up in the press, but T. Boone Pickens has publicly stepped up to the plate and is leading two initiatives aimed at securing future energy supplies and freeing the USA from dependence on the Middle East in particular.

His strategy has the merit of being applicable immediately. Essentially, he is building mega wind farms to produce electricity and aggressively displace natural gas from the power generation business. This is happening now.

The displaced natural gas will be diverted to supply the transportation business. His press coverage mentions both cars and trucks.

In practice, the best immediate improvement will come from converting the diesel fleet directly to LNG for which a huge global resource is in place today.

I will never be the wind power supporter that many are, only because it truly needs to be integrated with a variable energy source that can be used to offset shortfalls. The best solution is a power dam with a well-filled reservoir, and an LNG power generator is not far behind.

In any event, nanosolar panels are not too far away, and they are capable of dealing with much of our power needs.

LNG, however, is the only viable alternative for the long-haulage business that can be rolled out immediately and fully installed within a couple of years. Biodiesel suffers from being too little, too late, and needs a protracted build-out on the supply side. The same holds true for ethanol.

It is obvious, though, that the industrial aspects of all three are being tested and mastered, and their availability is imminent. The strength of LNG is that ample feedstock is available now. Feedstocks for either ethanol or biodiesel are grossly insufficient and will take a fair amount of time to nurture. Ethanol, though, is well on the way to being a significant part of the fuel cycle, having already reached critical mass. We are now seeing late-generation supply crops hit the market, so that sector should be optimized inside of five years.

Pickens is totally correct in his strategy of supporting LNG conversion in the transport industry while diverting natural gas from the power generation industry through the immediate build-out of wind farms. The important fact is that all of this can be done now, without waiting for supply to be built out or for technology to mature.

He gives lip service to the use of LNG in the automobile industry, which I see as window dressing. The real conversion must take place in the trucking industry, which has both the immediate need and the resources. They do not have any option.

The auto industry is embarking on a multi-pronged conversion strategy that will shake out over the next five years. On top of that, the consumer even today has options to exploit. Most can switch down to a smaller car or to public transport fairly easily. The advent of hybrids is now accelerating that process.

Ethanol fuels are at hand and are now being implemented. Supply will soon follow this demand.

Biodiesel is now a cottage industry but is also building momentum.

These are all steps that can remove our need for any use of oil in the transportation sector by themselves.

Far more importantly, $140 oil has convinced everyone in North America that any dependence on offshore oil is A Bad Thing. They will support any and all initiatives that will speed us all out of the oil business. And where we go, the rest of the world will swiftly follow particularly Europe, India and China.

The electric car will slide into the short-haul niche as ample electric power is made available. Quite simply, with a nanosolar plant outside of town, it is very attractive to go pick up an inexpensive, lightweight town car for personal mobility.

Pickens is just getting into this massive market a step ahead of the crush.

As an aside, LNG has always been held back from full exploitation because of perceived handling problems. These seem not to be the problem they once were, but they still require active management, which is easily supplied in the trucking industry. In the meantime, global resources are ample for centuries of use and are ultimately backstopped by the oceanic frozen methane reserves. LNG is also the cleanest-burning hydrocarbon fuel and will cut CO2 emissions substantially.

The transport industry could switch to biodiesel, and perhaps will do some of that as well; however, it is better all around to switch to LNG now. It alone will slash smog in the urban environment.