Saturday, July 30, 2011
This is an unmanned craft designed to provide continuous monitoring of a million-cubic-mile volume of airspace. One of these over every highly populated area will soon be seen as highly desirable.
It would actually make abduction scenarios almost impossible because of the sheer density of information that can be available. It also makes break and enter activities equally futile. A true eye in the sky is certainly on the way and this will surely be the way to deliver that capability cheaply.
It will not make serious crime obsolete but it will end many options. Recall how telephone harassment ended forever once the caller’s phone number became traceable. What had been a serious irritant disappeared.
Such craft can stay aloft for days, if not weeks, at a time and will be designed to do just that. Yet they are quick enough to be brought down for servicing early in the morning and sent back up within a couple of hours. The round-trip time, I am sure, will also be cut drastically.
Just for law enforcement and protection from criminal initiatives, such a service is welcome.
Lockheed Martin’s HALE-D airship takes to the air
By Darren Quick
19:20 July 27, 2011
Lockheed Martin's HALE-D is launched
With the use of airships for passenger transport decreasing in the early 20th century as their capabilities were eclipsed by those of airplanes - coupled with a number of disasters - they were largely relegated to serving as floating billboards or as camera platforms for covering sporting events. But the ability to hover in one place for an extended period of time also makes them ideal for intelligence, surveillance and reconnaissance purposes, which is why Lockheed Martin has been developing its High Altitude Airship (HAA). The company yesterday launched the first-of-its-kind High Altitude Long Endurance-Demonstrator (HALE-D) to test a number of key technologies critical to the development of unmanned airships.
The HALE-D is a sub-scale demonstrator made with high-strength fabrics and featuring thin-film solar arrays serving as a regenerative power supply. Lightweight propulsion units propel the airship aloft, guide it during takeoff and landing, and maintain its geostationary position above the jet stream at an altitude of 12 miles.
The geostationary positioning coupled with modern communications technologies gives the airship capabilities on par with satellites at a fraction of the cost. In position, the airship would survey a 600-mile (965 km) diameter area and millions of cubic miles of airspace. It will also be reconfigurable, with the ability to easily change payload equipment, making the HAA significantly cheaper to deploy and operate than other airborne platforms to support missions for defense, homeland security, and other civil applications, according to Lockheed Martin.
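A quick back-of-envelope check of those coverage figures (a sketch only; the cylindrical-volume model and everything other than the quoted 600-mile diameter and 12-mile altitude are my own simplification, not Lockheed Martin's numbers):

```python
import math

# Assumed values from the quoted figures: a 600-mile-diameter
# surveillance footprint, with station-keeping at 12 miles up.
diameter_mi = 600
altitude_mi = 12

footprint_area = math.pi * (diameter_mi / 2) ** 2   # square miles
# Treat the monitored region as a cylinder of air under the airship:
monitored_volume = footprint_area * altitude_mi     # cubic miles

print(f"footprint: {footprint_area:,.0f} sq mi")    # ~283,000 sq mi
print(f"volume:    {monitored_volume:,.0f} cu mi")  # ~3.4 million cu mi
```

The result, several million cubic miles, is consistent with the article's "millions of cubic miles of airspace" claim for a single platform.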
Lockheed Martin launched its HALE-D at 5:47 a.m. on July 27, 2011 out of an airdock in Akron, Ohio. The airship was aiming to reach an altitude of 60,000 ft. but encountered technical difficulties at 32,000 ft., which prevented it from reaching its target, so the flight was terminated. It then descended at 8:26 a.m., landing in Pennsylvania at a predetermined location. Lockheed Martin is coordinating with state and local authorities to recover the airship from the heavily wooded area in which it landed, but confirmed that no injuries or damage were sustained.
"While we didn't reach the target altitude, first flights of new technologies like HALE-D also afford us the ability to learn and test with a mind toward future developments," said Dan Schultz, vice president of ship and aviation systems for Lockheed Martin's Mission Systems & Sensors business. "We demonstrated a variety of advanced technologies, including launch and control of the airship, communications links, unique propulsion system, solar array electricity generation, remote piloting communications and control capability, in-flight operations, and controlled vehicle recovery to a remote un-populated area."
Lockheed Martin has built more than 8,000 lighter-than-air platforms since receiving its first production contract in 1928. The U.S. Army Space and Missile Defense Command/Army Forces Strategic Command (SMDC/ARSTRAT) contracted with Lockheed Martin to develop the High Altitude Airship program to improve the military's ability to communicate in remote areas such as those in Afghanistan, where mountainous terrain frequently interferes with communications signals.
The problem with the KT boundary was the lack of dinosaur fossils immediately below the boundary itself. It was argued quite rightly that we simply do not have any unique fossil sites just at the event horizon or close to it. In fact our fossil record is always from discrete eras and actually very few of those.
This makes it statistically difficult to lock down specific events. What is certain is that the dinosaurs did not survive the KT event particularly well at all, and the succeeding fossil record shows their apparent disappearance as a global phenomenon.
I do not exclude their partial retention in limited numbers in specific locales for long periods, such as Northern Australia.
So this fossil just below the KT horizon is a welcome confirmation of the KT theory and eliminates an anomaly that was not trusted to start with.
Last dinosaur before mass extinction discovered
by Staff Writers
Three small primitive mammals walk over a Triceratops skeleton, one of the last dinosaurs to exist before the mass extinction that gave way to the age of mammals. Credit: Mark Hallett
A team of scientists has discovered the youngest dinosaur preserved in the fossil record before the catastrophic meteor impact 65 million years ago. The finding indicates that dinosaurs did not go extinct prior to the impact and provides further evidence that the impact was in fact the cause of their extinction.
Researchers from Yale University discovered the fossilized horn of a ceratopsian - likely a Triceratops, which are common to the area - in the Hell Creek formation in Montana last year.
They found the fossil buried just five inches below the K-T boundary, the geological layer that marks the transition from the Cretaceous period to the Tertiary period at the time of the mass extinction that took place 65 million years ago.
Since the impact hypothesis for the demise of the dinosaurs was first proposed more than 30 years ago, many scientists have come to believe the meteor caused the mass extinction and wiped out the dinosaurs, but a sticking point has been an apparent lack of fossils buried within the 10 feet of rock below the K-T boundary. The seeming anomaly has come to be known as the "three-meter gap."
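The significance of "five inches below the boundary" versus a three-meter gap can be put in rough time terms. A sketch (the deposition rate below is an assumed round number purely for illustration; real Hell Creek sedimentation rates vary widely):

```python
# Convert rock thickness below the K-T boundary into elapsed time,
# under a hypothetical, assumed-for-illustration deposition rate.
CM_PER_INCH = 2.54
deposition_cm_per_yr = 0.01          # assumed: ~1 m of rock per 10,000 yr

fossil_depth_cm = 5 * CM_PER_INCH    # 12.7 cm below the K-T boundary
gap_cm = 300                         # the "three-meter gap"

years_before_impact = fossil_depth_cm / deposition_cm_per_yr
years_for_gap = gap_cm / deposition_cm_per_yr

print(f"fossil: ~{years_before_impact:,.0f} years before the impact")
print(f"gap:    ~{years_for_gap:,.0f} years of missing record")
```

Under that assumed rate, a fossil five inches down sits only on the order of a thousand years before the impact, while a three-meter gap would represent tens of thousands of years, which is why closing the gap matters to the impact hypothesis.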
Until now, this gap has caused some paleontologists to question whether the non-avian dinosaurs of the era - which included Tyrannosaurus rex, Triceratops, Torosaurus and the duckbilled dinosaurs - gradually went extinct sometime before the meteor struck. (Avian dinosaurs survived the impact, and eventually gave rise to modern-day birds.)
"This discovery suggests the three-meter gap doesn't exist," said Yale graduate student Tyler Lyson, director of the Marmarth Research Foundation and lead author of the study, published online July 12 in the journal Biology Letters.
"The fact that this specimen was so close to the boundary indicates that at least some dinosaurs were doing fine right up until the impact."
Stuff like this is way more important than a thousand research projects. It is the culmination of ample practical experience, however unwelcome, and a shot of practical common sense. It can also apply back home and anywhere else these horrible injuries take place.
The fact is that we have learned how to make an amputee mobile. It was not easy, but this now allows us to produce cheap fixes where they are necessary. This is a real gift to war-torn areas and, until we learn how to regrow limbs, it will have to do.
We all have friends who are trapped in their bodies. It is only now that we are seeing the light at the end of that tunnel. In twenty years, I expect that any such damage will simply be restored by regrowth. Thus every one of today’s victims has real hope.
U.S. soldiers in Afghanistan develop simple prosthetic leg using local resources
By Darren Quick
20:23 July 17, 2011
Maj. Brian Egloff puts a sock on an 8-year-old Afghan boy to aid the fitting of the prototype prosthetic leg (Image: Pfc. Justin Young)
While we've covered many developments in the field of prosthetics, such high-tech advances are beyond the reach of those in the developing world where the rates of amputation due to war are highest. Now U.S. Army soldiers stationed in Afghanistan have developed a simple prototype prosthetic leg that can be constructed using local resources to allow the victims of improvised explosive devices (IEDs) and land mines to get back on their feet quickly and cheaply.
Although he says he could have contacted a charity in the U.S. to get high-quality prosthetic limbs for a handful of victims near Forward Operating Base Pasab, Afghanistan, Dr. (Maj.) Brian Egloff, brigade surgeon, Headquarters and Headquarters Company, 3rd Brigade Combat Team, said it would only have been a temporary solution and so he and his colleagues set about finding an enduring design for a prosthetic leg.
The result was a prototype consisting of a simple cast attached to a metal rod with a flat hooked foot. The cast can be fitted in as little as a day and can be recast to accommodate the growth of the wearer. The metal rod and flat hook can be easily reproduced and allow the patient to walk more naturally.
An eight-year-old boy who lost both legs after stepping on a land mine and needed to be carried around on his father's back received the first prototype leg on June 26, 2011.
"It helped knowing that the leg was for a small 8-year-old boy who was happy all the time - despite his situation," said Warrant Officer Brian Terry, 710th Brigade Support Battalion, 3rd BCT, who constructed the prototype.
"This patient and people like him have no mobility whatsoever," added Egloff. "It's all about increasing mobility and allowing them to live more productive lives."
Terry said the next step is for the Afghan doctors in this region to make their own prosthetics and to be trained in how to instruct victims on the use of the leg.
Source: U.S. Army
Following this analogy, do we really believe that the Icelandic super plume is driving the separation along the Mid Atlantic ridge? There is plenty of circumstantial evidence, not least being the position of the plume and the effective end of the ridge in Iceland.
Plate movement is thoroughly documented and happens to be the one fact of modern geology that we can feel comfortable about. Plumes, on the other hand, show up in the middle of continents and the like and are not commonly associated with ruptures. Thus I am more inclined to think that rupture-based plumes are in fact the result of the rupture. This is certainly the case with Iceland, unless someone can show me a chain of volcanic rock all the way to the continental edges.
You get the point. The plume, on the face of present evidence elsewhere, is surely an effect and simply does not show the needed energy to explain the proposed speed of movement.
A better model for plate acceleration is to understand instead that the plate is drawn by the gravity driven subduction process. This can accelerate as a result of a lessening of the mass of the overlying continent through erosion over eons of time causing a persistent uplift. At some point the motion accelerates and a large part of the plate subducts in order to balance the mass equation.
In our own case the North American plate is tilting toward the east through the migration of mass eastward. At some point the leading edge of the Pacific plate becomes dynamically unstable and shifts eastward to balance those stresses.
I think you can see where we can go with this, and it is as satisfying and as convincing as the present proposal. We have to stop thinking in terms of driving forces alone.
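The gravity-driven subduction mechanism invoked above is the standard "slab pull" force, and its rough magnitude is easy to estimate. A sketch with generic textbook-style numbers (the density contrast, slab thickness, and depth extent are my assumptions, not figures from the article):

```python
# Order-of-magnitude slab-pull estimate: the negative buoyancy of a
# cold subducted slab, expressed per meter of trench length.
g = 9.8                    # m/s^2
delta_rho = 70             # kg/m^3, assumed slab-mantle density contrast
slab_thickness = 100e3     # m, assumed slab thickness
slab_length = 600e3        # m, assumed depth extent of the slab

# Force per meter of trench = density contrast * g * cross-section area
slab_pull = delta_rho * g * slab_thickness * slab_length
print(f"slab pull: ~{slab_pull:.1e} N per meter of trench")
```

The result, on the order of 10^13 newtons per meter of trench, is why slab pull is generally treated as the dominant plate-driving force against which any plume-push contribution has to be compared.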
Scripps researchers discover new force driving Earth's tectonic plates
by Staff Writers
Reconstruction of the Indo-Atlantic Ocean at 63 million years, during the time of the superfast motion of India, which Scripps scientists attribute to the force of the Reunion plume head. The arrows show the relative convergence rate of Africa (black arrows) and India (dark blue) relative to Eurasia before, during and after (from left to right) the period of maximum plume head force. The jagged red and brown lines northeast of India show two possible positions of the trench (the subduction zone) between India and Eurasia, depending on whether the India-Eurasia collision occurred at 52 million years or 43 million years. Credit: Scripps Institution of Oceanography, UC San Diego
Bringing fresh insight into long-standing debates about how powerful geological forces shape the planet, from earthquake ruptures to mountain formations, scientists at Scripps Institution of Oceanography at UC San Diego have identified a new mechanism driving Earth's massive tectonic plates.
Scientists who study tectonic motions have known for decades that the ongoing "pull" and "push" movements of the plates are responsible for sculpting continental features around the planet. Volcanoes, for example, are generally located at areas where plates are moving apart or coming together.
Scripps scientists Steve Cande and Dave Stegman have now discovered a new force that drives plate tectonics: Plumes of hot magma pushing up from Earth's deep interior. Their research is published in the July 7 issue of the journal Nature.
Using analytical methods to track plate motions through Earth's history, Cande and Stegman's research provides evidence that such mantle plume "hot spots," which can last for tens of millions of years and are active today at locations such as Iceland and the Galapagos, may work as an additional tectonic driver, along with push-pull forces.
Their new results describe a clear connection between the arrival of a powerful mantle plume head around 70 million years ago and the rapid motion of the Indian plate that was pushed as a consequence of overlying the plume's location.
The arrival of the plume also created immense formations of volcanic rock now called the "Deccan flood basalts" in western India, which erupted just prior to the mass extinction of dinosaurs.
The Indian continent has since drifted north and collided with Asia, but the original location of the plume's arrival has remained volcanically active to this day, most recently having formed Reunion island near Madagascar.
The team also recognized that this "plume-push" force acted on other tectonic plates, and pushed on Africa as well but in the opposite direction.
"Prior to the plume's arrival, the African plate was slowly drifting but then stops altogether, at the same time the Indian speeds up," explains Stegman, an assistant professor of geophysics in Scripps' Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics.
"It became clear the motion of the Indian and African plates were synchronized and the Reunion hotspot was the common link."
After the force of the plume had waned, the African plate's motion gradually returned to its previous speed while India slowed down.
"There is a dramatic slow down in the northwards motion of the Indian plate around 50 million years ago that has long been attributed to the initial collision of India with the Eurasian plate," said Cande, a professor of marine geophysics in the Geosciences Research Division at Scripps.
"An implication of our study is that the slow down might just reflect the waning of the mantle plume-the actual collision might have occurred a little later."
Friday, July 29, 2011
They may well be bases, but the primacy of the DNA bases can hardly be in dispute. And it makes perfect sense that other bases exist for special applications.
In the meantime we are slowly winkling out the majesty of the DNA ruling our cells and body.
It has been a remarkable passage to observe over the decades.
UNC researchers identify seventh and eighth bases of DNA
by Staff Writers
The finding could have important implications for stem cell research, as it could provide researchers with new tools to erase previous methylation patterns to reprogram adult cells. It could also inform cancer research, as it could give scientists the opportunity to reactivate tumor suppressor genes that had been silenced by DNA methylation.
For decades, scientists have known that DNA consists of four basic units - adenine, guanine, thymine and cytosine. Those four bases have been taught in science textbooks and have formed the basis of the growing knowledge regarding how genes code for life. Yet in recent history, scientists have expanded that list from four to six.
Now, with a finding published online in the July 21, 2011, issue of the journal Science, researchers from the UNC School of Medicine have discovered the seventh and eighth bases of DNA.
These last two bases - called 5-formylcytosine and 5-carboxylcytosine - are actually versions of cytosine that have been modified by Tet proteins, molecular entities thought to play a role in DNA demethylation and stem cell reprogramming.
Thus, the discovery could advance stem cell research by giving a glimpse into the DNA changes - such as the removal of chemical groups through demethylation - that could reprogram adult cells to make them act like stem cells.
"Before we can grasp the magnitude of this discovery, we have to figure out the function of these new bases," said senior study author Yi Zhang, PhD, Kenan Distinguished Professor of biochemistry and biophysics at UNC and an Investigator of the Howard Hughes Medical Institute.
"Because these bases represent an intermediate state in the demethylation process, they could be important for cell fate reprogramming and cancer, both of which involve DNA demethylation." Zhang is also a member of the UNC Lineberger Comprehensive Cancer Center.
Holden Thorp, UNC chancellor and Kenan Professor of Chemistry in the College of Arts and Sciences, said Zhang's discovery was a significant development that holds promise for a variety of areas.
"Research such as this, at the intersection of chemistry, biology, physics and medicine, shows the value of scientists like Yi Zhang who tackle both practical problems and fundamental scientific mysteries," said Thorp.
"Having devoted a large part of my research career to understanding the fundamental processes in nucleobase and nucleotide oxidation, I'm particularly excited to see this signature result at Carolina. The concept of sequential nucleobase oxidation as an epigenetic signal is tantalizing."
Much is known about the "fifth base," 5-methylcytosine, which arises when a chemical tag or methyl group is tacked onto a cytosine. This methylation is associated with gene silencing, as it causes the DNA's double helix to fold even tighter upon itself.
Last year, Zhang's group reported that Tet proteins can convert 5-methylC (the fifth base) to 5-hydroxymethylC (the sixth base) in the first of a four-step reaction leading back to bare-boned cytosine. But try as they might, the researchers could not continue the reaction on to the seventh and eighth bases, called 5-formylC and 5-carboxyC.
The problem, they eventually found, was not that Tet wasn't taking the second and third steps; it was that their experimental assay wasn't sensitive enough to detect them.
Once they realized the limitations of the assay, they redesigned it and were in fact able to detect the two newest bases of DNA. The researchers then examined embryonic stem cells as well as mouse organs and found that both bases can be detected in genomic DNA.
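The successive Tet oxidation steps described above can be sketched as a simple successor map (the base names are from the article; the data structure itself is only an illustrative sketch, not a model of the chemistry):

```python
# Tet-driven cytosine modification chain: each modified base maps to
# the next oxidation product in the four-step pathway back to cytosine.
TET_OXIDATION = {
    "5-methylC":        "5-hydroxymethylC",  # 5th base -> 6th base
    "5-hydroxymethylC": "5-formylC",         # 6th base -> 7th base
    "5-formylC":        "5-carboxyC",        # 7th base -> 8th base
}

def oxidation_path(base):
    """Follow successive Tet oxidation steps from a starting base."""
    path = [base]
    while path[-1] in TET_OXIDATION:
        path.append(TET_OXIDATION[path[-1]])
    return path

print(" -> ".join(oxidation_path("5-methylC")))
# 5-methylC -> 5-hydroxymethylC -> 5-formylC -> 5-carboxyC
```

The earlier assay could detect only the first step of this chain; the redesigned, more sensitive assay revealed the last two products, the seventh and eighth bases.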
The research was funded by the Howard Hughes Medical Institute and the National Institutes of Health. Study co-authors from UNC include Shinsuke Ito, PhD; Li Shen, PhD; Susan C. Wu, PhD; Leonard B. Collins and James A. Swenberg, PhD.
We have already posted on the huge improvement in general area efficiency to be attained through ‘schooling’ propeller-type wind turbines. Here they show that vertical-axis turbines are able to improve energy density tenfold in quite low build-outs.
It is certainly time to see if we can winkle out more energy from established operations and these are two good approaches. As we replace old hardware in prime locations this all works.
New projects are often driven by financing expediency and little engineering finesse. Of course, these approaches are in their infancy and we do not have a manual as yet.
Wind-turbine placement produces tenfold power increase
by Kathy Svitil
These are vertical-axis wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) facility in northern Los Angeles County. Credit: John Dabiri/Caltech.
The power output of windfarms can be increased by an order of magnitude-at least tenfold-simply by optimizing the placement of turbines on a given plot of land, say researchers at the California Institute of Technology (Caltech) who have been conducting a unique field study at an experimental two-acre wind farm in northern Los Angeles County.
A paper describing the findings-the results of field tests conducted by John Dabiri, Caltech professor of aeronautics and bioengineering, and colleagues during the summer of 2010-appears in the July issue of the Journal of Renewable and Sustainable Energy.
Dabiri's experimental farm, known as the Field Laboratory for Optimized Wind Energy (FLOWE), houses 24 10-meter-tall, 1.2-meter-wide vertical-axis wind turbines (VAWTs)-turbines that have vertical rotors and look like eggbeaters sticking out of the ground. Half a dozen turbines were used in the 2010 field tests.
Despite improvements in the design of wind turbines that have increased their efficiency, wind farms are rather inefficient, Dabiri notes. Modern farms generally employ horizontal-axis wind turbines (HAWTs) - the standard propeller-like monoliths that you might see slowly turning, all in the same direction, in the hills of Tehachapi Pass, north of Los Angeles.
In such farms, the individual turbines have to be spaced far apart-not just far enough that their giant blades don't touch. With this type of design, the wake generated by one turbine can interfere aerodynamically with neighboring turbines, with the result that "much of the wind energy that enters a wind farm is never tapped," says Dabiri.
He compares modern farms to "sloppy eaters," wasting not just real estate (and thus lowering the power output of a given plot of land) but much of the energy resources they have available to them.
Designers compensate for the energy loss by making bigger blades and taller towers, to suck up more of the available wind and at heights where gusts are more powerful.
"But this brings other challenges," Dabiri says, such as higher costs, more complex engineering problems, a larger environmental impact. Bigger, taller turbines, after all, mean more noise, more danger to birds and bats, and-for those who don't find the spinning spires visually appealing-an even larger eyesore.
The solution, says Dabiri, is to focus instead on the design of the wind farm itself, to maximize its energy-collecting efficiency at heights closer to the ground. While winds blow far less energetically at, say, 30 feet off the ground than at 100 feet, "the global wind power available 30 feet off the ground is greater than the world's electricity usage, several times over," he says.
That means that enough energy can be obtained with smaller, cheaper, less environmentally intrusive turbines-as long as they're the right turbines, arranged in the right way.
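Dabiri's point about near-ground wind can be framed with the standard wind power density formula, P/A = ½ρv³ per square meter of swept area. A sketch (the two wind speeds are assumed round numbers for illustration, not measurements from the study):

```python
# Wind power density: watts available per square meter of swept area.
rho = 1.2                      # kg/m^3, air density near sea level

def power_density(v):
    """P/A = 0.5 * rho * v^3, with v in m/s."""
    return 0.5 * rho * v ** 3

near_ground = power_density(4.0)    # assumed wind speed near 30 ft
hub_height = power_density(8.0)     # assumed wind speed at tall hub height

print(f"near ground: {near_ground:.0f} W/m^2")
print(f"hub height:  {hub_height:.0f} W/m^2")
```

Because power scales with the cube of wind speed, halving the speed cuts the available power per square meter eightfold, which is exactly why near-ground wind must be harvested more efficiently, and over more of the farm's footprint, to compete.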
VAWTs are ideal, Dabiri says, because they can be positioned very close to one another. This lets them capture nearly all of the energy of the blowing wind and even wind energy above the farm.
Having every turbine turn in the opposite direction of its neighbors, the researchers found, also increases their efficiency, perhaps because the opposing spins decrease the drag on each turbine, allowing it to spin faster (Dabiri got the idea for using this type of constructive interference from his studies of schooling fish).
In the summer 2010 field tests, Dabiri and his colleagues measured the rotational speed and power generated by each of the six turbines when placed in a number of different configurations. One turbine was kept in a fixed position for every configuration; the others were on portable footings that allowed them to be shifted around.
The tests showed that an arrangement in which all of the turbines in an array were spaced four turbine diameters apart (roughly 5 meters, or approximately 16 feet) completely eliminated the aerodynamic interference between neighboring turbines.
By comparison, removing the aerodynamic interference between propeller-style wind turbines would require spacing them about 20 diameters apart, which means a distance of more than one mile between the largest wind turbines now in use.
The six VAWTs generated from 21 to 47 watts of power per square meter of land area; a comparably sized HAWT farm generates just 2 to 3 watts per square meter.
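The spacing figures above translate directly into land area per turbine, which is where the order-of-magnitude difference in per-area output comes from. A rough sketch (the 1.2 m VAWT width and the 4- and 20-diameter spacings are from the article; the 100 m HAWT rotor is an assumed round figure):

```python
# Land footprint per turbine on a square grid with a pitch of
# (rotor diameter) x (spacing in diameters).
def land_per_turbine(diameter_m, spacing_diameters):
    side = diameter_m * spacing_diameters   # grid pitch, m
    return side * side                      # m^2 of land per turbine

vawt_land = land_per_turbine(1.2, 4)    # FLOWE turbines: 1.2 m wide, 4D apart
hawt_land = land_per_turbine(100, 20)   # assumed 100 m rotor, 20D apart

print(f"VAWT: {vawt_land:,.0f} m^2 per turbine")
print(f"HAWT: {hawt_land:,.0f} m^2 per turbine")
```

Even though each small VAWT produces far less power than a large HAWT, packing them orders of magnitude more densely is what lets the array deliver more watts per square meter of land.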
"Dabiri's bioinspired engineering research is challenging the status quo in wind-energy technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science and the Theodore von Karman Professor of Aeronautics and professor of mechanical engineering. "This exemplifies how Caltech engineers' innovative approaches are tackling our society's greatest problems."
"We're on the right track, but this is by no means 'mission accomplished,'" Dabiri says. "The next steps are to scale up the field demonstration and to improve upon the off-the-shelf wind-turbine designs used for the pilot study." Still, he says, "I think these results are a compelling call for further research on alternatives to the wind-energy status quo."
This summer, Dabiri and colleagues are studying a larger array of 18 VAWTs to follow up last year's field study. Video and images of the field site can be found here.
Bold new approach to wind 'farm' design may provide efficiency gains
by Staff Writers
Research at the Caltech Field Laboratory for Optimized Wind Energy, directed by John Dabiri, suggests that arrays of closely spaced vertical-axis wind turbines produce significantly more power than conventional wind farms with propeller-style turbines. Credit: John Dabiri, Caltech
Conventional wisdom suggests that because we're approaching the theoretical limit on individual wind turbine efficiency, wind energy is now a mature technology.
But California Institute of Technology researchers revisited some of the fundamental assumptions that guided the wind industry for the past 30 years, and now believe that a new approach to wind farm design-one that places wind turbines close together instead of far apart-may provide significant efficiency gains.
This challenges the school of thought that the only remaining advances to come are in developing larger turbines, putting them offshore, and lobbying for government policies favorable to the further penetration of wind power in energy markets.
"What has been overlooked to date is that, notwithstanding the tremendous advances in wind turbine technology, wind 'farms' are still rather inefficient when taken as a whole," explains John Dabiri, professor of Engineering and Applied Science, and director of the Center for Bioinspired Engineering at Caltech.
"Because conventional, propeller-style wind turbines must be spaced far apart to avoid interfering with one another aerodynamically, much of the wind energy that enters a wind farm is never tapped. In effect, modern wind farms are the equivalent of 'sloppy eaters.' To compensate, they're built taller and larger to access better winds."
But this increase in height and size leads to frequently cited issues such as increased cost and difficulty of engineering and maintaining the larger structures, other visual, acoustic, and radar signatures problems, as well as more bat and bird impacts.
Dabiri is focusing on a more efficient form of wind 'farm' design, relegating individual wind turbine efficiency to the back seat. He describes this new design in the American Institute of Physics' Journal of Renewable and Sustainable Energy.
"The available wind energy at 30 feet is much less abundant than that found at the heights of modern wind turbines, but if near-ground wind can be harnessed more efficiently there's no need to access the higher altitude winds," he says.
"The global wind power available at 30 feet exceeds global electricity usage several times over. The challenge? Capturing that power."
The Caltech design targets that power by relying on vertical-axis wind turbines (VAWTs) in arrangements that place the turbines much closer together than is possible with horizontal-axis propeller-style turbines.
VAWTs provide several immediate benefits, according to Dabiri, including effective operation in turbulent winds like those occurring near the ground, a simple design (no gearbox or yaw drive) that can lower costs of operation and maintenance, and a lower profile that reduces environmental impacts.
Two of the primary reasons VAWTs aren't more prominently used today are that they tend to be less efficient individually, and that the previous generation of VAWTs suffered from structural failures related to fatigue.
"With respect to efficiency issues, our approach doesn't rely on high individual turbine efficiency as much as close turbine spacing. As far as failures, advances in materials and in predicting aerodynamic loads have led to new designs that are better equipped to withstand fatigue loads," says Dabiri.
Field data collected by the researchers last summer suggests that they're on the right track, but this is by no means 'mission accomplished.' The next steps involve scaling up their field demonstration and improving upon off-the-shelf wind turbine designs used for the pilot study.
Ultimately, the goal of this research is to reduce the cost of wind energy. "Our results are a compelling call for further research on alternatives to the wind energy status quo," Dabiri notes.
"Since the basic unit of power generation in this approach is smaller, the scaling of the physical forces involved predicts that turbines in our wind farms can be built using less expensive materials, manufacturing processes, and maintenance than is possible with current wind turbines."
A parallel effort is underway by the researchers to demonstrate a proof-of-concept of this aspect as well.
This means that in the next five years we will have a working protocol that allows the damaged heart to stabilize and prevents further deterioration. This is important, as most victims of heart attacks have lost far more heart function than can be thought safe. Successive attacks worsen the situation until the only escape is an actual transplant.
We are rapidly approaching the day when a new heart grown from your own cells can be transplanted, but we are not there yet. This at least stabilizes the disease, although it does not eliminate the causes of the actual heart attacks.
Folks with moderate heart damage and a reduced risk of a heart attack will be best placed to take advantage of all this.
New Gene Therapy To Reverse Heart Failure Ready For Clinical Trials
A promising gene therapy developed, in part, at Thomas Jefferson University’s Center for Translational Medicine to prevent and reverse congestive heart failure is on the verge of clinical trials, after years of proving itself highly effective in the lab and a large animal study.
Reporting in the online July 20 issue of Science Translational Medicine, cardiology researchers have demonstrated feasibility, the long-term therapeutic effectiveness and the safety of S100A1 gene therapy in a large animal model of heart failure under conditions approximating a clinical setting.
“This is the last step you have to take to finish a very long line of research,” said Patrick Most, M.D., adjunct assistant professor of medicine at Thomas Jefferson University, and lead author of the study who now heads the Institute for Molecular and Translational Cardiology at the University of Heidelberg, Germany. “The reversal of cardiac dysfunction in this pre-clinical heart failure model in the pig by restoring S100A1 levels in practically the same setting as in a patient is remarkable and will pave the way for a clinical trial.”
The therapy works by raising diminished levels of the protein S100A1, a calcium-sensing protein in the diseased heart muscle cell, to normal. Previous research suggests this will protect against the development of heart failure, particularly in people who have had a heart attack.
According to Dr. Most, “the therapeutic profile of S100A1 is a unique one as it targets and reverses the underlying causes of heart failure: progressive deterioration of contractile performance, electrical instability and energy deprivation.”
About six million people in the United States have heart failure, and it results in about 300,000 deaths each year.
Work on S100A1 started at the bench 15 years ago with Dr. Most and Walter J. Koch, Ph.D., now director of the Center for Translational Medicine in the Department of Medicine at Jefferson Medical College of Thomas Jefferson University, who, with his team, has moved the research closer to the bedside ever since.
Five years ago, the researchers showed that increasing levels of the protein above normal helped protect mouse hearts from further damage after simulated heart attacks. The hearts worked better and had stronger contractile force.
“We have pursued a completely different path over the years,” said Dr. Most. “We have set up a translational pipeline and don’t stick to just one model system. We took it step by step, and did whatever was necessary to go to the next level. We realized early on that a mouse is not a man. You need to design target-tailored translational research strategies and work in human-relevant model systems to take molecular discoveries from bench to bedside.
“With such a translational roadmap at hand, we are in the unique position to accelerate future development of molecular therapies.”
In their latest study in Science Translational Medicine, Drs. Koch and Most and their team of researchers used a pig model—this type more closely resembles human physiology, function and anatomy—to determine the effectiveness and safety of the S100A1 gene therapy. Researchers were also able to administer it with certified catheters and delivery routes, just as a human patient would receive it. “We’ve shown its effectiveness in the lab. It worked in mice and rats, then pigs and now it’s ready for humans,” Dr Most adds.
Heart failure was induced in the pigs, which at 14 weeks showed significantly decreased S100A1 levels. Treatment with the gene therapy, however, prevented and reversed the development of heart failure by restoring S100A1 protein levels to normal or above.
“This therapy gets to the core of the disease,” said Dr. Koch, who received the “Outstanding Investigator Award” for 2011 by the International Society for Heart Research for his work in heart failure gene therapy. “They are not just beta blockers or ancillary drugs, which only block the damage. This therapy makes the heart beat stronger and overcomes the damage from previous heart attacks. It’s the next great thing in heart failure.”
This is the final set of preclinical data needed to apply for investigational new drug status with the U.S. Food and Drug Administration and advance to a phase I clinical trial.
Researchers say one of the next steps is to find industry or private partners to help fund the work, as well as recruit eligible patients to enroll in the clinical trial.
“With National Institutes of Health money in jeopardy, this work could be translated faster with funds from other sources,” said Dr. Koch. “It could fund both ongoing research with other targets using our translational roadmap and take this particular target for heart failure into humans.”
In the meantime, recall that a researcher at MIT showed us that gold nanoparticles sized around 20 nm will preferentially accumulate in cancer cells taking advantage of specific characteristics. This was published a year and a half ago.
Four hours after injection, treatment with radio waves caused a temperature rise of several degrees, killing the cells. This amounts to a surgical resolution. The mice were cured.
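The energy involved in a rise of several degrees is modest, which is part of why this approach is attractive. The sketch below is back-of-envelope arithmetic under assumed figures (a 1 g tumour mass, tissue treated as water); it is not the published protocol, and it ignores heat carried away by blood perfusion, which makes the real required dose higher.

```python
# Q = m * c * dT, with tissue approximated as water (c ~ 4186 J/(kg.K)).
def joules_to_heat(mass_kg, delta_t_c, specific_heat=4186.0):
    """Energy in joules to raise a mass by delta_t_c, ignoring losses."""
    return mass_kg * specific_heat * delta_t_c

tumour_mass_kg = 0.001            # assumed: ~1 g of targeted tissue
energy_j = joules_to_heat(tumour_mass_kg, 5.0)   # a 5 degree C rise
print(f"~{energy_j:.0f} J")       # about 21 J; at 1 W absorbed, ~21 s
```

The takeaway is that only tens of joules need to reach the particle-loaded cells, so the selectivity of the gold uptake, not raw power, is the hard part.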
All I know is that were I diagnosed with cancer tomorrow, I would immediately source the appropriate dose and surgically remove any cancer cells.
Whatever the case, gold nanoparticles are obviously around, and fairly modest testing equipment is also extant. I am not suggesting this is a do-it-yourself project, but any reasonably intelligent medically trained person should be able to apply the protocol.
Care with the radio wave dose easily avoids any possible harm. In fact, an initial pass with the anticipated radio dose before actual injection allows the patient to inform the operator of any serious anomalies and also confirms effectiveness.
The real promise of gold nanoparticle therapy, provided the cell modification aspect holds up for all cancers, is that this is a direct, independent surgical method against which the body has no defense and which in fact eliminates all cancers. What is more important is that nano gold therapy is already accepted as a treatment medium and cannot be blocked easily.
by Staff Writers
Scientists at the University of Southampton have developed smart nanomaterials which can disrupt the blood supply to cancerous tumours.
The team of researchers, led by Physics lecturer Dr Antonios Kanaras, showed that a small dose of gold nanoparticles can activate or inhibit genes that are involved in angiogenesis - a complex process responsible for the supply of oxygen and nutrients to most types of cancer.
"The peptide-functionalised gold nanoparticles that we synthesised are very effective in the deliberate activation or inhibition of angiogenic genes," said Dr Kanaras.
The team went a step further to control the degree of damage to the endothelial cells using laser illumination. Endothelial cells construct the interior of blood vessels and play a pivotal role in angiogenesis.
The researchers also found that the gold particles could be used as effective tools in cellular nanosurgery.
Dr Kanaras adds: "We have found that gold nanoparticles can have a dual role in cellular manipulation. Applying laser irradiation, we can use the nanoparticles either to destroy endothelial cells, as a measure to cut the blood supply to tumours, or to deliberately open up the cellular membrane in order to deliver a drug efficiently."
The researchers have published two related papers (Nano Lett. 2011, 11 (3), 1358; Small 2011, 7 (3), 388), with another submitted for publication and four more planned throughout this year. Their major target is to develop a complete nanotechnology toolkit to manipulate angiogenesis.
Thursday, July 28, 2011
Every once in a while I come across an unusual coal miner’s story and this one is quite thought provoking.
Most such tales involve some out-of-place artifact or other that is not particularly notable in itself. The difficulty is with the location. Coal is assumed to be ancient, as it is. Stratigraphic work pretty well locks in the age of the coal itself. Thus artifacts found in the coal were either placed there when the coal was laid down or emplaced much more recently.
The first proposition is unreasonable for a couple of reasons:
1. The coal-making process is wet and destructive to any material capable of weathering. Mere weathering would have eliminated most such artifacts, and the wall cited in this report in particular.
2. The forming of coal involves pressure and deformation. This would be immediately apparent in any artifact.
This leaves the second proposition. The wall is a recent manufacture from a past that certainly includes the 200,000 years of human emergence. Our postings have made it clear that there was a human civilization equivalent to ours that abandoned Earth around 15,000 years ago. Thus there is a 25,000-year window, between 40,000 BC and 15,000 BC, in which all sorts of artifacts could have been left in coal seams.
We ourselves are coming to the end of the utilization of coal seams for fuel after about two centuries of exploitation.
The other thing that needs to be noted is that a coal seam is an excellent geological holding system if you wish to store something that will never be weathered by stray chemistry. The coal will sponge all that up for you. For that reason, any true ancient artifacts are best searched for in coal.
In addition, 15,000 years is sufficient for the seam itself to squeeze out any voids made by the mining needed to place the artifacts.
In short, if I wished to preserve a structure and artifacts for thousands of years, I would be hard put to do better.
Why a wall was created using twelve-inch cubes is not obvious. Why it was polished to smoothness is also not obvious. Did the blocks actually contain something? This looks like an effective way to dispose of dangerous waste material. That makes plenty of sense.
Disposing of a dangerous irradiated object could be done by placing it into a concrete-filled mold. When it hardened, it could then be polished down to precise dimensions. The value of that is that a simple eyeball test would spot any form of failure.
So my guess is that the observed wall of blocks was made of blocks of disposed hazardous material, placed in a coal seam to be kept out of harm's way.
Recall that any upright citizen who took the effort to swear a statement about facts he and several others observed was certainly telling the truth as he understood it. To make it up in the face of so many other witnesses is not believable. He stood up to create the record.
Awesome or Off-Putting:
286-Million-Year-Old Block Wall in Oklahoma
Awesome or Off-Putting is a weekly delve into cryptozoology, ufology, aliens, medical marvels, scientific wonders, secret societies, government conspiracies, cults, ghosts, EVPs, myths, ancient artifacts, religion, strange facts, odd sightings or just the plain unexplainable.
Most historians would agree that human civilisation began sometime in the 1850s. No exact date has ever been given – and frankly we don’t need one. What we do need is an answer to this – if that’s true, how could there be a 286-million-year-old block wall found deep in an Oklahoma coal mine?
One man claims he’s seen such a wall with his own eyes.
In 1928 Atlas Almon Mathis was a miner hammering away in Oklahoma, deep underground. He was looking for coal – what he found was far more baffling. You see, although it’s commonly accepted that the earth was formed about 4.54 billion years ago, the modern intelligent human didn’t show its face until around 200,000 years ago, according to fossil finds. This is where things get confusing. We’ll let Mathis tell his own story:
“In the year 1928, I, Atlas Almon Mathis, was working in coal mine No. 5, located two miles north of Heavener, Oklahoma. This was a shaft mine, and they told us it was two miles deep. The mine was so deep that they let us down into it on an elevator. . . . They pumped air down to us, it was so deep…
“…the next morning there were several concrete blocks laying in the room. These blocks were 12-inch cubes and were so smooth and polished on the outside that all six sides could serve as mirrors. Yet they were full of gravel, because I chipped one of them open with my pick, and it was plain concrete inside. As I started to timber the room up, it caved in; and I barely escaped. When I came back after the cave-in, a solid wall of these polished blocks was left exposed. About 100 to 150 yards farther down our air core, another miner struck this same wall, or one very similar.”
A weird mirror-wall is a strange enough find by itself – but it gets weirder when you think of how old it’s speculated to be. A website called www.bibliotecapleyades.net explains the oddity:
“The coal in the mine was probably Carboniferous, which would mean the wall was at least 286 million years old.”
And what does such a supposedly old wall mean? Some say it rewrites history – that there’s no way modern humans are only 200,000 years old. Some say humans have technologically developed and then destroyed themselves several times in the world’s history – and that right now we’re just at the tail-end of another cycle. Others believe the commonly accepted fossil record is true as we understand it – that we do have the correct age of our ancestors – but that time travel may come into play.
A webpage called Skybooksusa adds a bit more to the puzzle – and they follow it with some questions of their own:
“According to Mathis, the mining company officers immediately pulled the men out of the mine and forbade them to speak about what they had seen. Mathis said the Wilburton miners also told of finding “a solid block of silver in the shape of a barrel… with the prints of the staves on it,” in an area of coal dating between 280 and 320 million years ago. What advanced civilization built this wall?… Why was the truth, as in so many of these cases, protected and hidden?… What is the real truth about time travelers, modern humans, and modern technology in our past?”
So what is the truth? We don’t know. But the first thing we’d like to do is go to Oklahoma Coal Mine #5 and check things out. If the wall has actually lasted for all these millions of years, it probably hasn’t withered away since 1928.