Thursday, November 15, 2018

Qanon Placeholder Whitaker



 
 
 
What we have here is a long working list of ongoing investigations that naturally trigger sealed indictments.  The tactical operation we have just witnessed is having a placeholder step in as acting AG who is completely familiar with these files and has the authority to proceed without political interference.  I do believe this was the plan from the beginning and that Sessions is very much part of it.
 
In this manner we have a window in which those indictments can be unsealed and those reports all released and/or declassified.  All roadblocks have been sidestepped and the DEEP STATE knows this.
 
At the same time we have secured both the Senate and the Supreme Court, and now have a window in which it is also possible to position a large number of judges as well.

This is the worst possible outcome for those who have gamed the Justice system.  It is also telling that Trump does not know this man.  He was chosen for his ability and adherence to the rule of law.  A lot of folks have lost all political cover and will not be sleeping well.
 
 
Q !!mG7VJxZNCI No.464

FAKE NEWS > [optics] 'FALSE' majority > BLUE WAVE
BLUE WAVE [optics] > No Voter Irregularities > Nothing to See Here
FAKE NEWS > Voter Irregularities Challenge > CONSPIRACY
Q3


2489
Q !!mG7VJxZNCI No.463
[Placeholder - OIG Report & Findings]
[Placeholder - OIG report & FBI, DOJ, & Media Coll]
[Placeholder - OIG report & Exe B_ABCs & Media Coll]
[Placeholder - OIG report & Foreign ASST_D1]
[Placeholder - OIG report & Foreign_Insert(s)_ORec & Info Diss]
[Placeholder - OIG report & FISA Abuse DIR_INDIR_Source_DIS]
[Placeholder - OIG report & Umbrella SPY & Targeting]
[Placeholder - OIG report & OTR_C_]



2480
Q !!mG7VJxZNCI No.454
ThankYou.jpg
 


With Respect, Honor, and Gratitude.
Your sacrifice(s) will never be forgotten.
Thank you and God Bless, Veterans!
Q

THE IMPORTANCE OF THE BATTLE OF BELLEAU WOOD


 
 
Yes, it should be remembered.  It was a difficult fight, not improved by still-naive tactics.  That they won is more a measure of the reality that by early August 1918 a million-man British Army was rolling up the Western Front, spearheaded by the storm-trooper tactics of the four-division-strong, well-equipped and well-trained Canadian Corps.
 
German forces were seriously stretched leading into this strategic counterattack, and that made anything but defensive actions against the Marines unlikely.  However, it certainly served to bolster low French morale, which badly needed the lift.  It was important for that as well.
 
 
The losses were far too high, for the reasons explained in the item.  The cost in hard-to-replace, highly trained soldiers can never be easily made good.  The British Army did exactly the same thing in 1914 and essentially got itself almost wiped out.  It took a long time to rebuild with the real millions it needed.
 
The USA was fortunate that it was ultimately chasing a retreating enemy.  Another year would have demanded at least a million ill-trained soldiers, and that would have been nasty.  In WWII, the Western Allies got to do the same thing after D-Day and avoided facing an intact million-man army of veterans.
 
 

THE IMPORTANCE OF THE BATTLE OF BELLEAU WOOD

DAVID JOHN ULBRICH

JUNE 4, 2018
 
https://warontherocks.com/2018/06/the-importance-of-the-battle-of-belleau-wood/

Every U.S. marine knows the famous quotes from their comrades fighting in 1918 in the Battle of Belleau Wood: “Retreat, hell we just got here!” by Capt. Lloyd Williams, and “C’mon you sons-of-bitches, do you want to live forever?” by Gunnery Sgt. Dan Daly. Every marine proudly claims the “Devil dogs” moniker because of their ferocity in combat. Alongside the Battles of Fallujah, Khe Sanh, Chosin, and Iwo Jima, Belleau Wood occupies a hallowed place in U.S. Marine Corps lore and history. These battles are ingrained in the Marines’ collective consciousness from the first days of boot camp, during ceremonies at birthday balls, on walls in museums, and on pages of publications.


The Battle of Belleau Wood occurred 100 years ago in June 1918 during World War I. The battlefield lies about five miles west of the town of Château-Thierry, barely fifty miles northeast of Paris, France. Looking at the strategic context in early 1918, Belleau Wood was only one small piece of a major campaign that saw the American forces help the French and British armies stem the tide of the Deutsches Heer’s spring offensive. In March, the Germans launched this massive attack along the Western Front in France because a peace treaty with the new Bolshevik government in Russia had freed up German units deployed on the Eastern Front. The German leadership hoped the influx of 50 divisions could overwhelm the Allied forces in France, bringing the war to an end before millions of Americans could cross the Atlantic and reinforce France and Britain. The German offensive made significant gains for the first few weeks but began to falter by May during the Aisne Offensive. This was when American units like the 2nd Division and its 4th Marine Brigade joined the fray to help stop the Germans at the Battle of Belleau Wood. The marines remained in contact with the enemy for almost all of June.


The fighting around Belleau Wood pitted units from five German divisions against the U.S. Army’s 2nd Division of the American Expeditionary Forces, which was subdivided into the Army’s 3rd Infantry Brigade and the 9,500-man-strong 4th Marine Brigade. This unit included the 5th Regiment led by Col. Wendell Neville and the 6th Regiment led by Col. Albert Catlin. Each regiment comprised three rifle battalions of 800 men each and a machine gun company. The 2nd Division also contained the 2nd Field Artillery Brigade and other organic units like the 2nd Regiment of Engineers.





The area of operation included a forested area (Belleau Wood proper) on high ground running approximately one mile north to south and between one-quarter and one-half mile east to west. To the west of the wood lay Hill 142 under German control. A wheat field lay to the southeast of the wood. The 60 buildings in the village of Bouresches sat to the north across 800 yards of wheat. By June 4, more than 2,000 German soldiers with at least 30 machine guns had ensconced themselves in Belleau Wood, and another 100 Germans with at least six machine guns held Bouresches. German machine gun fire from the wood could sweep much of the wheat field. Looking to the north and east from their lines of departure, the marines faced two difficult obstacles: either advance from tree to tree in close quarter fighting or make a perilous march across the open field of green wheat that rose barely above knee-level.


In the first few days of June, the 4th Marine Brigade dug into a defensive line just to the southwest of the wheat field and Belleau Wood. The battalions in the 5th Marine Regiment established themselves on the left, and those in the 6th Marine Regiment on the right. Retreating French soldiers warned them of coming German attacks, urging the marines to withdraw. It was here that Capt. Williams retorted: “Retreat, hell we just got here!” The Americans stood their ground and forced the Germans to halt their advance and withdraw to Belleau Wood and Bouresches. The marines then prepared their own plans to assault those German positions.


To overcome the disadvantages of open ground and concealed Germans, the Americans expected to advance across the open area without concentrated artillery support and to achieve small-arms “fire superiority” as they neared Belleau Wood and Bouresches. The marines embraced the goal of fire superiority because they placed so much emphasis on rifle marksmanship. The tactics coincided with the doctrine of “open warfare” espoused by Gen. John J. Pershing, who commanded the AEF. He expected fast-moving American infantry units to make aggressive attacks against German positions over open ground, overwhelm them, and drive into the interior behind enemy lines. The American tactics ran counter to French doctrine as well as hard-won experience in the trenches, which called for a rolling artillery barrage to soften enemy positions and clear the path for infantry units to follow. Gen. Pershing naively assumed that the AEF could succeed in battle using uniquely American tactics, despite nearly four years of bloody fighting that pointed to the decisive advantages that machine guns and fortified positions afforded defenders against attackers. The marines embraced open warfare, expecting that their highly accurate rifle fire would give them the advantage.


Before dawn on June 6, the marines of the 1st Battalion, 5th Marine Regiment (denoted as 1/5) drove the Germans from Hill 142. This anchored the American line to the Allied units farther to the west. It also allowed the marines to pour fire into Belleau Wood to the east. Next began an uncoordinated American attack that started on the evening of June 6. The 3/5 and the 3/6 hit the center and southern sides of Belleau Wood respectively. However, while the 3/5 marched across the open ground, heavy German machine gun and artillery fire cut them to shreds. Meanwhile, the 3/6 fought their way into the southern edge of the woods before their advance ground to a halt in the face of enemy fire. In all the confusion, the two-time Medal of Honor recipient Gunnery Sgt. Daly exhorted his men: “C’mon you sons-of-bitches, do you want to live forever?” Despite their best efforts, the marines’ marksmanship failed to silence the German guns. By nightfall, both Marine battalions had suffered debilitating casualties.

To the east on the evening of June 6, two smart-looking companies of the 2nd Battalion, 6th Marine Regiment began an orderly advance across the eight hundred yards of wheat toward the enemy positions in Bouresches. This assault was doomed from the start because the Americans did not obtain supporting artillery to provide a rolling barrage. Instead, the 2/6 faced withering German gunfire from the village to the northeast and from Belleau Wood to the northwest. The two Marine companies quickly began taking casualties as they were pinned down without communications with each other or with the battalion’s commanding officer, Maj. Thomas Holcomb. Even so, the surviving marines pushed their way into the village of Bouresches, where they fought house to house and expelled the German defenders.


That first day, June 6, proved to be costly for the 4th Marine Brigade: six officers and 222 enlisted men and noncommissioned officers (NCOs) killed in action, and another 25 and 834 wounded in action respectively. This amounted to more casualties than in the entire history of the Marine Corps to date. On June 8, two days into the battle, Holcomb scribbled a letter to his wife back on the American home front. He described his men’s performance in the wheat field:


The regiment has carried itself with undying glory, but the price was heavy. My battalion did wonderfully. . . There was never anything finer than their advance across a place literally swept with machine gun fire. . . There never was such self-sacrifice, courage, and spirit shown.


Holcomb next gave his wife an inkling, albeit sanitized, of the conditions in Bouresches on June 9. “I am safe and well. I have not even had my shoes off for 10 days, except once for ten minutes. Several days I’ve been without food and my only sleep has been snatched at odd moments,” wrote Holcomb. “The whole brigade put up a most wonderful fight. We have been cited twice by the French authorities.”


After being reinforced by more than 100 soldiers of Company A of the 2nd Regiment of Engineers, the remaining 200 marines in 2/6 dug in and withstood several German infantry assaults on Bouresches before relief arrived a week later. Meanwhile, together with soldiers in the 2nd Regiment of Engineers, the marines of the 1/5, 2/5, 3/5, 1/6, 3/6, and 6th Machine Gun Battalion secured most of Belleau Wood by June 11. They encountered concentrated German small arms, machine gun, and artillery fire, often at point-blank range. Exploding shells from enemy and Allied guns splintered the trees, showering the ground with deadly wood splinters and metal shrapnel. The Germans also used mustard gas shells to try to halt the advance. The adversaries clashed in bitter hand-to-hand combat with knives, rifle butts, bayonets, and trench shovels. As Marine officers and NCOs fell dead or wounded, junior officers and enlisted men took their places. The most determined counterattack came on June 13, when elements of three German divisions attempted to reclaim their old positions. Then the French Army’s artillery finally unleashed a 14-hour-long heavy bombardment that allowed marines in 2/5, 3/5, and 3/6 to dislodge the remaining Germans from the northern end of Belleau Wood on June 26.


After three weeks of intense combat, a report announced the 4th Marine Brigade’s success with the message “Belleau Wood now U.S. Marine Corps entirely.” The French government renamed the wood Bois de la Brigade de Marine in honor of the incredible sacrifices and fierce struggles there. The members of the 4th Marine Brigade were also awarded the French Croix de Guerre.


Although a victory for the Americans, the Battle of Belleau Wood exacted a heavy toll on the 4th Marine Brigade. Of its complement of 9,500 men, the brigade suffered 1,000 killed in action, and 4,000 wounded, gassed, or missing, equaling a 55 percent casualty rate. The supporting 2nd Regiment of Engineers lost another 450 casualties out of its assigned unit strength of 1,700 soldiers. During the three weeks of fighting, Thomas Holcomb’s 2nd Battalion alone suffered a shocking 764 casualties out of a paper strength of 900 marines. On June 6 alone, his unit started across the wheat field with two companies of some 500 marines. After wresting control of Bouresches, only 200 of Holcomb’s men remained able to repel German counterattacks. This represented a 60 percent casualty rate, which matched the rates of earlier battles in World War I. Not to put too grim a face on this high figure, but Holcomb’s career as a future commandant of the Marine Corps may easily have ended in obscurity, and the heroic memories of the wheat field would have been for naught, had the remnants of the 2nd Battalion not held Bouresches. Surviving the rest of World War I, Holcomb continued to rise through the ranks until being named the seventeenth commandant of the Marine Corps in December 1936. He shepherded the Corps through the last years of the Great Depression, managed its mobilization, and directed the Corps’ first two years of the Pacific War. In this time, the Corps expanded from 17,000 marines in 1936 to 385,000 by Holcomb’s retirement in December 1943.


Battlefield success at Belleau Wood merited an immortal place in Marine Corps history and lore. Similar observations can be made about the other famous battles of Fallujah, Khe Sanh, Chosin, and Iwo Jima, all of which should be sobering reminders that victorious ends have often required bloody means.


These points of pride notwithstanding, all the battles left other indelible marks on those marines fighting in them that exceeded celebratory and triumphalist tones. Beyond Marine Corps lore, the Battle of Belleau Wood represented a substantive step in the organization’s maturation from shipboard guard or constabulary forces of the 19th century into the multi-purpose force-in-readiness of the 20th and 21st centuries. This battle and the others later in World War I gave the Marines invaluable experiences of prolonged combined arms operations in modern warfare.


Several future Marine Corps senior leaders saw action at Belleau Wood, including future commandants such as John Lejeune, Clifton Cates, Lemuel Shepherd Jr., Wendell Neville, and Thomas Holcomb, as well as marines who later attained flag rank such as Roy Geiger, Charles Price, Holland Smith, Keller Rockey, and Merwin Silverthorn. In one extraordinary case, Gerald Thomas rose through the ranks from sergeant at Belleau Wood in 1918 to become lieutenant general and assistant commandant of the Marine Corps from 1952 to 1954. These marines became a group of veterans famously known as the “Old Breed” during the decades after World War I ended. The future flag officers gleaned many vital lessons from serving in France, thereby recognizing that the Marine Corps needed effective training in appropriate weapons and tactics, relevant doctrines for those weapons and tactics, planning for operational roles for the Marine Corps in future conflicts, military education of Marine officers in the art of war, suitable force structures to perform particular missions, and reorganization of the U.S. Marine Corps in structures similar to the French General Staff. Consequently, Belleau Wood has maintained not only a legacy as an iconic battle but also as the first of several learning laboratories for those Marine officers who eventually led their Corps to victory in World War II.


CORRECTION: Due to an editing error, an earlier version of this post mistakenly referred to the German army as the Wehrmacht. This term did not come into use until later in the 20th century, and the error has been corrected.


David Ulbrich, Ph.D., is currently director of the M.A. in Military History program at Norwich University. This article draws on materials in his award-winning book Preparing for Victory: Thomas Holcomb and the Making of the Modern Marine Corps, 1936-1943 (Naval Institute Press, 2011) and in Ways of War: American Military History from the Colonial Era to the Twenty-First Century (2nd ed., Routledge, 2017) co-authored with Matthew S. Muehlbauer.

Sous Vide



We all need to know this.  This is mostly about meat, which is fine, particularly as it prevents the heavy water loss that is a problem for most cooking methods.

However, what about vegetables?  I ask this because I grew up with boiled vegetables and leaped gladly to steamed vegetables and some form of frying to deliver flavor.  Welcome to Chinese wok cookery.  It looks like it may be possible to prepare a large batch of vegetables in this manner to break down the fibers without losing a lot of water.  Again, it is then a simple transition to the frying pan to caramelize up some flavor.

I suspect that this is how restaurants deliver superior results themselves.  Now we know.  By the by, the slow cooker could deliver this result set at 130 degrees F.  I do know that a perfect prime rib is at 125 F, though you would still want to sear it before or after.

Give your meat a break today


No matter how you chop, slice, or butcher it, meat can be kind of gross. Blood, flesh, and viscera can kill your appetite, and that’s before you get to the uncertain process of searing it on a grill or roasting it to a crisp. Too little and it’s dangerous; too much and it’s disappointing.
Enter sous vide (“under vacuum”) cooking, a way of injecting scientific precision into the kitchen. Seal meat or fish (or eggs, or, sure, vegetables) in plastic with aromatics and a dab of your chosen fat, then simmer in water at a low temp over a long period of time. You’ll be rewarded with exquisitely tender morsels whose protein fibers have been gently massaged in their tiny jacuzzi to perfection.

Popularized in the early 21st century as another tool in the kits of liquid nitrogen-wielding molecular gastronomists, immersion circulators were once found mostly in rarefied fine-dining kitchens. But the past few years have seen them move into the home. It’s almost impossible to overcook something sous vide, so if you have to go pick up your kid from soccer, no worries: the pork chops will just bubble along at the same temperature as the water itself. Still, it takes much adjusting for some cooks, and others find it overly clinical. Time to dip in—slowly.



By the digits
129°F (54°C): Temperature to sous vide half a pound of flank steak to medium rare
90 minutes: Time it takes to cook said steak
1-2 days: Time it takes for a tough cut of meat, such as beef chuck, to become fork-tender through sous vide.
$180 million: Size of the sous vide market in 2017
$1.3 billion: Projected size of the sous vide market in 2023
$1,220: 2003 price of a Polyscience circulator, lab equipment that early home sous vide fans used
$449: 2009 price of the SousVide Supreme
$199: Price of Anova and Sansaire immersion circulators in late 2013, the “year of the inexpensive water circulator”
$80: Current retail price of the handheld Anova Precision Cooker Nano

Explain it like I’m 5!
How does sous vide work?


Start with this: Why do we cook? When it comes to meat, it’s largely about killing bacteria before they kill us. One way of doing that is making meat very hot—the US government recommends 145°F for steak, and even higher for chicken and ground beef. Sous vide might seem freaky since those numbers are seared into home cooks’ minds. But if you’re willing to wait longer, 130°F—Cooks Illustrated’s recommended minimum temp—will take them out slowly.
Over 140°F, the shrinkage of muscle fibers starts to push out a lot of water, which is why overcooked meat tastes dry. Sous vide lets you stay below that temperature, but still in the sweet spot where bacteria are killed. Finally, and most importantly to gourmands, cooking at 130°F converts tough collagen into gentle gelatin, which is why it can redeem a cheap, tough cut of meat. The obsessives at Cooks Illustrated have a thorough explanation of the process.
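To make the temperature logic concrete, here is a toy sketch in Python that classifies a water-bath temperature using only the thresholds quoted above (the 130°F slow-kill minimum, the 140°F moisture threshold, and the 145°F government steak guidance). It illustrates the reasoning, and is emphatically not food-safety advice.

```python
# Toy classifier for the temperature windows described above.
# Thresholds are the figures quoted in the text; this is a sketch,
# not food-safety guidance.

SLOW_KILL_MIN_F = 130   # Cooks Illustrated's minimum: kills bacteria if held long enough
MOISTURE_LOSS_F = 140   # above this, shrinking muscle fibers push out water
GOV_STEAK_F = 145       # US government recommendation for steak

def sous_vide_zone(temp_f: float) -> str:
    """Classify a water-bath temperature per the article's explanation."""
    if temp_f < SLOW_KILL_MIN_F:
        return "below the slow-kill minimum: bacteria are not reliably eliminated"
    if temp_f <= MOISTURE_LOSS_F:
        return "sweet spot: pathogens die off over time and the meat stays moist"
    return "safe but drying: muscle fibers contract and squeeze out water"

for t in (125, 135, GOV_STEAK_F):
    print(f"{t}F -> {sous_vide_zone(t)}")
```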

The birth of the warm



The promise of sous vide is that it’s convenient and foolproof yet delicious. And it came, almost simultaneously, from large-scale food prep and elite chefs.
As far back as 1806, Frenchman Nicolas Appert was boiling foods in sealed bottles, but sous vide needed the invention of food-grade plastic to work. In the 1950s, Cryovac created a plastic film to extend the shelf life of freshly slaughtered meat. Shortly after, plastic-sealed food found retail and commercial uses, like boil-in-bag curry in 1968 and hospital food in 1969. It went upscale in 1974, when French chef Georges Pralus invented a water-bath method for foie gras, which went from losing 30% to 50% of its weight in cooking to just 5%. (Cryovac soon hired Pralus to teach other chefs.)

Around the same time, Pralus’s fellow countryman Bruno Goussault, the chief food scientist at a cooking agency, was working on the opposite end of the spectrum: using the method to pre-prepare food for massive commercial kitchens. That’s why sous vide was “considered a technique suitable only to chains and factories,” as Amanda Hesser wrote in the New York Times in 2005, just as sous vide was starting to go mainstream. Eventually the twain would meet: Famous chefs rediscovered its qualities, and home cooks its ease.
Breakin’ the law


Cutting-edge chefs have given sous vide a ritzy reputation, but in the 1980s US law only permitted it for processing plants, which chilled its use stateside while Continental chefs were perfecting it. American barriers took a long time to fall: In the early 2000s, restaurants using sous vide techniques were routinely visited by their local health departments. As a relatively new technology, sous vide was not covered in municipal health codes or a restaurant’s HACCP plan. In the absence of regulation, the health department saw violation, and chefs like Momofuku’s David Chang were slapped with fines and forced to destroy thousands of dollars of sous vide-prepared meat.

You are what you eat
Our immersion cookers, ourselves


Now that sous vide has hit a saturation point, the food world is starting to figure out what it all means. Salt, Fat, Acid, Heat host Samin Nosrat recently noted that the cult of sous vide can be a wee bit sexist—after food legend Alice Waters got ruthlessly mocked for slow-cooking an egg over fire in a $250 custom spoon, Nosrat told the New York Times “is it any more practical to sous vide an egg? No. But it’s this amazing thing because a man is doing it.” 
More generally, some home cooks object to the very thing that aficionados embrace: its lab-like sterility. Cooking sous vide, they say, strips away the sensory pleasure of searing a steak, of sweating fragrant onions in a skillet of hot butter. The joy of dipping into a stock pot, adding a pinch of this or that, gives way to plastic bags of food drifting in plastic tubs.
Quotable
“I think sous vide in general is a very controlled, precise way to cook.… [I]t affords the home cook a higher level of accuracy. I look at it like the new slow-cooker.”

Brief history
1985: Bruno Goussault, chef Joel Robuchon, and food critic Henri Gault team up to create a sous vide menu for French railroad SNCF.
1989: Food & Wine declares that sous vide—in precooked, heat-and-serve form from gourmet stores—will be one of the most important food trends in coming years.
1990s: “In America… sous vide still implied factory food,” according to famed chef Thomas Keller.
2005: “Cryovacking” makes Wired magazine’s Jargon List.
2006: WD-50 chef Wylie Dufresne uses an immersion circulator in his Iron Chef America battle against Mario Batali, the first time sous vide was seen on American TV.
2011: Nathan Myhrvold’s Modernist Cuisine: The Art and Science of Cooking is published. The four-volume, 2,438-page cookbook, with a retail price of $625, offered the first extensive explanation of sous vide and its culinary applications.
Watch this!
You don’t need to invest in a set of nice polycarbonate Cambro containers to sous vide your steak. Use an old beer cooler instead! If you want to get a little more MacGyver, here’s how to sous vide with a rice cooker and a Seal-A-Meal.
Fun fact!
Recently, sous vide innovators have encouraged cooks to immerse their food in a tub full of not just water but also plastic balls. Such “sous vide balls” help maintain an even water temperature and prevent evaporation. Don’t want to spring for fancy custom balls? Ping pong balls will work just as well.

Exclusive: Grave doubts over LIGO’s discovery of gravitational waves


First off, General Relativity is completely accepted.  However, Black Holes come out of the derivative equations, which had to make major simplifications for any mathematics to work at all, and I have real knowledge of those equations.  After the relativists largely left the scene, the Black Hole has been used as a magic wand.
 
My own work leads to a deep understanding of event horizons and what happens there.  Matter as we know it is unraveled into two-dimensional photonic energy, which then exits the horizon at light speed, carrying off huge amounts of information.  This allows re-formation to take place, and many particles are created far from the event horizon and a long way outside the gravity well.  It is my contention that such an object should look like a quasar or a blazar.  It also explains how we can even see all this, as the newly re-formed matter radiates over large distances, producing size without a lot of mass.
 
This also looks a lot like the repetitive-calculation problem in computation.  What I mean is that the limitations of computers create a rounding bias that is unavoidable.  This can bite you often in scientific computing, and assuming that the team avoided the problem is naive.  It happens to be my first suspect when a pile of calculations throws up something.  And yes, your signal should be clear enough in the raw data.
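The rounding drift in question is easy to demonstrate. A minimal Python sketch, summing the same ten million floating-point values two ways: each naive addition rounds to the nearest representable double, while compensated summation (math.fsum) tracks the lost bits, so the two answers measurably diverge.

```python
# Minimal demonstration of accumulated floating-point rounding error.
import math

values = [0.1] * 10_000_000   # 0.1 is not exactly representable in binary

naive = 0.0
for v in values:
    naive += v                # every += rounds to the nearest representable double

accurate = math.fsum(values)  # compensated summation tracks the lost low-order bits

print(f"naive sum : {naive:.10f}")
print(f"fsum      : {accurate:.10f}")
print(f"drift     : {abs(naive - accurate):.3e}")  # nonzero: pure rounding bias
```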
 
You wonder why I do not trust Global Warming models?  There is no better place for this problem, and the field does not attract the best and brightest.  Newton does not work there.  Again, too many grad students running amok.



Exclusive: Grave doubts over LIGO’s discovery of gravitational waves

https://www.newscientist.com/article/mg24032022-600-exclusive-grave-doubts-over-ligos-discovery-of-gravitational-waves/

The news that we had finally found ripples in space-time reverberated around the world in 2015. Now it seems they might have been an illusion

LIGO’s detectors (Photo: Enrico Sacchetti)

By Michael Brooks

THERE was never much doubt that we would observe gravitational waves sooner or later. This rhythmic squeezing and stretching of space and time is a natural consequence of one of science’s most well-established theories, Einstein’s general relativity. So when we built a machine capable of observing the waves, it seemed that it would be only a matter of time before a detection.

In point of fact, it took two days. The Laser Interferometer Gravitational-Wave Observatory collaboration, better known as LIGO, switched on its upgraded detectors on 12 September 2015. Within 48 hours, it had made its first detection. It took a few months before the researchers were confident enough in the signal to announce a discovery. Headlines around the world soon heralded one of the greatest scientific breakthroughs of the past century. In 2017, a Nobel prize followed. Five other waves have since been spotted.

Or have they? That’s the question asked by a group of physicists who have done their own analysis of the data. “We believe that LIGO has failed to make a convincing case for the detection of any gravitational wave event,” says Andrew Jackson, the group’s spokesperson. According to them, the breakthrough was nothing of the sort: it was all an illusion.

The big news of that first sighting broke on 11 February 2016. In a press conference, senior members of the collaboration announced that their detectors had picked up the signature of gravitational waves emitted as a pair of distant black holes spun into one another.

The misgivings of Jackson’s group, based at the Niels Bohr Institute in Copenhagen, Denmark, began with this press conference. The researchers were surprised at the confident language with which the discovery was proclaimed and decided to inspect things more closely.

Their claims are not vexatious, nor do they come from ill-informed troublemakers. Although the researchers don’t work on gravitational waves, they have expertise in signal analysis, and experience of working with large data sets such as the cosmic microwave background radiation, the afterglow of the big bang that is spread in a fine pattern across the sky. “These guys are credible scientists,” says Duncan Brown at Syracuse University in New York, a gravitational wave expert who recently left the LIGO collaboration.





The first gravitational wave discovery was announced to the world on 11 February 2016 (Photo: Saul Loeb/AFP/Getty Images)

Gravitational waves are triggered by the collision of massive objects such as black holes or neutron stars. They travel for billions of years, alternately squeezing and stretching the space-time in their path. Spreading out in all directions, they get weaker as they go, but they can be detected on Earth with a sufficiently sensitive instrument.


The LIGO collaboration built two such instruments, the Hanford detector in Washington state and the Livingston detector in Louisiana. A third, independent instrument called Virgo, located near Pisa, Italy, joined the others in 2017. These “interferometers” shoot lasers down two long tunnels, then reflect them back in such a way that the pulses should arrive at the same time. Passing gravitational waves will distort space-time, making one tunnel longer than the other, and throwing off the synchronisation.

By the time the waves wash over Earth, they are extremely weak, and the sort of change in tunnel length we expect is equivalent to about a thousandth of the diameter of a proton. That is far smaller than the disturbances that come from background seismic tremors and even the natural thermal vibrations of the detector hardware. Noise is a huge problem in gravitational wave detections.
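A quick back-of-envelope check of that figure, using rough assumed numbers (a peak strain of about 1e-21, as typically reported for these events, 4-kilometre arms, and a proton diameter of about 1.7e-15 m):

```python
# Back-of-envelope check of the "thousandth of a proton" figure above.
# All three numbers are rough assumptions, not LIGO specifications.
strain = 1e-21              # dimensionless fractional length change of an arm
arm_length_m = 4_000        # approximate interferometer arm length in metres
proton_diameter_m = 1.7e-15

delta_l = strain * arm_length_m
print(f"arm length change: {delta_l:.1e} m")                                # ~4e-18 m
print(f"fraction of a proton diameter: {delta_l / proton_diameter_m:.4f}")  # ~0.002
```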

This is why there are detectors in different places. We know that gravitational waves travel at the speed of light, so any signal is only legitimate if it appears in all the detectors at the right time interval. Subtract that common signal, and what is left is residual noise unique to each detector at any moment, because its seismic vibrations and so on constantly vary.

This is LIGO’s main ploy for extracting a gravitational wave signal from the noise. But when Jackson and his team looked at the data from the first detection, their doubts grew. At first, Jackson printed out graphs of the two raw signals and held them to a window, one on top of the other. He thought there was some correlation between the two. He and his team later got hold of the underlying data the LIGO researchers had published and did a calculation. They checked and checked again. But still they found that the residual noise in the Hanford and Livingston detectors had characteristics in common. “We came to a conclusion that was very disturbing,” says Jackson. “They didn’t separate signal from noise.”

The Danish team wrote up their research and posted it online. After receiving no response from the LIGO collaboration, they submitted it to the Journal of Cosmology and Astroparticle Physics. The journal’s editor, Viatcheslav Mukhanov of the Ludwig Maximilian University in Munich, Germany, is a world-renowned cosmologist. The editorial and advisory boards include top physicists such as Martin Rees from the University of Cambridge, Joanna Dunkley at the University of Oxford and Andrei Linde of Stanford University in California.




 
 
Mukhanov sent the paper for review by suitably qualified experts. Reviewers’ identities are routinely kept secret so they can comment freely on manuscripts, but these were people with a “high reputation”, says Mukhanov. “Nobody was able to point out a concrete mistake in the Danish analysis,” he says. “There is no mistake.”

A storm in a teacup, still? General relativity is one of our most well-verified theories, after all, so there is every reason to think its prediction of gravitational waves is correct. We know LIGO should be sensitive enough to detect them. The instruments are finding the waves at exactly the right rate predicted by theory. So why worry about this noise?
Seek and ye shall find

There’s a simple answer to that question. Physicists have made mistakes before, mistakes that have been exposed only by paying close attention to experimental noise (see “Embarrassing noises”).

The first step to resolving the gravitational wave dispute is to ask how LIGO’s researchers know what to look for. The way they excavate signal from noise is to calculate what a signal should look like, then subtract it from the detected data. If the result looks like pure, residual noise, they mark it as a detection.

Working out what a signal should look like involves solving Einstein’s equations of general relativity, which tell us how gravitational forces deform space-time. Or at least it would if we could do the maths. “We are unable to solve Einstein’s equations exactly for the case of two black holes merging,” says Neil Cornish at Montana State University, a senior figure among LIGO’s data analysts. Instead, the analysts use several methods to approximate the signals they expect to see.

The first, known as the numerical method, involves cutting up space-time into chunks. Instead of solving the equations for a continuous blob of space, you solve them for a limited number of pieces. This is easier but still requires huge computing power, meaning it can’t be done for every possible source of gravitational waves.

A more general approach, known as the analytic method, uses an approximation of Einstein’s equations to produce templates for gravitational wave signals that would be created by various sources, such as black holes with different masses. These take a fraction of a second to compute, but aren’t accurate enough to model the final merger of two black holes. This endgame is modelled in an add-on calculation in which researchers tweak the parameters to fit the results of the initial analytic solution.
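As a rough illustration of what such a template looks like, here is a toy chirp in Python: frequency and amplitude both climb as the binary spirals in. The cubic frequency sweep and the amplitude law are assumptions chosen for shape, not the post-Newtonian waveforms the LIGO analysts actually compute.

```python
# Toy inspiral "chirp" template: rising frequency, rising amplitude.
# A cartoon of the analytic-template idea, not a physical waveform model.
import numpy as np

def toy_chirp(duration=1.0, fs=4096, f_start=35.0, f_end=250.0):
    t = np.arange(0.0, duration, 1.0 / fs)
    # assumed accelerating sweep standing in for the real inspiral evolution
    freq = f_start + (f_end - f_start) * (t / duration) ** 3
    phase = 2 * np.pi * np.cumsum(freq) / fs   # integrate frequency to get phase
    amp = (freq / f_start) ** (2.0 / 3.0)      # grows as the orbit tightens
    return t, amp * np.sin(phase)

t, template = toy_chirp()
print(f"{len(template)} samples, peak amplitude {template.max():.2f}")
```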







To spy gravitational waves, LIGO’s detectors need a quiet environment (Photo: David Ryder/Bloomberg via Getty Images)

This use of precalculated templates is a problem, Cornish concedes. “With a template search, you can only ever find what you’re looking for.” What’s more, there are some templates, such as those representing the waves created by certain types of supernovae explosions, that LIGO researchers can’t create.


That’s why Cornish prefers the third method, which he helped develop. It involves building a model from what he calls wavelets. These are like tiny parts of a wave signal that can be assembled in various ways. You vary the number and shape of the parts until you find a combination that removes the signal from the noise. Because wavelet analysis makes no assumptions about what created the gravitational wave, it can make the most profound discoveries. The wavelets “allow us to detect the unknown unknowns”, says Cornish. The downside is that they tell us nothing about the physical attributes of the detected source. For that, we have to compare the constructed signal against the templates or the numerical analysis.

The challenge with all three methods is that accurately removing the signal from the data requires you to know when to stop. In other words, you have to understand what the residual noise should look like. That is exceedingly tricky. You can forget running the detector in the absence of gravitational waves to get a background reading. The noise changes so much that there is no reliable background. Instead, LIGO relies on characterising the noise in the detectors, so they know what it should look like at any given time. “A lot of what we do is modelling and studying the noise,” says Cornish.
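A first step in that kind of noise modelling is estimating the detector's power spectral density. A minimal sketch using SciPy's Welch estimator on synthetic data: white noise with an invented 60 Hz "mains" line injected, nothing like the real instrument's noise budget.

```python
# Sketch of noise characterisation: estimate the power spectral density
# of a synthetic data stream and locate its dominant spectral line.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs = 4096
samples = fs * 64                                  # 64 seconds of fake data
data = rng.normal(size=samples)                    # white instrument noise
data += 5 * np.sin(2 * np.pi * 60 * np.arange(samples) / fs)  # injected 60 Hz line

# Welch's method: average periodograms over many 4-second segments
freqs, psd = welch(data, fs=fs, nperseg=4 * fs)
print(f"strongest line near {freqs[np.argmax(psd)]:.1f} Hz")  # ~60 Hz
```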


“The paper on the first detection used a data plot that was more ‘illustrative’ than precise”

Jackson is suspicious of LIGO’s noise analysis. One of the problems is that there is no independent check on the collaboration’s results. That wasn’t so with the other standout physics discovery of recent years, the Higgs boson. The particle’s existence was confirmed by analysing multiple, well-controlled particle collisions in two different detectors at CERN near Geneva, Switzerland. Both detector teams kept their results from each other until the analysis was complete.

By contrast, LIGO must work with single, uncontrollable, unrepeatable events. Although there are three detectors, they work almost as one instrument. And despite there being four data-analysis teams, they cannot work entirely separately, because part of the detection process involves checking that all the instruments saw the signal. It creates a situation in which each positive observation is an uncheckable conclusion. Outsiders have to trust that LIGO is doing its job properly.
Purely illustrative

And there are legitimate questions about that trust. New Scientist has learned, for instance, that the collaboration decided to publish data plots that were not derived from actual analysis. The paper on the first detection in Physical Review Letters used a data plot that was more “illustrative” than precise, says Cornish. Some of the results presented in that paper were not found using analysis algorithms, but were done “by eye”.

Brown, part of the LIGO collaboration at the time, explains this as an attempt to provide a visual aid. “It was hand-tuned for pedagogical purposes.” He says he regrets that the figure wasn’t labelled to point this out.

This presentation of “hand-tuned” data in a peer-reviewed, scientific report like this is certainly unusual. New Scientist asked the editor who handled the paper, Robert Garisto, whether he was aware that the published data plots weren’t derived directly from LIGO’s data, but were “pedagogical” and done “by eye”, and whether the journal generally accepts illustrative figures. Garisto declined to comment.

There were also questionable shortcuts in the data LIGO released for public use. The collaboration approximated the subtraction of the Livingston signal from the Hanford one, leaving correlations in the data – the very correlations Jackson noticed. There is now a note on the data release web page stating that the publicly available waveform “was not tuned to precisely remove the signal”.

Whatever the shortcomings of the reporting and data release, Cornish insists that the actual analysis was done with processing tools that took years to develop and significant computing power to implement – and it worked perfectly.

However, anyone outside the collaboration has to take his word for that. “It’s problematic: there’s not enough data to do the analysis independently,” says Jackson. “It looks like they’re being open, without being open at all.”

Brown agrees there is a problem. “LIGO has taken great strides, and are moving towards open data and reproducible science,” he says. “But I don’t think they’re quite there yet.”

The Danish group’s independent checks, published in three peer-reviewed papers, found there was little evidence for the presence of gravitational waves in the September 2015 signal. On a scale from certain at 1 to definitely not there at 0, Jackson says the analysis puts the probability of the first detection being from an event involving black holes with the properties claimed by LIGO at 0.000004. That is roughly the same as the odds that your eventual cause of death will be a comet or asteroid strike – or, as Jackson puts it, “consistent with zero”. The probability of the signal being due to a merger of any sort of black holes is not huge either. Jackson and his colleagues calculate it as 0.008.
Simultaneous signal

There is other evidence to suggest that at least one of the later detections came from a gravitational wave. On 17 August 2017, the orbiting Fermi telescope saw a burst of electromagnetic radiation at the same time as the LIGO and Virgo detectors picked up a signal. Analysis of all the evidence suggests that both signals came from the brutal collision of two neutron stars.

The double whammy makes LIGO’s detection seem unequivocal. Even here, though, the Danish group is dissenting. They point out that the collaboration initially registered the event as a false alarm because it coincided with what’s known as a “glitch”. The detectors are plagued by these short, inexplicable bursts of noise, sometimes several every hour. They seem to be something to do with the hardware with which the interferometers are built, the suspension wires and seismic isolation devices. Cornish says that LIGO analysts eventually succeeded in removing the glitch and revealing the signal, but Jackson and his collaborators are again unconvinced by the methods used, and the fact there is no way to check them.

What are we to make of all this? Nothing, apparently. “The Danish analysis is just wrong,” insists Cornish. “There were very basic mistakes.” Those “mistakes” boil down to decisions about how best to analyse the raw data (see “How to catch a wave”).

Not everyone agrees the Danish choices were wrong. “I think their paper is a good one and it’s a shame that some of the LIGO team have been so churlish in response,” says Peter Coles, a cosmologist at Maynooth University in Ireland. Mukhanov concurs. “Right now, this is not the Danish group’s responsibility. The ball is in LIGO’s court,” he says. “There are questions that should be answered.”

Brown thinks the Danish group’s analysis is wrong, but worth engaging with. And Cornish admits the scrutiny may not be a bad thing. He and his colleagues plan to put out a paper describing the detailed properties of the LIGO noise. “It’s the kind of paper we didn’t really want to write because it’s boring and we’ve got more exciting things to do.” But, he adds, it is important, and increased scrutiny and criticism may in the end be no bad thing. “You do have to understand your noise.”

Coles himself doesn’t doubt that we have detected gravitational waves, but agrees with Jackson that this cannot be confirmed until independent scientists can check the raw data and the analysis tools. “In the spirit of open science, I think LIGO should release everything needed to reproduce their results.”

Jackson is unconvinced that explanatory papers will ever materialise – the collaboration has promised them before, he says. “This LIGO episode continues to be the most shocking professional experience of my 55 years as a physicist,” he says. Not everyone would agree – but for a discovery of this magnitude, trust is everything.

Embarrassing noises

In 2014, the operators of the BICEP2 telescope made an announcement so momentous there was talk of a Nobel prize. A year later, however, far from making their way to Stockholm for the award ceremony, they were forced to admit they had been fooled by an embarrassing noise.

Situated at the South Pole, BICEP2 had been scanning the cosmic microwave background, the pattern of radiation left on the sky from light emitted soon after the big bang. The big announcement was that it had found that gravitational waves had affected the pattern in such a way that proved a core theory of cosmology. The theory in question was inflation, which says the universe went through a period of superfast growth right after the big bang. For almost four decades it had been unproven. Now, suddenly, inflation’s supporters were vindicated.

Except awkward warnings emerged within weeks, suggesting that cosmic dust clouds had scattered the radiation in a way that fooled the BICEP2 researchers. In the end, the team’s estimate of the amount of dust present and the analysis of the kind of noise the dust would produce both proved to be flawed. Noise can hoodwink even the smartest. That is why, despite LIGO being a highly respected collaboration, there is good reason to take questions about its noise analysis seriously (see main story).

How to catch a wave

Output from gravitational wave detectors is full of noise. Disentangling the signal requires decision-making – and poor decisions could be disastrously misleading.

The best weapon in the arsenal is known as a Fourier transform. This splits a signal into various frequency components and converts it into a power spectrum, which details how much of the signal’s power is contained in each of those components. This can be done with a window function, a mathematical tool that operates on a selected part of the data. Whether or not to use one is at the heart of the disagreement over LIGO’s results (see main story).
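A small sketch makes the stakes of that choice visible: Fourier-transform the same off-bin tone with and without a Hann window and compare how much power leaks away from the peak. The numbers are synthetic, purely to illustrate leakage.

```python
# Spectral leakage demo: the same tone, with and without a Hann window.
import numpy as np

fs = n = 4096
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 100.5 * t)   # 100.5 Hz sits between FFT bins: worst case

power_raw = np.abs(np.fft.rfft(tone)) ** 2
power_win = np.abs(np.fft.rfft(tone * np.hanning(n))) ** 2

def fraction_leaked(power, halfwidth=2):
    """Share of total power that falls outside the bins around the peak."""
    peak = int(np.argmax(power))
    kept = power[peak - halfwidth: peak + halfwidth + 1].sum()
    return 1.0 - kept / power.sum()

print(f"leaked power, no window: {fraction_leaked(power_raw):.2%}")
print(f"leaked power, Hann     : {fraction_leaked(power_win):.2%}")
# Windowing suppresses leakage but also reshapes the data -- exactly the
# trade-off the two camps weigh differently.
```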

Andrew Jackson’s dissenting team at the Niels Bohr Institute in Denmark chose not to use a window function, a decision that LIGO’s Neil Cornish describes as a “basic mistake”. Jackson says they didn’t use one because it subtly alters the Fourier-transformed data in a way that can skew the results of subsequent processing.

Even with the Fourier analysis done, judgements must be made about the noise in the detectors. Is it, for example, distributed in a predictable pattern equivalent to the bell-shaped Gaussian distribution? And does it vary over time or is it “stationary”? The appropriate techniques for processing the data are different depending on the answers to these questions, so reliably detecting gravitational waves depends on making the right assumptions. Jackson’s group says the decisions made during the LIGO analysis are opaque at best, and probably wrong.