This is a rather good look-see into the world of big fusion research, which has sucked up massive budgets for decades without much sense that we are much closer to a practical solution. Yet his criticism of the alternative approaches is also well taken. This is science that really hates to scale.
This is well worth the read, and yes, throwing billions of dollars at it is an excellent use of government spending. As I pointed out decades ago, not one dollar ever landed on the moon, and not one dollar spent here is ever going to Iraq or the appropriate Swiss bank account. While we are at it, also throw billions at all the alternative approaches. We are bound to discover something important.
Details of Current Fusion Energy Work from Commenter Sebtal
JANUARY 12, 2011
In response to a posting on UK nuclear fusion research, there were several detailed comments from someone knowledgeable in nuclear fusion.
1. Sure, they modelled it, and this is part of the ongoing work. Remember, Culham (I did my PhD there) used to be known as UKAEA and has been in magnetic confinement fusion from the beginning, being home of the team from the West sent to verify the then-amazing temperatures being claimed by the Soviet tokamak people, back when the West was messing around with torsatrons and reversed field pinches.
This is not some Johnny-come-lately "new scheme for fusion" bull-snot, nor (as you are correct in saying) is it some new silver bullet: these people ARE the Europeans who have been trying dozens of topologies, etc. And as for all that endless theorizing: it never stopped!
Plasmas are hideously non-linear and difficult to model. We still don't know exactly how burning plasmas will behave in fusion reactors. Such models, being horribly non-linear and massively computer-intensive, can always be improved. Sure, the potential benefits of alphas in generating "free" current and transitioning to steady-state operation without the need for enormous indirect current drive have been theorized, and even modeled before.
The significance of this paper is that it represents an advance in the accuracy of the modeling and thus improves the credibility of the idea. One must realize that the mainstream fusion programme is still very much in the physics stage. ITER is designed as a physics experiment, not an engineering test bed. The models of all the stuff that goes on in a fusion plasma are not exact, and in the past such approaches have turned out to diverge from the reality. This, incidentally, is one of the reasons to have a healthy dose of skepticism about all these small private fusion researchers... MCF fusion in the public sector started out with a plethora of ideas of how to do fusion and converged on tokamaks and stellarators as they seemed to work best experimentally. Nevertheless, each new generation of machines has performed less well than theory suggested as new physics kicks in (one of the main reasons that fusion is always 20 years away), and likely those exploring new concepts will find the same.
Anyway, these analytical and numerical results need to be compared against real data before they can be truly believed, and this need is one of the main reasons for building ITER: to better confirm our present knowledge of MCF plasmas and verify what we think we know about the behavior of burning MCF plasmas. ITER does not represent the ideal commercial reactor; it represents the ideal physics experiment. After that, we have DEMO (and probably, DEMOs) to act as engineering testbeds and to try out optimized machines that use all the tricks we have discovered to be smaller, simpler and cheaper.
2. Tom Craver –
While it certainly makes sense to study hot plasmas, I often wonder if an engineering approach isn't exactly what is missing from fusion research.
No, not the engineering approach of "optimize an existing system based on known physical principles". More of the "hack around with wild ideas to try coming up with a new approach, and THEN apply known physics to beat on the idea until nothing is left - but maybe sometimes SOMETHING is left that's worth trying".
This is more of the "Science fiction as inspiration" approach to engineering - an engineer reads about something that is currently impossible in an SF story, thinks it is just too cool to continue NOT existing, and starts thinking about odd-ball approaches and trying to see what physics might apply to make them work.
"Dang I wish I had a Tri-corder/communicator/space-drive/ray-gun/holodeck! But how would it work? How can I MAKE it work?!" Where a good scientist would look at the thing, think of 3 or 4 good reasons why it is impossible and dismiss it, a good engineer will be thinking "Well, yeah, but whaddabout...." - looking for loop-holes in the reasons why it can't work.
That's what I see happening now in the various alt-fusion approaches. Granted, most or perhaps all of them will fail - but then, so have the traditional science-based "study plasma" approaches, so far. It's just that the science-based approach cares at least as much about learning about plasmas and fusion as it does about getting a working fusion reactor.
"Collapsing Fractal Magnetic fields!" "Magnetically Compressed shells of plasma!" "X-Ray beams!" "Steam-punk fusor!" "Kinked magnetic fields!" "Immaterial electron grid!"
3. Well, really it needs both. In an ideal world, I guess a fusion program would consist of a team of senior engineers and physicists working out what the next machine in a path towards something that met the specification for a "commercially competitive reactor" would look like, iterating between sound engineering principles and the requirements of the physics, and modeling the kind of performance you expect of the plasma. Where there is a question mark, it would be the role of physicists and engineers to come up with experiments on the available machines, new diagnostics, modeling etc. to fill in that question mark, verify the model etc. As soon as the new machine had been conceptually designed, the work of securing funding to build this conceptual design would go forward.
In practice, the program is definitely too physics-led in my view. The scientists are not employed like an engineer in the Apollo program with a clear product in mind; rather, they run the normal academic treadmill of publish or perish. This tends to mean they are perfectly happy and able to make an interesting and valuable scientific career out of running experimental campaigns on a given machine using every possible diagnostic they can think of (verifying an old result on a new machine is still publishable, and scientifically valuable too) until literally everything that can be done has been done. This is useful, because governments fund MCF experiments like telescopes: they tend to be not so sympathetic to building new ones until you can say you have run out of things to do with the old one. Furthermore, a lot of the programs are run by state institutions with lots of permanent staff, so you have to worry about things like redundancies and the budget to hire new staff... a lot of the work in bidding for new machines is structured around the idea of maintaining jobs etc. as well as doing new and interesting stuff. On top of that, the whole thing is taking place at arm's length from the rest of science and engineering. New ideas can take a long time to permeate through (I remember telling someone about the idea of using diamond as a plasma-facing material about six years ago, and being laughed at: "you are going to embed rocks in the divertor?"... the guy knew nothing of chemical vapour deposition!).
This tends to lead to the engineering being relegated to "here is a budget to make this kind of device", which works well for scientific experiments, which can be large, costly, one-off ventures, but not so well for ensuring the concept that is eventually evolved is truly viable for commercial use. So much is dependent on things like machine geometry that I wonder if what we learn from ITER will be generally applicable, or contain hidden requirements that the machine look substantially similar to ITER, and not be easily translatable to, say, a compact spherical tokamak-stellarator hybrid that might actually represent the peak of the design space. I worry that in the total parameter space of viable fusion reactors, we could be missing a giant mountain as we twiddle around to find the local peak.
That said, the plasma physics really is nightmarishly complex on its own, irrespective of the device or scheme. I am highly skeptical that there is a particular device that overcomes that complexity through a clever trick. Working with these small ideas, with the potential for looming complexity when you scale up, is tantamount to banging around in the dark and hoping for the best, though by all means, try them out. With funding restricted, though, it makes more sense to plug on with what we are most advanced in, which is stellarators and tokamaks, rather than throwing money at lots of small machines and hoping for something new.
Those still pushing fusors and reversed-field configurations are in part the same people, who just never gave up on them (which is not necessarily a good thing!) and/or people who do not have access to much of the original research, which can be difficult to get hold of as it is often not online, and may never have been published because it was a null result or because the people working on it were not so concerned with publication as they are now. A lot of it may still be locked up in the minds of senior or now-retired researchers, most of whom are working in the MCF programmes, whose response to such ideas is to chuckle to themselves and say "everyone knows that won't work! We tried it back in the 50's and it was a disaster," but not really bother with debunking it, as they don't take programmes outside the main fusion programme seriously. And if that information is conveyed to the researchers, it can look a lot like "not invented here" and "we don't take you at all seriously".
Even within the MCF mainstream, the publication and "new machine" problem means you find things you think are new and then discover results similar to your own that are older than you are, but which have been forgotten. In the 70's they lacked the computational power to model micro-scale turbulence directly, so they treated it as an effective diffusion, even though the experimental evidence showed that transport at the edge was, and still is, intermittent: coherent bursts of plasma being shot out of the machines. People just smoothed the data to remove the spikes, fitted an effective diffusivity, called it anomalous and moved on. This is fine for some purposes, as long as you remember that the transport is NOT laminar and diffusive, and the "effective" diffusivity is just a crude parametrisation suitable for some tasks but not for others. Over twenty to thirty years though, people tend to forget.
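A toy illustration of that smoothing step (all numbers invented for illustration): make a bursty "edge flux" signal, smooth it, and look only at the mean. The mean transport, and hence any effective diffusivity fitted to it, is preserved, while the intermittent spikes disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "edge flux" time series: a quiet laminar background plus
# rare, large coherent bursts (units and numbers are made up).
t = np.arange(10_000)
background = 1.0 + 0.1 * rng.standard_normal(t.size)
bursts = (rng.random(t.size) < 0.01) * rng.exponential(50.0, t.size)
flux = background + bursts

# Heavy smoothing, as one might do before fitting an
# "effective diffusivity" to the mean transport.
window = 200
smoothed = np.convolve(flux, np.ones(window) / window, mode="same")

print(f"mean flux (raw):      {flux.mean():.2f}")
print(f"mean flux (smoothed): {smoothed.mean():.2f}")  # essentially unchanged
print(f"peak flux (raw):      {flux.max():.1f}")
print(f"peak flux (smoothed): {smoothed.max():.1f}")   # spikes gone
```

The fitted mean is the same either way, but anything that depends on the bursts themselves (peak wall loading, say) is invisible in the smoothed, diffusive description.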
4. The stuff you actually have to worry about in big machines like ASDEX etc. and larger is not best understood from a strictly particle view. It's stuff like turbulence. MCF plasmas tend to be quasineutral, with electric fields effectively screened after a few micrometres. Even at temperatures sufficient for fusion, gyro-orbits (~10 cm) are way smaller than the plasma dimensions (metres). These problems were overcome decades ago, and are part of the reason why people thought that fusion would be a lot easier than it has turned out to be.
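To put rough numbers on those two length scales (the parameters below are my illustrative assumptions, not the commenter's): with core-like temperature, density and field, the Debye screening length comes out at tens of micrometres (shrinking towards micrometres in the cooler, denser edge), and the ~10 cm figure is the right order for the gyro-orbits of the 3.5 MeV fusion alphas.

```python
import math

# Illustrative core-tokamak parameters (assumptions).
T_eV = 10e3   # temperature ~10 keV, expressed in eV
n = 1e20      # electron density [m^-3]
B = 5.0       # magnetic field [T]

e = 1.602e-19        # elementary charge [C]
eps0 = 8.854e-12     # vacuum permittivity [F/m]
m_D = 2 * 1.673e-27  # deuteron mass [kg]
m_a = 6.64e-27       # alpha particle mass [kg]

# Debye length: the scale over which electric fields are screened.
lambda_D = math.sqrt(eps0 * (e * T_eV) / (n * e**2))

# Thermal deuteron gyroradius: rho = m * v_th / (e * B).
v_th = math.sqrt(2 * e * T_eV / m_D)
rho_D = m_D * v_th / (e * B)

# 3.5 MeV fusion alpha gyroradius (charge 2e).
v_a = math.sqrt(2 * (3.5e6 * e) / m_a)
rho_a = m_a * v_a / (2 * e * B)

print(f"Debye length:            {lambda_D * 1e6:.1f} micrometres")
print(f"deuteron gyroradius:     {rho_D * 100:.2f} cm")
print(f"fusion alpha gyroradius: {rho_a * 100:.1f} cm")
```

Either way, both scales are tiny compared with a metres-sized plasma, which is why the single-particle picture is no longer where the hard problems are.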
The reality is most transport of heat and particles in modern large-scale devices is "anomalous", which is the name we give to stuff that we don't understand: a mix of instabilities; drifts we hadn't taken into account arising from, well, all sorts of things from small electric field perturbations to ripples, wells, and islands in the magnetic field; and properly nasty fluid stuff like turbulence. These problems kick in at different scale lengths, which is why fusion has been "20 years away" for 60 years. There are cautious grounds for optimism, then, that we might not find too many nasty surprises (and perhaps some nice ones!) in ITER, as the plasmas are now big and hot enough that we should not expect to be looking at them through the wrong kind of model (particle rather than fluid, for example).
But that is still a lot of nasty nonlinear bits of physics interacting with each other, and a lot of self-organisation is going on in the plasma, so it is an analytical nightmare, and computationally horrible. Only recently have we had the resources to model more than a bundle of flux tubes in 3-D. I am hoping that GPGPU computing is really going to help here. Further, these phenomena are very difficult to measure properly (when they are happening in the centre of a plasma), and it is hard to do fully repeatable experiments. Your control knobs are rather indirect, and the precise way the plasma in a given scenario evolves can be highly dependent on the precise condition of the machine (including impurity levels and the cleanliness of the wall). Experimental MCF physics is pretty horrible!
A lot has been done with empirical scaling laws (beware!), but if we really want to design good reactors, my feeling is we need to understand the physics of the plasmas and exploit all the tricks we can. Though it is possible to design an ohmically heated machine that will ignite without such tricks, it would have to be huge, and probably far, far too expensive to ever be a competitive reactor design, I would have thought. On the other hand, this might have been a smarter way to go for ITER. The reduced costs in magnetic volume have been replaced with the complexity of a design that has higher requirements in other areas, like materials, required control diagnostics and complicated plasma scenarios, restricting the explorable parameter space.
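For readers who haven't met one, an empirical confinement scaling law is just a power-law fit of confinement time against machine and plasma parameters. The sketch below uses the general form with illustrative exponents of my own choosing (not the published multi-machine fits) to show how such laws are used, and why extrapolating them deserves the "beware!":

```python
# Minimal sketch of an empirical confinement-time scaling law:
#   tau_E = C * Ip^a * B^b * n^c * P^d * R^e
# The coefficient and exponents here are ILLUSTRATIVE STAND-INS,
# not the published multi-machine fit.

def tau_E(Ip, B, n19, P, R, C=0.05, a=0.9, b=0.15, c=0.4, d=-0.7, e=2.0):
    """Confinement time [s] from a toy power-law fit.
    Ip: plasma current [MA], B: field [T], n19: density [1e19 m^-3],
    P: heating power [MW], R: major radius [m]."""
    return C * Ip**a * B**b * n19**c * P**d * R**e

# Fitted against mid-size machines...
print(f"mid-size device: tau_E ~ {tau_E(2, 2.5, 5, 10, 1.7):.2f} s")
# ...then extrapolated an order of magnitude in current, power and size.
print(f"reactor scale:   tau_E ~ {tau_E(15, 5.3, 10, 50, 6.2):.2f} s")

# Any physics that "kicks in" outside the fitted parameter range
# (new instabilities, different turbulence regimes) is invisible
# to the fit -- hence the "beware!".
```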
Now, I'm an experimentalist and have worked primarily in the edge (though I've just started a bit of work on neoclassical transport in stellarators), so I'm not the best person to ask about the significance of this work. I don't follow the rhyme or reason of NBF seizing on this particular paper (there are plenty of others of a similar bent). I would guess it is incremental: another bolt towards making *better* plasma scenarios that achieve higher confinement times, densities and temperatures, rather than something that suddenly "lets us do fusion". Actually, we probably already know enough to "do Q=10 fusion" in a very large L-mode tokamak. Nobody has done it (though it was the original ITER design), largely because people keep cutting the budgets every ten years. Nevertheless, it would not make for a good power plant and would be completely unoptimised.
The key areas in progressing from what we currently know how to do, which is make big physics experiments, to something more compact, simple and suitable for commercialisation are:
1. Transport barriers.
Transport barriers (which I think I am correct in saying we still don't fully understand), in simple terms: the dominant transport mechanism seems to be turbulent, and if you get a velocity shear layer in the plasma, you can create a local barrier that blocks the transport. This leads to things like H-mode (high-confinement mode), which blocks transport at the edge and pushes up the core pressure quite dramatically, and "advanced mode", involving an internal transport barrier that does the same again. These are things that allow us to get to higher densities, temperatures and confinement times in the core, which in turn means smaller devices with less auxiliary heating, and possibly less stored energy. That is a good thing, as a disrupting plasma can dump a fantastically high power loading onto a very small area of the wall, polluting the machine with lots of nasty heavy metal atoms that can be very difficult to remove but that ensure all your future plasmas radiate their energy away in line emission and bremsstrahlung.
What would be the holy grail in this area is understanding the exact causal relationship between turbulent transport and the shear layers, how the shear layers form, and if we can design plasma scenarios where they form spontaneously or can be induced by outside methods.
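A common heuristic for how shear layers block turbulent transport (my gloss, not the commenter's, with invented numbers): turbulence is quenched where the shearing rate of the flow exceeds the turbulence growth rate. A toy quench rule makes the "local barrier" picture concrete:

```python
import numpy as np

# Toy radial profiles on a normalised minor radius r in [0, 1].
# All profile shapes and numbers are invented for illustration.
r = np.linspace(0.0, 1.0, 101)

gamma_turb = 1.0 + 0.5 * r  # turbulence growth rate (arbitrary units)
# A localised velocity-shear layer near the edge, around r ~ 0.95:
omega_shear = 4.0 * np.exp(-(((r - 0.95) / 0.03) ** 2))

# Heuristic quench rule: shear suppresses the turbulent diffusivity.
chi_turb = 1.0  # unsheared turbulent diffusivity (arbitrary units)
chi = chi_turb / (1.0 + (omega_shear / gamma_turb) ** 2)

barrier = r[chi < 0.5 * chi_turb]
print(f"transport suppressed for r in [{barrier.min():.2f}, {barrier.max():.2f}]")
# A narrow ring of low diffusivity at the edge -- the crude picture
# behind the H-mode edge barrier; put the shear layer at mid-radius
# and you get an internal transport barrier instead.
```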
2. Indirect current drive. Tokamaks use the poloidal magnetic field to confine the plasma; the toroidal component just adds stability. The poloidal magnetic field is generated by a toroidal current in the plasma, which is driven initially by induction: your plasma is essentially a single-turn secondary winding. This means that your machine is intrinsically pulsed. Just about fine for very large physics experiments, not so fine for a reactor, as high pulse currents mean your machine is being continually put under stress and strain. A requirement for incredibly high vacuum and low levels of elements like oxygen and nitrogen contaminating the plasma, combined with a vacuum vessel that is being whacked every few hours or so (or less) with pretty hefty impulses from high-power coils, is not a great combination. Indirect current drive through radio waves, or self-organising currents in the plasma, offers the opportunity for the shot to continue after the solenoid swing has been exhausted, either allowing an extended duty cycle or possibly continuous operation.
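Back-of-envelope on why induction caps the pulse (the solenoid numbers below are my illustrative assumptions, not any machine's specification): the loop voltage is the rate of change of solenoid flux, so a finite flux swing buys a finite shot.

```python
# The plasma is a single-turn secondary: V_loop = dPhi/dt, so the
# usable flux swing dPhi caps the pulse at t ~ dPhi / V_loop.
# All numbers here are illustrative assumptions.

delta_phi = 250.0   # usable solenoid flux swing [Wb] (assumed)
phi_rampup = 100.0  # flux consumed by breakdown and current ramp-up [Wb]
V_flattop = 0.1     # loop voltage sustaining the resistive flat-top [V]

t_flattop = (delta_phi - phi_rampup) / V_flattop
print(f"flat-top duration ~ {t_flattop:.0f} s (~{t_flattop / 60:.0f} min)")

# When the swing is spent, the shot ends -- unless radio-wave current
# drive or self-organised plasma currents keep the toroidal current
# flowing, which is exactly what indirect current drive is after.
```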
The particular reason for Culham's interest is that this is where this work fits in: understanding the theory better now means we can start to design scenarios and experiments to run on ITER and, ultimately, design machines that are more credible power plants in the future.
3. Materials.
100 displacements per atom over the lifetime of the machine from neutron damage. Tritium bred in situ. Power loadings on the divertor of several megawatts per square metre (which must be both conducting and non-porous), with much higher peak loadings for transient instabilities like Edge Localised Modes (a depressing side effect of the H-mode). Enough said really.
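To put "several megawatts per square metre" in context, a crude worked example with assumed reactor-scale numbers (mine, not the commenter's):

```python
import math

# Crude steady-state divertor loading estimate. All inputs are
# illustrative assumptions for a reactor-scale D-T device.
P_fusion = 2000.0    # fusion power [MW]
f_alpha = 0.2        # ~1/5 of D-T power is in alphas, heating the plasma
f_to_divertor = 0.5  # fraction of exhaust power reaching the divertor
                     # (rest radiated to the walls; assumed)
R = 6.0              # major radius [m] (assumed)
w_wet = 0.10         # wetted width along the target [m] (generous; assumed)

A_wet = 2 * math.pi * R * w_wet             # strike-line area [m^2]
P_div = P_fusion * f_alpha * f_to_divertor  # steady exhaust power [MW]
q_div = P_div / A_wet

print(f"wetted area ~ {A_wet:.1f} m^2, steady load ~ {q_div:.0f} MW/m^2")
# ~50 MW/m^2 in this unmitigated toy case: flux expansion, target
# tilting and radiative cooling have to bring that down to the
# several MW/m^2 the materials can actually take, and transient
# ELMs spike far above the steady value.
```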
On top of that, some people are starting to suggest that some of the advanced plasma control methods that are required to operate in these advanced scenarios (analogy: high performance fighter jets are no longer aerodynamically stable and rely on active feedback controls) rely on diagnostic techniques that may no longer work in the kinds of plasmas ITER is supposed to run. Fun and games all around.
Make no mistake, these are huge challenges... my overall feeling for fusion is not overly optimistic (though not necessarily totally pessimistic), and I do worry that the political and public understanding of ITER is radically different from what the physicists think, and also that not enough work has been put into thinking through the feasibility of these things in a marketplace where politicians no longer sign a piece of paper and institute national infrastructure. I recently met a researcher for the EU Parliament who had been working on ITER funding. There is a dangerous mix here of physicists pushing an agenda that fits what they imagine fusion power should look like, and EU politicians looking for "an equivalent to the Apollo programme" who actively yearn for the days when big state infrastructure programmes existed. This seems to me to be a recipe for bad strategy for the direction of the programme as a whole. I do think we might be better off opting for a staging point with fission-fusion hybrid machines, but generally the community is now locked into supporting ITER. Anyone hoping to realistically jump from ITER to a proof-of-principle commercial reactor producing electricity that is also suitable to compete with fission, gas or coal plants is in for a nasty shock, I reckon. Naturally, physicists tend to act as though a thermal Q=10 means the work is done. In reality, it means the work is just starting. But this is slightly off topic; you can read more about my views on that in a comment here: http://metamodern.com/2010/01/.../
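A quick worked illustration of why thermal Q=10 is only the start (the efficiencies are my assumed round numbers): convert the fusion heat to electricity, pay back the recirculating power that ran the heating systems, and the net margin is far thinner than "ten times out for one in" sounds.

```python
# Thermal gain is Q = P_fusion / P_heating. Toy plant power balance
# with assumed, illustrative efficiencies (blanket multiplication,
# pumping, cryogenics etc. all ignored).

Q = 10.0
P_heat = 50.0          # heating power delivered to the plasma [MW]
P_fusion = Q * P_heat  # 500 MW of fusion power

eta_thermal = 0.33     # heat -> electricity conversion (assumed)
eta_heating = 0.40     # wall-plug -> plasma heating efficiency (assumed)

P_gross = eta_thermal * (P_fusion + P_heat)  # gross electric output
P_recirc = P_heat / eta_heating              # electricity fed back in
P_net = P_gross - P_recirc

print(f"gross electric: {P_gross:.0f} MW")
print(f"recirculated:   {P_recirc:.0f} MW")
print(f"net to grid:    {P_net:.0f} MW")
# ~56 MW net from a 500 MW fusion core in this toy balance --
# before the rest of the balance of plant takes its share.
```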
As for the start-ups with their reversed-field configurations and polywells, well, I think they are treading the well-worn path of massive optimism followed by the harsh realities of nasty physics kicking in at larger powers and scale lengths.
==
Two minutes after posting this: "I am highly sceptical that there is a particular device that overcomes that complexity through a clever trick." I read that MIT have built a machine (the Levitated Dipole Experiment) that somehow makes turbulence, the bane of all controlled fusion research, work to confine the plasma... :)
So, perhaps there are new thoughts to be had, but a lot of the stuff reported here on polywells, reversed field configurations and beam-beam fusion is stuff that was tried before and was discarded in favour of stellarators, tokamaks and ICF... so I would take with a big pinch of salt the idea that smaller teams with less funding (though it buys better technology now) are going to crack the problem. I am particularly sceptical when such claims are backed by nice straight-line graphs that don't appear to be subject to any regime changes as the devices scale with power density or spatial scale lengths.
5. Goatguy
Sebtal, a magnificent reply. I feel humbled, and pleased that you took the time to enlighten, obviously using the watered-down, yet still almost tangible argot of the physics you practice. Gyroradius, line emission, Bremsstrahlung, ohmic heating, turbulent/shear modes, ... I had no idea that the ion mean excursion outside individual flux tubes was on the order of microns. Stuff there to chew on.
But I also want to point out to the few brave readers that have made it this far - note that the discussion, necessarily, edged from science toward design goals, then from there to the realities (and unrealities) of the financing of the research, the politics of the financing, and the impetus of the political parties to politic the issues. It is starkly clear that as one wag said some 25 years ago, "the only thing standing between us now and jaw-droppingly successful fusion... is the political will to fund the science to resolve the issues that stand in the way."
On this I agree.
Secondly, Sebtal notes that the (relative) microfunding of alternate plasma research is likely, when scaled up, to hit the walls of the unpredicted effects of scaling itself. We shall see. I too remain not very optimistic about most of the alt.fusion proposals. The Farnsworth fusor is magnificent in its simplicity and present-tense ability to generate copious neutrons from relatively pedestrian ("high school geek home-made") apparatus, but "copious neutrons" is an enormous distance from practical power-generator levels of fused nucleons. As in 10 to 15 orders of magnitude more: 10,000,000,000× to 1,000,000,000,000,000×. That's a lot of scaling.
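That gap checks out on the back of an envelope (the fusor rate is an assumed hobbyist figure):

```python
import math

# Reactions per second for ~1 GW of D-T fusion power, versus a
# hobbyist fusor's neutron rate (assumed ~1e6 n/s).
E_DT = 17.6e6 * 1.602e-19  # 17.6 MeV per D-T reaction, in joules
P_target = 1e9             # 1 GW of fusion power [W]

rate_needed = P_target / E_DT  # ~3.5e20 reactions/s
rate_fusor = 1e6               # neutrons/s (assumed)

print(f"reactions/s needed: {rate_needed:.1e}")
print(f"shortfall: about 10^{math.log10(rate_needed / rate_fusor):.0f}")
```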
6. Sebtal
Er, sorry, it did get a bit long and wonkish.