Nuclear power: it’s no contest

The electricity needed for any country to successfully replace fossil fuels, both for transportation and everyday use, will have to come from nuclear generation.

As the world’s population and standard of living continue to climb, demand for more and cleaner energy grows, alongside the pressures we continue to put on our environment.

Today there is an almost worldwide move to develop higher levels of nuclear energy production. This is because nuclear energy works, it is safe, and recognition is slowly dawning that it will be impossible to meet the growing global demand for energy and cut carbon dioxide emissions without it.

There is simply no other logical alternative.

Critical mass: a point or situation at which change occurs. “Support for the measure has reached critical mass.”

Nuclear No-Contest

By: James P. Hogan
September 19, 2009

jamesphogan.com

Before the 1950s, the future confronting the human race was bleak. With the global population increasing and becoming more dependent on energy-dense technologies to sustain its food supplies and rising living standards, there seemed no escape from the catastrophe that would come eventually when the coal and the oil ran out. But few worried unduly. It was only after an escape from the nightmare presented itself with the harnessing of nuclear processes and the prospect of unlimited energy that people began to worry. People can be very strange.

Toward Higher Energy Densities

For reasons that have mainly to do with politics and the media’s thirst for sensationalism, nuclear energy has been a subject of much disinformation and alarmism for several decades. In fact, nuclear is safer, cleaner, and potentially cheaper and more abundant than any other proven source of energy that the human race has come up with. But beyond this, its real significance is that it represents the next natural step in the evolutionary progression that has marked the history of energy development.

From unaided muscle power, through the use of animals, wood, wind, and water, to coal and oil, finding better ways of doing the work involved in living has reflected the harnessing of more concentrated energy sources. A lot is written about how much energy can be obtained from this source or that. But if you really want to do things more easily and efficiently – and open up ways of doing new things that were inconceivable before – what counts is energy density: how much can be packed into a given volume. It’s easy to calculate how much energy it takes to lift three hundred people across the Atlantic, and how much wood you’d need to burn to release that much energy. Okay, now try building a wood-burning 757. It won’t work. The mountain of logs will never get itself off the ground. You need the concentration of jet fuel.
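
As a rough sketch of the point (the flight-energy figure below is a hypothetical placeholder, and the energy densities are typical handbook values rather than numbers from this article), compare the mass and bulk of wood and jet fuel needed to carry the same energy:

```python
# Back-of-envelope sketch: why energy density, not total energy, decides what flies.
# flight_energy_mj is a hypothetical placeholder; the densities are typical handbook values.

JET_FUEL_MJ_PER_KG = 43.0      # lower heating value of kerosene-type jet fuel
WOOD_MJ_PER_KG = 16.0          # air-dried wood
JET_FUEL_KG_PER_M3 = 800.0     # liquid fuel in tanks
WOOD_KG_PER_M3 = 400.0         # stacked firewood, including air gaps

flight_energy_mj = 1_500_000   # hypothetical energy budget for one transatlantic flight

fuel_kg = flight_energy_mj / JET_FUEL_MJ_PER_KG
wood_kg = flight_energy_mj / WOOD_MJ_PER_KG
print(f"Jet fuel: {fuel_kg / 1000:5.0f} t, {fuel_kg / JET_FUEL_KG_PER_M3:5.0f} m^3")
print(f"Wood:     {wood_kg / 1000:5.0f} t, {wood_kg / WOOD_KG_PER_M3:5.0f} m^3")
# Wood needs roughly 2.7x the mass and 5x the volume -- before counting the extra
# fuel needed to lift the heavier load, which compounds the penalty.
```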

Some people argue that we don’t need nuclear power because we already have other ways to generate electricity. This misses the whole point. It would be like somebody in an earlier century telling Michael Faraday that we didn’t need electricity because we already had other ways to heat water. What made electricity so different was its ability to do things that were unachievable to any degree with existing technologies, and the whole field of electrical engineering and electronics that we take for granted today was the result. A similar distinction sets nuclear processes apart from conventional sources. All forms of hydrocarbon and other chemical combustion involve energy changes in the outer electron shells of atoms. The energies associated with transitions of the atomic nucleus are millions of times more intense, and hence represent a breakthrough to the next regime of energy control that the growth of human populations and wealth creation require. The so-called alternatives do not.
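
To put a rough number on that gap (using standard textbook figures, not figures from this article): a single chemical combustion reaction releases a few electron-volts, while a single fission of uranium-235 releases about 200 million electron-volts.

```python
# Per-reaction energy scales: chemical combustion vs nuclear fission (textbook values).
EV_PER_COMBUSTION_REACTION = 4.0   # order of magnitude, e.g. burning one carbon atom
EV_PER_U235_FISSION = 200e6        # ~200 MeV released per fission

ratio = EV_PER_U235_FISSION / EV_PER_COMBUSTION_REACTION
print(f"One fission ~ {ratio:,.0f} combustion reactions' worth of energy")
# ~50,000,000 per reaction; even per kilogram of practical reactor fuel the advantage
# still runs to many thousands of times (enrichment and burnup eat into the ideal figure).
```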

Our present use of nuclear energy, as a replacement for conventional heat sources to generate electricity by steam turbines, is just a first, exploratory step into a qualitatively new realm of capability, opening up prospects of obsoleting most of today’s cumbersome and polluting industries in much the same way as the introduction of electricity revolutionized the coal-based methods of the nineteenth century.

The availability of cheap, high-temperature process heat opens the way to desalinating seawater inexpensively in large quantities and pumping it to where it needs to go to irrigate currently useless land. Furthermore, at such temperatures water “cracks” thermally into its constituent elements, yielding a potentially unlimited supply of hydrogen as a base for a whole range of synthetic liquid fuels to replace gasoline. My guess is that, given the will and the vision, we could be making our own hydrocarbons more cheaply, and possibly with higher efficiency, long before the last barrel of the natural stuff is pumped out of the ground.

What About Safety?

One of the fears implanted in the public mind has been that nuclear power is inherently dangerous. Every form of human activity carries some attendant risk. The only meaningful way that society can judge the acceptability of a given risk is by weighing it against the benefits, and comparing the result with those obtained by similarly treating the alternatives.

Despite the hysterical media reactions to the accidents at Three Mile Island and Chernobyl, nuclear power remains probably the least threatening to human life of all major industrial technologies. Because the energy density of nuclear fuels is much greater, the amount needed to achieve the same result is correspondingly less. Over five thousand times as much coal has to be mined, transported, and processed as uranium to produce the same amount of energy – 200 trains a year, each consisting of 100 cars, to feed a 1,000-megawatt plant, compared to four car loads of uranium oxide. Worldwide, fatalities from mining operations alone number in the thousands every year, but like automobile accidents they occur in ones and twos, spread over time and place, and are largely invisible. In the Western world, nuclear power generation has never killed anybody.
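
The haulage comparison is easy to check from the figures just given:

```python
# Checking the coal-vs-uranium haulage figures quoted above for a 1,000-megawatt plant.
coal_trains_per_year = 200
cars_per_train = 100
uranium_oxide_carloads_per_year = 4

coal_carloads = coal_trains_per_year * cars_per_train
print(f"Coal carloads per year:  {coal_carloads:,}")          # 20,000
print(f"Uranium oxide carloads:  {uranium_oxide_carloads_per_year}")
print(f"Coal-to-uranium ratio:   {coal_carloads / uranium_oxide_carloads_per_year:,.0f}x")
# 5,000x -- consistent with "over five thousand times as much coal".
```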

At Three Mile Island no one was killed, no one was hurt, and no member of the public was ever in any danger. We were not at the brink of a major catastrophe. A bizarre set of circumstances coupled with inappropriate operator responses led to a loss of coolant and damage to the reactor core that included the melting of some fuel. The safety systems responded the way they were supposed to by shutting the system down. The outer layers of containment were never challenged, let alone breached, putting conditions well within the worst-case scenario that the plant had been designed to withstand.

At one point there was speculation that an accumulation of hydrogen gas might explode. Had it done so, there would have been simply a chemical detonation – certainly nothing of a thermonuclear nature as was suggested by the headline H-BLAST IMMINENT that appeared in some newspapers. It was established later that an explosion couldn’t have happened since there was no oxygen present; but even if it had, the shock would have been comparable to that imparted by a sledge hammer, which would hardly damage a reactor vessel with steel walls twelve inches thick. The engine block of a car absorbs more stress thousands of times every minute. Even if the vessel had cracked, any dangerous material would still have had to get through a four-foot concrete shield and an outer steel containment shell to reach the environment.

Yes, some radioactive gas accumulated in the containment building and was subsequently vented to the outside. But the predictions of tens of thousands of cancer deaths as a consequence were absurd. The maximum increase in radiation dose measured immediately above the plant was on the order of eight millirems over the course of several days. A routine dental X-ray delivers three times as much in seconds. When a dam bursts, a drilling platform collapses, or a gas storage tank explodes, you don’t have days for the luxury of holding press conferences and talking about evacuating.

Chernobyl didn’t say anything about nuclear engineering. It did say something about priorities under a militarist totalitarian system in which public safety doesn’t figure highly in policymaking. What happened was that the reactor’s graphite core caught fire after the safety systems had been turned off for experimental work to be conducted, and the resultant explosion ejected radioactive material due to the lack of a comprehensive containment structure. Reduced containment suggests a design intended primarily to serve military needs, where the fuel has to be removed frequently to avoid the contamination by fission products that would prevent purification to the level that weapons-grade material requires. In civilian power reactors the fuel rods are changed typically every three years, and the obstruction caused by containment structures becomes less of a hassle. So what the circumstances point to is a facility built primarily for defense purposes being used to supplement the power grid at a time of low political tension and reduced military demand.

It’s difficult to see how anything comparable could happen with Western nuclear plants in the way that some critics have claimed. Besides operating inside multi-layer containment to ensure defense in depth, Western reactors don’t possess graphite cores – such a design basis was expressly rejected by the U.S. in 1950, precisely because of the risk of one igniting in the way that happened. Two features are essential for a nuclear power reactor to function: a “moderator” substance, which surrounds the fuel and in effect keeps the nuclear chain reaction running; and a coolant to carry away the heat produced and deliver it to the steam generator that drives the turbines. The Chernobyl design used graphite as the moderator and water as the coolant. This means that if the coolant flow fails, the reactor will continue to produce heat at full power (because of the presence of the moderator), with consequent rapid escalation to an emergency – as in fact occurred. Western designs, by contrast, use water as both the moderator and the coolant. So if the coolant should fail, the moderator is automatically lost also, and the chain reaction ceases, leaving only the residual fission products as sources of heat to be disposed of, which typically amounts to about five percent of the normal power output.
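
The difference in behavior can be boiled down to a toy illustration (not a physics model; the five-percent decay-heat figure is the one quoted above):

```python
# Toy illustration of the moderator/coolant logic described above -- not a reactor model.

def thermal_power_after_coolant_loss(moderator: str, full_power_mw: float = 1000.0) -> float:
    """Approximate heat output immediately after the coolant is lost."""
    decay_heat_fraction = 0.05   # residual fission-product heat, per the text
    if moderator == "graphite":
        # The graphite stays in place when the water coolant is lost, so the chain
        # reaction keeps going: heat continues at full power with nothing to remove it.
        return full_power_mw
    if moderator == "water":
        # Water is both moderator and coolant, so losing it stops the chain reaction;
        # only fission-product decay heat remains to be dealt with.
        return full_power_mw * decay_heat_fraction
    raise ValueError("moderator must be 'graphite' or 'water'")

print(thermal_power_after_coolant_loss("graphite"))   # 1000.0 MW -- Chernobyl-type design
print(thermal_power_after_coolant_loss("water"))      # 50.0 MW  -- Western light-water design
```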

The actual attributed deaths at Chernobyl numbered thirty-eight, from immediate effects and acute radiation poisoning among firefighters and rescue workers. The figure of hundreds of thousands of long-term cancers that was bandied around came not from any physical diagnoses but from statistical computer exercises using theoretical assumptions that have been shown to be wrong. Studies twenty years later show nothing to support these predictions.

But What If? . . .

More people seem to be realizing that a nuclear power plant cannot explode like an atom bomb. The detonating mechanism for a bomb has to be built with extreme precision for the bomb to work at all, and a power plant contains nothing like it. Besides that, the materials used are completely different. Natural uranium contains about 0.7 percent of the fissionable U-235 isotope, which has to be enriched to more than 90 percent for bomb-grade material. For the slow release of energy required in power reactors, the fuel is enriched to only 3.5 percent. It simply isn’t an explosive.

So what about a meltdown? Even if TMI wasn’t one, mightn’t the next accident be?

Yes, it might. The chance has been estimated – using the same methods that work well in other areas of engineering where there have been enough actual events to verify the calculations – as being about the same as that of a major city being hit by a meteorite one mile across. Even if it happened, simulations and studies indicate that it wouldn’t be the calamity that most people imagine. If the fuel did melt its way out of the reactor vessel, it would be far more likely to sputter about and solidify around the massive supporting structure than continue reacting and burrow its way down through the floor. The British tested an experimental reactor in an artificial cave in Scotland for over twenty years, subjecting it to every conceivable failure of the coolant and safety systems. In the end they switched everything off and sat back to see what happened. Nothing very dramatic did. The core quietly cooled itself down, and that was that.

But what if the studies and simulations are flawed and the British experience turns out to be a fluke? Then, mightn’t the core turn into a molten mass and go down through the floor?

Yes, it might.

And then what would happen?

Nothing much. We’d have a lot of mess down a hole in the ground, which would probably be the best place for it.

But what if there was a water table near the surface?

In that case we’d create a lot of radioactive steam that would blow back up into the containment building, which again would be the best place for it.

But what if some kind of geological or structural failure caused it to come up outside the containment building?

It would most likely expand high into the sky and dissipate.

But what if . . .

Now we’re beginning to see the kinds of improbability chains that have to be dreamed up to create disaster scenarios for scaring the public. Remembering the odds against any major core disintegration in the first place, what if there happened to be an atmospheric inversion that held the cloud down near the ground, and if there was a wind blowing toward an urban area that was strong enough to move the cloud but not enough to disrupt the inversion layer? . . . Then yes, you could end up killing a lot of people. The statistical predictions work out at about 400 fatalities per meltdown. Perhaps not as bad as you’d think. And that figure counts deaths that couldn’t be attributed directly to the accident as such, but would materialize only as a slight increase in the cancer rate of a large population over many years, raising an individual’s risk from something like 20.5 percent to 21 percent. Since air pollution from coal burning is estimated to cause 10,000 deaths annually in the U.S., for nuclear power to be as dangerous as coal is now would require a meltdown somewhere or other every two weeks.

But if we’re talking about directly detectable deaths within a couple of months from acute radiation sickness, it would take 500 meltdowns to kill 100 people. On this basis, even having 25 meltdowns every year for 10,000 years would cause fewer deaths than automobiles do in one year.
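
Both comparisons follow directly from the numbers quoted in the last two paragraphs:

```python
# Reproducing the risk arithmetic above, using only figures quoted in the text.
statistical_deaths_per_meltdown = 400
coal_pollution_deaths_per_year = 10_000

meltdowns_per_year_to_match_coal = coal_pollution_deaths_per_year / statistical_deaths_per_meltdown
print(f"Meltdowns/year to match coal: {meltdowns_per_year_to_match_coal:.0f} "
      f"(about one every {52 / meltdowns_per_year_to_match_coal:.1f} weeks)")

direct_deaths_per_meltdown = 100 / 500                # "500 meltdowns to kill 100 people"
deaths = 25 * 10_000 * direct_deaths_per_meltdown     # 25 meltdowns/year for 10,000 years
print(f"Direct deaths over 10,000 years of 25 meltdowns/year: {deaths:,.0f}")
# ~50,000 -- on a par with a single year of automobile fatalities at the time of writing.
```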

What About The Radiation, Then?

It’s true that even a nuke that hasn’t melted down, operating normally and in proper working order, releases some radiation into the environment. But then, so does a shovelful of dirt from your back yard, the air you breathe, everything you eat, the water you drink, and even your body tissues. There’s hardly anything that doesn’t emit some radiation from trace elements that it contains, all of which adds up to a natural background thousands of times stronger than anything contributed by the nuclear industry. The emission from the granite that Grand Central Station is built from exceeds the permissible limit set for industry. Grand Central Station wouldn’t get a license as a nuclear plant.

This is not meant to suggest that large doses of radiation aren’t harmful. Napalm bombs and blast furnaces are not very healthy either, but it doesn’t follow that heat in any amount is therefore hazardous. You wouldn’t last long at the no-dose temperature of absolute zero.

The science of toxicology has long recognized the phenomenon of “hormesis,” in which substances that are lethal in high doses turn out to be beneficial, if not actually essential to health, in small doses, as a result of stimulating the body’s immune and repair mechanisms. In the last few decades it has become increasingly clear that this applies to ionizing radiation as well. By just about every measure that biologists use to assess the well-being of living things – vitality, longevity, number of offspring, the number of them that survive, healing of injuries, susceptibility to disease and speed of recovery – everything from bacteria through plants, bugs, invertebrates, to mammals and people fares better when the environmental radiation is moderately increased. Depending on the type of organism, the optimum seems to be around ten times the natural background; beyond that the effects become less benign, then harmful, and eventually lethal. And this makes intuitive sense. When it comes to temperature, pressure, humidity, light, internal and external chemical concentrations, and just about everything else that makes up their environments, living things are designed, created, evolved – whatever you subscribe to – to exist within a distinct comfort zone, beyond which too little can be as bad as too much. It would seem odd if the same didn’t apply to radiation too.

Nevertheless, we are constantly being told that any level of radiation is harmful, however small. A simple prediction from this would be that cancer rates in areas with higher background levels ought to be higher. But the fact is, they’re not. The cancer rate in Colorado, for example, with twice the nation’s average radiation, due to the cosmic rays at that altitude and the high radioactivity of the rocks that occur there, when corrected for such factors as age and occupation, is only 68 percent of the average. The relationship remains negative – i.e. the higher the radiation background, the lower the cancer rate – across the country as a whole, with a spectacular correlation coefficient of minus 39 percent. That’s about the same as the correlation of lung cancer with cigarette smoking – but the other way around.
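
For readers unfamiliar with the statistic, a correlation coefficient of minus 39 percent is simply a Pearson correlation of about -0.39 between background radiation and cancer rate across the states. The sketch below shows the kind of calculation involved, using made-up illustrative numbers, not the actual state data:

```python
# Sketch of the statistic being quoted: a Pearson correlation between background
# radiation and cancer rate. The arrays are made-up illustrative values, NOT the
# real state-by-state data behind the minus-39-percent figure.
import numpy as np

background_mrem_per_year = np.array([100, 115, 130, 150, 170, 200])   # hypothetical
cancer_rate_per_100k     = np.array([196, 191, 193, 184, 178, 181])   # hypothetical

r = np.corrcoef(background_mrem_per_year, cancer_rate_per_100k)[0, 1]
print(f"Pearson correlation: {r:+.2f}")   # negative: higher background, lower cancer rate
```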

What About The Waste?

Well, after the foregoing heresies, would it come as a complete surprise if I were to suggest that the ease of getting rid of the waste is one of nuclear power’s major benefits? Because the amount of fuel needed for the same amount of energy is much smaller, so is the amount of waste produced. And the waste that is produced isn’t as hazardous as people are led to believe. It’s considerably less dangerous than many other substances that are handled routinely in far greater quantities with far less care, which the world accepts as a matter of course.

Around 95 percent of the spent fuel that comes out of a power reactor can be reprocessed into new fuel and put back in – saving in a typical plant’s 40-year lifetime the equivalent of eight billion dollars’ worth of oil. Burning it up in this way is the sensible thing to do, and the industry was designed on the assumption that this would be the case. What’s left after reprocessing constitutes the “high level” waste that needs to be disposed of. A large, 1,000-MW plant produces about a cubic yard of it in a year – small enough to fit under a dining-room table. A coal plant of equal capacity produces ten tons of waste every minute. A facility to reprocess spent nuclear fuel in the U.S. was begun as a joint venture by government and industry at Barnwell, South Carolina. But work was halted in early 1977 essentially for political reasons, while at the same time the utilities were cut off from the military reprocessing facilities that had been handling domestic wastes safely for twenty years. Thus, 100 percent of what comes out of reactors has to be treated as if it were high-level waste and stored in ways that were never intended, and this is what gets the publicity. It’s a needlessly manufactured political problem, not a technical one. The rest of the world continues to reprocess its spent fuel regardless.

But isn’t it true that the high-level waste remains radioactive for tens of thousands of years? So what do you do with that?

Yes, the high-level waste contains fission products that have long half-lives. But these are not what constitute a possible biological hazard. They just provide big numbers that get the public’s attention. For obviously, if the energy release is spread out over that long a time, its intensity can’t be very great. Rusting iron has a long half-life; TNT has a short one. The principal danger is from the short-lived isotopes, such as iodine 131, with a half-life of eight days. To allow these to decay to levels that can be safely handled, the spent fuel is put into cooling ponds at the reactor site for six months before being shipped for reprocessing.
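
The eight-day half-life makes the six-month cooling period easy to appreciate: the iodine-131 inventory halves more than twenty times over.

```python
# Fraction of iodine-131 remaining after the six-month cooling period described above
# (half-life of 8 days, as quoted in the text).
half_life_days = 8
cooling_period_days = 6 * 30          # roughly six months

half_lives_elapsed = cooling_period_days / half_life_days
remaining_fraction = 0.5 ** half_lives_elapsed
print(f"Half-lives elapsed: {half_lives_elapsed:.1f}")
print(f"I-131 remaining:    {remaining_fraction:.1e}")   # ~1.7e-07 of the original inventory
```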

So what do you do with what’s left?

Current proposals are to reduce it to a powder, vitrify the powder into a highly stable glass, seal the glass into steel canisters, and bury them in a concrete repository two thousand feet underground – although some scientists have urged that the repository be made accessible, since the “wastes” contain many rare isotopes that could be invaluable after the current phobias have abated. Beyond this somewhat mundane approach, more recent theoretical and research developments point to the feasibility of artificially stimulating these long-life fission products to decay instead in ways that will take only minutes, using low-cost equipment that can be operated on-site, without need for costly transportation and long-term bulk storage. By definition these are unstable nuclei, after all, like rocks balanced on the edge of a precipice, waiting for a nudge to send them in a direction that they’re already set to go. Such a solution has a feeling of “appropriateness” about it – using nuclear technology to resolve an issue that is of an inherently nuclear nature.

Let’s make no bones about it. We are talking here about a significant concentration of radiation that would have to be confined and handled with great care. If all the electricity used in the United States were produced by nuclear power, the high-level waste produced each year would be enough to kill ten billion people – more than the present population of the planet. Sounds scary, doesn’t it? But the U.S. also produces enough barium to kill a hundred billion people, enough ammonia and cyanide to kill six trillion, enough phosgene to kill twenty trillion, and enough chlorine to kill four hundred trillion. There’s no doubt enough gasoline around, too, in cars, garages, storage refineries, and under filling stations to kill us all several times over, and enough pills in hospitals, pharmacies, and family medicine closets. But we don’t worry about it, because there’s no way in which the population is going to line up to be administered their dose or otherwise be evenly exposed to any of these substances. This is even more true of nuclear waste sealed deep underground.

Every foot of overlying rock reduces the radiation by a factor of ten, which means there’s no hazard to anyone above ground from the buried material. What danger there is comes from the risk of some of it finding its way out of the repository and into a person through being ingested or inhaled. Unlike chemical toxins, which remain lethal forever, radiation from nuclear waste decays with time. After ten years of burial, it would be about as toxic as barium if ingested; if inhaled, a tenth as toxic as ammonia and a thousandth as toxic as chlorine. After a hundred years these figures fall to one ten-thousandth, one hundred-thousandth, and one ten-millionth respectively. Nature’s biological waste-disposal program puts a thousand million tons of ammonia into the atmosphere every year, and we use chlorine liberally to clean our bathtubs and swimming pools.
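
The factor-of-ten-per-foot rule is an exponential, which is why a repository two thousand feet down poses no direct radiation hazard at the surface. A short sketch:

```python
# The "factor of ten per foot of rock" attenuation rule from the paragraph above.
def surviving_fraction(depth_ft: float, reduction_per_ft: float = 10.0) -> float:
    """Fraction of radiation getting through depth_ft of overlying rock."""
    return reduction_per_ft ** (-depth_ft)

for depth in (1, 5, 10, 50):
    print(f"{depth:>3} ft of rock -> {surviving_fraction(depth):.0e} of the radiation gets through")
# At the proposed 2,000-ft depth the surviving fraction is 10**-2000 -- effectively zero,
# which is why the only pathway worth discussing is material leaching out via groundwater.
```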

For comparison, a year’s operation of a 1,000-MW coal plant produces 1.5 million tons of ash – 30,000 truck loads, or enough to cover one and a half square miles to a depth of 40 feet – that contains large amounts of carcinogens and toxins, and which can be highly acidic or alkaline depending on the sulfur content of the coal. Also, ironically, more unused energy is thrown away in the form of trace uranium in the ash than was obtained from burning the coal. Getting rid of it is a stupendous task, and it ends up being dumped in shallow landfills that are easily leached out by groundwater, or simply piled up in mountains on any convenient site. And that’s only the solid waste. In addition there is the waste that’s disposed of up the smokestack, which includes 600 pounds of carbon dioxide and ten pounds of sulfur dioxide every second, and the same quantity of nitrogen oxides as 200,000 automobiles. So in answer to questions about the “unsolved problem” of nuclear waste, is this supposed to be a solved one?
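
Annualizing the per-second stack figures (and assuming, for simplicity, that the plant runs flat out all year) gives a sense of the tonnages involved:

```python
# Annualized stack emissions from the per-second figures quoted above for a 1,000-MW
# coal plant, assuming continuous full-power operation.
SECONDS_PER_YEAR = 365 * 24 * 3600
LB_PER_TON = 2000

co2_lb_per_second = 600
so2_lb_per_second = 10

co2_tons_per_year = co2_lb_per_second * SECONDS_PER_YEAR / LB_PER_TON
so2_tons_per_year = so2_lb_per_second * SECONDS_PER_YEAR / LB_PER_TON
print(f"CO2: ~{co2_tons_per_year / 1e6:.1f} million tons per year")   # ~9.5 million
print(f"SO2: ~{so2_tons_per_year / 1e3:.0f} thousand tons per year")  # ~160 thousand
```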

An equivalent-size nuke, by contrast, produces nothing in addition to its cubic yard of high-level waste, because there isn’t any chemical combustion. No ash, no gases, no smokestack, and no need for elaborate engineering to generate and control enormous air flows. Because of its compactness, nuclear power is the first major industrial technology for which it is actually possible to talk about containing all the wastes and isolating them from the biosphere. A study of the consequences of the U.S. going to all-nuclear electricity concluded that the total additional health risk that the average citizen would be exposed to, covering everything from uranium mining through transportation, power generation, to final disposal of the wastes, would be equivalent to that of raising the speed limit by six thousandths of one mile per hour. The risks eliminated, of course, would be far greater.

What About Terrorists?

Fears are expressed that the spread of nuclear power would make available the resources and materials for politically unstable nations and terrorists to make bombs. To whatever degree such possibilities may exist in today’s world, domestic nuclear power is pretty much irrelevant. Any group that has the determination and funds to make a bomb can do so in any of at least a half-dozen ways that are cheaper, simpler, faster, and less hazardous than going through the complications of using new or used power plant fuel, and require no access to civilian generating technology. Expertise can be bought for a price, and with laser isotope separation techniques the raw material for bomb-grade fissionables exists in rocks everywhere. Slowing the introduction of nuclear power to developing nations does nothing to reduce potential weapons threats. It does, however, retard their economic development and thus help perpetuate the differences in health and living standards that perhaps make resorting to such threats more likely.

Solar Dreaming

If the way forward into the future calls for higher energy densities, the notion that we can depend on solar or wind (which is another form of solar) represents a move backward. To get an idea of just how dilute a source solar is compared even to coal, consider a lump of coal capable of yielding a kilowatt-hour of electricity, which would weigh about a pound, and ask how long the Sun would have to shine on it to deposit the same amount of energy that the coal will release when burned. The area of its shadow, which measures the sunlight intercepted, would be about fifteen square inches. In Arizona, with a 24-hour annualized average insolation of 240 watts per square meter, it would take 435 hours, or almost three weeks, for this amount of surface to receive a kilowatt-hour of sunshine. For the average location in the U.S., allowing for bad weather and cloud cover, a reasonable estimate would be twice that. But to obtain a kilowatt-hour of electricity, at the ten to twenty percent efficiency attainable today, which appears to be approaching its limit, we’d be talking somewhere between seven and thirteen months.
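
The arithmetic is worth laying out, since nothing in it is exotic; the only inputs are the shadow area, the average insolation, and the conversion efficiency quoted above:

```python
# Reproducing the lump-of-coal sunshine arithmetic from the paragraph above.
IN2_TO_M2 = 0.0254 ** 2               # one square inch in square metres

shadow_area_m2 = 15 * IN2_TO_M2       # shadow of the one-pound lump of coal
insolation_w_per_m2 = 240             # 24-hour annualized average, per the text
target_wh = 1000                      # one kilowatt-hour

hours_of_sun = target_wh / (insolation_w_per_m2 * shadow_area_m2)
print(f"Hours of Arizona sun to deposit 1 kWh on the lump: {hours_of_sun:.0f}")   # ~430

# Double for an average U.S. location, then divide by conversion efficiency:
for efficiency in (0.20, 0.10):
    months = 2 * hours_of_sun / efficiency / (24 * 30)
    print(f"At {efficiency:.0%} efficiency: ~{months:.0f} months for 1 kWh of electricity")
# Roughly half a year to a year -- the basis of the article's seven-to-thirteen-month range.
```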

The Sun shining on forests for tens or hundreds of years affords an enormous concentration of energy over time that Nature performs for free. Subsequent geological compaction into coal adds another dimension of concentration in space, which humans carry further by their activities of mining and transportation. Hydroelectric power is another form of highly concentrated solar. The Sun evaporates billions of tons of water off the oceans, which fall on wide areas of land and drain through river systems to strategic points suitable for building dams. Once again, most of the work involving the concentration of energy in time and space on enormous scales is done for nothing by Nature.

I wonder if the people who talk glibly about attempting to match such feats artificially really comprehend the scale of the engineering that they’re proposing. A 1,000-MW solar conversion plant, for example – the same size as I’ve been using for the comparisons of coal and nuclear – would cover 50 to 100 square miles with 35,000 tons of aluminum, two million tons of concrete, 7,500 tons of copper, 600,000 tons of steel, 75,000 tons of glass, and 1,500 tons of other metals such as chromium and titanium – a thousand times the material needed to construct a nuclear plant of the same capacity. These materials are not cheap, and real estate doesn’t come for nothing. Moreover, these materials are all products of heavy, energy-hungry industries in their own right that produce large amounts of waste, much of it toxic. So much for “free” and “clean” solar power.

The comparison doesn’t end there. When a power engineer talks about a one-thousand-megawatt plant, he means one that can deliver a thousand megawatts on demand, anytime, day or night. A nuclear plant can do this; so can a conventional fossil-fuel plant. But a solar plant can only operate when the Sun is shining, which straightaway gives it a maximum availability of 50 percent – low enough to be considered prohibitively uneconomic for any other type of power plant. To ensure supply when the demand is there, some kind of conventional generating capacity would have to be available as a backup anyway, making the whole idea of solar as a replacement unrealistic.

The only other way would be to provide some kind of storage system that the solar plant would be able to charge up during its operating period, and then draw on when demand exceeds supply. At present there isn’t any really satisfactory way of storing large amounts of electrical energy. What’s usually proposed instead is to convert it to potential energy by pumping water up to a high reservoir, and letting the water flow back down through turbines in the nonproductive periods. A sleight-of-word commonly slipped in by solar advocates when pushing for this kind of option is to continue referring to the facility as a “thousand megawatt” solar plant. However, the power industry’s normal criterion expects a practicable storage system to be capable of recharging at five times the nominal rating. This means that for “thousand megawatt” to mean the same as it does for every other kind of plant, the solar facility would have to have a peak capacity of six thousand megawatts, adding vastly to the size, complexity, cost, and environmental effects implied by the figures above.
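
The six-thousand-megawatt figure is just the sum of delivering the rated output and recharging storage at the quoted five-times criterion while the Sun is available:

```python
# Sizing logic behind the "six thousand megawatt" figure quoted above.
rated_output_mw = 1000
storage_recharge_multiple = 5          # industry criterion cited in the text

required_peak_capacity_mw = rated_output_mw * (1 + storage_recharge_multiple)
print(f"Peak solar capacity needed to behave like a {rated_output_mw} MW plant: "
      f"{required_peak_capacity_mw} MW")
```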

Decentralizing by putting solar panels on everyone’s roofs wouldn’t reduce the cost or the amount of materials, but simply spread them around. In fact things would get worse, for the same reason that McDonald’s uses less oil to cook two tons of fries than eight thousand households making half a pound each. The storage problem wouldn’t go away either, but would become each homeowner’s responsibility. In a battery just big enough to start a car, gases can accumulate that one spark can cause to explode – sometimes with lethal consequences, as some unfortunates have demonstrated when using jumper cables carelessly. Imagine the hazard that a basement full of batteries the size of grand pianos would present, which a genuinely all-solar home would need to get through a bad spell in, say, Minnesota in January. And who would do the maintenance and keep the acid levels topped up?

Then we have the problem of keeping the roof panels clean and free from snow and wet leaves, not in the summer months, but when the roofs are slippery and frozen. Even today, the biggest cause of accidental deaths in the country, after automobiles, is falls. If we build all those houses with bombs in the basements and skating rinks on the roofs, it seems to me we’d better add in a lot more hospitals and emergency rooms too, while we’re at it.

As a science-fiction writer, I’m certainly enthusiastic about the thought of our expanding into space – for the right reasons. Solar power satellites have never struck me as one of them. The intensity of solar radiation outside the atmosphere is about six times that on the surface, which isn’t a lot really. I don’t see how it could justify the expense of putting huge amounts of technology into orbit to re-concentrate energy diluted by ninety-three million miles’ worth of the inverse square law, when we can generate it at the Sun’s original density right here. One study that I read estimated 10,000 shuttle launches to build a satellite capable of powering New York City – and on top of that would be the cost of ground equipment to receive the beamed power.

Similar considerations apply equally to wind power, which seems to be the current fad of the political savants who would lead us into the twenty-first century. Consider the South Korean nuclear park at Yonggwang, which has six one-thousand-megawatt reactors. Matching that capacity with wind generators would require a wind farm 175 miles wide extending from San Francisco to Los Angeles. Direct solar would require somewhere around 20 square miles of collector area alone, i.e. without allowing any spacing for steerable geometry or the maintenance access that would be necessary for a practical plant design.

This isn’t to say that solar doesn’t have its uses. It can be beneficial in remote places far from a supply grid, such as isolated farms or weather stations, and if somebody who lives in the right place finds it worthwhile to shave something off his electricity bill, there’s nothing wrong with that. But the problem that matters isn’t simply a domestic one of keeping the living room at 75 degrees and heating the bath water. The real issue is that of running the aluminum smelters, steel mills, fertilizer plants, cement works, factories, and transportation systems that keep a modern industrial society functioning. Solar and its variants can never make a significant contribution. And that is precisely the reason why those who don’t want a modern industrial society are so much in favor of it and would like to see everything else forcibly shut down.

All of the world’s peoples would like to think that a century from now their children will be living with the benefits of a modern industrial society. They could be, too. The human race possesses the knowledge and the ability to ensure that every child born on the planet could look forward to a healthy and well-fed body, an educated mind, and the opportunity to become the best that he or she is capable of. But when the demand is translated into energy needs – providing a globally stabilized population of, say, ten billion with energy per person probably greater than that of the U.S. today – the amount is utterly beyond any approaches that are merely variations of what we have. Only continued evolution into the next logical realm of energy control can do it.

So, can we make nuclear energy work, safely, cleanly, and efficiently? Sure we can. When we take a long, hard look at the alternatives, we see that we have to. Fortunately for all of us, the Neanderthals who first learned how to tame fire thought the same way.

James P. Hogan, a former digital systems engineer and computer sales executive, has been a full-time writer since 1980. He was born in London, moved to the USA for many years, and now lives in the Republic of Ireland. His web site is at jamesphogan.com

If you’re interested in learning more about the junior resource sector, bio-tech and technology sectors please come and visit us at www.aheadoftheherd.com

Legal Notice / Disclaimer

This document is not and should not be construed as an offer to sell or the solicitation of an offer to purchase or subscribe for any investment.

Richard Mills has based this document on information obtained from sources he believes to be reliable but which has not been independently verified; Richard Mills makes no guarantee, representation or warranty and accepts no responsibility or liability as to its accuracy or completeness. Expressions of opinion are those of Richard Mills only and are subject to change without notice. Richard Mills assumes no warranty, liability or guarantee for the current relevance, correctness or completeness of any information provided within this Report and will not be held liable for the consequence of reliance upon any opinion or statement contained herein or any omission.

Furthermore, I, Richard Mills, assume no liability for any direct or indirect loss or damage or, in particular, for lost profit, which you may incur as a result of the use and existence of the information provided within this Report.
