Peak Oil Debunked: 295. THE NUCLEAR CONUNDRUM

Tuesday, April 25, 2006

295. THE NUCLEAR CONUNDRUM

One of the problems I have with the peak oil community is its obsession with a near term peak in conventional petroleum. I don't really see this as the main problem because there are other forms of energy which can and will take up the slack, primarily coal and nuclear.

The larger problem, I believe, lies farther in the future, at the point when oil and gas have peaked and become seriously exhausted. So let's take the longer view and assume we're already at that point. Maybe the year is 2050 or 2100 -- pick your own number -- but for all practical purposes, we're very low on oil and gas. What does the world look like?

Some say: No problem, we'll just switch over to coal. But that's screwy. If we're going to switch to coal in the post peak oil/gas period, everybody is going to be switching and sucking down the coal of coal-rich nations (like America and Australia), not just the coal-rich nations themselves. Or, alternatively, the coal-rich nations will be powering their grids and driving on coal, while the rest of the world will be struggling to keep the lights on.

So coal really isn't the long-term answer, and in the end (barring other developments) preservation of the energy status quo will lead us into one of two scenarios:

Scenario 1) A world carpeted with tens of thousands of nuclear power plants. The question here then is security. Clearly the risks of dirty bombs, terrorists acquiring nuclear weapons, accidents due to mismanagement etc. increase greatly in this situation. We could have centralized enrichment and reprocessing/waste management, but then you've got a sprawling logistical network of nuclear/radioactive materials criss-crossing the earth in routine shipments, much like oil does today. Which will clearly increase the risk of theft, terrorist attacks, acts of war etc. A good question to ask here is: When and where will the first dirty bomb be detonated? And whose facility will the radioactive material come from?

On the other hand, you could have localized enrichment and reprocessing. That would eliminate the logistical risk, but introduce the risk of proliferation of nuclear weapons. That doesn't seem like such a good idea either.

In any case, you're going to need a massive worldwide security/emergency-response apparatus to police the situation, and this is an externality whose cost the nuclear power companies themselves should be forced to bear.

Scenario 2) Due to the risks involved in 1), 2nd tier countries are forcibly barred from nuclear development, and thus from electricity. This, however, seems likely to lead to conflict, and surges of refugees into nuclear countries where the power is still reliable. This too will necessitate huge investments in security.
-- by JD

16 Comments:

At Tuesday, April 25, 2006 at 7:39:00 PM PDT, Blogger JD said...

avo, I'm discussing the status quo in this case -- i.e. the case where space-based solar is off the table because it's not a realistic option. Nuclear is cheaper. I personally believe space-based solar is a good option which should be pursued, but nobody else does. This post assumes that the situation basically stays that way.

Ground-based solar and wind are also off the table because no one has produced a credible scenario where those two can fuel the status quo. I'm talking cars -- billions of them. Long commutes. Huge neon signs blazing all night long. Hot tubs. Houses built with brain-dead architecture requiring massive power for air-conditioning.

This is the realistic, down-to-earth scenario where people will have little interest in conservation, and will regard space-power as an expensive pipe dream. In this scenario, people will be primarily concerned with their cars, not sophisticated concepts of long-term sustainability.

 
At Tuesday, April 25, 2006 at 8:28:00 PM PDT, Blogger Mel. Hauser said...

Is there even enough uranium on the planet to power that many nuclear plants?

 
At Tuesday, April 25, 2006 at 9:27:00 PM PDT, Blogger Fat Man said...

The arguments against any form of energy are insuperable. Fortunately they are just that -- arguments.

 
At Tuesday, April 25, 2006 at 10:21:00 PM PDT, Blogger JD said...

Excellent link, dc. Thanks.

 
At Wednesday, April 26, 2006 at 12:04:00 AM PDT, Blogger JD said...

More likely is the scenario JD left out: Concentrated Solar Power

This is a sustainable technology and economic at oil prices of 35 dollars per barrel.


It's economic to some degree right now, at least in very sunny areas. A plant was just commissioned in Arizona, although it is extremely small (power for 200 families). LINK

More plants are on the way (#270).

The German Aerospace Centre (DLR) has done large studies in this field on behalf of the German government. One of them (a 300-page report) can be found here:

In terms of storage, the report mentions an approach I haven't previously heard of: thermal storage. And "mention" is about all it does. No description, no cost analysis -- nothing. Just: "We'll take care of it with thermal storage."

How much faith do they have in thermal storage? Not a whole lot, as you can see from this passage:

"Each of these technologies can be operated with fossil fuel as well as solar energy. This hybrid operation has the potential to increase the value of CSP technology by increasing its power availability and decreasing its cost by making more effective use of the power block. Solar heat collected during the daytime can be stored in concrete, molten salt, ceramics or phase-change media. At night, it can be extracted from the storage to run the power block. Fossil and renewable fuels like oil, gas, coal and biomass can be used for co-firing the plant, thus providing power capacity whenever required."

It's the same fraudulent game that they're playing in Denmark (see #213). The title of the report says "SOLAR", and the fine print says "fossil fuel".

 
At Wednesday, April 26, 2006 at 12:32:00 AM PDT, Blogger JD said...

businesses up and running only when the sun is out will not fly

Last time I checked, most businesses already are running when the sun is out -- 9 to 5. ;-)

Ensure the same quantity/quality of service and we can re-address the issue.

This is at the heart of it. The same quantity/quality of service means the status quo, 100%, unchanged. Like I said to Avo: Billions of cars. Long commutes. Huge neon signs blazing all night long. Hot tubs. Houses built with brain-dead architecture requiring massive power for air-conditioning. Nuclear must be pressed into service to keep this going because changes in lifestyle are not acceptable. That's the core argument for massive nuclear, as I see it.

My question for you, ES, is this: If we're going to be going whole-hog nuclear, filling up one Yucca Mountain every couple of years, why do we need to conserve, at all? I mean, if it's asking too much to ask people to operate during the daytime, why criticize people who drive Hummers? Those Hummers provide a quantity/quality of service for them which must be maintained. Right?

In fact, isn't "Ensure the same quantity/quality of service and we can re-address the issue." just another way to say "our way of life is not negotiable"?

 
At Wednesday, April 26, 2006 at 4:01:00 AM PDT, Blogger Markku said...

JD,

in your long-term scenario, you're ignoring the explosive growth trend in computing power and the emergence of strong nanotechnology.

Global demand for energy may plummet towards the end of this century because people can and will -- and probably will have to, in order to protect themselves from highly advanced bio-/nanoweapons -- give up biological bodies altogether. By the end of the century, our minds, expanded beyond our wildest imaginations, will run on ultra-powerful computers requiring very little energy, and none that is not readily available in renewable form in overwhelming abundance relative to need.

The point: in about the middle of this century, what seems to us in 2006 a fundamental discontinuity in the history of intelligent life on this planet will occur. Any predictions depending on the current facts of life beyond that point are utterly meaningless.

 
At Wednesday, April 26, 2006 at 5:22:00 AM PDT, Blogger JD said...

markku,
That's certainly one possible outcome. It's just not the one I'm considering in this thread. Here I'm looking at the scenario of a massive world-wide roll-out of nuclear power.

 
At Wednesday, April 26, 2006 at 8:10:00 AM PDT, Blogger Markku said...

Last year I considered the singularity idea quite seriously (as might be obvious in this post I wrote for POD at the time). I'm not really so sure about it now. I think that advanced nanotech will be developed eventually and have wide effects, but I'm leaning more towards people like Ian Pearson, Jaron Lanier or Douglas Mulhall than with Ray Kurzweil.

The concept of technological singularity was invented in the 1960s, I think, and later popularized by Vernor Vinge in the 1990s. I haven't read anything written by the people you mention.

I think there's a huge philosophical conundrum about strong AI that has kind of been sidestepped here when you talk about brain uploading.

What philosophical problem do you mean? Just put the right molecules in the right order. Presto, you have an intelligent machine.

Whether or not a computer or an uploaded brain is conscious is indeed a philosophical question, but I think it has nothing to do with whether or not strong AI is possible.

These days I don't think we'll be computers by 2050. Or 2100. Or even 2150.

Why? I think no yuck-reaction will prevent people from getting themselves uploaded if the alternative is death. Consider the eventuality that anyone can use a nanofactory to produce any objects like, for instance, self-replicating destructive nanobots or airborne viruses. I'd say mere survival will require abandoning biological bodies at some point.

On the other hand, the global energy situation in 2050 doesn't worry me. Technology is progressing blindingly fast (though I don't know about accelerating exponentially).

Some technologies are indeed accelerating exponentially. According to industry roadmaps, Moore's Law will stay on course for the next decade or so based on photolithography alone. Enough time for it to be (gradually) replaced by carbon nanotubes. Or some other technology under development. Read MIT Technology Review.

It can't create Gods

So, you're saying that superhuman level intelligence is never achievable (on a non-biological platform)? I wouldn't know, but I'm guessing that it eventually will be. I don't believe intelligence requires anything special. The requisite raw computational capacity is already there and pretty soon it will be available cheaply. The problem is, of course, inventing the software of intelligence. But we do have an example between our ears that we can reverse-engineer.

... but if it can't produce solar technology effective enough to power the planet, I'll eat Matt Savinar's hat. Euchh.

Soon enough, fossil fuels will be expensive enough to make renewables such as photovoltaics competitive. I hope that happens before fossil fuels peak (though I suppose coal won't for a long time).

 
At Wednesday, April 26, 2006 at 9:32:00 AM PDT, Blogger Joe said...

If you use 2100 as a date, then you should take into account what the population of the world will be at that time. World population will peak around 2050 (9-9.5 billion) and decline thereafter. By 2100 it will probably be back around where it is today, and the population will be decreasing every year.

Currently it is difficult for efficiency gains to compensate for the 74 million people that the world is adding every year. If you add the current 1.1% population growth to 4.5% economic growth, then efficiency gains get swamped.

But by 2050, with population growth at 0% and with a much higher standard of living around the world, we should have no problem using less energy every year and still have a growing economy. By 2100, energy use could easily decline 2-3% per year - if we need it to.
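The arithmetic behind this comment can be sketched with a simple multiplicative model. A back-of-envelope illustration only, not from the original post: the 1.1% and 4.5% figures are the commenter's, and the efficiency-gain rates are assumptions for the sake of the example (treating the growth figures as adding, as the comment itself does).

```python
def energy_growth(pop_growth, per_capita_growth, efficiency_gain):
    """Annual fractional change in total energy demand: demand scales with
    population and per-capita output, and is divided by efficiency
    (economic output per unit of energy)."""
    return (1 + pop_growth) * (1 + per_capita_growth) / (1 + efficiency_gain) - 1

# Today-ish (2006): 1.1% population growth, ~4.5% economic growth, with an
# assumed 2%/yr efficiency gain -- demand still rises about 3.6% per year,
# i.e. efficiency gains get swamped.
now = energy_growth(0.011, 0.045, 0.02)

# 2050 scenario: flat population, 2% growth, assumed 3%/yr efficiency gain --
# energy use falls about 1% per year even as the economy grows.
later = energy_growth(0.0, 0.02, 0.03)

print(f"now: {now:+.1%}, later: {later:+.1%}")
```

The sign flip comes almost entirely from population: once population growth drops out of the product, quite modest efficiency gains are enough to shrink energy use each year.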

 
At Wednesday, April 26, 2006 at 8:27:00 PM PDT, Blogger Jon said...

JD, sorry this is a little off topic for this post, but I thought you might find it interesting.

I was watching the news this evening here in Calgary and they had an article about a company called Whitesands Insitu that is testing an extraction process for the Athabasca oil sands which sounds pretty interesting.

What they are planning to do is light a slow-burning, controlled smoldering fire in the asphalt-like bitumen, which will heat the material in the direction of the burn so that it can be pumped out.

They said they project they can recover between 60-80% of the oil this way, versus only up to 50% with other methods. I am guessing this also reduces the use of natural gas and water, as you are using the oil sands themselves to do the heating.

Pretty spiffy...

See:

http://www.petrobank.com/ops/html/cnt_heavy_white.html

 
At Thursday, April 27, 2006 at 4:50:00 AM PDT, Blogger Markku said...

Douglas Mulhall is an author. Jaron Lanier is a computer scientist and composer who invented virtual reality, and wrote the most intelligent (and scathing) refutation of the singularity ever. Ian Pearson is an AI scientist who doesn't believe in "hard takeoff".

Lanier wrote:

Cybernetic Totalist Belief #1: That cybernetic patterns of information provide the ultimate and best way to understand reality.

What else is there? Is there understanding without knowing patterns of information? Is there another way to use the word "understand" while making sense?

Belief #2: That people are no more than cybernetic patterns

Every cybernetic totalist fantasy relies on artificial intelligence. It might not immediately be apparent why such fantasies are essential to those who have them. If computers are to become smart enough to design their own successors, initiating a process that will lead to God-like omniscience after a number of ever swifter passages from one generation of computers to the next, someone is going to have to write the software that gets the process going, and humans have given absolutely no evidence of being able to write such software. So the idea is that the computers will somehow become smart on their own and write their own software.

My primary objection to this way of thinking is pragmatic: It results in the creation of poor quality real world software in the present. Cybernetic Totalists live with their heads in the future and are willing to accept obvious flaws in present software in support of a fantasy world that might never appear.


It seems to me that basically Lanier is saying here that since humans have so far never written software that comes anywhere near humans in general intelligence, it will never be done, which proves that humans have a mystical quality that sets them apart from mere information processes and that is somehow relevant to capacity for intelligent behavior. I'd call that a premature conclusion.

Belief #3: That subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect.

There is a new moral struggle taking shape over the question of when "souls" should be attributed to perceived patterns in the world.

Computers, genes, and the economy are some of the entities which appear to Cybernetic Totalists to populate reality today, along with human beings. It is certainly true that we are confronted with non-human and meta-human actors in our lives on a constant basis and these players sometimes appear to be more powerful than us.


The existence of subjective experience is an immediately obvious fact that only a jester would deny. However, there is no evidence of subjective experience being anything but a peripheral effect. There is no documented case of a mind without a brain. I believe in jackhammer psychology. My personal intuition is that it arises from certain types of information processes that take place in the higher regions of the human brain. But that's merely a guess and is as good as anybody else's.

Belief #4: That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all possible creativity and culture.

Cybernetic totalists are obsessed with Darwin, for he described the closest thing we have to an algorithm for creativity. Darwin answers what would otherwise be a big hole in the Dogma: How will cybernetic systems be smart and creative enough to invent a post-human world? In order to embrace an eschatology in which the computers become smart as they become fast, some kind of Deus ex Machina must be invoked, and it has a beard.

Unfortunately, in the current climate I must take a moment to state that I am not a creationist. I am in this essay criticizing what I perceive to be intellectual laziness; a retreat from trying to understand problems and instead hope for software that evolves itself. I am not suggesting that Nature required some extra element beyond natural evolution to create people.


What evidence is there for a special force of creation? On the other hand, genetic algorithms have proven capable of generating very complicated novel solutions to well formulated problems that no human has ever thought of. To me, it looks as if the difference between human creativity and machine creativity is merely that currently we have little idea what goes on under the hood in human creativity. Would such a detailed understanding of the neurophysiology of human creativity take the fun out of human creativity? I don't think so. Again, Lanier seems to be jumping to conclusions prematurely.
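To make concrete what "a solution no one wrote down" means in the small, here is a minimal genetic-algorithm sketch (purely illustrative, not tied to any system mentioned in this thread). A population of bitstrings evolves toward all ones through selection, crossover, and mutation; no step in the loop "understands" the problem.

```python
import random

random.seed(42)

def evolve(bits=20, pop_size=40, generations=200, mutation_rate=0.05):
    """Minimal genetic algorithm: maximize the number of 1s in a bitstring.
    Tournament selection, one-point crossover, bit-flip mutation, and
    elitism (the current best is always carried over unchanged)."""
    def fitness(ind):
        return sum(ind)

    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        new_pop = [best[:]]                                 # elitism
        while len(new_pop) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)    # tournament pick
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, bits)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]                        # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "ones out of 20")
```

The fitness function only scores candidates; the near-optimal bitstring emerges from blind variation plus selection, which is the sense in which such algorithms produce solutions nobody designed.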

Belief #5: That qualitative as well as quantitative aspects of information systems will be accelerated by Moore's Law.

I've never read anyone suggest that this is the case, least of all Kurzweil.

Belief #6, the coming cybernetic cataclysm.

Here Lanier discusses the increase of problems caused by ever more complex but flawed software. He also mentions how in the field of biotechnology vast databases are collected but how they are likely to remain fragmented, unstandardized, and heavily reliant on overworked human scientists to be useful. I think Lanier is correct in that integrating all that jumbled mess will require a lot of intelligence.

"What philosophical problem do you mean? Just put the right molecules in the right order. Presto, you have an intelligent machine."

Put the molecules in the right order, and you have a starship or a dinosaur - it's just really hard. The brute force power of computers may be increasing exponentially, but their intelligence is not. We can create tools, but not agents. Maybe eventually we will have them, but I think we're getting there in a linear way rather than an exponential way.

Before making any conclusions, I'd wait until the architecture of the entire human brain is mapped and understood in detail.

If computers are self-improving, then obviously we'll get to superintelligence eventually. But why does a computer need to be intelligent before it can improve itself?

You mean improve its own intelligence? It's a matter of taste, but I'd prefer to call such a computer intelligent.

"Whether or not a computer or an uploaded brain is conscious is indeed a philosophical question, but I think it has nothing to do with whether or not strong AI is possible."

Well it does, because nobody's going to upload themselves if they're not sure they'll be conscious on the other side!

Some people are sure. And those who are about to die anyway are very likely to take their chances.

Moreover, you're confusing two issues here: the philosophical problems of uploading and creating artificial intelligence (a machine whose outward behavior is firmly in the realm of what humans would call "intelligent").

By the way, would you personally agree to the Moravec transfer, that is, having your brain cells replaced by functional equivalents while you are fully conscious and capable of observing the process?

"Some technologies are indeed accelerating exponentially. According to industry roadmaps, Moore's Law will stay on course for the next decade or so based on photolithography alone. Enough time for it to be (gradually) replaced by carbon nanotubes. Or some other technology under development. Read MIT Technology Review."

I just don't think you can apply that to the whole of human society. It's too subjective.

What do you mean? What is subjective?

"So, you're saying that superhuman level intelligence is never achievable (on a non-biological platform)? I wouldn't know, but I'm guessing that it eventually will be. I don't believe intelligence requires anything special. The requisite raw computational capacity is already there and pretty soon it will be available cheaply. The problem is, of course, inventing the software of intelligence. But we do have an example between our ears that we can reverse-engineer."

Even if it is intelligent, what would the implications be?

Well, if a machine were built whose cognitive capacity matched that of humans, it could be instantly copied as many times as necessary. Raw processing power could quickly be used to amplify its memory capacity and speed. Unaugmented humans would quickly be out of work, for starters.

What if the technology to integrate humans and computers doesn't progress as fast as AI does? When was the last time you saw God?

I've never seen God. It's very hard to predict what would happen.

Ray Kurzweil:
"What will the Singularity look like to people who want to remain biological? The answer is that they really won't notice it, except for the fact that machine intelligence will appear to biological humanity to be their transcendent servants."
(Source).

Here's a wacky idea for you: I think superhuman intelligence exists now, as we all become more interconnected on the web, and as software gives us more power to make calculations and assess things. As we sit here on this blog we're part of a growing global consciousness that's greater than the sum of its parts — sorting through ideas, drawing conclusions, finding the middle point between Kurzweil and Deffeyes, whatever. This network has existed as long as there have been humans, but modern communication makes it far more effective. Imagine if this problem-solving network also included the billions of people in the third world, thanks to cheap IT? We already have strong AI — the internet is the artificial part and people are the intelligent part.


Well, that's nothing new. Human culture has always been way smarter than any individual human.

Anyway, this is all off topic. I think we need backup plans for everything. For example, we could all be wrong and nanotech, biotech, solar and wind power could all fall through ... then what? The nuclear conundrum. That's what I was trying to say in that article for POD: it wasn't just about the singularity, it was about how complicated the future is. It will exceed our expectations, but probably not in the way we imagine.

I'm in favor of building more nuclear power plants. Fossil fuels should all be replaced by nuclear power in electricity production. The use of fossil fuels in transportation should be minimized by taxing motor fuels more heavily. Plug-in hybrids are a great idea. If people can afford SUVs instead of cars, they can afford plug-in hybrids.

 
At Thursday, April 27, 2006 at 7:19:00 AM PDT, Blogger Markku said...

"What else is there? Is there understanding without knowing patterns of information? Is there another way to use the word "understand" while making sense?"

Well, no, if you define human understanding in a cybernetic sense (see belief 2). You can listen to a piece of music and understand it on a deep level. Having it in digitized form doesn't make it any easier to understand if you lack the capacity to understand it.

I think you narrowly understand "knowing a pattern of information" as possessing a formula describing a regularity. When I listen to music, I receive information via the auditory sense. Then I come to know the features and structures of the piece of music. I'm not able to enjoy music in written form because I can't read notes fluently. Clearly, there can be no "understanding" of the music without knowledge of the patterns of information therein.

"It seems to me that basically Lanier is saying here that since humans have so far never written software that comes anywhere near humans in general intelligence, it will never be done, which proves that humans have a mystical quality that sets them apart from mere information processes and that is somehow relevant to capacity for intelligent behavior. I'd call that a premature conclusion."

I believe a human will eventually be convincingly simulated. In fact, they already have been. As Lanier points out, a "reverse Turing test" occurs when someone lowers their own intelligence to that of a bad piece of software.

I just don't agree that a human simulation is that significant, or that it is a prerequisite for self-improving intelligence.


I never said it's necessary to simulate a human for that particular purpose.

It may be too early to rule such a computer out, but it would be just as premature to assume that we understand consciousness just because we understand the brain.

No such assumption is necessary.

"Nobody's going to upload themselves if they're not sure they'll be conscious on the other side!" —Some people are sure. And those who are about to die anyway are very likely to take their chances.

I don't agree that we're all likely to die because of bio- or nano-technology. And of all the future technologies I've read about, uploading seems the most implausible. We're more likely to go into space.

What I'm saying is that each one of us is very likely to die at some point in the future, owing to the fact that our bodies become fragile as they age. Suppose you are lying on your deathbed. Would you give permission to the hospital staff to scan your brain after you die to run a simulation of you in a computer? I would, since it would be the best hope available at that point to continue living.

"Human culture has always been way smarter than any individual human."

So, why wait around for a global superintelligence when we've got one already? Who cares if a computer can simulate the human mind, when real human minds are doing the job so well? If you like the idea of being inside a computer, then just think a thought and your wish is granted. :-)

I don't want to die. I know I will eventually become very frail and ill and die (in a matter of about half a century, should life expectancy remain constant) if I remain in my original biological body. I'd like my body (including the brain) to be kept in perfect working order indefinitely. Should biological immortality result in some kind of adverse psychological consequences in the long term, I'd like to have the ability to remedy them.

 
At Thursday, April 27, 2006 at 10:47:00 PM PDT, Blogger Joe said...

Sorry, Glen, but you said:

People have this blind faith in technology because of the rate of change we've seen in computers, but not all technology moves at that rate. Computers can not make our cars go 1000 miles to the gallon. They can't create fertilizer for our crops. They can't fuel power plants.


I just chatted with a co-worker in Singapore via my computer. I definitely got more than 1000 miles to the gallon.

GPS technology, using computers, is being applied to reduce the amount of fertilizer applied to crops. Genetic research being done, using computers, is increasing crop yield without requiring additional fertilizer.

Finally, we can use computers/technology to monitor and reduce our energy usage. These "negawatts" are better than any power plant.

 
At Friday, April 28, 2006 at 7:44:00 PM PDT, Blogger Mel. Hauser said...

Not to pretend like I know what I'm talking about, but.. couldn't a replicator just replicate fuel for itself?

 
At Monday, November 17, 2008 at 3:13:00 AM PST, Anonymous Anonymous said...

"Before dismissing this work as crap, think about the similarities between Jan Willem Storm van Leeuwen (now) and Colin Campbell (say 10 years ago). Both retired experts with a message everyone thinks is rubbish."

Why would I go through all the trouble when I can simply point out that he's a recalcitrant douche bag?

Energy consumption and uranium concentration for the production of yellowcake uranium are known for the Rössing mine in Namibia. If you plug the numbers into Storm van Asshat's formula, you get a result that is 2 orders of magnitude too large; in fact, it's more than the entire energy consumption of the nation of Namibia. He is well aware of this, but unfortunately he chose to reject reality and substitute his own formula.

He's unwilling to consider any mining technology other than directly excavating and crushing the rock into a fine powder; an energy-intensive process that is generally only performed when there are other minerals of economic importance that can be co-mined with uranium. One example of a much more efficient alternative method is in-situ leaching. He has been made aware that it exists but persists in not considering it.

Enrichment with gas centrifuges is over an order of magnitude more efficient than gaseous diffusion and has all but replaced diffusion as the means by which commercial nuclear fuel is enriched. Storm van Asshat considers only gaseous diffusion, even after it has been pointed out to him that diffusion is obsolete and what little remains is being quickly replaced.

Enrichment uses only electrical power. Since nuclear power provides baseload, there is no inherent need to use ANY fossil fuels here. Instead of simply deducting a few MW from the output of a 1 GW nuclear plant as being dedicated to enrichment, Storm van Asshat picks the grid mix as used in the US, which involves a whole lot of dirty coal power (which is exactly what nuclear replaces), and exclaims that enrichment produces a lot of CO2.
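The "few MW" claim can be sanity-checked with the standard separative-work (SWU) formula. The numbers below are typical published values rather than anything from this comment: 4.5% product assay, 0.711% natural feed, 0.25% tails, roughly 20 t of enriched fuel per GWe-year, and approximately 50 kWh/SWU for centrifuges versus 2,400 kWh/SWU for gaseous diffusion.

```python
import math

def value(x):
    """Value function for an ideal enrichment cascade."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg(xp=0.045, xf=0.00711, xw=0.0025):
    """Separative work units needed per kg of enriched product."""
    feed = (xp - xw) / (xf - xw)      # kg natural U feed per kg product
    tails = feed - 1.0                # kg depleted tails per kg product
    return value(xp) + tails * value(xw) - feed * value(xf)

swu = swu_per_kg()                    # ~6.9 SWU per kg of 4.5% fuel
annual_swu = 20_000 * swu             # ~20 t of fuel per GWe-year (assumed)
plant_kwh = 1e6 * 8760 * 0.9          # 1 GWe plant at 90% capacity factor

centrifuge_share = annual_swu * 50 / plant_kwh      # ~0.1% of plant output
diffusion_share = annual_swu * 2400 / plant_kwh     # ~4% of plant output
print(f"centrifuge: {centrifuge_share:.2%}, diffusion: {diffusion_share:.2%}")
```

On these assumptions, centrifuge enrichment draws under 1 MW average from a 1 GWe plant's own output, consistent with the "few MW" figure; even obsolete diffusion would take only a few percent of the electricity the plant generates.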

Storm van Asshat is also unwilling to consider heavy water reactors that run on natural uranium without enrichment like CANDU.

His unwillingness to correct or respond to any of these obvious mistakes justifies dismissing the lying sack of crap with contempt.

The lying dutchman is forever doomed to wander the shady reaches of the internet where even the most basic sanity checking is beyond the drooling morons that constitute his target audience.

 
