What Happened To The Future?


Those of a certain age (like myself) remember the era of techno-optimism in the 1960s and even earlier: by the year 2000, we would have flying cars and personal jetpacks, we would take holidays on the Moon, and manned exploration of the other planets of our solar system would be well underway.

For one idea of what this future would look like, just watch the classic movie 2001: A Space Odyssey from 1968. There are other portrayals from around that era along similar lines.

So what actually happened? Instead of routine rocketry and widespread space travel, we got computers and the Internet, mobile phones, high-definition TV--the whole panoply of digital technology. We did get some manned space travel (the International Space Station has been permanently manned since November 2000), but no-one has left low Earth orbit since the last Apollo Moon landing in 1972: manned space travel still remains complicated, expensive, and dangerous. Unmanned probes have at least made fly-bys of every planet, and a few comets, asteroids and Pluto, as well as landings on several bodies. While these have returned an amazing wealth of information (just look at the pictures), one comment I read pointed out that the work that the Curiosity rover managed in its first month on Mars could have been achieved in an hour by a human on the scene.

And closer to ground level, intercontinental passenger flight has become commonplace--so commonplace that, far from being the exotic rarity it was in the 1960s (the province of a privileged few referred to by the quaint phrase “the jet set”), nowadays the banality of the experience is summed up in a phrase like “cattle class”.

To sum it up, the predicted future was based on the assumption of the ready (and cheap) availability of large amounts of controlled energy, in the form of rockets or jets or whatever other propulsion systems those vehicles would use. While some lower-energy part of the vision was realized, what we mainly got instead was large amounts of information-processing capability, in the form of the integrated computer microchip.

Where would the energy have come from, to power the predicted future? I think the answer is obvious: it would have been atomic power. Nuclear fission power plants were already in deployment, and nuclear fusion (in a controlled form, not in the form of bombs) had at least started development. There was this assumption that atomic power plants would become as cheap, small and light as internal-combustion engines are today (or even smaller, cheaper and lighter), while affording orders of magnitude greater power output.

We all know that didn’t happen. Safety and pollution concerns soon reared their ugly heads. Nuclear fission plants remain large and expensive, not to mention complicated to run. Nuclear fusion is still far from reality, though steady progress is being made.

Why didn’t nuclear fission fulfil its original promise? I think a big factor is that the development of fission plants took a wrong turn. Their origins in the Cold War led to the decision to adopt designs based on isotopes of uranium and plutonium as fuel--the same energy sources that went into nuclear bombs. Furthermore, many reactors were “breeder” designs--that is, they actually produced more fuel than they consumed, their neutron emissions transmuting less useful isotopes into fissile ones capable of more energetic reactions. Clearly, one important point of this technology was to provide support for the manufacture of nuclear weapons.

Then there was the safety issue: all these designs required active cooling and damping to keep the reaction under control. While there was no chance of a reactor exploding like a bomb (a very difficult thing to make happen even when you want it to), a massive-enough failure of the control systems would still lead to a runaway reaction causing a meltdown of the reactor core. Modern reactor designs devote a lot of effort to ensuring containment of any potential failure, so an explosive incident like Chernobyl is rather less likely; nevertheless, if a meltdown were to happen, the entire reactor would become a writeoff, not to mention a radiation hazard that has to stay off-limits and sealed off for decades or centuries.

There are other ways that nuclear fission could have been developed. Currently a certain amount of work is going into designs using thorium as fuel. This has the political advantage that the fuel and its by-products are useless for bomb-making. Also, the designs are based on recirculating hot, molten thorium salts through the reactor. If the control system failed, there would be no runaway reaction: instead, if all power were lost, the recirculation would stop and the molten salt would lose heat and solidify; instead of a meltdown, you would get a “freezedown”, with minimal safety implications, and all power output (controlled or otherwise) would simply stop.

Would the reactor still be a writeoff? I don’t know what the technological limitations are; but perhaps techniques could be developed to get a frozen reactor working again; certainly the radiation hazards that might prevent this should be much lower.

As for nuclear fusion, there has been a joke for many decades that its use as a practical power source is always “30 years away”. But it could be that that distant horizon is gradually getting closer...

So what does the future hold from here? Computer technologies have exploded like mad in capabilities and applications over the last half-century. Less than a human lifetime separates the era when illustrious luminaries proclaimed that the entire world might need “five or six computers” from one in which more ARM processor chips (the kind used in mobile phones and other common devices) are manufactured each year than there are people on Earth.

A principle called “Moore’s law”, first enunciated in 1965, said that the number of transistors that could be packed into a single microchip would double every 18-24 months. That remained true for far longer than anyone imagined, but the exponential curve is now flattening out, as the transistors simply become too small to work reliably, given the limits imposed by the laws of physics. The amount of processing power that a single processor on a microchip can offer (not quite what Moore’s law was addressing, though often confused with it) has grown almost as fast, but it has plateaued for some years now.
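To make that doubling concrete, here is a minimal sketch of the projection. The starting point (the Intel 4004 of 1971, with roughly 2,300 transistors) and the two-year doubling period are assumptions chosen purely for illustration, not an exact fit to chip-industry history:

```python
# Rough Moore's-law projection: transistor count doubling at a fixed rate.
# Baseline and doubling period are illustrative assumptions.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count from the Intel 4004's ~2,300 transistors."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011):
    print(y, f"{transistors(y):,.0f}")
```

Forty years of doublings takes the count from thousands to billions--which is why even a slightly longer doubling period, or an early plateau, makes an enormous difference a few decades out.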

Our mobile devices rely heavily on chemical batteries, and a lot of ingenuity has gone into coming up with more efficient designs for these. But the amount of energy available to run them still remains a major limiting factor.

Even sedentary machines that can remain plugged into mains power are hitting energy limits: here the main factor is the cost of the energy, not just for powering the equipment but also for the air-conditioning to carry away that spent energy in the form of waste heat. Companies offering “cloud” services run hundreds of thousands of servers; supercomputer installations can now have a million separate processing units; the main expenditure in running all of these nowadays is the electricity bill.
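As a back-of-the-envelope illustration of why the electricity bill dominates, consider a hypothetical server fleet. The fleet size, average power draw (including cooling overhead) and tariff below are all invented for the sketch:

```python
# Back-of-the-envelope datacentre electricity bill.
# Every input here is an illustrative assumption.
servers = 100_000          # hypothetical fleet size
watts_per_server = 400     # average draw, including cooling overhead
price_per_kwh = 0.10       # assumed tariff, USD per kWh
hours_per_year = 24 * 365

energy_kwh = servers * watts_per_server / 1000 * hours_per_year
annual_cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:,.0f} kWh/year ≈ ${annual_cost:,.0f}/year")
```

Even with these modest made-up numbers, the bill runs to tens of millions of dollars a year--and real cloud fleets are larger still.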

Which brings us back to where we came in: remember how I said that the original predictions about the future at the turn of the 21st century were built on the assumption of abundant energy availability? And what we got was abundant information-processing capability instead? Well, now that information-processing capability has grown to the point where it starts to be constrained by the energy availability.

In other words, to make further progress along the information-processing path, we will need to address the issue we failed to address before: how to exploit much greater sources of controlled energy. Which may very well have the nice side-effect of bringing about the realization of some of those other long-delayed predictions.

Of course, another option is that we stop making progress along that path, and turn in some other direction instead. Who says the history of technology is a linear progression? There was no straight line leading to where we are now, and there won’t be one leading onward...
© 2016 - 2022 default-cube