Matt Ridley: Does Science Drive Innovation?


Science is the daughter of technology


Politicians believe that innovation can be turned on and off like a tap. It starts, you see, with pure scientific insights, which then get translated into applied science, which in turn becomes useful technology. So what you must do, as a patriotic legislator, is ensure there is a ready supply of money to scientists on the top floor of their ivory towers, and lo and behold, technology will come clanking out of the pipe at the bottom of the tower.

This ‘linear model’ of how science drives innovation and prosperity goes right back to Francis Bacon, the Jacobean Lord Chancellor who urged England to catch up with the Portuguese in their use of science to drive discovery and commercial gain. Supposedly Prince Henry the Navigator in the fifteenth century had invested heavily in map-making, nautical skills and navigation at a special school at his villa on Portugal’s Sagres peninsula, which resulted in the exploration of Africa and great gains from trade. That’s what Bacon wanted to copy.


“The West Indies had never been discovered if the use of the mariner’s needle had not been first discovered . . . There is not any part of good government more worthy than the further endowment of the world with sound and fruitful knowledge.”


Yet recent scholarship has exposed this tale as a myth, or rather a piece of Prince Henry’s propaganda. Like most innovation, Portugal’s navigational advances came about by trial and error among sailors, not by speculation among astronomers and cartographers. If anything, the scientists were driven by the needs of the explorers rather than the other way around.


Professor Terence Kealey, a biochemist turned economist, tells this story to illustrate how he believes the linear dogma so prevalent in the world of science and politics – that science drives innovation, which drives commerce – is mostly wrong. It misunderstands where innovation comes from. Indeed, it generally gets it backwards. Again and again, once you examine the history of innovation, you find scientific breakthroughs as the effect, not the cause, of technological change. It is no accident that astronomy blossomed in the wake of the age of exploration. The steam engine owed almost nothing to the science of thermodynamics, but the science of thermodynamics owed almost everything to the steam engine. The flowering of chemistry in the late nineteenth and early twentieth centuries was driven by the needs of dye-makers. The discovery of the structure of DNA depended heavily on X-ray crystallography of biological molecules, a technique developed in the wool industry to try to improve textiles.


And so on, through case after case. The mechanisation of the textile industry was at the very heart of the Industrial Revolution, with its jennies, frames, mules, flying shuttles and mills going down in history as milestones in the industrialisation of Lancashire and Yorkshire, leading to Britain’s sudden enrichment and power. Yet nowhere among the journeymen and entrepreneurs who drove these changes can you find even a hint of science. Much the same is true of mobile telephony in the late twentieth century. You will search in vain for major contributions from universities to the cellphone revolution. In both cases, technological advances were driven by practical men who tinkered till they had better machines; philosophical rumination was the last thing they did.


As Nassim Taleb insists, from the methods used by thirteenth-century architects building cathedrals to the development of modern computing, the story of technology is a story of rules of thumb, learning by apprenticeship, chance discoveries, trial and error, tinkering – what the French call ‘bricolage’.


Technology comes from technology far more often than from science. And science comes from technology too. Of course, science may from time to time return the favour to technology. Biotechnology would not have been possible without the science of molecular biology, for example. But the Baconian model, with its one-way flow from science to technology, from philosophy to practice, is nonsense. There’s a much stronger flow the other way: new technologies give academics things to study.


An example: in recent years it has become fashionable to argue that the hydraulic fracturing technology that made the shale-gas revolution possible originated in government-sponsored research, and was handed on a plate to industry. A report by California’s Breakthrough Institute noted that microseismic imaging was developed by the federal Sandia National Laboratory, and that it ‘proved absolutely essential for drillers to navigate and site their boreholes’, which led Nick Steinsberger, an engineer at Mitchell Energy, to develop the technique called ‘slickwater fracking’.

To find out if this was true, I spoke to one of hydraulic fracturing’s principal pioneers, Chris Wright, whose company Pinnacle Technologies reinvented fracking in the late 1990s in a way that unlocked the vast gas resources in the Barnett shale, in and around Fort Worth, Texas. Utilised by George Mitchell, who was pursuing a long and determined obsession with getting the gas to flow out of the Barnett shale to which he had rights, Pinnacle’s recipe – slick water rather than thick gel, under just the right pressure and with sand to prop open the fractures through multi-stage fracturing – proved revolutionary. It was seeing a presentation by Wright that persuaded Mitchell’s Steinsberger to try slickwater fracking.

But where did Pinnacle get the idea? Wright had hired Norm Warpinski from Sandia, a federal laboratory. But who had funded Warpinski to work on the project at Sandia? The Gas Research Institute, an entirely privately funded gas-industry research coalition, whose money came from a voluntary levy on interstate gas pipelines. So the only federal involvement was to provide a space in which to work. As Wright comments: ‘If I had not hired Norm from Sandia there would have been no government involvement.’

This was just the start. Fracking still took many years and huge sums of money to bring to fruition as a workable technology, and most of that work was done by industry. Government laboratories beat a path to Wright’s door once he had begun to crack the problem, offering their services and their public money to his efforts to improve fracking still further, and to study just how fractures propagate in rocks a mile beneath the surface. They climbed on the bandwagon, and got some science to do as a result of the technology developed in industry – as they should. But government was not the wellspring.

As Adam Smith, looking around the factories of eighteenth-century Scotland, reported in The Wealth of Nations: ‘a great part of the machines made use of in manufactures . . . were originally the inventions of common workmen’, and many improvements had been made ‘by the ingenuity of the makers of the machines’. Smith dismissed universities even as a source of advances in philosophy. I am sorry to say this to my friends in academic ivory towers, whose work I greatly value, but if you think your cogitations are the source of most practical innovation, you are badly mistaken.

Ridley, Matt. The Evolution of Everything: How New Ideas Emerge (Kindle Locations 2190-2246). HarperCollins. Kindle Edition.

Science as a private good


It follows that there is less need for government to fund science: industry will do this itself. Having made innovations, it will then pay for research into the principles behind them, as it did with microseismic imaging and fracking. Having invented the steam engine, it will pay for thermodynamics. This conclusion of Terence Kealey’s is so heretical as to be incomprehensible to most economists, as well as scientists. It has been an article of faith for decades in both of their professions that science would not get funded if government did not do it, and economic growth would not happen if science did not get funded by the taxpayer. This received wisdom has been handed down for more than half a century. It was the economist Robert Solow who demonstrated in 1957 that innovation in technology was the source of most economic growth – at least in societies that were not expanding their territory or growing their populations. It was his economist colleagues Richard Nelson and Kenneth Arrow who explained in 1959 and 1962 respectively that government funding of science was necessary, because it is cheaper to copy others than to do original research. This makes science a public good, a service, like the light from a lighthouse, that must be provided at public expense, because nobody will supply it for free. No private individual will do basic science, for the insights that follow from it will be freely available to his rivals.

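For readers who want the formal statement behind that claim, here is the textbook growth-accounting decomposition underlying Solow’s 1957 result (the notation is the standard Cobb–Douglas form; the equation is not spelled out in Ridley’s text):

$$
Y = A\,K^{\alpha}L^{1-\alpha}
\quad\Longrightarrow\quad
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}
$$

Here $Y$ is output, $K$ capital, $L$ labour and $\alpha$ capital’s share of income. The term $\dot{A}/A$ is the ‘Solow residual’: whatever growth remains once the accumulation of capital and labour has been accounted for. Solow identified this residual with technical change, and estimated that it accounted for roughly seven-eighths of the growth in US output per worker-hour between 1909 and 1949 – which is why innovation, rather than investment alone, gets credited as the source of most economic growth.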

‘The problem with the papers of Nelson and Arrow,’ writes Kealey, ‘was that they were theoretical and one or two troublesome souls, on peering out of their economists’ eyries, noted that in the real world there did seem to be some privately funded research happening.’ Kealey argues that there is still no empirical demonstration of the need for public funding of research, and that the historical record suggests the opposite. In the late nineteenth and early twentieth centuries, Britain and the United States made huge contributions to science with negligible public funding, while Germany and France, with hefty public funding, achieved no greater results either in science or in economics. ‘The industrialised nations whose governments invested least in science did best economically,’ says Kealey, ‘and they didn’t do so badly in science either.’


To most people, the argument for public funding of science rests on a list of the discoveries made with public funds, from the internet (defence science in the United States) to the Higgs boson (particle physics at CERN in Switzerland). But that’s highly misleading. Given that government has funded science munificently, it would be odd if it had not found out something. We learn nothing about what would have been discovered by alternative funding arrangements. And we can never know what discoveries were not made, because government funding of science inevitably crowded out much of the philanthropic and commercial funding, which might have had different priorities.


After World War II, Britain and the United States changed tack and began to fund science heavily from the public purse. With the success of war science and of the Soviet state funding that led to Sputnik, it seemed obvious that state funding must make a difference. Yet the lesson could have been read the other way: Sputnik relied heavily on Robert Goddard’s work, which had been funded privately by the Guggenheims. And there was no growth dividend for Britain and America from this science-funding rush. Their economies grew no faster than they had before.

In 2003, the OECD published a paper on ‘sources of growth in OECD countries’ between 1971 and 1998, finding to its explicit surprise that whereas privately funded research and development stimulated economic growth, publicly funded research had no economic impact whatsoever. None. This earthshaking result has never been challenged or debunked. Yet it is so inconvenient to the argument that science needs public funding that it is ignored.


In 2007, Leo Sveikauskas of the Bureau of Labor Statistics concluded that returns from many forms of publicly financed R&D are near zero, and that ‘many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all’. As Walter Park of the American University concluded, the explanation of this discrepancy is that public funding of research almost certainly crowds out private funding. That is to say, if the government spends money on the wrong kind of science, it tends to stop people working on the right kind of science. But, given that the government takes more than one-third of a nation’s GDP in most countries and spends it on something, it would be a pity if none of that money found its way to science, which is after all one of the great triumphs of our culture.

Innovation, then, is an emergent phenomenon. The policies that have been tried to get it going – patents, prizes, government funding of science – may sometimes help, but are generally splendidly unpredictable. Where conditions are right, new technologies will emerge to their own rhythm, in the places and at the times most congenial to them. Leave people free to exchange ideas and back hunches, and innovation will follow. So too will scientific insight.


Ridley, Matt. The Evolution of Everything: How New Ideas Emerge (Kindle Locations 2246-2284). HarperCollins. Kindle Edition.
