Economic models can’t deal with radical uncertainty

Earlier this year, Andy Haldane, Chief Economist at the Bank of England (BoE), blamed “irrational behaviour” for the failure of the BoE’s recent forecasting models. The failure to spot this irrationality had led policymakers to forecast that the British economy would slow in the wake of last June’s Brexit referendum. Instead, British consumers have been on a heedless spending spree since the vote to leave the EU. And, no less illogically, construction, manufacturing and services have recovered.

Haldane offered no explanation for this burst of irrational behaviour. Nor can he: to him, irrationality simply means behaviour that is inconsistent with the forecasts derived from the BoE’s model.

Act rational
It’s not just Haldane or the BoE. What mainstream economists mean by rational behaviour is not what you or I mean. In ordinary language, rational behaviour is that which is reasonable under the circumstances. But in the rarefied world of neoclassical forecasting models, it means that people, equipped with detailed knowledge of themselves, their surroundings, and the future they face, act optimally to achieve their goals. That is, to act rationally is to act in a manner consistent with economists’ models of rational behaviour. Faced with contrary behaviour, the economist reacts like a tailor who blames the customer for failing to fit into the newly made suit.

Yet the curious fact is that forecasts based on wildly unrealistic premises and assumptions may be perfectly serviceable in many situations. The reason is that most people are creatures of habit. Because their preferences and circumstances don’t in fact shift from day to day, and because they do try to get the best bargain when they shop around, their behaviour will exhibit a high degree of regularity. This makes it predictable. You don’t need much economics to know that if the price of your preferred brand of toothpaste goes up, you are more likely to switch to a cheaper brand.

Central banks’ forecasting models essentially use the same logic. For example, the BoE (correctly) predicted a fall in the sterling exchange rate following the Brexit vote. This would cause prices to rise – and therefore consumer spending to slow. Haldane still believes this will happen; the BoE’s mistake was more a matter of timing than of logic.

This is equivalent to saying that the Brexit vote changed nothing fundamental. People would go on behaving exactly as the model assumed, only with a different set of prices. But any prediction based on recurring patterns of behaviour will fail when something genuinely new happens.

New unknowns
Non-routine change causes behaviour to become non-routine. But non-routine does not mean irrational. It means, in economics-speak, that the parameters have shifted. The assurance that tomorrow will be much like today has vanished. Our models of quantifiable risk fail when faced with radical uncertainty.

The BoE conceded that Brexit would create a period of uncertainty, which would be bad for business. But the new situation created by Brexit was actually very different from what policymakers, their ears attuned almost entirely to the City of London, expected. Instead of feeling worse off (as ‘rationally’ they should), most Leave voters believe they will be better off.

Justified or not, the important fact about such sentiment is that it exists. In 1940, immediately after the fall of France to the Germans, the economist John Maynard Keynes wrote to a correspondent: “Speaking for myself, I now feel completely confident for the first time that we will win the war.” Likewise, many Brits are now more confident about the future.

This, then, is the problem – which Haldane glimpsed but could not admit – with the BoE’s forecasting models. The important things affecting economies take place outside the self-contained limits of economic models. That is why macroeconomic forecasts end up on the rocks when the sea is not completely flat. The challenge is to develop macroeconomic models that can work in stormy conditions: models that incorporate radical uncertainty and, therefore, a high degree of unpredictability in human behaviour.

Keynes’s economics was about the logic of choice under uncertainty. He wanted to extend the idea of economic rationality to include behaviour in the face of radical uncertainty, when we face not just unknowns, but unknowable unknowns. This, of course, has much more severe implications for policy than a world in which we can reasonably expect the future to be much like the past.

Fresh angle
There have been a few scattered attempts to meet the challenge. In their 2011 book Beyond Mechanical Markets, the economists Roman Frydman of New York University and Michael Goldberg of the University of New Hampshire argued powerfully that economists’ models should try to “incorporate psychological factors without presuming that market participants behave irrationally”. Proposing an alternative approach to economic modelling that they called ‘imperfect knowledge economics’, they urged their colleagues to refrain from offering ‘sharp predictions’ and argued that policymakers should rely on ‘guidance ranges’ based on historical benchmarks, to counter excessive swings in asset prices.

The Russian mathematician Vladimir Masch has produced an ingenious scheme of ‘risk-constrained optimisation’, which makes explicit allowance for the existence of a ‘zone of uncertainty’. Economics should offer “very approximate guesstimates [requiring] only modest amounts of modelling and computational effort”, he said.

But such efforts to incorporate radical uncertainty into economic models, valiant though they are, suffer from the impossible dream of taming ambiguity with maths, and (in Masch’s case) with computer science. Haldane, too, seems to put his faith in larger data sets.

Keynes, for his part, didn’t think this way at all. He wanted an economics that would give full scope for judgment, enriched not only by mathematics and statistics, but also by ethics, philosophy, politics and history – subjects dropped from contemporary economists’ training, leaving only a mathematical and computational skeleton. To offer meaningful descriptions of the world, economists, he often said, must be well educated.

© Project Syndicate 2017

Geoengineering is an essential bridge to true sustainability

Even climate activists increasingly recognise that the lofty rhetoric of the global agreement to reduce greenhouse gas emissions, concluded in Paris over a year ago, will not be matched by its promises’ actual impact on temperatures. This should make us think about smart, alternative solutions. But one such alternative, geoengineering, is a solution that many people refuse to entertain.

Geoengineering means deliberately manipulating the Earth’s climate. It seems like something from science fiction. But it makes sense to think of it as a prudent and affordable insurance policy.

Climate summit after climate summit has failed to affect global temperatures for a simple reason: solar and wind power are still too expensive and inefficient to replace our reliance on fossil fuels. The prevailing approach, embodied by the Paris Agreement, requires governments to try to force immature, uncompetitive green technologies on the world. That’s hugely expensive and inefficient.

Engineered reality
The Bill Gates-led Breakthrough Energy Ventures fund announced last year provided reason for hope. At the centre of any response to global warming must be a focus on making renewable energy cheaper and more competitive through research and development. Once innovation drives the price of green energy below that of fossil fuels, everyone will switch. Much more research funding is needed, but such innovation will take time. That’s where geoengineering could play a role.

This year, for the first time, the US Government office that oversees federally funded climate studies is formally recommending geoengineering research. The move had the backing of Barack Obama’s former science advisor John Holdren, who said that geoengineering has “got to be looked at”.

Last year, 11 climate scientists declared that the Paris Agreement had actually set back the fight against climate change, saying: “Our backs are against the wall and we must now start the process of preparing for geoengineering.” The crucial benefit of investigating geoengineering is that it offers the only way to reduce the global temperature quickly. Any standard fossil fuel reduction policy will take decades to implement, and a half-century to have any noticeable climate impact. Geoengineering can literally reduce temperatures in a matter of hours and days. That is why only geoengineering, not investments in renewables, can be an insurance policy.

Moreover, geoengineering promises to be exceptionally cheap, making it much more likely than expensive carbon cuts to be implemented. This also means that it is more likely to be deployed by a single country or even a rogue billionaire. Given this, it is essential that we seriously investigate its effects beforehand, to ensure that it works and doesn’t deliver unexpected, negative results.

To be clear, I am not advocating that we should start geoengineering today or even in this decade. But it merits serious research, especially in view of the limitations of the Paris Agreement.

Saviour selector
So, what exactly should be studied? Many methods of atmospheric engineering have been proposed. The most talked-about process takes inspiration from nature. When Mount Pinatubo erupted in 1991, about 15 million tons of sulphur dioxide was pumped into the stratosphere, reacting with water to form a hazy layer that spread around the globe. By scattering and absorbing incoming sunlight, the haze cooled the Earth’s surface for almost two years. We could mimic this effect through stratospheric aerosol injection, essentially launching material like sulphur dioxide or soot into the stratosphere.

Perhaps the most cost-effective and least invasive approach is a process called marine cloud whitening, whereby seawater droplets are sprayed into marine clouds to make them slightly whiter and reflect more sunlight. This augments the naturally occurring process by which salt from the oceans provides the condensation particles for water vapour, creating clouds and boosting their whiteness.

Research for the Copenhagen Consensus, the think tank I direct, has shown that spending just $9bn on 1,900 seawater spraying boats could prevent all of the global warming set to occur this century. It would generate benefits – from preventing the entire temperature increase – worth about $20trn. That is the equivalent of doing about $2,000 worth of good with every dollar spent.
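
The arithmetic behind that last figure is simple enough to check; a back-of-the-envelope calculation using the rounded numbers quoted above (not the Copenhagen Consensus’s own workings) runs as follows:

```python
# Back-of-the-envelope check of the benefit-cost claim, using the
# rounded figures quoted above (illustrative only)
cost = 9e9        # $9bn spent on seawater-spraying boats
benefit = 20e12   # roughly $20trn of avoided climate damage

good_per_dollar = benefit / cost
print(f"${good_per_dollar:,.0f} of benefit per dollar spent")  # ~$2,222
```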

To put this in context, the Paris Agreement’s promises will cost more than $1trn annually, and will deliver carbon cuts worth much less – most likely every dollar spent will prevent climate damage worth a couple of cents.

People are understandably nervous about geoengineering. But many of the risks have been overstated. Marine cloud whitening, for example, amplifies a natural process and would not lead to permanent atmospheric changes – switching off the entire process would return the world to its previous state in a matter of days. It could be used only when needed.

The case for serious research into geoengineering is compelling. As a growing number of scientists are recognising, the planet needs more opportunities to address global warming. With a climate outcome as weak and costly as the one implied by the Paris Agreement, such opportunities cannot arrive too soon.

© Project Syndicate 2017

Data and the digital advertising dilemma

For all that’s changed since the advertising boom of the 50s, marketing is still an imperfect science. The arrival of the internet, and with it big data and targeted advertising, was supposed to herald a new golden age for advertising, but so far many brands remain in the dark about how best to use these new tools. One industry in which digital advertising does seem to have found a natural home, however, is politics: recent surprise election victories for candidates who invested heavily in online marketing suggest firms may have something to learn from political advertising.

While the traditional advertising media of television and billboards trumpet announcements of a new product far and wide, this approach is the marketing equivalent of taking a sledgehammer to a walnut. It gets the job done, but the potential customer base for most products is only a small slice of the mass audience who end up seeing the ad. Money used to advertise to the remaining chunk, who have no chance of buying the product, would be better spent elsewhere.

The ideal solution is to show adverts for a product only to the subset of people who would likely want to buy it, and this is exactly what online advertising was supposed to help brands do. However, so far, despite spending increasingly large portions of the advertising budget on digital channels, many firms struggle to unlock its benefits.

Big data, big potential
Digital advertising, which runs on the personal data left behind as debris by the internet explosion, is a relatively new discipline, but such is its massive potential that universities have been quick to develop courses to teach it. As a Professor of Marketing at DePaul University in Chicago, Bruce Newman teaches one such course. He is also well versed in digital advertising in politics, having authored a book on former US President Barack Obama’s campaign techniques.

Newman described how companies have struggled to master digital advertising: “Because the digital world is so fragmented with noise and misinformation and ‘fake news’, brands struggle to get across a message in a digital way that resonates with customers.”

Fresh personal data is spawned every time we visit a website, send a message, or make a purchase online and, as we spend greater portions of our day roaming the internet, our data trail proliferates. Tech firms harvest this data and sell the fattened product on to companies for advertising purposes since, like lots of tiny pixels merging to form an image, the aggregation of many data points forms a valuable profile of each user’s likes and interests.
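
A rough illustration of that aggregation, as a minimal sketch with invented event data rather than any real ad-tech pipeline:

```python
from collections import Counter

# Hypothetical browsing and purchase events harvested for a single user
events = [
    {"type": "page_view", "topic": "running shoes"},
    {"type": "purchase",  "topic": "running shoes"},
    {"type": "page_view", "topic": "marathon training"},
    {"type": "page_view", "topic": "headphones"},
]

# Each event is one 'pixel'; counting them builds the overall picture
profile = Counter(event["topic"] for event in events)
print(profile.most_common(2))  # [('running shoes', 2), ('marathon training', 1)]
```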

Armed with all this data, Newman described how companies can tailor advertising specifically. A firm can “create a segment of the brand going to each individual, as opposed to what was done 30 or 40 years ago with one commercial on television going to millions of people,” he said.

The ability to curate adverts by customer is not the only benefit of internet marketing. The way people interact online could also make advertising much more efficient.

The ‘bubbles’ of like-minded people that form on social media are usually discussed as an unpleasant facet of online life, keeping everyone sealed in cosy echo chambers, safely insulated from any opposing views. However, they could be very useful in advertising; as far back as 2012, computer scientists studying network effects realised messages spread far more quickly in bubbles, since like-minded people are inclined to share each other’s content. Embed an advert in the right online article and consumers will do the work of circulating it to their friends, and so the walls of a thought bubble on Facebook or Twitter can make for a much cheaper and more effective screen on which to advertise than a billboard or television.

Bad ads
However, despite all the potential of the virtual world, in reality digital advertising is often still done very, very badly.

Evan Neufeld is head of Intelligence at L2, a New York firm that specialises in research into the effectiveness of digital marketing. He attested to the unfulfilled potential of data. “We talk a big game about targeting but at the end of the day Amazon perpetually sells me crap I already bought. The reality of it is we’re at this very early stage where a lot of targeting is not very good,” he said.

Badly used data leads to badly spent advertising dollars and, as Neufeld pointed out, can have very counterproductive side effects. “If I’m looking at something on Amazon and I go to The Wall Street Journal and it appears again, a lot of people have no idea what that’s about and it freaks them out… Nothing stinks quite so much as bad targeting,” he said.

Overt targeting creates a negative association with the brand, since most people find it invasive, according to Neufeld. “The number one thing people are concerned about is their privacy. Apart from Millennials [that is] – they have a very different bar about privacy online, they don’t think they’re going to receive a lot of privacy online.” And Millennials, with their heightened sense of individuality and lack of brand loyalty, create their own set of problems.

The politics of data
One field where data is being used impressively is political campaigning. Micro-targeting (the practice of identifying a voter’s key areas of policy concern and directing advertising to address them) has come to define campaign strategy. This is true for traditional advertising too, since digital data is used to decide how to allocate billboard and TV spend geographically.

Newman described how Obama’s campaign team pioneered this practice for his 2012 presidential run. “What the Obama people did was they collected data on thousands of variables for individuals, and created statistical models that forecast the likelihood a person would cast a ballot for Obama. Hence they could identify a particular area with people who fit the profile of an Obama voter. The logical choice says take the advertising dollars [spent] both on television and digital and target those areas where those people live,” he said.
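
A heavily simplified sketch of such a vote-propensity model, using logistic regression from scikit-learn on invented features (the Obama team’s actual models drew on thousands of variables and far richer data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented voter attributes: age, voted in the last election, registered with the party
X_train = np.array([[23, 0, 1], [45, 1, 1], [67, 1, 0], [34, 0, 0], [52, 1, 1]])
y_train = np.array([1, 1, 0, 0, 1])  # 1 = supported the candidate

model = LogisticRegression().fit(X_train, y_train)

# Score new voters, then concentrate ad spend where predicted support is highest
X_new = np.array([[29, 0, 1], [61, 1, 0]])
print(model.predict_proba(X_new)[:, 1])  # probability each voter backs the candidate
```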

Other high profile political examples that used micro-targeting include the recent successful campaign to have the UK leave the EU, Donald Trump’s US presidential run, and Emmanuel Macron’s bid to become President of France. In the recent UK elections, the Labour Party attained a much more positive result than expected after reportedly outspending the Conservative Party on digital advertising. According to Newman, Labour were probably successful in part because they “were able to determine that young people didn’t take the opportunity to cast a ballot in the Brexit referendum and they leveraged that information and targeted young people through Facebook, encouraging them to vote”.

Newman stressed how using the right sort of data out of the masses available is key to effective advertising. “The best data available is data that tells the organisation about the behaviour of the individual. The next best is data that tells you about people’s attitudes and what they think about things,” he said.

Arguably, then, part of the reason digital advertising works better in politics than other industries is because political affiliation is easier to extract from the type of articles you consume and share than, say, your preferences for shoes.

New data, old problems
Despite the imperfections of digital advertising, companies continue to spend large sums on it, at the expense of traditional methods. “If you look at the advertising dollars that have been spent worldwide in the last 20 years it’s kept up with the cost of living. What has happened is venues like print and TV have seen decreased allocation as more of that money has flowed to digital so it is very much a kind of Peter steals from Paul to pay for digital advertising,” said Neufeld.

As Neufeld pointed out, this is partly a numbers game. To take a TV example, now that “Netflix has more subscriptions in the US than cable, advertising dollars go where the people are,” he said. This has seen the hollowing out of media industries that relied on advertising for revenue.

However, while at first glance the advertising industry seems to have changed along with media, scratch the surface and it is clear this follows a pattern. “The top ten agencies for the era of radio in the 20s, print in the 30s, TV in the 40s and 50s, and cable in the 70s and 80s have changed, so this has a historical precedent. An agency that’s great at doing a 15 second branded TV spot is not going to be great at digital,” Neufeld said.

It seems that while it was tempting to see data as the new dawn, the industry remains fundamentally unchanged – all the data in the online world is no substitute for a good advertising strategy. Companies can learn from how political campaigning carefully mines useful data, but it is important to remember that while political affiliation correlates neatly with choice of online articles and friend circles, consumer choices usually do not.

As Newman put it: “This is the challenge to the digital world, how do you define your brand to the people you’re targeting? Despite the technology involved, be that big data or targeted marketing, it all comes down to a fundamental marketing question, how do you define yourself in the digital world?”

French government to invest €10bn in innovation

During his campaign, recently elected French President Emmanuel Macron set out a host of pledges that he claimed would cement France’s position as a ‘start-up nation’. On July 5, French Finance Minister Bruno Le Maire reaffirmed one of the central tenets of this strategy, announcing that, over the coming months, the government will begin selling its stakes in large companies in order to free up funds for investing in innovation.

In a speech promoting business opportunities in Paris, Le Maire said: “We will make €10bn available to finance innovation… I will announce in the coming months significant stake sales in public companies that would enable the taxpayer to see its money is being spent on the future and not the past,” according to Reuters.

Currently, the French Government owns stakes in several large companies through its state-controlled asset management agency, with assets coming to a total value of around €90bn. Among others, the agency holds significant stakes in telecoms giant Orange, energy group Electricité de France and carmaker Renault. Le Maire has not yet announced which firms he is planning to pull the funds from.

The fund, according to Macron, is geared towards giving visibility to innovation in France and making the country a more attractive environment for entrepreneurs. Addressing the divestment strategy on July 5, Le Maire said that while investing in disruptive innovation doesn’t generate quick profits, it would help France to compete with other large economies such as China and the US.

As it stands, France is already gaining traction in the start-up sphere. According to research by EY, it is ranked second in Europe in terms of value and volume of venture capital fundraising. Paris’ growing clout as an entrepreneurial hub was also recently underscored by Facebook’s decision to choose the city for its first ever start-up campus.

Having painted himself as a powerful proponent of innovation, Macron has stoked high expectations for his pro-business policies. “France will become the leading country for hyper-innovation… I want France to be a start-up nation. A nation that thinks and moves like a start-up. Entrepreneur is the new France,” he said as he announced his plans for the innovation fund. As such, the success of such policies in further propelling France’s entrepreneurial potential will be seen as a key test for the new President.

Facebook, Twitter and Snapchat compete for rights to screen 2018 World Cup

Social media giants are seeking to buy the US rights from Fox to screen online highlights of the 2018 World Cup. As reported by Bloomberg on July 6, Facebook, Twitter, and Snapchat have each offered tens of millions of dollars for permission to show clips of football’s biggest tournament in the US.

Online coverage may prove particularly popular with Millennials, who already spend large chunks of their time on social media, and among whom TV ownership is in decline. Also, since this round of the quadrennial event will be held in Russia, time differences mean games will be live screened at awkward hours in the US, so highlights may attract increased interest from older viewers.

Fox reportedly paid around $400m for multi-year US rights to football’s flagship tournament, which had an audience of tens of millions for the 2014 final. This interest from social media companies provides Fox with a potentially lucrative new revenue stream, as well as a means to promote itself among younger audiences. Fox has struggled to stay relevant to the Netflix generation, and its median viewer age has crept up to 68. It will have to diversify its services to maintain audience levels over the coming years. According to Bloomberg, Fox has not yet decided whether it will grant rights solely to one company or make a deal with several.

This report comes amid a wider climate of social media companies expressing increasing interest in the world of high-quality production. Last month, Facebook began talks with Hollywood to create its own scripted TV series. Twitter is now focusing on driving live video content covering news events on its site, and Snap has invested in new shows produced by traditional media companies.

The rights to screen the World Cup come with the opportunity to cash in with advertisers drawn to the tournament’s massive audiences, while allowing social media platforms to show they can produce high-quality content to compete with TV.

Volvo to go fully electric from 2019

Volvo Cars has announced that every car in its range will have an electric motor from 2019 onwards, making it the world’s first major car manufacturer to go all electric. The move will see Volvo begin to phase out cars powered purely by petrol or diesel, and put electrification at the core of its future operations.

“This announcement marks the end of the solely combustion engine-powered car,” said Håkan Samuelsson, CEO of Volvo Cars. “Volvo Cars has stated that it plans to have sold a total of one million electrified cars by 2025. When we said it, we meant it. This is how we are going to do it.”

From 2019, the premium car manufacturer will launch five fully electric cars across its range, two of which will be released under the company’s high-performance sub-brand, Polestar. In addition to these fully electric cars, Volvo will also start manufacturing a range of hybrids, which will combine a small petrol engine with an electric battery. This wide range of plug-in hybrid models will represent one of the broadest electric car offerings in the modern market.

By 2019, no new Volvo cars will be manufactured without an electric motor, and the company will be gradually phasing out internal combustion engines. In a further commitment to environmental sustainability, the carmaker also aims to have completely ‘climate-neutral’ manufacturing operations by 2025.

“This is about the customer,” said Samuelsson. “People increasingly demand electrified cars, and we want to respond to our customers’ current and future needs.”

Indeed, the global electric car market is steadily growing, but it still only accounts for a small fraction of car sales worldwide. Last year, electric vehicles had their best year to date, but still made up less than one percent of global car sales. The market has, however, shown significant progress in China, which is now the biggest automotive market in the world.

Approximately 265,000 fully electric vehicles were sold in China in 2016, far outpacing the 110,000 models sold in Europe over the same period. Volvo was purchased by Chinese car company Geely in 2010, and the new owners have wasted no time in electrifying operations at the premium carmaker.

Volvo’s announcement comes hot on the heels of Tesla’s confirmed release date for its low-cost Model 3 electric car. On July 2, Tesla founder Elon Musk announced that the firm’s mass-market Model 3 vehicles would go on general sale on July 28, and that the company is on track to produce 5,000 vehicles a week. As of yet, Tesla has had no major competitors in the fully-electric car market, but Volvo’s ambitious electrification efforts may introduce a new level of competition to the fledgling industry.

Samsung plans $18.6bn investment in South Korea to preserve memory chip dominance

On July 4, Samsung announced it would invest $18.6bn in South Korea, in a bid to hold on to its edge in the smartphone market. This sizeable investment will focus particularly on research and development for memory chips to safeguard Samsung’s lucrative status as the world’s biggest chipmaker.

As smartphones have become increasingly powerful, demand for suitable memory chips has soared. This has been a particular boost to Samsung which, as well as powering its own products, makes application processors designed by Apple and Qualcomm for phones.

The rising popularity of smartphones, and the accompanying decline of the personal computer, has seen Samsung emerge as the chief rival to the world’s biggest maker of chips (not just memory chips), Intel. This quarter, Samsung was estimated to have knocked Intel off the top spot for chip sales, generating $15.1bn to Intel’s $14.4bn, as reported by the Financial Times.

The meteoric rise of the smartphone shows no sign of slowing, so this trend will likely continue, since Intel has struggled to make inroads in the mobile market. Mounting memory chip sales have helped Samsung generate estimated record profits this year.

The $18.6bn earmarked is a considerable increase from Samsung’s already hefty yearly chip spend of $10bn and, on top of this, earlier this year Samsung announced it would build a $380m plant in the US. After this US spend, the plan for South Korea, which could create up to 440,000 jobs by 2021, demonstrates Samsung’s commitment to maintaining the domestic arm of its production.

To read more about how Samsung has shrugged off a catastrophic 2016, complete with flaming devices and allegations of bribery, to emerge triumphantly into a very profitable 2017, see The New Economy’s special report.

The oil and gas industry must take cybersecurity seriously

Though the threat of cyber-attacks hangs over every sector, it is particularly acute for the oil, gas and petrochemical industry. First, there is the fact that transactions made within the industry involve highly sensitive information, often pertaining to potential new sites and end-user consumption. Second, there is the endemic threat that stems from the industry’s very nature and its geopolitical significance.

Given the wealth and, in turn, power that comes with oil and gas reserves, refineries have long been prime targets for terrorist groups: their capture has been a dominant feature of ISIS’ strategy. Consequently, the industry has been more proactive than others in terms of bolstering its cybersecurity. That said, many companies are still only at the start of a long road, particularly as hackers continue to find increasingly sophisticated ways to infiltrate even the most prepared victims’ systems.

Double trouble
There are obvious physical dangers inherent in the oil and gas sector, but the process of digitalisation raises a whole new type of threat. This is because processes and data become vulnerable to external forces as corporations digitally connect their industrial components.

“A major trend in oil and gas technology is the application of automation and machine learning to address the cybersecurity skill and manpower shortfall in the energy industry. However, as more connected devices move into the sector, so do the opportunities for more risk”, said Edgard Capdevielle, CEO of cybersecurity firm Nozomi Networks.

A further issue is the tendency for oil and gas firms to use third parties for their operational technology (OT) management, which means they often have insufficient OT-specific knowledge of their equipment. “As a result, they have less control of the infrastructure and its security”, Capdevielle explained. “Historically, oil and gas companies have focused on strengthening IT security and isolating OT from IT. Today, that approach is no longer enough as the Industrial Internet of Things (IIoT) makes it possible for cyber-attacks to go straight to OT subsystems.”

As Capdevielle explained, operations, productivity and employee safety can all be affected by cyber-incidents. And, while planned assaults are the biggest threat facing the industry (as evidenced by recent attacks on the Ukrainian power grid), unintentional incidents are also perilous. For example, infected USB drives or third-party laptops can accidentally introduce malware, while device traffic storms pose a further danger. Unfortunately, such incidents are increasing apace as IIoT devices move into traditionally siloed OT environments. “Expect these cyber-attacks to grow in frequency and sophistication”, warned Capdevielle.

Winds of change
Although this combination of threats has created a challenging environment for the sector, new equipment has begun to incorporate far more sophisticated security software to give firms greater protection.

“Newer technologies use advanced visibility tools; technologies that document and visualise systems and detect intrusion”, Capdevielle said. “This means that there is good security hygiene – something that current practices lack.

“Control system traffic is predictable, so, by establishing a baseline of network communications and conducting active monitoring for anomalies, anything that detracts from expected behavioural patterns is identified as malicious.”
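
A minimal sketch of that baseline-and-monitor idea, using invented traffic figures rather than any real control-system data:

```python
import statistics

# Hypothetical learning-window readings: packets per second from one controller
baseline = [120, 118, 125, 121, 119, 123, 122, 120]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading, k=3):
    """Flag traffic that strays more than k standard deviations from the baseline."""
    return abs(reading - mean) > k * stdev

print(is_anomalous(121))  # False: within expected behaviour
print(is_anomalous(900))  # True: possible traffic storm or intrusion
```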

What’s more, artificial intelligence and machine learning can help organisations detect threats through the scrutiny of networks in real time, allowing them to flag any variances from ordinary baseline behaviour. “They also speed up the investigation of incidents, allowing firms to contain attacks before significant damage can occur, without needing to add additional staffing”, Capdevielle added. This new technology is giving firms in the oil, gas and petrochemical sector more than a fighting chance – it’s fundamentally changing the game. Hackers will always innovate, but for now the future looks hopeful for the industry.

GlaxoSmithKline signs $43m deal with AI start-up

In evidence of the growing interest in using artificial intelligence (AI) as a means to speed up drug development, on July 2 pharmaceutical giant GlaxoSmithKline (GSK) unveiled a new $43m deal with Scotland-based AI start-up Exscientia.

In a statement discussing the deal, Andrew Hopkins, CEO of Exscientia, said: “This alliance provides further validation of our AI-driven platform and its potential to accelerate the discovery of novel, high-quality drug candidates.”

Interest in AI start-ups from big pharmaceutical companies has heated up as the potential of relatively new software techniques to narrow down the list of candidate molecules for new drugs has become apparent. Accordingly, this is the second such deal Exscientia has signed in recent months, having partnered with Sanofi in May.

AI cuts down the initial process of selecting molecules to take through further trial stages: algorithms are trained to sift through huge masses of data on the chemical profiles of molecules and to develop rules describing the profile of a molecule that may make a suitable drug.

Until recently, pharmaceutical research has relied on software that checks if any molecules fit a drug profile written by chemists, or has used simulations to predict useful structures. Now, techniques such as deep learning use algorithms trained on massive data sets to develop their own sets of rules that evolve as more data is added.
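
A toy sketch of that approach: a classifier learns from labelled examples which descriptor patterns look ‘drug-like’ and then scores new candidates. All figures here are invented, and real pipelines such as Exscientia’s are far more sophisticated:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented descriptor vectors for known molecules
# (molecular weight, logP, hydrogen-bond donors); label 1 = drug-like
X_train = np.array([[320, 2.1, 2], [480, 5.5, 0], [250, 1.4, 3], [610, 6.2, 1]])
y_train = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Rank a library of new candidate molecules by predicted drug-likeness
candidates = np.array([[300, 1.9, 2], [550, 6.0, 0]])
scores = model.predict_proba(candidates)[:, 1]
print(scores.argsort()[::-1], scores)  # indices of the most promising candidates first
```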

This new type of drug discovery has been made possible by rapid improvements in the sophistication of machine learning techniques, but also by the collection of genetic information on a level not previously possible, creating banks of data for use as training sets.

Hopkins said in his statement that using these methods had allowed Exscientia to find molecules to take to further trial stages in around a quarter of the time, and at a quarter of the cost, of traditional methods.

In a statement discussing the partnership, John Baldoni, Senior Vice President at GSK, said he hoped the partnership would “accelerate the discovery of new molecules against high value GSK targets with speed and without compromising quality”.

Big pharmaceutical companies are following in the footsteps of AI start-ups, which have recently woken up to the potential uses of their software as a drug discovery tool. Other start-ups working in this field include Deep Genomics, UK firm Benevolent AI, and Calico, an Alphabet subsidiary.

Under the deal, Exscientia will receive research funding from GSK to undertake drug-finding missions for 10 ‘target’ diseases, with the full £33m ($43m) amount paid on successful delivery of preclinical drug candidates for all 10 targets.

Iran to sign landmark gas deal with French energy giant Total

A consortium led by France’s Total is today set to sign a multibillion-dollar contract to develop an Iranian offshore gas field, in what will be the country’s first major energy deal since sanctions were lifted in 2016. The French energy giant will take a 50.1 percent stake in the $4.8bn project, while the China National Petroleum Corporation (CNPC) will own 30 percent, and Iran’s Petropars the remaining 19.9 percent.

“The international agreement for the development of phase 11 of South Pars will be signed on Monday in the presence of the oil ministry and managers of Total, the Chinese company CNPC and Iranian company Petropars,” a spokesperson from Iran’s oil ministry told AFP.

Total initially signed a preliminary deal with Iran in November 2016, but the agreement has since been delayed, as the French company waited to see how the Trump administration would approach relations with Iran. While Trump has long criticised the international response to Tehran’s nuclear programme, his administration has continued to approve sanctions waivers, allowing deals with Iran to go ahead. However, the White House is now in the midst of a 90-day review on whether to uphold the nuclear deal, with some cabinet members showing support for Trump’s campaign pledge to renew sanctions on Iran over its nuclear activities.

The offshore South Pars gas field is Iran’s section of the world’s largest gas deposit, which it shares with Qatar. The site was first developed in the early 90s, and has often been the focus of bitter dispute between several Gulf nations.

The deal will mark Total’s return to Iran, where it focused much of its business before international sanctions were imposed in 2006. Iran has the second-largest gas reserves and the fourth-largest oil reserves in the world, making it an extremely attractive investment location for energy companies.

As part of the agreed 20-year project at South Pars, the offshore field will receive 30 new wells and two well-head platforms, with Total pledging to invest an initial $1bn for the first stage of the project. Upon completion, the field will supply Iran’s national grid with 50.9 million cubic meters of gas per day.

Chinese telecoms firm to double spending on 5G research

In an indication of China’s determination to lead the world into the next phase of connectivity, telecoms giant ZTE Corp has announced it will double spending on research into fifth-generation mobile networks (5G).

The firm now plans to spend at least $295.5m on 5G research every year, and said it would consider ramping investment up even further to assist China’s aim of having a 5G network up and running by 2020. The US is also investing heavily to develop its own 5G network.

The 5G network will provide significantly faster data transfer than current 4G coverage, delivering a marked speed improvement for internet users, but the real business case for 5G lies in the Internet of Things. Tech companies are increasingly focusing development on smart electronic home appliances; these will need constant, very rapid internet connections to function and communicate with each other. Some estimates suggest the rate of the technology boom is such that there will be 50-100 billion connected devices by 2020, and 5G will be essential to powering them.

5G development is still so far back in the pipeline that there is not yet an international standard for what coverage will look like, but some standards for speed and efficiency have been agreed. The speed upgrade of 5G would allow even a full HD film to download in seconds, and do away with patchy network coverage, giving users the perception of limitless bandwidth and continuous availability.
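
To put the download claim in rough perspective (the figures below are purely illustrative assumptions since, as noted, the standard is not yet fixed):

```python
# Illustrative figures only: the 5G standard is not yet final, so the assumed
# peak rate below is hypothetical, as is the file size
film_size_gb = 5      # a full HD film of roughly 5 GB
rate_5g_gbps = 10     # assumed 5G peak rate of 10 gigabits per second
rate_4g_mbps = 50     # typical real-world 4G rate of ~50 megabits per second

seconds_5g = film_size_gb * 8 / rate_5g_gbps
minutes_4g = film_size_gb * 8 * 1000 / rate_4g_mbps / 60
print(f"5G: ~{seconds_5g:.0f} seconds, 4G: ~{minutes_4g:.0f} minutes")  # ~4s vs ~13 minutes
```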

Building the infrastructure required to support a 5G network is one of the most costly challenges associated with getting it off the ground. The super fast internet of 5G will be facilitated by higher frequencies than those currently used, but since these are more easily blocked by trees and buildings, a network of base stations will have to be built to avoid gaps in service.

Both China and the US are committed to meeting the initial costs of 5G to reap the benefits later, and a recent research paper by the Ministry of Industry and Information Technology forecast that China’s cumulative 5G capital spending will reach $243bn by 2025. Not to be outdone, the US will spend between $130bn and $150bn on fibre infrastructure alone over the next five to seven years to get its own 5G network up and running, a study by Deloitte earlier this month found.

Merging humans with robots might be the only way to survive in the future

From 1927’s Metropolis to 2015’s Ex Machina, doomsday scenarios for artificial intelligence have been a silver screen favourite for decades. In these nightmarish worlds, advanced robots supersede humans as the dominant form of intelligent life, with catastrophic consequences. Human enslavement, experimentation and eradication are frequent tropes in these sci-fi futures, with our robot overlords rarely showing mercy to their human creators.

However, as AI takeover scenarios continue to fascinate moviegoers, a growing number of industry experts have begun to voice concerns over the real-world development of AI technology. “Success in creating AI would be the biggest event in human history”, wrote Stephen Hawking for The Independent. “It might also be the last, unless we learn how to avoid the risks.” With Hawking giving voice to the growing concerns within some segments of the scientific community, the relationship between man and machine has never been so fraught.

Rise of the robots
In 1993, sci-fi writer Vernor Vinge published The Coming Technological Singularity, an influential essay credited with popularising the titular term. The ‘technological singularity’, as theorised by Vinge and his peers, is the point in time at which robots become self-sufficient, and are thus able to independently upgrade themselves. This would, theoretically, lead to a runaway cycle of self-improvement, resulting in superintelligence that outshines human thought.

In the years since the essay’s publication, the world has hurtled towards that singularity. Real-world AI first came to public prominence in 1997, when IBM supercomputer Deep Blue made history by beating reigning world chess champion Garry Kasparov under tournament conditions. For AI sceptics, this proved a machine could indeed think both logically and strategically, effectively emulating human thought processes. “I had played a lot of computers but had never experienced anything like this”, Kasparov later said in an essay for Time. “I could feel – I could smell – a new kind of intelligence across the table.”

Having conquered the world of chess, AI programmers turned their attention to the ancient Chinese board game Go. Capable of producing more possible outcomes than there are atoms in the visible universe, Go is far more complex than chess. In March 2016, Google’s DeepMind achieved an AI milestone when its AlphaGo program beat Go world champion Lee Sedol at the 3,000-year-old game. Prior to the monumental victory, experts confidently declared it would take at least another decade for AI to catch up with human players. With DeepMind outsmarting Sedol, however, it appears machine intelligence is advancing faster than previously anticipated.
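
That comparison is easy to sanity-check with a crude upper bound: each of the board’s 361 points can be black, white or empty, giving at most 3^361 arrangements (the count of legal positions is lower, roughly 2×10^170 by most estimates, but the conclusion is unchanged):

```python
# Crude upper bound on Go positions: each of the 361 points is black, white or empty
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # common rough estimate

print(len(str(positions_upper_bound)) - 1)        # ~172: the bound is about 10^172
print(positions_upper_bound > atoms_in_universe)  # True, by over 90 orders of magnitude
```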

Man machine
In a future dominated by AI, some are warning there will only be one way for humans to stay relevant: merging with machines. In March, serial entrepreneur Elon Musk launched Neuralink, a venture aimed at fusing the human brain with advanced computing. The start-up is focused on creating a ‘neural lace’, which, when fitted over the brain, would give the wearer improved cognitive function and high-speed computing abilities.

As Musk gets to work on his radical new device, some scholars argue AI is already on its way to becoming the next step in the human evolutionary process. For evidence of a blossoming fusion of biology and technology, we need look no further than online dating websites, where AI algorithms are rapidly shaping human pairing and reproduction habits. In the physical realm, meanwhile, advances in genetic engineering are prompting intense ethical debate. In April 2015, Chinese researchers broke new ground by creating the world’s first genetically modified human embryos, raising the prospect of fully lab-based reproduction.

With technology beginning to alter the very fabric of life itself, the human era could be fast approaching its end; it falls to us in the present to lay the framework for the future, if for no other reason than to ensure the end of our species is due to evolution, not destruction.