Power sharing is essential to creating a sustainable future

Amid the constant wrangling over making sure member states meet supposedly strict EU emissions targets, governments have tended to favour domestic policies over wider initiatives. While there have been calls for governments to collaborate in order to meet these targets, many administrations in Europe have backed domestic energy providers rather than allow power to be shared across borders.

Creating greater efficiencies between countries in how energy is generated and shared is an integral part of building a more sustainable world. In Europe, it would also greatly increase energy independence at a time when many EU countries are heavily reliant on Russian and Saudi Arabian oil and gas.

It is widely agreed that renewable energy is the future of the energy industry. Even those at the top of the oil industry will admit that, with fossil fuels depleting, the world will need to rely increasingly on renewable sources that are both sustainable and cheaper. However, although policymakers the world over have talked tough about the need for countries to sign up to strict targets on renewable energy, the reality has been quite different. Countries have done their level best to scale back renewable energy subsidies in the face of economic trouble and a raft of new fossil fuel discoveries (which are partly due to the proliferation of shale drilling). While the renewable energy industry has certainly been harmed by this trend, its importance as a long-term solution to the world’s energy troubles remains.

40% – Target reduction in EU emissions by 2030
€40bn – Potential savings from a common EU energy market
350 miles – Total length of the NorGer cable
€2bn – Cost of the HVDC Norway-Great Britain cable
€2.5bn – Potential cost of the UK-Spain link

Blocking progress
With many domestic industries reliant on one form of power, it is hard to force countries to buy their neighbours’ energy. This recently became apparent in the EU with a report that Spain and Portugal were having difficulty selling their abundant supplies of wind-generated power to neighbour France, which has invested heavily in its own nuclear power industry.

Spain currently produces far more energy through its wind turbines than it needs. However, for decades, the French have blocked proposals to build interconnectors across the Pyrenees: they claim such cables would be too expensive to build through the rock, and that they would harm the natural beauty of the mountain range if they were built overhead. However, many believe the true motivation of successive French governments has been the protection of their country’s domestic nuclear energy industry.

It is the importing of energy from abroad, however, that is Europe’s biggest problem. With Germany so heavily reliant on Russian oil and gas, it has found it very difficult to sign up to tough sanctions on Moscow after recent political troubles in Ukraine. Terrified of being pushed back into a recession if Russia turns off the power, Germany has tried to play the mediator in the dispute between Ukraine, NATO and Russia.

Poland’s Prime Minister Donald Tusk proposed one solution to this problem last April. A European energy union, he said, would cut the EU’s reliance on foreign power, while also creating greater efficiencies in how energy is used. This would in turn help countries meet their energy targets, which state greenhouse gas emissions must be cut by 40 percent by 2030.

Tusk’s plan would bring huge benefits to the EU, with consultancy group Strategy& predicting as much as €40bn could be saved each year as a result of such a strategic energy policy. It would considerably cut the EU’s reliance on Russian imports at a time when Moscow is meddling in the affairs of some of its eastern members. It would also greatly improve cross-border infrastructure, while standardising EU-wide energy prices for consumers.

Tusk said that, while the plan may seem like a fantasy, it must be implemented if the EU is to avoid the sort of troubles it experienced during the financial crisis. “The banking union also seemed close to impossible, but the financial crisis dragged on precisely because of such lack of faith,” he said. “The longer we delay, the higher the costs become.”

Infrastructure changes
EU policymakers believe one of the preconditions of energy trading within an integrated energy market is the availability of the necessary infrastructure. This means that, for renewable energy to be traded, there need to be interconnected grids between EU member states. Maroš Šefčovič, the European Commission’s Vice-President for Energy Union, is actively encouraging France to upgrade its electricity interconnection capacity as a result.

Šefčovič believes France’s limited grid capacity with neighbouring countries such as Spain continues to inhibit the development of competition and constrain security of supply. It has also been argued that France would benefit from the further liberalisation of its wholesale electricity market, which is currently one of the most concentrated in the EU.

The EU is aiming for a minimum electricity interconnection level of 10 percent by 2020 for member states that have not yet reached a minimum level of integration in the internal energy market. This means countries such as Spain, which are relatively cut off from the rest of Europe’s energy grid, need to be allowed to connect through their main point of access to the internal energy market: in the case of Spain, that means France.
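As a rough illustration of how that interconnection level is commonly measured – cross-border transfer capacity as a share of installed generation capacity – the short Python sketch below uses hypothetical figures rather than real national data.

```
def interconnection_level(interconnector_capacity_mw, installed_generation_mw):
    """Cross-border capacity as a percentage of installed generation capacity."""
    return 100 * interconnector_capacity_mw / installed_generation_mw

# A hypothetical member state with 100GW of generation and 3GW of cross-border links
print(f"{interconnection_level(3_000, 100_000):.1f}%")  # 3.0% - well short of the 10 percent target
```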

In order to encourage France to open its grid up to countries such as Spain, the European Commission has a number of priority schemes, dubbed ‘Projects of Common Interest’, which are eligible for EU funding. This will help France increase interconnection levels with the UK, Ireland, Belgium, Italy and Spain, while removing bottlenecks and integrating renewable energy into the network.

Šefčovič says: “We need to have adequate energy infrastructure with good interconnections, in particular to integrate renewables into the grid and to unlock energy islands. Structural funds, the Connecting Europe Facility, joint investments and the future Juncker Investment Package can contribute to the financing of these energy infrastructure projects and modernise the EU’s energy system.”

Maroš Šefčovič, the European Commission’s Vice-President for Energy Union, has called for a coherent strategy to connect Europe’s energy market

Whether France will be willing to allow an influx of cheap renewable energy onto its grid, and therefore reduce the country’s reliance on nuclear power, remains to be seen. However, for the EU to be successful in its mission to create a common market for energy and reduce emissions, France should be encouraged to open up. If it is unwilling, it should be forced.

Progress in the north
While France might be reluctant to allow other countries to sell their power onto its grid, many within Europe and nearby are more amenable. Norway has been particularly active in connecting its energy output to its neighbours to the south, even though it is not a member of the European Union. As an integral member of the European Free Trade Association, however, it enjoys strong ties with those neighbours.

Much of Norway’s generated power is being exported to its neighbours through the creation of a series of underwater interconnector cables. One project, dubbed ‘NordLink’, will provide a 1,400MW link between Norway and Germany, at a cost of nearly €2bn. Half of the project is owned by Norwegian operator Statnett SF, while the remainder is split between German operator TenneT TSO and Germany’s state-owned KfW.

NordLink is not the only interconnector project being built between the two countries. NorGer is another 1,400MW cable that will pass through the North Sea and run for 350 miles. It will transfer electricity generated by Germany’s wind power industry, while hydroelectric power from Norwegian reservoirs will travel in the other direction.

The UK and Norway are building a €2bn high-voltage direct current electricity cable under the North Sea, from Blyth in England to Hylen in Norway. Known as the HVDC Norway-Great Britain, it was first proposed in 2003 by Norway’s Statnett and the UK’s National Grid. The scheme was intended to be a 1,200MW interconnector, but the project has since grown to a 1,400MW cable that will run 442 miles: the longest underwater cable in the world. Set to be completed in 2020, the line will connect the two countries’ electricity markets, but could also allow them to connect North Sea wind farms and offshore oil and gas platforms.

Another joint project between the UK and Norway is the Scotland-Norway interconnector known as NorthConnect. The £1.75bn project is also expected to be completed by 2020, and will see a 350-mile HVDC cable run under the North Sea between Peterhead in Aberdeenshire and Samnanger in Norway.

Baltic connections
The Baltic states have also unveiled plans to build an extensive network of energy interconnections, known as the Baltic Energy Market Interconnection Plan (BEMIP). The countries taking part are the EU member states of the Baltic Sea region (Finland, Estonia, Latvia, Lithuania, Poland, Germany, Denmark and Sweden), as well as Norway, which participates as an observer. The plan includes a range of energy proposals that will tie together the region’s electricity, nuclear power and gas networks.

In a progress report released in 2011, the EU said such a scheme was an essential part of its energy policy: “Smart, sustainable and fully interconnected transport, energy and digital networks are a necessary condition for the completion of the European single market. Moreover, investments in key infrastructures with strong EU added value can boost Europe’s competitiveness. Such investments in infrastructure are also instrumental in allowing the EU to meet its sustainable growth objectives outlined in the Europe 2020 Strategy and the EU’s ‘20-20-20’ objectives in the area of energy policy and climate action. Considerable investment needs have been identified for the three sectors. The respective sectoral guidelines (such as the energy infrastructure guidelines) provide the policy framework for the implementation of European priorities.”

The report added: “For electricity, implementation of BEMIP Action Plan – despite the new issues and problems – is broadly on track and according to schedule. Full implementation will be significantly impacted by the findings of the Baltic synchronisation study and the final agreement with Russia and Belarus on Baltic system operation. Close monitoring of Baltic States synchronisation process is to be ensured.”

UK Secretary of State for Energy and Climate Change Ed Davey has said a range of interconnectors are needed around Europe

Such plans have worried Russia, however, because they may reduce Eastern Europe’s reliance on its energy. In light of Moscow’s meddling in Ukraine, this may not be a bad thing.

Plugging in
A coherent strategy is essential to getting Europe’s energy market connected and running in an efficient manner. Many are echoing Šefčovič’s calls for such a policy. The UK’s Secretary of State for Energy and Climate Change, Ed Davey, said in a report published in January 2014 that Europe required a giant network of electricity interconnectors in order to solve the problem of rising energy prices: “Literally in the last three or four years, there has been a complete change in the differential between the North American price for gas and energy and the EU price for gas and energy. That represents a strategic change in the terms of trade and is very significant. The EU needs to respond to this very quickly.”

Davey added that a range of interconnectors were needed, not least between Spain and France: “We need much better grid interconnectors around Europe to enable energy to flow across the EU. Connecting the UK with mainland Europe, and different parts of the periphery of Europe with Central Europe. We need Eastern and Central Europe to be better connected with Germany and France and we need the Iberian Peninsula to be better connected through France.”

The UK could, however, offer Spain an alternative to its woes with France. The Spanish Government is thought to be considering a mammoth 895km link across the Bay of Biscay to the UK, at a cost of around €2.5bn. Both Spain and Portugal have sought compulsory targets on building cross-border cables and pipes, allowing them to export at least 15 percent of their spare energy capacity.

Others in the private sector agree a properly connected grid throughout Europe will deliver the sort of energy independence and security the EU has been striving for. Ben Goldsmith, founder of clean energy investment firm WHEB Partners, says: “The development of a fully integrated European grid is a key tool for tackling the issue of intermittency of supply of electricity from renewable sources. An integrated European grid will allow the sunny south to share summer electricity surpluses with the north, and the windy north to share winter surpluses with the south. We cannot get to the promised land of a diversity of sources of electricity across Europe, bringing us true energy resilience, without a fully integrated European grid.”

Europe’s infrastructure plans

Germany
Berlin has been reluctant to back sanctions against Russia, dependent as it is on Moscow-supplied energy

Russia
Vladimir Putin and his government will be watching these plans with interest. Russian aggression in Ukraine has heightened European calls for energy independence. If achieved, it could rob Russia of a key market and draw countries on the European periphery further away from Moscow

1. NorthConnect
The £1.75bn Scotland-Norway interconnector is set for completion in 2020
2. Estlink2
As part of the Baltic Energy Market Interconnection Plan (BEMIP), this link to Finland will draw Estonia into the European shared energy market
3. HVDC Norway-Great Britain
At 442 miles, this will be the longest underwater cable in the world when completed in 2020. As well as linking Norway and the UK, the cable could potentially reach out to North Sea utilities
4. NordBalt
This Sweden-Lithuania interconnector is the second major component of BEMIP
5. Nordlink/NorGer
Though not an EU member state, Norway has strong economic and energy links with Europe. Two 1,400 MW cables will provide electrical links between Norway and Germany
6. Potential Spanish-British workaround
An 895km cable link across the Bay of Biscay to the UK could offer the Spanish a way of working around French intransigence
7. Trans-Pyrenees link
The French have blocked Spanish plans for a link between the two countries for decades, effectively locking Spain and Portugal out of the shared energy market. Ostensibly, the French refusal is due to cost and potential damage to the landscape, but many suggest it is actually to protect the country’s delicate domestic energy market

Can Japan restore its people’s confidence in nuclear?

The last of the major obstacles standing in the way of Japan’s nuclear revival was removed in November when politicians in the Kagoshima prefecture stamped their seal of approval on a pair of plant restarts. Scheduled to come online at some point in 2015, the reactors will be the first reintroduced since September 2013, when the last of the country’s 48 reactors was shut down.

Regional politicians have so far rallied behind the government’s support for a nuclear renaissance, citing the country’s reliance on foreign energy and widening trade deficit as the motivating factors. Speaking at a conference shortly after the Sendai restart was approved, local governor Yuichiro Ito said the move was “unavoidable” and that, without it, the country could struggle to regain its former level of competitiveness.

The idea of the foot in the door, with the first restart leading to automatic follow-up, is an illusion

The sentiment is not one shared by the Japanese population, who still believe the economic priorities of central government are being placed before their own safety. Critics are calling the restarts a misguided attempt to bolster public finances at the expense of social wellbeing, and are unconvinced that the lessons of Fukushima have been learned.

In the shadow of Fukushima
It was on the afternoon of 11 March 2011 that a 15-metre high wave struck the now-infamous Fukushima Daiichi plant, tearing apart three of the six reactors and changing the face of the global nuclear industry. As a Level Seven incident, according to the International Nuclear and Radiological Event Scale, the seriousness of the disaster was matched only by Chernobyl. Although no deaths were tied to the event, over 100,000 people were displaced.

What followed was a mass retreat from nuclear energy. Concerned parties shut the doors at 50 plants and buried Japan’s nuclear industry. The disaster alerted the world to the ramifications – both social and financial – of nuclear failure, and turned the Japanese population against what had once been thought of as a safe and reliable energy source.

One GlobeScan survey, conducted six months after the tsunami struck, showed that only 21 percent of a 23,231 sample agreed that “nuclear power is relatively safe and an important source of electricity, and we should build more nuclear plants”. Fast-forward to the spring of 2014 and little had changed: a survey conducted by the Asahi Shimbun newspaper found that, of a 2,109 sample of nuclear-related comments sent to the government between December 2013 and January 2014, 95.2 percent were opposed to the energy source.

Workers measure radiation within the Fukushima exclusion zone

“Public opposition has pushed traditionally timid newspapers like Asahi to take a clear stance and contribute rather acid editorials to the debate”, says Mycle Schneider, an international consultant on energy and nuclear policy, and lead author of The World Nuclear Industry Status Reports. “The government needs local consent to restart reactors, even if there is no legal requirement that translates this rule. Every single restart attempt will be a separate struggle. The idea of the foot in the door, with the first restart leading to automatic follow-up, is an illusion.”

Although the majority of the Japanese population see nuclear as an unsavoury option, analysts have said it could be a necessary evil. Nuclear once constituted approximately a third of Japan’s energy mix, and the shutdown has left the country with a critical energy shortage. At the beginning of 2014, Japan posted one of the worst annual trade deficits on record.

While Shinzo Abe’s Liberal Democrats were the only party that supported nuclear energy during the 2013 elections, the Prime Minister has been conspicuously quiet on the subject, for fear of the ill will that might be directed his way. Abe has deflected any questions about a return to nuclear onto local politicians, who are without the technical knowhow or political insight to present a meaningful case.

Yet pockets of support are beginning to emerge, principally in regions where jobs are scarce and opportunities slim. The Kagoshima prefecture, for example, where the revival is scheduled to kick off in the coming year, is indicative of this trend: the prospect of a nuclear restart offers a stimulus the region has lacked in recent years. As support for the revival gathers momentum, albeit ever so slowly, the changes made to the industry since Fukushima will begin to garner the recognition they deserve.

Changed market
No doubt it will be some time before the industry shakes the stigma of the Fukushima disaster, which “was the result of collusion between the government, the regulators and [plant operator] TEPCO, and the lack of governance by said parties”, according to a 2012 report published by the Nuclear Accident Independent Investigation Commission. “They effectively betrayed the nation’s right to be safe from nuclear accidents. Therefore, we conclude that the accident was clearly ‘man-made’.”

Upkeep of the cooling pools at the plant has been a concern since the disaster

However, the steps taken by government and regulators since have answered many of the questions put to the offending parties, and the industry today seems a far safer prospect. Key reforms introduced include the establishment of an independent regulatory authority, and measures to counter ‘the revolving door dilemma’, where the motivation of regulatory personnel is skewed by employment opportunities within the industry. In the case of Fukushima, shared interests between industry and regulatory bodies led both parties to place profitability ahead of public safety. Though efforts to prevent those circumstances from reoccurring, alongside a series of reforms to improve safety and security, have transformed the nuclear industry for the better, the public is less than convinced.

The problem, then, is not that the government has failed to take action, but that it has yet to adequately communicate to the public what it has done. Irrespective of whether there are any plants in operation or not, the nuclear industry of today, in terms of regulatory oversight, is utterly transformed from that which was hit in 2011 – not just in Japan but the world over.

The pressure on the Japanese government to steer clear of nuclear remains, yet the country has opted to resist public opposition for the time being and illustrate by example that the industry is changed for the better. “However, while it is difficult to assess the theory, any safety regulation is only as good as its implementation, which remains highly problematic”, says Schneider. Only after years of successful changes will nuclear regain the backing it so desperately needs, and the Japanese economy regain the measure of energy independence it has lost.

Ener-Core’s groundbreaking technology shakes up the energy market

Methane is well known for its environmentally damaging properties. It has a global warming impact that is 20 times higher than that of carbon dioxide. It accounts for a staggering 20 percent of the world’s greenhouse gas emissions and is the second biggest greenhouse gas emitted in the US from human activities.

What is often ignored is that methane has the power to generate energy – and money. For the past two centuries, that opportunity has literally been burnt out of existence through flaring practices all over the world. According to the World Bank Global Gas Flaring Reduction partnership, 150 billion cubic metres of gas is flared every year – going by 2011 figures for global production, that would make it the world’s sixth-biggest gas producer (just ahead of Qatar).

Flaring is both costly and dangerous, producing other harmful pollutants in the process – a reality that has led certain countries, such as Russia, to ban the practice. The other method still in use – venting – is yet more harmful. These issues have opened up a burgeoning market – and an urgent need – for alternatives.

New technologies
With that need in mind, new technologies have begun to emerge that utilise, rather than destroy, methane, paving the way for an exciting revolution in the energy world. Such practices are being encouraged by the Global Methane Initiative, an international partnership between governments and other organisations that focuses on advancing methane recovery.

50% – Potential savings from converting methane into electricity

But even those technologies can emit other potentially harmful gases, including mono-nitrogen oxides. California-based Ener-Core has found an innovative way of getting around that, with a unique system that uses gradual oxidation to generate electricity from methane without producing toxic gases. Its uses are diverse, spanning oil refineries and drilling sites, landfills, chemical plants and coal mines.

Ener-Core’s technology caught the attention of the Honourable Stephen Johnson, former Administrator of the US Environmental Protection Agency (EPA), who conceived and led major environmental initiatives, including the Methane to Markets Partnership, while serving under President George W Bush. Convinced Ener-Core held the key to solving the problems he had spent his 27-year tenure at the EPA trying to address, Johnson invested in the company. “The principal investors called me and said… ‘We have come up with a technology which destroys methane, doesn’t produce other harmful gases, and will power a turbine that can be used for electricity’”, Johnson recalls. “My comment to the investors was: ‘If the technology does what you say it does, then I am very interested – I’m interested in helping the company, and I’m interested in investing.’”

Since then Ener-Core has been transformed from an idea into a reality, expanding from California to the Netherlands, where it recently installed its first commercial FP250 unit – a power generating system with a 250 kW turbine – at Attero’s Schinnen landfill.

That’s just the latest page in its history. Ener-Core began work in 2008 under the name FlexEnergy, before approaching Johnson as a potential advisor and investor in 2010. Following tests at the Lamb Canyon landfill, the company developed the Powerstation FP250 in 2011 – which saw the 100kW microturbine scaled up to 250kW. Its first operation was launched at a closed US military base landfill in Georgia in 2011, under a Department of Defense demonstration contract.

Unique system
In 2012, FlexEnergy became Ener-Core, focusing on further developing its gradual oxidation products – which now include, alongside the Powerstation FP250, the KG2-3GEF/GO. Designed for ultra-low BTU gas, the system operates on a 2MW turbine, ideal for power requirements from one to 12MW. In the near future, Ener-Core plans to scale up to larger sizes, capturing more of the waste gas opportunity. It will do this by partnering with manufacturers of larger gas turbines, thereby enabling those turbines to generate power from low-quality waste gases.

For example, Ener-Core has just closed an exclusive global licence with Dresser-Rand, one of the world’s largest manufacturers of rotating equipment for the oil and gas sectors. Under that agreement, Ener-Core will scale up its oxidiser technology to 2MW and integrate it with Dresser-Rand’s 2MW KG2 turbine. Dresser-Rand will then commercialise this product globally under its brand, enabling its customers in the oil and gas sector, as well as many other sectors, to convert their waste gases into clean, useful power.

Ener-Core’s system works through a unique gradual oxidation vessel, which mixes fuel gas with air (to a 1.5 percent fuel-air ratio). The mixture is then pressurised and introduced into the oxidiser, causing a chemical reaction that produces heat. That energy turns a turbine, generating electricity.
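To give a sense of the scale involved, the rough Python sketch below estimates the fuel demand of a unit of this size. Only the 250kW rating comes from the article; the conversion efficiency and the energy density of methane are assumed round figures, so this is illustrative arithmetic rather than a specification.

```
# Back-of-envelope estimate of the methane flow a 250kW oxidiser-turbine
# unit might need. Efficiency and energy density are assumptions.

ELECTRICAL_OUTPUT_KW = 250       # FP250 rating, per the article
ASSUMED_EFFICIENCY = 0.30        # assumed overall fuel-to-electricity efficiency
METHANE_LHV_MJ_PER_M3 = 37.0     # approximate lower heating value of methane

thermal_input_kw = ELECTRICAL_OUTPUT_KW / ASSUMED_EFFICIENCY                    # ~830 kW of heat
methane_m3_per_hour = thermal_input_kw / (METHANE_LHV_MJ_PER_M3 * 1000) * 3600  # kJ/s -> m3/h

print(f"Thermal input needed: {thermal_input_kw:.0f} kW")
print(f"Methane required:     {methane_m3_per_hour:.0f} m3 per hour")  # roughly 80 m3/h
```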

The process takes place at a lower temperature than combustion and doesn’t produce flames, preventing the emissions associated with burning while still destroying volatile organic compounds. Designed to reach the Lowest Achievable Emission Rate, the unique system may also emit significantly lower amounts of nitrous oxides and carbon dioxide than competing technologies, when configured for ultra-low emissions.

Using what is otherwise a waste gas productively means generating power while simultaneously removing methane from the atmosphere. That otherwise-wasted energy is significant; it’s equivalent to over 65,000MW of potential power, according to Johnson. And if the 150 billion cubic metres of gas wasted through flaring every year were harnessed productively, it would be equivalent to replacing 89 million cars or 988 million barrels of oil.
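A rough check of that scale needs nothing more than the 150 billion cubic metre figure quoted above and a couple of assumptions about heating value and conversion efficiency; under those assumptions, the continuous output lands in the same ballpark as the 65,000MW cited.

```
# Rough, assumption-laden estimate of the power locked up in flared gas.
# Only the 150 billion cubic metre figure comes from the article.

FLARED_GAS_M3_PER_YEAR = 150e9
ENERGY_MJ_PER_M3 = 37.0          # assumed heating value of the flared gas
ASSUMED_EFFICIENCY = 0.35        # assumed gas-to-electricity conversion
SECONDS_PER_YEAR = 365 * 24 * 3600

thermal_watts = FLARED_GAS_M3_PER_YEAR * ENERGY_MJ_PER_M3 * 1e6 / SECONDS_PER_YEAR
electrical_mw = thermal_watts * ASSUMED_EFFICIENCY / 1e6

print(f"Continuous electrical output: roughly {electrical_mw:,.0f} MW")  # ~60,000 MW
```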

Johnson recognises the ground-breaking implications the system has. He says: “Methane gas from a variety of sources can now be converted into electricity – electricity that can be used to power homes, buildings and, the most important part, without emitting virtually any additional pollutants and also providing significant economic advantage.”

That economic advantage is a substantial one. Converting methane into electricity can save a company up to 50 percent – or as much as $2m – on its annual energy costs, according to Johnson. Companies can expect to see a payback within three to five years of using the system.
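The payback arithmetic itself is straightforward; the installed cost below is purely hypothetical, chosen only to show how a three-to-five-year figure could arise from the $2m upper bound quoted above.

```
# Simple payback under invented numbers: only the $2m annual saving is from the article.
annual_saving_usd = 2_000_000    # upper bound quoted by Johnson
installed_cost_usd = 8_000_000   # hypothetical capital cost for illustration

print(f"Simple payback: {installed_cost_usd / annual_saving_usd:.1f} years")  # 4.0 years
```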

Reliable renewable source
It’s surprising then that rising technologies such as Ener-Core’s still aren’t being talked about that much. Solar and wind power continue to dominate the renewable energy conversation, eclipsing these creative solutions with their out-there turbines and solar-panelled fields.

As important as those renewable sources are, what they don’t do is remove the toxic methane emissions penetrating our atmosphere. Nor do they ensure a constant supply of energy; their dependence on external factors makes their energy intermittent, which can of course be costly. “Obviously wind and solar power are important tools in our toolbox of energy production”, says Johnson. “The problem is the wind doesn’t always blow and the sun doesn’t always shine. So what’s important about our technology is that it offers baseload power.”

By ensuring a constant stream of energy, independent of external factors and easily controllable, Ener-Core’s technology offers businesses security and safeguards them from the losses other renewable sources can entail. That means they can compete with fossil fuels without having to pay the environmentally damaging consequences.

That goes against the widespread notion that methane regulations, if introduced, would mean heavy financial losses. “Regulations will cost large corporations – oil platforms, natural gas companies, manufacturers and other companies, literally billions of dollars to comply”, says Johnson. Ener-Core hopes to help companies avoid those potentially drastic implications.

“We’re showing a way to make money from this rather than putting them out of business”, explains Paul Fukumoto, also of Ener-Core.

Rather than seeing them as barriers, Ener-Core is using potential regulations to spur on creative, profit-generating developments. “I believe that regulatory action does drive innovation”, says Johnson. “We have to advance and generate economic opportunity – especially for those subjected to methane regulation.”

Stephen Johnson, former Administrator of the US Environmental Protection Agency and a key investor in Ener-Core

Regulatory action
That emphasis on regulatory action is moving ever further to the foreground as discussions about methane – which traps radiation at a higher rate than carbon dioxide – heat up. In March, the EPA set out a strategy for regulating the gas, forming part of President Obama’s Climate Action Plan. The strategy included implementing voluntary measures for the oil and gas industry, following five white papers published in April that examined emissions from oil and gas sites.

The strategy aimed to encourage the recovery of methane for generating power, heating, or manufacturing – all of which, the EPA recognised, would drive investment and job creation. The initiatives also set out to make the air we breathe cleaner and reduce ground level ozone (which is responsible for smog). The strategy forms part of a plan to cut greenhouse gas emissions in the US by around 17 percent of 2005 levels by 2020.

But many believe the voluntary initiatives implemented under the strategy are not sufficient in cutting methane emissions. Such emissions are forecast to rise by 2030 if changes aren’t made, reversing the progress made over the past couple of decades (methane emissions have fallen 11 percent since 1990).

That was the sentiment 15 US senators expressed in September, when they sent a letter to President Obama pushing for methane regulation of oil and gas sites. “Voluntary standards are not enough”, the letter said. “Too many in the oil and gas sector have failed to adopt sound practices voluntarily, and the absence of uniform enforceable standards has allowed methane pollution to continue, wasting energy and threatening public health.” The call for stronger regulation isn’t unique to the US: under proposed EU guidelines, which could become binding in July 2015, European fracking firms would be legally obliged to monitor their own methane emissions.

Increased relevance
Johnson believes getting businesses on board before those regulations force action to be taken is one of the company’s key challenges. “There are early adopters, which is very exciting and necessary for raising sufficient capital in order to continue to advance technology”, he says. “But there are those that wait for regulations to be put in place to force them to do the right thing. And of course when regulations are put in place then people have to comply, and then they are very interested in new technologies.” Adopting the new technologies early could save businesses substantial amounts of money, time and, of course, energy.

Given the global push for a regulatory crackdown, Ener-Core’s clean oxidation system is moving to the foreground at a crucial moment. Its potential to help businesses comply with regulations and benefit the environment – while being rewarded financially – is becoming increasingly relevant.

It therefore seems it’s the very source that’s been forgotten about – methane – that holds the key to finally reconciling environmentalism with financial profit. Methane is a much-needed complement to solar and wind sources which, despite hogging the limelight, aren’t proving sufficient to deal with the scale of the problem. If enough businesses take up these new technologies then methane could prove a very real answer to the world’s most pressing environmental problems. Ener-Core could mark the start of an exciting transition that saves the day for businesses and, most importantly, the planet on which they operate.

The smart home revolution is coming

At the touch of a button, the door unlocks – which triggers the rooms to light up, the heating to come on, the blinds to draw, the kettle to boil and the TV to flicker. A CCTV camera detects an intruder and sends an alert to a tablet 2,000 miles away, while an empty fridge places an order to restock itself. This isn’t a scene from a 1980s thriller – it’s a house in the age of the smart home.

“For many, it’s still just a vision. But change is coming, and coming fast”, Samsung Electronics CEO Boo-Keun Yoon promised attendees at the IFA Electronics Show in September. According to Yoon, smart home technology could catch on as quickly as smartphones did – and his claims aren’t unsubstantiated. Strategy Analytics say 40 percent of homes in the US, and 12 percent across the globe, will have some form of smart home technology installed by 2019, with revenues expected to hit $115bn.

“Market research is showing very strong demand in terms of smart homes in the coming years”, says Kevin Wen, President of D-Link Europe, one of the latest in a string of companies to jump on the smart home bandwagon. But as the internet of things and fully automated homes move closer to becoming a reality, privacy and security are becoming increasingly important issues.

40% – of US homes will have smart technology by 2019
$115bn – smart home revenues by 2019
26bn – smart connected devices by 2020

Rising market
This fast-growing uptake is being driven by the increasing accessibility and affordability of smart home technology, with all the major phone companies involved: in 2014 Apple announced HomeKit, a system which opens the door for all devices to be controlled from one app; Samsung acquired US start-up SmartThings; Google bought smart system maker Nest (for a hefty $3.2bn); and LG announced HomeChat, which lets users command their devices via text. Having a conversation with your fridge isn’t just LG’s idea: Samsung and Apple aim to let users physically talk to their devices using S Voice and Siri respectively.

These recent developments mean home automation is no longer limited to the type of swanky New York pad where James Bond might sit sipping a martini. “Nice, fairly normal homes now have a level of technology that would have only been in exclusive homes 10 years ago”, says Malcolm Stewart of Kensington AV, a luxury home technology consultancy and installer. “Nowadays, people have a lot of the control already in their pocket and that’s a big saving.” Whereas users once had to buy smart home devices from one company and pay a premium, they can now purchase from different companies, and whereas home automation technology once had to be hardwired, it can now be easily fitted without the need for a specialist.

That’s not to say having a fully operational smart home won’t still set the buyer back a fair amount. With the market-leading light set (Philips Hue) costing nearly £200 (for three wireless bulbs and a bridge), thermostats costing up to £200, and hefty price tags on things such as door locks and blinds (which currently need to be specially made), a fully operational smart home is likely to dent a homeowner’s budget by at least £1,000. That might explain why the smart home market is still somewhat niche, with tech enthusiasts making up a large part of those interested (32 percent, according to the 2014 State of the Smart Home Report by iControl).

But those prices are still a significant drop from the multi-thousand pound price tag just a few years ago and they are likely to continue falling, expanding the grip of invasive smart tech to a wider demographic. Smaller companies offering basic, simple-to-use alternatives are springing up: smart plugs costing as little as £20 can be connected to appliances. According to Wen, that simplification of smart home tech is likely to appeal to older consumers too: the impending revolution is likely to sweep over all ages and levels of society.

Peter Middleton, Research Director at technology advisory firm Gartner, believes component costs are likely to fall significantly by 2020 “to the point that connectivity will become a standard feature”, according to a statement. “This opens up the possibility of connecting just about anything… to offer remote control, monitoring and sensing.”

Internet of things
Connecting anything – thereby establishing a world dominated by the internet of things – is a daunting prospect but it’s an inevitable one. Gartner predicts the number of connected devices will increase to 26 billion by 2020: nearly 30 times the number in 2009. It seems a further cog in technology’s mission to dominate our lives, and a world where your fridge could be connected to a random appliance over the other side of the world is arguably a step too far.

But it’s a world some of the biggest tech firms are gleefully leaping into. Nest, Samsung and Arm have joined up to create networking protocol Thread. “We wanted to build a technology that uses and combines the best of what’s out there and create a networking protocol that can help the internet of things realise its potential for years to come”, says their website. Other companies, such as wireless firm Zigbee, are developing their own systems. Allseen Alliance, made up of some of the world’s biggest names (including LG, Panasonic and Microsoft) has produced AllJoyn protocol with the same goal in mind.

The biggest effort out there is Hypercat, an open protocol that, unlike the characteristically exclusive Apple HomeKit, seeks to allow all phone and hardware companies to be compatible by standardising their interfaces (which currently limit specific hardware to specific software). Comprising Intel, IBM, BT and other major players, the consortium wants to create a set of open APIs and data formats so that devices can effectively talk to each other. The UK Government’s Technology Strategy Board has injected £6.4m into the project to drive smart home growth and spur the advancement of smart cities – something also being pushed by the EU.
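The general idea is a machine-readable catalogue of devices that any application can query. The toy Python sketch below is loosely inspired by that approach; the field names, addresses and devices are invented for illustration and do not reproduce the actual Hypercat specification.

```
# Toy device catalogue: an illustration of standardised, machine-readable
# metadata, not the real Hypercat format.
catalogue = {
    "catalogue-description": "Devices in a hypothetical smart home",
    "items": [
        {"href": "http://192.168.1.20/thermostat",  "type": "thermostat", "unit": "celsius"},
        {"href": "http://192.168.1.21/lock/front",  "type": "door-lock",  "unit": None},
        {"href": "http://192.168.1.22/plug/kettle", "type": "smart-plug", "unit": "watt"},
    ],
}

def find_devices(cat, device_type):
    """Return the endpoint of every catalogued device of a given type."""
    return [item["href"] for item in cat["items"] if item["type"] == device_type]

print(find_devices(catalogue, "smart-plug"))  # ['http://192.168.1.22/plug/kettle']
```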

Darker side
According to Dr Paolo Pirjanian, Executive Vice President and Chief Technology Officer of iRobot, a robot could be at the centre of that interconnectivity. “Leveraging the internet of things, I believe that robots will one day coordinate with other devices and technologies in the home”, he says. “We expect to see robots… playing an even larger role in our everyday lives”. That might seem an ‘out-there’ concept, but the specifications being developed by Hypercat (resembling those which created the web, according to the consortium) are fully plausible, paving the way for technology to effectively replace the human at the centre of the home: not necessarily a reassuring concept.

Nest’s Learning Thermostat memorises the user’s patterns and sets the temperature accordingly

It’s also not necessarily a practical one; making a home entirely reliant on its internet connection is likely to pose problems. “If you live in the countryside and have a rubbish internet connection you’d find it very frustrating using a lot of these devices”, says Stewart. According to him, efficient smart home automation would likely require improved wireless systems.

That could mean further costs incurred by homeowners, ironically defeating the object of saving money that’s frequently cited as an advantage of things such as smart heating. Relying on one single device for all appliances is also risky, and it’s likely to make us more reliant on smartphones and tablets than ever before. That could intensify the health risks mobile phone usage already presents, which include disrupted sleep as a result of the melatonin-suppressing blue light emitted by digital screens.

Getting flooded with notifications for every single device in the house could also be both irritating and disruptive; something even Jack Schulze, head of Berg’s Cloudwash project, recognises. “We suddenly realised that there was this slightly miserable, dismal universe where all your machines were talking to you”, he said, according to a report by Wired. As a result, Berg added a button to turn off the notifications, ultimately de-smarting it: ironic proof that smart home technology might not be all it’s cracked up to be.

The main use of smart home technology seems to be providing another way for mobile giants to monopolise the market and dominate our lives. That’s particularly true of companies that only offer products for their own system, such as Apple. “Once you’re hooked in that far, it becomes simply another reason why everybody keeps buying an iPhone”, Eric Bodnar, co-founder and CEO of Velvetwire, told Forbes. “Samsung loses market share and more people buy the iPhone for that unified smart home experience.”

Security threat
There are also the significant issues of security and privacy. While having a phone stolen is already risky enough given the amount of private data it could present to a criminal, stealing a phone in the smart home era could mean seeing – and, more frighteningly, controlling – everything in the owner’s smart house, from the tumble dryer to the door locks.

The smart home also opens up opportunities for hackers: something the consortium behind Hypercat has recognised, indicating the need for additional software to protect privacy and avoid users being tracked – in everything they do – by prying apps. Even if such software is introduced, completely safeguarding it from hackers is nigh-on impossible; one only has to look at the extensive list of cybercrimes (such as the Chinese attack that stole 4.5 million US hospital records in mid-2014) to see the very real danger the smart home can present. The bigger its grip, the larger the cyber danger.

HP found in an investigation that eight out of 10 smart systems reviewed accepted logins that could be guessed and hacked fairly easily, while six did not encrypt data for software downloads. “Users are one network misconfiguration away from exposing this data to the world via wireless networks”, the report concluded.

In January 2014, smart fridge hackers sent 750,000 spam emails, according to TechRadar: 100,000 smart devices, including TVs and media players, were implicated in the attack. In July, Australian company Lifx was forced to modify its Wi-Fi light bulbs after it was discovered hackers could identify passwords and usernames with a device masking itself as a bulb.

“It’s slightly shocking to see these brand new internet-of-things devices being created with so many security holes”, Ian Brown, information security and privacy professor at the University of Oxford, told the BBC in response to the HP report.

British blogger DoctorBeet reported in 2013 that his LG Smart TV was monitoring habits and sending unencrypted data over the internet back to LG, despite his having turned off the ‘collection of watching info’ option. Sending data back to manufacturers is common practice and smart technology provides one more means of doing so, according to Stewart. “Companies like Apple and Google already know our birthdays, viewing habits and search histories – soon they will even know what time we go to bed and what time we leave for work”, he wrote in a TechRadar article. “These companies will no doubt try to use this information to gain control of an even bigger portion of the market.”

LG HomeChat allows users to control their appliances from their phone

Marketing ploy
Given the security and privacy issues, it’s ironic that security is one of the factors fuelling the growth of the smart home market. It was among the top priorities for 90 percent of those asked in the iControl survey. Companies are playing on that aspect, with firms such as D-Link offering CCTV monitoring systems and motion sensors which can trigger a light to scare off an intruder, and send a notification to alert a smart device user who is out of the house. But being alerted to a fire or burglar is unlikely to help a homeowner thousands of miles away – and no device (yet) would be powerful enough to physically stop either event.

Yet more worrying is the prospect of using smart home ‘security’ systems to replace human supervision. Nearly one-fifth of those asked in the iControl survey claimed they would be more inclined to leave their children at home alone from a younger age if home automation systems such as live CCTV, locks and lights were in place. Given the fallibility of this technology, that could actually mean decreasing personal and family security.

Energy saving is another frequently cited advantage. “Energy cost is going to keep going up, and I think that will definitely create a large market”, says Stewart, with users able to turn their heating on remotely just before returning home – but, again, that is actually likely to increase energy usage. Advanced heating systems such as Nest’s Learning Thermostat (which memorises the user’s patterns and acts accordingly) are certainly clever, but unlikely to save any more energy than simply programming a thermostat to come on at the right time.
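For what a ‘pattern-learning’ thermostat amounts to in practice, the sketch below shows the general idea – record the temperatures an occupant sets at different hours, then average them into a schedule. Nest’s actual algorithm is not public, so this Python snippet is only a conceptual illustration.

```
# Conceptual sketch of schedule learning: not Nest's algorithm.
from collections import defaultdict
from statistics import mean

observed = defaultdict(list)  # hour of day -> temperatures the occupant set manually

def record_adjustment(hour, temperature):
    """Remember a manual adjustment made at a given hour."""
    observed[hour].append(temperature)

def learned_schedule(default=18.0):
    """Average the observations into a 24-hour setpoint schedule."""
    return {h: round(mean(observed[h]), 1) if observed[h] else default for h in range(24)}

# A few days of morning and late-evening adjustments
for temp in (20.5, 21.0, 20.5):
    record_adjustment(7, temp)
for temp in (19.0, 19.5):
    record_adjustment(22, temp)

schedule = learned_schedule()
print(schedule[7], schedule[22])  # 20.7 19.2 - the rest of the day stays at the default
```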

The real purpose smart home technology seems to serve is convenience. Voice-controlled appliances, blinds drawing at the tap of an icon, appliances turning on or off based on your location, and washing machines controllable via text message (being made a reality by London start-up Berg and its impending Cloudwash machine) all seem to have convenience (and arguably laziness) as their ultimate driver. Jim Johnson, Executive Vice President of iControl Networks, said in a statement: “The convenience associated with a connected home will likely play a greater role as consumers realise how much easier automation makes their lives.”

It seems the security and energy aspects serve as an effective marketing ploy more than anything else, masking the real factor likely to drive growth: convenience. Whether that’s worth the security threats smart technology could paradoxically create is questionable. Linking the whole of the home to the internet means exposing it to wider dangers – and intensifying them by creating the possibility of every single aspect of somebody’s life being watched, collated and used. Regardless, the internet of things is coming ever closer. Juniper Research predicts the smart home market will be worth $71bn by 2018. Only time will tell how safe our homes are with a computer watching the door.

The problems with big data

It’s not just the government that is interested in our data. Big business has grown obsessed with it too. While they may lack the authority of the NSA to pilfer our most personal information, there is still plenty of it for companies to get their hands on. Personal information is freely given away during the course of our daily delvings into the internet.

But because we give up this information of our own accord via Google searches, Twitter posts and Facebook status updates, economist and journalist Tim Harford prefers to distinguish these datasets by labelling them ‘found data’. The reason businesses wish to access that which most view as useless is that they believe it helps them to learn more about us: the consumer. The thinking behind this massive mining is simple: the more a company knows about consumers’ habits, the easier it is to create and market products they will be willing to buy. But just how useful is big data, and does it have any limitations?

In his 2014 Significance Lecture for the Royal Statistical Society, titled ‘The Big Data Trap’, Harford discussed the pitfalls of putting too much faith in what can be gleaned from the masses of found data that is continually being collected. He begins the talk with reference to his friend Dan Ariely, a professor of psychology and behavioural economics at Duke University, who compares businesses’ fixation with big data to teenage sex: “Everyone is talking about it, no one really knows how to do it, everyone claims they are doing it when they talk to other people, [and] everyone assumes everyone else is doing it.” The point Ariely is making is that, while there is a lot to be discovered through big data analytics when applied properly, many companies are just joining in on the latest craze, and, like adolescents, have no idea what they are doing in the data department.

Theory free
A great example of a business dipping into the world of big data, only to discover it does not have a clue why it bothered in the first place, is the online dating site OkCupid. Those familiar with its mobile app will be aware that, once an account has been set up and a biography created, it quickly prompts users to disseminate further personal information by asking them a wide array of multiple choice questions. These may range from serious lifestyle choices (such as whether or not you smoke) to more arbitrary lines of questioning (like whether you’re a lover of cats or dogs). The justification dating sites such as OkCupid give for intruding into your personal preferences is that these vast quantities of data assist them in doing a better job of matching you with other users looking for love. But as the site’s founder, Christian Rudder, explains on his blog OkTrends, when it comes to big data “OkCupid doesn’t really know what it’s doing. Neither does any other website.”

That is not to say there are no great opportunities out there when big data analytics are used correctly. As Harford explains in his lecture, Google was able to achieve incredible things through its use of massive datasets. He brings up the example of Google Flu Trends, which managed to track the spread of influenza across the US more effectively than the Centers for Disease Control and Prevention (CDC). Google’s findings, explains Harford, were based on an analysis of search terms: “What the researchers announced in a paper in Nature five years ago was that, [by] simply analysing what people typed into Google, they could deduce where these seasonal flu cases were.” He notes that what makes this so remarkable is that Google was not only able to beat the CDC (its tracking had only a 24-hour delay, whereas it took the CDC a week to do the same thing), but that it was “theory-free”. In short, Google just ran the search terms users were putting into its search engine and let the algorithms do the rest.

Data growth

Correlation is not causation
But using big data in this way has its limitations. Data-rich models like the ones employed by Google are great at finding patterns. In the case of tracking influenza outbreaks across the US by mapping where and when people searched for terms such as “flu symptoms” and “where is my nearest pharmacy”, Google was successful, but we should never become complacent about the results derived from this method. Never forget that correlation does not imply causation. Google fell foul of this. After successfully tracking flu outbreaks for a number of years, its big data model began to show signs of weakness, finding flu where there was none.

By just using search terms, the algorithms were treating all hits as genuine signals, rather than as potential false positives. Just because people are searching for flu symptoms doesn’t necessarily mean they have flu or are going to get it. There are countless examples available online of statistics being used to create obviously false connections, such as how the number of people who died falling out of wheelchairs correlates with the cost of potato chips. With massive datasets, these anomalies are not minimised: they are exacerbated.
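That last point is easy to demonstrate numerically. The Python sketch below, using nothing but random noise, shows how screening tens of thousands of unrelated ‘search terms’ against a target series will reliably turn up an impressive-looking correlation by chance alone.

```
# Spurious correlation demo: every series here is pure noise, yet screening
# enough of them against the target always yields a 'strong' match.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104                                      # two years of weekly "flu" counts
flu_cases = rng.normal(size=weeks)               # target series (random)
search_terms = rng.normal(size=(50_000, weeks))  # 50,000 unrelated "search terms"

correlations = np.array([np.corrcoef(flu_cases, term)[0, 1] for term in search_terms])
strongest = np.abs(correlations).max()

print(f"Strongest correlation found by chance: {strongest:.2f}")  # typically around 0.4
```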

The real problem with big data, which the Google Flu Trends example highlights, is that, while it is a very good tool for mapping what people are doing and where they are doing it, it is absolutely terrible at deciphering the reasons behind those actions. No matter how good we get at acquiring data, we can never get it all. Therefore the conclusions we derive from big data need to be looked at with an element of scepticism. Without a theory underpinning the cause of correlation, computer algorithms are just finding patterns in data and not much else. Big data is a clever new tool, but it is not a magic wand.

Google loses biggest US search share since 2009

According to market researcher StatCounter, Google’s portion of the US search engine market slipped to 75.2 percent in December, down from 79.3 percent the previous year – its biggest slide since at least 2008, when StatCounter began collecting data. Meanwhile, following the closure of a deal with Mozilla Firefox earlier in 2014 which made it the default search engine on all of Firefox’s browsers, Yahoo reported its largest share gain since 2009, jumping from 7.4 percent to 10.4 percent. But it’s not Google’s biggest competitor just yet, sitting 2.1 percentage points behind Bing, which makes up 12.5 percent.

Google still accounts for 88 percent of the search market

The browser market on personal computers in the US is dominated by Google Chrome and Microsoft’s Internet Explorer, with a 37 percent and 34 percent share respectively in 2014, while Firefox accounts for around 12 percent, according to data from StatCounter. “The move by Mozilla has had a definite impact on US search”, StatCounter CEO Aodhan Cullen told Bloomberg. “The question now is whether Firefox users switch back to Google.”

On a global scale, Google still accounts for 88 percent of the search market, while Bing and Yahoo take 4.5 and 4.4 percent respectively. However, speculation that Apple is contemplating dropping Google as the default search engine on Safari browsers is rife as competition between the two companies intensifies – Google’s Android OS scooped up 84 percent of smartphones shipped during Q3 of 2014, while Apple’s iOS took just 12 percent, according to Strategy Analytics. If so, Yahoo or Bing would be presented with another opportunity to grab a bigger share from the current market leader.

Search engines comprise the largest digital advertising market, raking in $50bn in 2013 and growing at around 10 percent a year.

Xiaomi’s smartphones take the global market by quiet storm

Founded in 2010, Xiaomi and its CEO, Lei Jun, have sent ripples throughout the smartphone industry. Xiaomi is now the globe’s third-largest smartphone producer and has been named the world’s most valuable tech start-up. In 2014, the company sold 61 million phones, an astounding 226 percent increase on the previous year. Thus far, sales have been predominantly contained within China; with expansion into new markets, further growth is expected.

Xiaomi’s enviable success has been achieved by means of its relatively cheap prices and creative business model. With high-spec models such as the Mi5 that compete with Apple and Samsung, Xiaomi has made itself a serious contender in the industry. Simultaneously, the company’s lower-end phone, Redmi, allows it to penetrate emerging markets such as Russia and Brazil.

Xiaomi’s enviable success has been achieved by means of its relatively cheap prices and creative business model

Much of Jun’s recent success is attributed to his approach to marketing and customer satisfaction. For example, users can post feedback online, which influences the software updates released each week – a proven approach for keeping fans engaged and eager. “They entered into this market with a very innovative launch, which is by going directly online”, says Tarun Pathak, Senior Analyst at technology market research firm Counterpoint. By cutting out the expense of physical stores and costly advertising, Xiaomi lets customers buy its products conveniently online while benefitting from the company’s pencil-thin profit margins.

Pricing handsets only slightly above production cost is unlikely to sustain profit growth in the long term. For this reason, Xiaomi pays particular attention to its sales of software and internet content. In addition, the company is widening its range by launching products such as tablets, televisions and set-top boxes. “To further reinforce the business, Xiaomi builds its ecosystem via introducing the whole smart home solution. Xiaomi has launched Mi smart remote centre and smart gadgets to achieve the target”, comments Ivy Jiang, Market Analyst at Mintel. The success of Xiaomi’s new products is yet to be seen, but there is potential if the company can continue its approach of coupling comparatively low prices with high-quality goods.

With such exponential growth comes a range of challenges for the tech start-up. There is the issue of intellectual property rights when entering new markets, as shown by the recent litigation battle with Ericsson in India. Public accusations by Apple representatives that Xiaomi has copied its designs could harm the brand’s global reputation, as well as spell trouble for future plans to enter the US and Europe. Furthermore, Pathak explained to The New Economy how in 2014 Xiaomi became a principal rival to the top vendors in the industry, “but in 2015, they have to counter the competition from these players who are adopting a similar business model.” As well as imitating Xiaomi’s successful online strategy, other leading brands could spark a price war as they attempt to win back market share. Xiaomi must also contend with mounting pressure on the Chinese economy, given the recent decline in GDP growth, which could potentially mean lower profits and less investment for the company.

With a home advantage and strong customer loyalty in the world’s largest mobile phone market, as well as a business model that can be quickly rolled out in other developing markets, experts suggest the company’s success is set to continue in 2015. Copyright infringements may prove costly, but they can be dealt with, while new products could expand Xiaomi’s customer base if they prove popular. Xiaomi might be relatively new to the industry and have various challenges to overcome, but it is full steam ahead for a company that looks likely to become China’s first global brand.

Hotel Grischa opens its doors to the Davos elite

With the 2015 World Economic Forum (WEF) upon us, it’s important to acknowledge the part played by local hotels in facilitating an influx of world-renowned individuals to the mile-high resort of Davos. With many of the biggest names in business, politics, academia and the arts congregating in one space, the commercial opportunities for the local hospitality players are without compare. In fact, for some in the hotel business, between 10 and 25 percent of their annual turnover comes from this period, and the marketing of their product to attendees represents a key part of their strategic considerations.

As those at the event look to the prevailing challenges of the times and how they might have a hand in improving the social and financial wellbeing of billions across the globe, those in the hotel business are readying their rooms. The WEF boasts perhaps the most impressive guest list on the annual events calendar, and the buzz generated about the place represents a huge opportunity for hotels in the region.

Nowhere is this better exemplified than at Hotel Grischa, where “the Dutch proprietor family realised a unique concept in Davos after acquiring the two buildings”, according to Cyrill Ackermann, the hotel’s General Manager. “The result is a beautiful hotel that even exceeds the requirements of a luxurious first-class hotel. Guests can enjoy an informal and familiar atmosphere, year-round.”

10-25%

Share of a Davos hotel’s annual turnover that can come from the WEF period

Catering to Davos Man
Located in the centre of Davos-Platz, the four-star Hotel Grischa is a two-minute walk from the valley station of Jakobshorn, and the town’s convention centre is only a 10 to 15 minute walk away. Unlike so many towns of its size in Switzerland, Davos’s aesthetic strays from the traditional and rather resembles a modern metropolis of sorts.

Having only opened its doors in 2011, Hotel Grischa is still a relatively young establishment, though it has quickly succeeded in making a name for itself in the local hospitality business. By focusing on the customer profile the event typically brings to the region, the hotel has been able to carefully customise its offerings to match the demands of those in attendance.

Convenience is a key differentiator for Hotel Grischa over other hotels of its kind, which generally cater to WEF attendees looking for little more than a hassle-free stopover and a comfortable room to call their own. Not content with convenience alone, the hotel is also home to a number of impressive meeting and convention facilities. Such hotels are seen by many attendees as a natural extension of the main event, so the establishment’s meeting rooms are each equipped with state-of-the-art technical infrastructure and free WLAN.

In all, the hotel offers a total of 93 rooms and suites in nine different categories, all furnished in the same style and rich in local character. Every room is equipped with modern amenities, and many offer a balcony with views that verge on the spectacular. Whether guests want to see the snow-capped mountains for which the resort is famed, or the sprawling town below, Hotel Grischa can accommodate an array of tastes. Aside from the rooms, the hotel is known for its five on-site restaurants, which Ackermann describes as “unparalleled in Davos”, offering diverse culinary options from light snacks to gourmet dinners of several courses.

Networking in the snow
Although engaging with the most significant global trends of the day is at the heart of the annual meeting, there is another part of the WEF that mustn’t be neglected. Given the status of those in attendance and the scale of the event as a whole, it’s no surprise networking is a key part of daily proceedings. For many, the opportunity to talk to leading figures in business, politics, academia and the arts rarely presents itself again – if ever. The role of hotels in bringing these names together, therefore, mustn’t be underestimated, and it could even be said that those in the hospitality business play a part in the developments that stem from the WEF.

For every hotel in the vicinity, attracting the incoming WEF crowd represents a large part of their strategy, given the huge profit-making potential and the renown that can conceivably come with hosting a world-famous name. With individuals such as Bill Gates, Charlize Theron and Tim Berners-Lee all having made the trip last year, hotels in the region will likely be eager to attract celebrity custom again this time around, and only the best will win.

But Davos, of course, is not only known for the WEF annual meeting. The town is the largest mountain resort in the Alps and one of the region’s most-loved skiing locales. With hundreds of kilometres of slopes and ski tracks, the winter period is always heaving with international tourists eager to strap on their skis and snowboards. In the summer season, visitors are more inclined to take up mountain biking, hiking and even golf. It’s in this period that Europe’s highest town is transformed into a picturesque grassy landscape, bringing with it an entirely different breed of traveller. But, for the delegates at the World Economic Forum at least, Davos remains a snow-covered paradise.

Life in plastic, not so fantastic: the tale of Barbie’s decline

Born in 1959 in the unspectacular town of Willows, Wisconsin, Barbara Millicent Roberts’ story is one of glitz, glamour and the odd trip into outer space. At only six years of age she became the first woman to set foot on the Moon, and the all-American girl has since sampled 150-plus career paths and run for president on six separate occasions. Muse to Andy Warhol and proud owner of a horse named Dancer, Barbie’s seemingly infallible smile is, at over 55 years of age, finally beginning to sour.

When it became clear in October that Barbie’s faithful parents were without the funds to foot the bill for her extravagant lifestyle, whispers began to circulate about the death of the storied brand. In that same month, company Chairman and Chief Executive Bryan Stockton admitted “third quarter results did not meet our expectations”, as doll sales slipped 21 percent year-on-year, following a 10 percent revenue slump in each of the previous four quarters.

No longer the powerhouse she was in years past, the blue-eyed, impossibly proportioned girl then suffered her ninth consecutive quarter of falling sales on home soil. Where once the doll was a certified staple in young girls’ toy boxes across the country and beyond, competition and consumer electronics have brought a raft of fresh challenges, and it will take more than a quick costume change to reverse the slide.

Still, the core company brand made up $1bn (approximately 15 percent) of total sales in 2013, and, according to Mattel, continues to sell at a rate of one doll every three seconds. With a presence in over 45 different consumer products categories, Barbie’s unmistakable smile can be seen across the toy industry – though the cultural relevance of the once untouchable plastic princess has taken a turn for the worse.

For years, the brand’s influence has fallen short of its heyday, and the lipstick started to crack in September, when Lego toppled Mattel as the world’s largest toymaker as measured by sales. Whereas Lego’s first-half sales had climbed from a little over $1bn in 2010 to $2.03bn in 2014, Mattel’s have stagnated throughout, with little to no sign of respite. A cursory glance at Lego’s recent successes also shows the shrewdness with which decision-makers at the company have capitalised on new media and emerging market opportunities. Contrast this with the heavy-handedness of Mattel, and the difference between the two is clear.

21%

Year-on-year slump in Barbie sales

3 seconds

Average time between Barbie doll sales

45

Product categories in which Barbie is represented

In recent years, opportunities in the traditional toy market have started to dry up, and where once the US, Europe and mature markets like them were fertile grounds for growth, leading industry names are today looking to emerging markets as the next frontier. Although growth in the global toys and games market is expected to come in essentially unchanged from the previous year, vast regional variations mean toymakers must shift their emphasis accordingly if they are to capitalise on pockets of growth.

Struggling to settle in China
In a time when children are, increasingly, turning to mobiles, tablets and consoles for their after-school fix, Lego has still managed to post double-digit growth by releasing a successful film and breaking the Chinese market. Barbie, meanwhile, is wearing the same unmoving expression she always has, and in a market best characterised by diversity, a generic plastic doll with a bulging wardrobe is far short of what’s required.

Lego has quickly found favour with Chinese consumers, whose focus on educational child’s play lends itself well to the brick maker. Speaking in the company’s 2013 annual report on the success of Lego’s foray into the Asian market, CEO Jørgen Vig Knudstorp said: “We remain ambitious and expect to continue to grow our market share. We will do so by expanding our global presence – but also through a continued focus on developing and innovating our product offering so that we remain relevant to children all over the world.”

The Barbie brand, however, is one that has failed to chime with Chinese consumers, despite continued efforts on Mattel’s part to win their affections. “Mattel’s Barbie brand has been declining in value in dolls and accessories in Asia-Pacific, a region which is typically seeing the most dynamic value growth among the leading toys and games categories and brands”, says Robert Porter, Toys and Games Analyst at Euromonitor. “This has raised concerns as to whether the iconic doll has had its day and is now in decline.”

Mattel’s “arguably poor launch strategies in the region”, according to Porter, have cost the brand dearly, leaving consumers there with a hotchpotch impression of the doll, and Barbie with some distance still to cover before catching up with her closest rivals.

Dream house demolition
On the brand’s 50th anniversary, back in 2009, Mattel opened the Shanghai-based House of Barbie: a towering, hot pink, six-floor superstore that spanned 36,000 square feet and sat on one of the city’s most expensive streets. It was here the product’s parent company gambled $30m on the all-American doll quickly becoming a favourite among young Chinese girls. Packed with the largest collection of Barbie-affiliated products known to man, the experiment carried its fair share of financial risk in the event of failure.

Looking to the only other store like it, in Buenos Aires, the company forecast the brand’s success in Argentina would be replicated in China. However, only two years after opening, the House of Barbie was forced to shut up shop, proving that the all-in strategy was a misstep on Mattel’s part and that successful Western names cannot so easily replicate the success they have found on home soil. Put simply, Barbie’s renown in Argentina was far greater than it was in China, and the company’s decision to launch the Shanghai superstore was taken with little knowledge of how well the brand would be received.

The lessons learned from the store’s failure have prompted brand heads at Mattel to take note of China’s Tiger parents – so called for their focus on education and extracurricular activities. With the wide-eyed fashionista having failed to curry favour among China’s emerging middle class, the company, at the tail end of 2013, unveiled a series of dolls aimed specifically at the country’s education-minded consumers. Violin Soloist Barbie, the first in the series, is essentially unchanged from her American counterpart, save for a violin and music stand, which should serve to allay any concerns that the impossibly proportioned doll veers in any way from Chinese traditions. To reiterate the company’s commitment to what is the world’s fastest-growing toy market, the recommended retail price for the doll clocks in at a modest $13, far short of the $30 asked for in the US.

Barbie’s flagship store in Shanghai closed only two years after opening

Barbie’s image problem
Barbie’s problems are far from limited to a tepid reception in China, however, and stretch back far beyond the company’s (some would say failed) expansion into Asia. “Others have suggested that the doll’s sometimes-sexist image is finally catching up with current day, more gender-neutral views”, notes Porter.

From the outset, creator Ruth Handler insisted the Barbie doll would forever stand fast as a symbol of female empowerment and an advocate for gender equality. “I believe that the choices Barbie represents helped the doll catch on initially, not just with daughters – who would one day make up the first major wave of women in management and the professions – but also with their mothers, who absolutely flipped over Barbie when she was introduced”, wrote Handler in her memoir. “Most of these mothers were confined to a rigidly prescribed existence epitomised by June Cleaver [the archetypal stay-at-home suburban wife from the 1950s television sitcom Leave it to Beaver]. And most were pleased with the idea that their daughters could play with – aspire with – a doll who had many more choices in life than adult women had at that time.” However, it seems this message was lost somewhere along the line, and Barbie is today held up by critics as a physical embodiment of the broken ideal of beauty facing young women.

“Mattel addressed Barbie’s image problem at its investor day just last week”, said Jaime Katz, an equity analyst for Morningstar, in November. “Barbie is still the number one girls’ property on a global basis. Richard Dickson, the Chief Brand Officer, noted that in recent periods the Barbie brand had been less focused and less successful at reaching consumers across multiple touch points, leading to confusing messaging and brand weakness. The company is trying to lead into 2015 with a consistent brand message, displaying Barbie as a hero, making her more modern.”

The brand’s commitment to changing perceptions of so-called male-orientated careers can be traced right back to the doll’s roots, most notably with the introduction of Executive Barbie in 1963. Since then, Barbie has gone on to become a Business Executive (1992), US Air Force Thunderbird Squadron Leader (1994), CEO (1999), Computer Engineer (2010), and Presidential Candidate on three separate occasions. Nonetheless, studies show the brand’s stated objectives have been to little avail, and research conducted by Oregon State University (OSU) found girls who play with Barbie see fewer career options for themselves than their male counterparts. “Playing with Barbie has an effect on girls’ ideas about their place in the world”, said Aurora M Sherman, an Associate Professor at the School of Psychological Science at OSU. “It creates a limit on the sense of what’s possible for their future. While it’s not a massive effect, it is a measurable and statistically significant effect.”

The company’s latest punt to smash the glass ceiling came at the mid-point of 2014, with the introduction of Entrepreneur Barbie. “Barbie doll is ready to make a bold business move and strike out on her own to achieve her career dreams! Entering the entrepreneurial world, this independent professional is ready for the next big pitch”, according to Mattel’s website. Though the doll comes equipped with smartphone, tablet, briefcase and Barbie-branded spreadsheet, she is little changed from previous iterations, and the hot pink get-up, slim waist and pert breasts remain her signature characteristics.

Contrast this with the success of Mattel’s own Monster High brand, and it appears Barbie’s appeal is beginning to wear thin with consumers who are, increasingly, favouring escapism over perfection. In contrast to Barbie’s flowing blonde locks and doe-eyed smile, Monster High is a Gothic take on the traditional doll, one that celebrates difference ahead of uniformity. Introduced in 2010, the brand is sapping the life from the world’s most popular doll: in only its first three years on the market, its sales surpassed the $500m mark, and it is slowly eroding Barbie’s market share.

Likewise, rival Hasbro’s Transformers and My Little Pony products are performing well. Bolstered by a successful film franchise and an animated series respectively, the two are posting impressive results at a time when traditional toy sales are losing ground to digital alternatives. Without the same degree of cross-channel success to call its own, the Barbie brand could struggle for a footing in a market beset with fresh ideas and rich cross-platform opportunities.

Modern Barbie products attempt to link the character to female agency

The heroine with a thousand faces
Barbie’s diminishing popularity does not simply represent the fading influence of one brand over the global toy market, but also a heavy burden on Mattel’s top line. The company’s recent earnings show doll sales are falling rapidly. Without first coming to terms with vast regional differences, technological developments and tighter competition, Barbie will weigh heavy on its parent’s profit-making potential.

“Barbie still has the largest global value share of dolls and accessories and it is still one of the most recognised brands on the planet”, says Porter. “With a new Barbie film scheduled to be released at the end of [2014], it is too early to discount the brand.” The brand already has a wealth of related books, films, games and clothes to its name, and its success depends in large part on an ability to capitalise on opportunities beyond its core competencies. The appointment of Richard Dickson to the position of Chief Brands Officer in May was an important one, but it remains to be seen whether the accomplished brand strategist can turn Barbie’s fortunes around by refocusing the business.

“With themes for each season, Mattel is trying to keep interest levels high and 2015 will be a test to see whether a focused messaging and marketing campaign can reignite demand for the storied brand, making it a telling time frame”, says Katz. “The ability to drive Barbie sales across apps, licenses and more should help the brand at least stabilise.”

The biggest challenge for Mattel is determining exactly how the ageing doll remains relevant to today’s young girls. Without a focused and consistent message that runs the gamut of the Barbie empire, the all-American doll’s reign will surely end, and in its place will come a relatively inexpensive and culturally adept counterpart. For years, those working at the company have taken pains to position Barbie as a strong, confident and aspirational figure. The years ahead will finally determine whether they have been justified in doing so.

CES 2015 heralds new internet dawn

The annual gathering of tech industry heavyweights began this week in Las Vegas, with many observers predicting that the so-called ‘Internet of Things’ would take centre stage at the Consumer Electronics Show.

Among the shiny new televisions and near-identical iPhone cases are a huge number of devices touting their smart capabilities. Connected everyday technologies will be shown off, including cars, home security systems, and even – ridiculously – a WiFi-connected kettle.

Samsung has warned that companies need to be open and collaborative if connected devices are going to transform the world

The Internet of Things is currently the central theme of the tech industry, according to the Consumer Electronics Association’s Senior Vice President Karen Chupka. She told reporters that many of the firms exhibiting at the event were aiming to show how everyday devices could be connected to computers and smartphones.

“It’s all about the opportunity to connect everyday items like cars, home security systems and kitchen appliances to networked devices like PCs and smartphones for greater control and management of our everyday lives”, said Chupka.

Among the other products to be shown off was a connected gun from Texan firm TrackingPoint, which fits firearms with a streaming video camera on their sights, helping hunters and soldiers take precision shots from the safety of cover. How many consumers need such a thing remains to be seen.

While there has been a great deal of excitement for the internet of things during CES, some firms have sounded a word of warning about the issues the industry might face in the future. Samsung has warned that companies need to be open and collaborative if connected devices are going to transform the world.

Samsung CEO Boo Keun Yoon said in his keynote speech at CES: “The internet of things has the potential to transform our society, economy and how we live our lives.” However, he added, “It is our job to pull together – as an industry, and across different sectors – to make true on the promise of the internet of things.”

Shell agrees to an $83.5m settlement for oil spill

Royal Dutch Shell has agreed to an $83.5m settlement for residents of the Bodo community in the Niger Delta, after two oil spills in 2008 and 2009 ravaged the land and destroyed thousands of hectares of mangrove. With $53.1m of the total going directly to the 15,600 affected fishermen and farmers, the remaining $30.4m will be given to the Bodo community in a bid to restore the land and improve local facilities.

The out-of-court settlement marks the end of a difficult period, in which neither party could agree to an appropriate sum

“From the outset, we’ve accepted responsibility for the two deeply regrettable operational spills in Bodo. We’ve always wanted to compensate the community fairly and we are pleased to have reached agreement”, said Mutiu Sunmonu, Managing Director of the Shell Petroleum Development Company of Nigeria (SPDC), in a statement. “Despite delays caused by divisions within the community, we are pleased that clean-up work will soon begin now that a plan has been agreed with the community.”

The out-of-court settlement marks the end of a difficult period, in which neither party could agree to an appropriate sum. And while the oil giant acknowledges full responsibility for the disaster, SPDC’s Managing Director again stressed the dangers associated with oil theft and the importance of clamping down on the issue. “Unless real action is taken to end the scourge of oil theft and illegal refining, which remains the main cause of environmental pollution and is the real tragedy of the Niger Delta, areas that are cleaned up will simply become re-impacted through these illegal activities.”

Still, the case marks the first instance of an oil company granting compensation directly to the affected individuals in Africa. In a country where the minimum monthly wage amounts to 18,000 naira – or just shy of $100 – the $3,340 awarded to each individual has been welcomed warmly by the people in the community.

Sharks are attacking the internet

For many, the internet is what pours out of their Wi-Fi router, allowing them to access the World Wide Web so they can laugh out loud at cat pictures, stream the next big Netflix original, or get frustrated at being destroyed yet again by some random 12-year-old in the latest instalment of Call of Duty. But all of this requires a vast physical network of cables that provides the backbone of the internet’s infrastructure; one that crosses oceans and connects continents.

Modern fibre optic submarine cables are the foundations of this information superhighway. They are installed along the seabed, linked together by an array of land-based stations, which send digital traffic to their designated destinations. Fitting these submarine cables is a big undertaking both logistically and financially, not least because of the risk of damage to the cables – both accidental and deliberate. Now Google, which is investing over $300m to build a trans-Pacific network that will connect the California coastline with the Japanese cities of Chikura and Shima, has had to take out a form of insurance against shark attacks.

Taking a chunk out
Underwater surveillance footage has shown a shark tucking into a submarine cable similar to the one Google plans to install, leading the tech giant to go to great lengths to protect its new project. Dan Belcher, a product manager on the Google Cloud team, explained how a Kevlar-like coating will encase the fragile glass fibres and prevent them breaking under the pressure of a bite. Sharks possess a sixth sense that enables them to detect electromagnetic fields: scientists believe it helps the fish navigate and track prey. This may explain why sharks have an appetite for these cables, which they could mistake for floundering fish.

Despite the recent YouTube clip showing Jaws now has a taste for terabytes, the International Cable Protection Committee (ICPC), which is responsible for providing guidance on issues related to submarine cable security, says cable damage from such attacks is rare. The ICPC released a statement explaining that the first recorded shark bites of a deep ocean fibre-optic cable occurred off the Canary Islands in 1985, but three independent studies of databases show a marked decline in the number of faults caused by fish bites, including those of sharks. The latest analysis, covering 2008 to 2013, recorded no cable faults attributed to sharks.

Relieving the pressure
Google plans to have its cable system, which has been aptly named FASTER, fully functional by June 2016. The network is aimed at relieving some of the pressure on existing infrastructure: pressure created by the rise in smartphone usage. Google is not alone in the FASTER project, having joined forces with big players in the Asian telecoms market, including China Mobile International, China Telecom Global, Global Transit, KDDI and SingTel.

“At Google we want our products to be fast and reliable, and that requires a great network infrastructure, whether it’s for the more-than-a-billion Android users or developers building products on Google Cloud Platform. And sometimes the fastest path requires going through an ocean,” said Google’s Senior Vice President of Technical Infrastructure and Google Fellow, Urs Hölzle. “That’s why we’re investing in FASTER, a new undersea cable that will connect major West Coast cities in the US to two coastal locations in Japan with a design capacity of 60 TBPS (that’s about 10 million times faster than your cable modem).”
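As a rough check on the arithmetic behind that comparison (our back-of-envelope calculation, not Google’s): 60 terabits per second divided by 10 million comes to around 6 megabits per second, roughly the speed of a modest cable modem connection at the time of the announcement.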
