The market for medical technology is fiercely competitive and facing mounting challenges. According to the Ernst & Young Pulse of the Industry report: “The medical technology sector is weathering a perfect storm caused by three concurrent trends: the move towards value-based healthcare, growing regulatory pressures and resource constraints within the industry itself.” It takes a lot to succeed in this rapidly changing and increasingly demanding environment, but Stryker has broken ahead of the pack with the launch of its 1488 Endoscopic Camera. The camera has set a new standard in the industry, in keeping with Stryker’s strong form in this market.
10 million colours analysed and absorbed by the human eye: it’s truly a magnificent achievement – Sevinc Ozdogan, Stryker
Sevinc Ozdogan, Senior European Director for Visualisation at Stryker, credits the company’s vision and innovative approach for all its recent success. “As innovators in medical optics, we understand the power of sight, detail, colour and the strength of clarity that precise engineering brings,” she says. “How can we fix what we can’t see? The fundamental principle of seeing the real picture is what drives us forward and creates our visionary appetite for excellence.”
The new device is part of Stryker’s fourth generation HD video systems, and is the culmination of more than three decades of product evolution. The new 1488 platform is likely to have a huge impact on healthcare practices through the advancement of imaging performance: it should enable better diagnosis and patient outcomes. “Our ability to see and interpret our surroundings in minute detail is one of life’s many wonders,” says Ozdogan. “10 million colours analysed and absorbed by the human eye: it’s truly a magnificent achievement. To see and interact with the world around us is something many of us take for granted, but at Stryker we live and breathe the ability to see. That is what drives us forward and it’s what makes our optics what they are.”
Virtually reality
What sets the 1488 Endoscopic Camera apart from its predecessors is its improved image and advanced colour separation, which provide levels of detail never seen before in an endoscopic camera. The camera is also unique in that it has nine distinct modes to cater for a variety of surgical specialties, from laser and microscope to ENT-Skull and arthroscopy. Stryker has pushed the envelope even further by integrating hardware and software design, ensuring better overall performance of its products.
“Huge advancements in technology combined with an open mind allow us to see further than the constraints of our industry,” explains Ozdogan. “The use of CMOS chips, found in smartphones and in our cameras, represents a paradigm shift in the way we design and construct medical equipment, enabling better clarity and patient outcomes. How? By making what we see more real, we deliver a realistic vision of the surgery as if it weren’t viewed through a screen at all.”
The medical technology industry values innovation and development more than anything else: there is no place in the market for outdated products. Stryker aims to design products that continually increase patient safety, and this is the driving force behind its innovative designs. The 1488HD Platform utilises an LED Light Source with Safelight Technology, which provides cooler light emissions and automatically shuts off when the light cord is removed from the scope. It also features tailored camera head options for all specialties, including an integrated camera head and coupler (inline, pendulum and angled) with an Optical Zoom Coupler solution. As a result, the platform is designed to increase patient and caregiver safety, and is customisable to the surgeon’s needs, allowing procedures to be performed with premium optics.
Seeing clearly
Ozdogan is keen to emphasise that innovation is the name of the game when it comes to Stryker’s future. Even though the 1488 Endoscopic Camera has been a game-changer, the company has no intention of resting on its laurels, and instead is already working on its next home-run product. “It’s the ability to really see the detail that creates the perfect environment for a successful patient outcome,” she says. “Will we see other advancements in the future? Yes, of course. But for now, we’re focusing our attention on one thing: clarity. We want to create simply the best image in the world for surgeons to see the subject in minute detail. It’s what allows a surgeon to determine tissue, which could be the difference between the complete removal of a tumour or not. This is why, to us, nothing is more important.”
For patients all over the world who rely on technology such as the 1488 Endoscopic Camera to get the best possible outcome in the face of terrifying and potentially lethal diseases, Stryker’s commitment to the continual improvement of its products is inspiring. When an image is the main weapon a surgeon has to tackle a life-threatening tumour, clarity suddenly becomes of the utmost importance. With the 1488, Stryker has provided yet another powerful weapon doctors and surgeons can count on when saving and improving lives.
The idea of companies giving back to society is not new, but the motivations and modalities for social investment have evolved over time. Cloaked under several titles – such as ‘corporate social responsibility’, ‘sustainability’, ‘corporate accountability’, ‘sustainable development’ and ‘corporate philanthropy’ – the expectations and demands for companies to do good have intensified, even as the shape and form in which it is done remain contested.
Corporate social responsibility (CSR) is the more pervasive terminology in Ghana, and companies across different sectors have adopted CSR initiatives in their many different variations, with the mining sector being very visible in that space. Long before the government, through its regulator, outlined guidelines for carrying out CSR, the country’s largest gold producer and highest taxpayer, Gold Fields Ghana (GFG), had already been playing a leading role in defining the CSR landscape in the country.
As a new entrant into the Ghanaian mining sector in the early 1990s, GFG began its community spending on a more or less ad hoc basis, responding directly to specific community needs and requests, as and when they came up. A formalised structure of funding and carrying out community development was yet to be established, as the company was more focused on establishing and stabilising its mining operations in the country. School blocks, water, health and other infrastructure were provided for communities that were affected by mining activities or relocated as a result of mining operations. And even though such support is still provided, it now takes place within a far more structured development framework.
In 2002, GFG set up a fund to finance its community investments, and also began a process of integrating community development into the company’s business and strategic thinking. In 2004, it became the first mining company in Ghana to set up a foundation to fulfil the social development aspirations of its stakeholder communities. Today, social responsibility is a fully integrated component of the company’s operations, and central to its business philosophy. Indeed, it is as critical to get ‘social consent’ from a community for a mining operation as it is to obtain regulatory approval.
It is as critical to get ‘social consent’ from a community for a mining operation as it is to obtain regulatory approval
To ensure a regular and reliable stream of funding for the foundation’s activities, GFG donates $1 per ounce of gold produced, plus 0.5 percent of the company’s annual pre-tax profit, to the foundation. Even though this approach guarantees a steady flow of funds as long as the company is in production, it also ties the foundation’s survival to the fortunes of the company. When the company’s operations eventually cease, it is envisaged that the foundation will cease to exist unless a new identity can be developed.
GFG is aware mining operations will eventually cease once the resources in its lease area are depleted or no longer economically viable. Therefore, thinking ahead, the company is putting 10 percent of its contribution to the foundation into a separate investment instrument to be used by communities for social development – but only after the mines have reached the end of their lives. This will help the communities continue development programmes, and protect them against the impact of a sudden loss of support.
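As a rough illustration of how that funding formula plays out, consider the minimal sketch below. The production and profit figures are invented for the example; they are not GFG’s actual numbers.

# Hypothetical illustration of the foundation's funding formula described
# above: $1 per ounce of gold produced, plus 0.5 percent of annual pre-tax
# profit, with 10 percent of the contribution set aside for the separate
# post-mining investment instrument. All input figures are invented.

PER_OUNCE_DONATION = 1.00    # USD per ounce of gold produced
PROFIT_SHARE = 0.005         # 0.5 percent of annual pre-tax profit
ENDOWMENT_SHARE = 0.10       # 10 percent reserved for life after the mines

def foundation_contribution(ounces_produced, pre_tax_profit):
    total = ounces_produced * PER_OUNCE_DONATION + pre_tax_profit * PROFIT_SHARE
    endowment = total * ENDOWMENT_SHARE
    return total, endowment

# e.g. a hypothetical year with 700,000 ounces produced and $200m pre-tax profit
total, endowment = foundation_contribution(700_000, 200_000_000)
print(f"Foundation receives ${total:,.0f}; ${endowment:,.0f} is invested for the post-mining era")
# Foundation receives $1,700,000; $170,000 is invested for the post-mining era

On those invented numbers, the foundation would receive $1.7m for the year, with $170,000 banked for the period after mining ends.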
Development plan and governance
Drawing from its own past lessons and activities, as well as global best practice, Gold Fields Ghana, in 2005, created a five-year development programme for its adjacent communities in consultation with local government authorities, government agencies, chiefs, opinion leaders, and other developmental organisations. The Sustainable Community Empowerment and Economic Development (SEED) plan served as the roadmap for social development in adjacent communities, with the aim of providing necessary infrastructure and empowering communities by setting up profitable economic enterprises. With agriculture being the mainstay of several such communities, there was a focus on developing the agricultural supply chain, facilitating the setup of agribusinesses, and acting as a catalyst for the growth of cottage industries. The SEED programme was reviewed and evaluated in 2010, and subsequently extended, making adjustments to ensure an even greater community impact and greater responsiveness to emerging societal needs.
GFG considers stakeholder engagement to be critical to the maintenance of its social license to operate, and the success of its community development initiatives. It has therefore developed a Society and Community Charter that guides the company’s conduct with, and pledges its commitment to, stakeholder communities. Through the charter, GFG: builds strong relationships with community stakeholders, based on trust, open, honest and frequent engagement; ensures the company leaves an enduring, positive legacy for communities in which it operates by working with stakeholders (investors, employees and communities) to create shared value; and measures its actions and impacts on communities and the environment.
In keeping with the tenets of this charter, and to promote meaningful involvement and participation in community development, the membership of the foundation’s board of trustees includes Members of Parliament of host constituencies. The board meets quarterly to review and approve all projects and expenditures. The types and nature of the projects to be undertaken are, however, determined by the communities themselves through a comprehensive consultative process involving chiefs, opinion leaders, unit committee heads, members of the district assemblies, representatives of government agencies and, most critically, members of the respective communities.
This bottom-up approach ensures only relevant projects that address key community needs are undertaken. It also guarantees a strong community ownership and buy-in of projects and programmes.
Key development areas
About a third of the over $26m spent on community development so far has been on education; this takes the form of providing infrastructure such as school buildings, early childhood development centres, furniture, learning aids, and accommodation facilities for teachers. But beyond the ‘hardware’, the company has, under the SEED programme, also provided scholarships and bursaries to more than 1,100 beneficiaries in stakeholder communities, and sponsorship for the training of teachers. Gold Fields Ghana, for example, undertook a teacher-incentive programme in stakeholder communities, whereby it provided a 30 percent top-up of teachers’ salaries in selected schools. For the duration of this programme, there was a marked improvement in the performance of the schools, with one of the teachers receiving a National Best Teacher Award – the most prestigious award for teaching excellence in Ghana.
$1
Is donated to the foundation per ounce of gold produced
0.5%
Of annual pre-tax profits are also donated
As is typical of such communities, a significant number of youths fall outside the formal education system. The company has therefore extended its sponsorship and support beyond the formal educational sector. It enrols community members in apprenticeship programmes to acquire practical training and technical skills. To maximise the value and benefit of such programmes, the company provides tools and equipment for programme beneficiaries during training, as well as fully equipped workshops upon graduation. It also encourages programme beneficiaries to acquire professional certification in their chosen fields by paying the fees for them to take the National Vocational Training Institute examinations.
Infrastructure, qualified health personnel, and health education have been the main drivers for maintaining quality healthcare in stakeholder communities. To increase access to healthcare, GFG has ensured health facilities are within about 15 minutes’ drive of each other in the company’s stakeholder communities. Some of these facilities are equipped with modern theatres for safe child delivery, as well as accommodation for medical personnel, to attract and retain quality health professionals. Through effective communication and public education, the company has helped reduce the rate and spread of communicable diseases, and increased awareness of HIV/AIDS and other public health issues.
Access to potable water is a concern for many towns and rural communities in Ghana, and GFG has been working with local government agencies and other key stakeholders to address this challenge. Having started off with boreholes, GFG has since helped communities become beneficiaries of ‘Small Town Water Supply Systems’, which make it possible for water to be pumped, piped and distributed throughout towns and villages. All GFG’s stakeholder communities now have access to potable drinking water. The company has also instituted an annual ‘Cleanest Community’ award, which has generated healthy competition among communities, and increased genuine interest in sanitation-related issues.
GFG has so far spent about $5.5m on agriculture, in the form of: direct inputs and technical assistance to farmers; strengthening of the agriculture value chain; and development of microenterprises. Support from GFG has been extended to over 1,000 oil palm farmers, 500 livestock farmers and more than 400 members involved in micro-enterprises. These activities have improved local economies significantly, and are helping communities move from economic dependence to independence.
While the importance of electricity in growing local industry is widely known, access to power is a significant challenge. Under the Government of Ghana’s Self Help Electrification Project, communities are required to contribute towards their own electricity access by purchasing electricity poles for transmission. As its contribution towards increasing electricity access to rural communities, GFG provided poles to stakeholder communities to facilitate their connection to the national grid. The company’s road construction and rehabilitation programme has also helped improve farmers’ access to markets. And, in building community and social centres, GFG is providing facilities that promote social engagement within communities.
Shared Value initiative
Gold Fields Ghana continues to review its projects and programmes, and keeps exploring ways to make the most positive impact possible in stakeholder communities. The company is currently adopting a ‘Shared Value’ initiative, in addition to its existing SEED commitments, through which communities will play an even greater and more active role in improving their living conditions sustainably.
The company is looking at supporting the development of SMEs around the waste by-products of the mine, such as waste rocks, scrap steel, used oil, and scrap tyres. The company believes the local employment generation and economic viability of such SMEs holds considerable potential for communities. Also under consideration under the Shared Value initiative is a vocational centre of excellence that will train local youths to international vocational skill standards – they could then find employment with GFG or other companies.
In pursuing its policy of local procurement and employment, GFG also strongly encourages its suppliers and contractors to source goods and services locally, and to provide preferential employment opportunities to community members where feasible. The company regularly informs communities about the job opportunities available and the qualifications required. To fill the relevant skills gaps, it trains and certifies community members in specific job competencies, providing a skills pipeline in the communities for GFG and other extractive companies.
A typical fast food worker in the US – one of some four million – earns between $7.50 and $9 an hour. When Nancy, a long-term full-time McDonald’s employee earning $8.25 an hour – which, with her two children, put her below the US poverty line – called the company’s McResources helpline in 2013, the operator suggested she visit food banks. “Are you on SNAP?” her corporate counsellor asked. “It’s the Supplemental Nutrition Assistance Programme, or food stamps.” With her two children, Nancy would “most likely be eligible for SNAP. It’s a federal programme”. It turned out the company’s official policy was to direct employees struggling to make ends meet to federal assistance programmes.
With a single line in an already bombastic ruling, the General Counsel of the National Labour Relations Board (NLRB) potentially changed the archaic nature of US labour relations – and the lives of workers such as Nancy – irrevocably. The US, for all its talk of freedom and modernity, has a shockingly unfair and outdated set of labour laws – or rather, a lack of appropriate legislation in the area. So when workers employed by McDonald’s franchises in low-skill low-wage jobs decided to hold the multinational behemoth accountable for their shockingly low pay, it seemed unlikely they would prevail. But, in a historic decision, they did, ushering in a new era for workers’ rights in the US.
The key phrase is ‘joint employer’: a term that applies to corporate entities and potentially links them to franchisees. If the decision is ratified, it would mean McDonald’s being held jointly responsible for matters of hiring, firing, wage disputes and benefits – potentially challenging its well-established franchise model. Today, up to 90 percent of the US’s 14,000 McDonald’s restaurants are franchises.
The company’s official policy was to direct employees struggling to make ends meet to federal assistance programmes
Though it primarily concerns franchise businesses, the decision is significant because labour rulings against large, extensively represented corporations such as McDonald’s are rare, and this one is nothing short of incendiary. The case was brought by a group of workers who believed McDonald’s should be held jointly responsible for the pay and well-being of employees hired by franchise holders. Unions in the US have long argued big corporations should be at the table when it comes to collective negotiations on pay. McDonald’s and co. have hidden behind the caveat that it is franchise holders who hire, and are therefore ultimately responsible for labour disputes and issues. Essentially, McDonald’s argued staff at its restaurants across the US were subcontracted by franchise holders.
Jointly responsible
Business groups, however, have been vocal in their opposition to the ruling. They are not wrong to fear it: franchisors could suddenly be held liable for thousands of legal claims over overtime, wage and union-organising violations brought by their employees and subcontractors. Richard F Griffin Jr, the labour board’s general counsel, was nonetheless adamant there was merit in 43 of the 181 claims brought before the NLRB, which accused McDonald’s of illegally firing, threatening and penalising workers for getting involved with labour unions and other pro-labour activities. McDonald’s is likely to contest the decision.
The dispute between employees and fast food chains has been heated for some time, and the ruling by the NLRB added fuel to the fire. At the beginning of September, workers from McDonald’s, Burger King and other franchises in California, Missouri, Wisconsin and New York walked out in protest over low wages. The industrial action, coordinated by local union groups, coalitions and the pressure group Fast Food Forward, has been the biggest yet, and the ruling by the NLRB, though favourable for workers, has done little to assuage their anger. The campaign as a whole has been funded and backed by the Service Employees International Union.
“Employers like McDonald’s seek to avoid recognising the rights of their employees by claiming that they are not really their employer, despite exercising control over crucial aspects of the employment relationship,” Julius Getman, a labour law professor at the University of Texas, told The New York Times. “McDonald’s should no longer be able to hide behind its franchisees.”
The ruling referred specifically to 43 cases filed since November 2012, though 64 cases remain pending. It is now up to the plaintiffs and the defendants to settle. “The National Labour Relations Board Office of the General Counsel has investigated charges alleging McDonald’s franchisees and their franchisor, McDonald’s USA, violated the rights of employees as a result of activities surrounding employee protests,” said the NLRB statement that followed the ruling.
Liability and responsibility
McDonald’s case has always appeared rather flimsy. The very nature of a franchise is that an individual or small company buys the right or licence from a larger brand to market that company’s products or services in a specific territory. The essence of McDonald’s business is to license nearly identical restaurants selling nearly identical burgers all over the world. To then suggest that workers’ pay is the only aspect of its business that is not centralised and rigorously monitored by McDonald’s is patently ludicrous. For a McDonald’s franchisee, everything from the uniforms and the training of their workers to the way frontline staff greet diners is meticulously set out by head office. Everything apart from their pay, that is, if McDonald’s had its way.
14,000
McDonald’s outlets in the US
90%
of which are franchises
“[The] decision of the NLRB is a vital step in reminding companies that they can’t hide behind labels or a franchise system to avoid complying with our labour laws,” wrote Sarah Belton, a lawyer with Public Justice, in The New York Times. “Our country is in the middle of a critical conversation about poverty, income inequality and the rights of workers to a living wage. Integral to this dialogue is today’s focus on the bottom line that often stresses cutting costs, like labour, in the hopes of raising profits.”
If Belton’s words appear far-fetched or overly dramatic, they are not. Fast Food Forward is campaigning for the minimum wage to be raised to $15 an hour, and for workers to be allowed to join a union without fear of reprisals from employers. The wage increase would raise workers’ incomes above the poverty line but below the national average – hardly a grandiose demand from the workforce. According to the pressure group, McDonald’s own employee resources website suggested it would take roughly $12.36 an hour net to make ends meet. Fast Food Forward, however, claims the McDonald’s figure doesn’t factor in food, water and bills, and suggests workers would actually need two jobs to survive on that wage.
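To put those hourly rates in context, a quick back-of-the-envelope annualisation is sketched below. The 40-hour week and the poverty threshold are assumptions for illustration (the threshold used is the 2014 US HHS guideline for a family of three), not figures quoted by the campaign.

# Rough annualisation of the hourly wages quoted above, assuming a
# 40-hour week worked 52 weeks a year. The poverty threshold is the
# 2014 HHS guideline for a family of three (an assumption here, not
# a figure cited by Fast Food Forward).

HOURS_PER_YEAR = 40 * 52                 # 2,080 hours
POVERTY_LINE_FAMILY_OF_3 = 19_790        # USD, assumed 2014 HHS guideline

wages = {
    "Nancy's wage": 8.25,
    "McDonald's 'ends meet' estimate": 12.36,
    "Fast Food Forward's demand": 15.00,
}

for label, hourly in wages.items():
    annual = hourly * HOURS_PER_YEAR
    status = "above" if annual > POVERTY_LINE_FAMILY_OF_3 else "below"
    print(f"{label}: ${annual:,.2f} a year ({status} the assumed poverty line)")

# Nancy's wage: $17,160.00 a year (below the assumed poverty line)
# McDonald's 'ends meet' estimate: $25,708.80 a year (above the assumed poverty line)
# Fast Food Forward's demand: $31,200.00 a year (above the assumed poverty line)

On these assumptions, a full-time job at $8.25 an hour leaves a family of three several thousand dollars short of the poverty line – consistent with Nancy’s situation as described above.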
Even though McDonald’s seems to have been aware of the tragically low wages paid to its frontline workers, it has specifically chosen to hide behind the excuse that they are subcontracted by franchisees. “McDonald’s can try to hide behind its franchisees, but today’s determination by the NLRB shows there’s no two ways about it: the Golden Arches is an employer, plain and simple,” said Micah Wissinger, a lawyer who filed complaints on behalf of several McDonald’s employees in New York. “The reality is that McDonald’s requires franchisees to adhere to such regimented rules and regulations that there’s no doubt who’s really in charge.”
“Conceptually, this is a big step,” Catherine Fisk, a labour and workplace expert at UC Irvine law school, said of the NLRB counsel’s ruling to the Los Angeles Times. “It shows the general counsel is trying to adapt to the evolving nature of the restaurant business.” Though Fisk is sceptical about whether or not the ruling will lead to any change in terms of the unionisation of fast food workers, she pointed out that, more importantly, the ruling ensured that, if workers did organise, “they can at least bargain with someone who can do something”, i.e. the branded company in question.
“Allowing companies to game the system to prioritise profits comes with a cost,” wrote Belton. “American taxpayers spend over a billion dollars a year on public assistance to McDonald’s workers.”
Bad for small business
The NLRB decision is yet to be ratified, but, if it is, McDonald’s and other franchisors will be considered liable for matters of wages and benefits as a joint employer. If this standard is changed, the very nature of franchises in the US could be forced to change too, potentially jeopardising smaller businesses. If McDonald’s were forced to the negotiating table with unions over labour matters, it is likely it would pass on those costs to franchisees – often much smaller companies with proportionately smaller profit margins. And it is likely the ruling will affect franchises across a range of sectors beyond fast food.
Angelo Amador, Vice President for Labour and Work Force Policy for the National Restaurant Association, told The New York Times the decision was another example of the current administration’s quest to undermine small businesses. He said the ruling “overturns 30 years of established law regarding the franchise model in the United States, erodes the proven franchisor-franchisee relationship, and jeopardises the success of 90 percent of America’s restaurants who are independent operators or franchisees”. David French, Senior Vice President with the National Retail Federation, agrees, saying the decision confirmed the labour board was “just a government agency that serves as an adjunct for organised labour, which has fought for this decision for a number of years as a means to more easily unionise entire companies and industries”.
The reality is that McDonald’s requires franchisees to adhere to such regimented rules and regulations that there’s no doubt who’s really in charge
Another risk, detractors of the NLRB ruling are suggesting, is that, by being held accountable for labour and welfare, branded companies will back out of the traditional franchise model. “It’s a catastrophic event from a legal perspective,” Robert Cresanti, Executive Vice President of Government Relations and Public Policy at the Washington-based International Franchise Association told Business Insurance. “It shatters fundamental principles of privacy and control, and I think we’re going to have long-term repercussions from it.” There is a real danger, it seems, that this ruling will establish a precedent that will lead to franchisors becoming exposed to a number of liability issues that go beyond labour.
“This decision goes against decades of established law regarding the franchise model,” wrote Heather Smedstad, McDonald’s USA’s Senior Vice President of Human Resources, in a statement published on the company’s website. “McDonald’s does not direct or co-determine the hiring, termination, wages, hours, or any other essential terms and conditions of employment of our franchisees’ employees – which are the well-established criteria governing the definition of a ‘joint employer’.” She added McDonald’s, “as well as every other company involved in franchising, relies on these existing rules to run successful businesses as part of a system that every day creates significant employment, entrepreneurial and economic opportunities across the country”.
However, despite the cries of jubilation from labour unions and workers’ groups – and grunts of dissatisfaction from businesses – the NLRB ruling is still a small step in what will undoubtedly be a long road to more egalitarian workers’ rights in the US. Regardless of the uproar in the business sector, the unionisation of workers is not necessarily such a scary thing. Better-paid, happier workers will have more time and money to spend consuming products they might not be able to afford if they continue to be paid unfairly low wages. The workforce should be treated with the respect and dignity that human beings deserve, because they are more than just the frontline workers at your local fast food joint: they are also its biggest customers.
As the demand for resources continues to boom, the hunt for new oil and gas fields has intensified. Despite the drilling industry suffering due to recent economic contraction, demand for advanced drilling capabilities is still high and industry budgets are still going strong. With oil and gas harder to find, and national conglomerates monopolising the easily accessible oil fields, drilling has become harder and more specialised. Rigs need to go deeper, challenge higher pressures and drill in harsher environments than ever before.
Just four years after offshore drilling was brought to a near-standstill in many parts of the world following BP’s Gulf of Mexico Macondo well blowout, the offshore-rig-construction sector is growing strongly, with oil majors moving to new, harsher frontiers while also having to meet tighter safety requirements.
Speculative drilling firms use technology that is at least 20 years old – Maersk Drilling’s Chief Technical Officer, Frederik Smidth
Sustained high oil prices have made exploration and production in areas such as offshore Africa and the North Sea viable, even though the climate in these areas – particularly the latter – is rough to say the least. This puts new demands on rig specifications: rigs must be able to withstand extremely low or high temperatures, gale-force winds and rough seas. It is a fresh demand on the already formidable technology that allows drilling companies to go deeper than ever before, while maintaining some of the strictest safety requirements out there.
Spending on wells has risen significantly in recent years, reaching $43bn in 2012 and returning global drilling activity to pre-Macondo highs. Deepwater drilling in particular is poised for growth, with spending expected to reach $114bn by 2022, according to analysis by Wood Mackenzie. The projections suggest deepwater spending will grow at an overall annual rate of nine percent over the next decade. The study also noted that deepwater drillers have accounted for 41 percent of discovered volumes and created $351bn in value over the last decade, eclipsing the performance of onshore and shelf drilling. This, again, has led to significant growth in the deepwater sector, especially in the Arctic region, where licences increased by 39 percent in 2012 alone.
New specifications
However, meeting the demand for deep drilling in a harsh environment requires billions of dollars of investment, careful planning, and tens of thousands of additional workers to make it feasible. In particular, speciality, high-class rigs have come into demand. Only a few firms are able to fulfil the strict requirements in the North Sea and Arctic region, as well as provide the capability to drill at unprecedented depths.
The first firm allowed to drill in US offshore oil fields after the Macondo accident was Maersk Drilling, whose rigs already exceeded the new, strict safety regulations in the region. Maersk Drilling has never been the biggest player in the field, but has, in recent years, established itself as one of the key firms to watch. Having recently launched the world’s biggest jack-up rig, and currently working on a project that will see it take on wells with higher pressures than ever before, Maersk Drilling has become known for its innovation and high standards. The firm has been in an intense expansion phase for the past three years, aiming to double its size by 2015.
“We’ve expanded our capability in ultra deepwater and ability to operate in the North Sea,” says Maersk Drilling’s Chief Technical Officer, Frederik Smidth. “This included a dramatic upgrade of our rigs, doing a 2010 model of our 15-year-old jack-ups, and improving safety and the work environment. We’ve had a particular focus on efficient drilling. So our rigs are stronger, bigger and more efficient. This comes down to a real automation of the drilling process that ensures there’s no need for human interference for a longer period of time than before. The rigs have the top-speed of a really well-trained crew, but the difference with real automation is that we can operate at that capability several times in a day.”
The uptime and drilling efficiency of the firm’s new rigs has been maximised through dual pipe handling, which ensures that, while one drill string is working in the well bore, a second can be assembled or disassembled, reducing non-productive time. The rig’s automated drill floor features Multi Machine Control, a fully remote-operated pipe handling system that allows all standard operations to be conducted without personnel present and with improved efficiency.
Such rig specifications have changed dramatically in recent years as the industry has become increasingly competitive, with more and more players looking to find a niche in the market where they can be the top supplier. In particular, specialist firms such as Maersk Drilling have increased their focus on the equipment side of things.
“We have focused on creating full operating systems rather than specific tools,” says Smidth. “Speculative drilling firms use technology that is at least 20 years old and which focuses on the tools rather than an overall system. This makes them more dependent on the crew, and it’s less efficient and less safe.”
Strict regulation
A particular hurdle for rigs is living up to regulations in certain complicated oil fields. The North Sea oil fields are primarily under Norwegian regulation, which is among the strictest in the world. In general, drilling regulation has been tightened significantly as a reaction to the Macondo incident.
“With growing concern about environmental protection, rules and regulations for well drilling are harder and stricter,” says Indeok Kim, Marketing Manager for Samsung Heavy Industries’ drilling facility (a key rig-builder). “The outbreak of several severe oil spills has also made safety regulations more stringent. So it is inevitable to reinforce rigs’ fuel control and well-pressure maintenance capability. For instance, issues with BOPs (blowout preventers) have been highlighted in the aftermath of the Macondo accident, and they are inducing operators to require BOPs with higher technical specifications. By default, drillships need to be technically more evolved in order to be compatible with BOP requirements.”
The UK, which controls some of the oil fields in the North Sea, significantly beefed up its environmental inspections of oil rigs in the wake of the Gulf of Mexico disaster, doubling inspection staff numbers and the number of annual inspections. For the 24 drilling rigs stationed in the UK’s part of the North Sea, this meant focusing attention on safety and environmental performance, particularly for deepwater drilling rigs, which typically explore for oil in technically challenging areas where little is known about the geology. Such drilling has grown in popularity, with the UK Government agreeing to offer millions of pounds’ worth of tax breaks to oil companies seeking to develop the deep waters off the west coast of the Shetland Isles.
Norwegian regulation goes so far as to stipulate that oil and drilling rigs maintain special ‘acoustic switches’ that shut down operations completely and remotely in the case of a blowout or explosion. The Norwegian oil business has earned a strong international reputation for industrial efficiency, and environmentally benign exploration and production technology. Its regulations also insist on low CO2 emissions, in line with Norway’s policy to achieve carbon neutrality by 2030, and ensure rigs operating in the area are the most fuel-efficient in the world. This hasn’t meant less business for Norway – the world’s sixth largest oil producer – rather it’s resulted in companies working harder to gain access to the lucrative ‘elephant fields’ in the cold North.
“The North Sea is a very mature area, so we know it well,” says Smidth. “But the environment is very harsh – it’s cold, with strong winds and higher waves than anywhere else. This means that our equipment needs to be bigger and better just to stand and drill in that climate.”
Rig demand
In the past year alone, Maersk Drilling has unveiled two new ultra deepwater drillships and two new XLE jack-ups specialised for drilling in ultra harsh environments. In total, the firm will build four XLE rigs before 2016, representing a total investment of $2.6bn. These oceanic titans will have a leg length of 206.8m, making them the largest jack-ups in the world, capable of operating in water depths of up to 150m. Notably, all the rigs have been built against orders from leading oil companies, as demand for specialised rigs continues to soar. The first three jack-up rigs are being supplied by the Keppel FELS shipyard in 2014-15, and the fourth will be delivered from the Daewoo Shipbuilding and Marine Engineering shipyard in South Korea in 2016.
The latest rig to be unveiled was the XLE-2, which was transferred from the Keppel FELS shipyard in Singapore to the Norwegian North Sea in August, where it will commence a five-year contract with Det Norske Oljeselskap. The rig will be working on the Ivar Aasen field, which contains approximately 150 million barrels of oil equivalents, bringing the total estimated contract value to approximately $700m. Maersk’s investments will be quickly paid off by the queue of oil companies wanting to order multi-million dollar specialised rigs.
The natural decline of oil production and rise of Enhanced Oil Recovery will lead rig demand – Indeok Kim, Samsung Heavy Industries
Exploration, appraisal and development wells are set to increase by 150 percent, from 500 wells per year in 2012 to 1,250 by 2022. To meet this demand, an additional 95 deepwater rigs will have to be constructed between 2016 and 2022 at a cost of $65bn, according to a Wood Mackenzie report.
“From 2012 to 2013, contracts for mobile offshore drilling units (excluding jack-up rigs) reached the highest annual level ever, with 91 units on order in total,” says Kim. “Oil companies have increased capital expenditure to cover the shortage of oil production on a global scale.” He also points out that, with the increase in ultra deepwater drilling, drillships and semi-submersible rigs have been particularly popular.
Going further
For major rig-builders Keppel Corporation and Sembcorp Marine, this year has been particularly profitable. Keppel has won contracts worth $661m from Grupo R (a Mexican drilling company) to build four jack-up rigs that will be used by Pemex (Mexico’s state-owned energy company) to develop a series of prospects along the country’s coast. Pemex, the world’s seventh-biggest oil producer, is planning to invest $25bn this year alone. Similarly, Sembcorp’s sales last year rose 11.8 percent to $4.43bn.
Together, the two Singaporean companies account for about 70 percent of the market for production of jack-up rigs, and dominate the market for semi-submersibles, which float in ultra-deep water.
Smidth maintains the expertise offered by Asia’s shipyards is hard to find anywhere else, and that a strategy to become the world’s leading high-end supplier of drilling rigs means Maersk Drilling will continue to innovate. This has prompted its highly publicised partnership with BP to develop engineering designs for a new breed of advanced, deepwater, drilling rigs. The ‘20K Rigs’ will be able to operate in high-pressure and high-temperature reservoirs up to 20,000 pounds per square inch and 176°C. If the companies succeed, it will be an unprecedented drilling feat.
Competition has also started to emerge among shipyards after Daewoo Shipbuilding & Marine Engineering said earlier this year that it planned to re-enter the jack-up rig-building market for the first time since 1983. Korean and Singaporean shipyards have generally dominated the rig-building market, owing to their specialisation in deepwater and jack-up rigs. However, Chinese shipyards Shanghai Waigaoqiao and China Rongsheng Heavy Industries have also secured orders this year for at least three jack-up rigs. For Samsung Heavy Industries, this has prompted a growing focus on niche markets.
“In order to penetrate niche markets such as the Arctic region, Stena and SHI have cooperated with each other to develop an Arctic drillship that can endure harsh cold temperatures,” says Kim. “The demand in the North Sea will be fundamentally high for some time. The natural decline of oil production and rise of Enhanced Oil Recovery will lead rig demand in the region and the need for replacing aged rigs in the area is expected to sustain the demand.”
The drilling industry is poised for another banner year, albeit one of increased competition and with a growing focus on innovation that can accommodate the need for safety when drilling deeper and more dangerously than ever before. Environmental concerns have never been so great, and, in order for drilling firms and rig-builders to maintain their advantage, it will all come down to specialisation, safety and efficiency for the beasts of the sea.
The age of the petrolhead car-obsessive is, hopefully, about to come to an end. If Google has its way, the cars of the not-too-distant-future will be able to drive themselves, freeing up the occupants to kick back and relax while they’re ferried to their destinations. But if you listened to the howls of those boy-racers who are aghast at the thought of losing control of their precious motorised toys, you’d think people were about to be plunged into a joyless future where machines will send them to their deaths in motorised metal coffins.
The news that self-driving (or ‘autonomous’) cars are set to roll onto public streets in the UK next year caused a stir, with many drivers wondering how safe it will be to have a computer-controlled vehicle competing for space.
Driving enthusiasts aren’t exactly the most level-headed people. If they’re not hurtling down a motorway over the speed limit, they can usually be found clogging up the already cramped city streets, spewing noxious CO2 emissions into the atmosphere, and bellowing about how it’s their right to have their own chariot take them to work. Ludicrously, their main argument against autonomous cars seems to be about safety; something they have little regard for when bending almost all rules of the road.
You’d think people were about to be plunged into a joyless future where machines will send them to their deaths in motorised metal coffins
While some of the protests suggest autonomous cars will be set loose on our streets without any thought for safety, the reality is quite different. The introduction in the UK will involve tests in certain cities for periods lasting between 18 and 36 months, and they will only include a small number of cars. In the US, states including California and Nevada have all signed up to test driverless cars, while Japan and Sweden have also undertaken extensive testing of the technology.
Google’s tests have so far seen more than 700,000 miles driven – 1,000 of which were on difficult terrain and awkward roads – and only one reported accident has occurred. Unsurprisingly, that happened when a human was driving.
The reality is millions have been invested by some of the world’s leading technology companies into developing a safe, reliable and efficient way of transporting people around that takes the primary cause of accidents – human error – out of the equation: the UK experiences five road deaths related to driver error every day.
With autonomous cars, issues around inexperienced and careless drivers, speeding and drink driving will all be eliminated. Safety will be determined by the ability of a highly advanced computer to judge its surroundings. The driver will be free to put their feet up, read a book, watch a film, or even do some work – reclaiming all those wasted hours of the day that could instead be spent driving the economy.
Technology has come a long way over the last few decades. We can safely assume that scientists who have devised a way of using satellites to pinpoint where you are on a map are more than capable of developing cars that are just as aware of their surroundings as any human driver. While the joys of driving slightly over the speed limit will be taken out of the gripped hands of petrolheads and given over to an emotionless computer, the lives saved as a result are surely worth the sacrifice.
There is one last reason self-driving cars cannot hit the roads soon enough: there are many in the world incapable of driving, be it for health reasons, age or incompetence – myself included. We should not be excluded from the outside world. Finally, the stigma of not being able to drive to the shops will be a thing of the past.
Consumers are right to relish the thought of turning their daily commute into a real head start on the day. Imagine those many, many mornings and nights spent with both hands on the wheel, staring into the all-too-familiar abyss of rush hour traffic: now imagine using them instead to do whatever you want. Except – and this is key – your safety rests on the ability of a computer program to prevent your self-driving car running a red light, speeding along school roads, or plunging you headlong off the side of a cliff.
The prospect of freeing up time and eliminating traffic casualties obviously makes for attractive reading, but it appears that, amid the haze of early development excitement, many have lost sight of what could just as easily make the technology so problematic, if not life-threatening.
The single most frequently cited advantage of autonomous vehicles is reduced road traffic casualties: that unwitting pedestrians would, in theory, no longer fall foul of drink drivers, tiredness, or any number of human errors. However, the theory is not without its limitations, and the moment a self-driving car runs a red light or clocks a couple of mph over the limit, the industry will find itself forced to contend with wave-upon-wave of legal questions, all of them asking: ‘Who is to blame?’
Without a designated driver to speak of, the issue of who will be held responsible for an accident looks likely to throw the industry into disarray. Whether the owner, manufacturer or algorithm is to blame for any wrongdoing is a question few people – if any – are qualified to answer. What’s even more disconcerting is that accidents cannot be so easily attributed to computers once lives are at stake. The reassurance that no accidents have been caused by automated cars so far does not change the fact that no technology is immune to flaws.
Your safety rests on the ability of a computer program to prevent your self-driving car plunging you headlong off a cliff
Most worrying of all is that California passed a bill last year paving the way for autonomous cars without addressing the issue of liability in any depth whatsoever. When pressed on the issue of who would get the blame in the event of any wrongdoing, California Governor Jerry Brown retorted: “I don’t know – whoever owns the car, I would think,” before following up with: “That will be the easiest thing to work out.” Google’s Sergey Brin, who was also in attendance, was similarly flippant about the potential risks. “Self-driving cars don’t run red lights,” he said quite simply.
Aside from the issue of accountability, there’s also the question of ethics, and whether a computer is capable of weighing up what’s more important when it comes to making potentially life-threatening decisions. The problem in the main is that a situation could arise in which a computer-driven car is forced to choose between two dangerous actions, and, in that moment, must decide which is the less costly of the two. Whereas a human being is likely to factor ethical considerations into the mix, a computer can surely only opt for the more logical of the two. Without a human to guide it, an autonomous car is essentially a machine, programmed to either protect the driver’s best interests at all costs or else limit damages as much as possible – even if that means putting the driver at risk.
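To see why that matters, consider a deliberately oversimplified, hypothetical sketch of a cost-minimising controller. Everything in it – the action names, the cost numbers, the structure – is invented for illustration; real autonomous-vehicle software is vastly more complex.

# A hypothetical, highly simplified sketch of the dilemma described above:
# a controller that only minimises a numeric cost has no notion of ethics
# beyond whatever its programmers baked into the cost function.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_cost: float  # scalar estimate of expected harm or damage

def choose(actions):
    # The machine simply picks the numerically cheaper option; any moral
    # weighting exists only if it was encoded in expected_cost beforehand.
    return min(actions, key=lambda a: a.expected_cost)

options = [
    Action("swerve off the road (risk to occupant)", expected_cost=0.7),
    Action("brake hard (risk to following vehicle)", expected_cost=0.4),
]
print(choose(options).name)  # -> brake hard (risk to following vehicle)

Whoever assigns those cost numbers is, in effect, answering the ethical question in advance – which is precisely the accountability problem the industry has yet to resolve.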
The technology is still in its infancy, but what’s clear already is that there are far greater challenges than cost and technological complexities if those in the industry are to avert what looks increasingly as if it will be an inevitable disaster.
After decades of civil war, Angola has become one of the fastest-growing economies in the world. But the country still faces great challenges if it is to achieve sustainable growth and improve the living conditions of its population. As Pedro Ferreira Neto, Founder and Group CEO of Eaglestone, an investment banking firm focused on sub-Saharan Africa, points out: “Angola has undertaken a challenging but successful development path in the last decade, but one also finds that a lot remains to be done in terms of improving infrastructures, human capital, the business climate and the social conditions of the population. However, I am sure Angolans will make it, and we are enthusiastic about contributing to the development of the country”.
Angola has undertaken a challenging but successful development path in the last decade
Following strong efforts to rebuild a country devastated by decades of war, Angola took advantage of the oil boom in the mid-2000s to rebuild its infrastructure and enhance overall economic and social development. A strong performance in both the oil and non-oil sectors (mostly construction and retail) has allowed per capita income to surpass $5,000, well above the average for sub-Saharan African countries.
The country currently has a positive fiscal balance and low public debt. On top of this, it has a more comfortable level of international reserves as well as a stable exchange rate, which helped lower inflation to single digits for the first time in over two decades.
Government support
Tiago Dionisio, Head of Research at Eaglestone, notes the Angolan authorities are now focused on implementing a development strategy that will ensure stability, growth and employment in the long run. The “Angola 2025” government development plan is targeted at reducing poverty levels and improving the literacy and skill levels of the local population. The National Development Plan 2013-17 is part of the Angola 2025 vision and aims to promote economic diversification. Authorities hope it will help double the number of investment projects approved every year by ANIP, the national private investment agency, and create many jobs.
Government officials are seemingly committed to promoting the recovery of national production, including the revitalisation of the agricultural sector in order to reduce the country’s reliance on imported goods. High quality soil and a good water supply make farming a valuable industry for Angola, accounting for 11 percent of GDP and 70 percent of total employment. In 2013, farm output grew 8.6 percent, mostly through strong growth in cereal production. However, agriculture in general is held back by limited competition and processing.
Angola’s first sovereign wealth fund, the Fundo Soberano de Angola (FSDEA), is now operational, with a $5bn capital allocation. Headquartered in Luanda, the FSDEA investment strategy envisages the establishment of a diversified portfolio that optimises expected returns in direct relation to the forecasted long-term risk. The chairman of the board of directors of FSDEA, José Filomeno dos Santos, said the fund will ensure the allocation of approximately $2.5bn to alternative investments in the agriculture, mining, infrastructure, hospitality and real estate sectors. These large investments across Angola and other African markets will, according to Santos, “nurture sustainable domestic and regional growth”.
The Government of Angola is also trying to stimulate local business entrepreneurship by promoting co-investments through a public venture capital fund, Fundo Activo de Capital de Risco Angolano, that invests in Angolan SMEs.
Nevertheless, doing business in the country remains challenging: Angola ranks 179th out of 189 economies in the World Bank’s Doing Business report for 2014, below many other Sub-Saharan African countries. Manuel Reis, Co-founder of Eaglestone and its Country Manager for Angola, highlights that private sector development is hindered by high logistics, utilities and operational costs, with Luanda ranking as one of the most expensive cities in the world. Despite the improvements seen in the last decade, unreliable electricity, poor transportation networks and weak human capital across the country contribute to an increase in the cost of local production and to a weak business environment.
Continuing challenges
Local companies often note that access to financial services remains limited and that it is a key impediment to their business activity. This is partially explained by the complex administrative procedures involved in opening bank accounts and obtaining loans. Banks claim it is difficult to identify borrowers with good credit risk due to a lack of reliable accounting principles and credit information. Weak contract enforcement and a lack of collateral are also key hurdles to further credit expansion.
Reis believes the succession of President José Eduardo dos Santos is being carefully addressed in order to ensure political stability and a smooth transition, with Vice President Manuel Vicente (former head of oil company Sonangol) being referred to as a likely candidate to succeed. The adoption of a new constitution in January 2010 has established a presidential parliamentary system whereby the President is elected for a five-year period, with a limit of two presidential terms.
Despite the challenges, the Angolan economy is firmly headed in the right direction, so one should be optimistic about the prospects of many Angolans rising out of poverty.
Ello’s main selling point is that it doesn’t use ads, and that it won’t sell its users’ personal data on to third parties. “We believe in beauty, simplicity and transparency… We believe a social network can be a tool for empowerment. Not a tool to deceive, coerce and manipulate – but a place to connect, create and celebrate life. You are not a product”, reads an excerpt from its profoundly eloquent manifesto. That’s all well and good, but so far, a safe haven from advertisers and a minimalist, not particularly user-friendly design is all Ello has to offer. That’s not much of a USP just yet; plenty of sites don’t use adverts. Even Facebook didn’t for years. In fact, Ello’s story up to this point echoes Facebook’s: a small, private network open only to a close circle of friends, until its following is big enough to take public. With this in mind, it’s possible that Budnitz et al might follow in Zuckerberg’s footsteps, waiting on the support of the masses before adjusting their business model to maximise profitability.
The idea that this relatively small, design-focused site could lead Facebook’s 1.2bn users astray seems slightly absurd
The idea that this relatively small, design-focused site could lead Facebook’s 1.2bn users astray seems slightly absurd. At present, there are approximately 300 social networks out there. Most internet users are signed up to a select handful, but Facebook is indisputably the most popular. Despite the endless and ever-growing list of privacy accusations lodged against it, more than 750m of us continue to log on every day. As it turns out, the public is not too concerned about privacy violations and a barrage of adverts interrupting their daily lives – at least, not concerned enough to close their accounts and say goodbye forever. Facebook’s success is something Caterina Presi, Senior Teaching Fellow at Leeds University Business School, calls “the advantage of critical mass – or to put it simply, ‘all my friends are there’”.
Despite this, Ello has generated a lot of noise, and that is deserving of recognition. The main issue is how it will be in any way profitable without using advertisements or selling its users’ data. Plans for a freemium model have been mentioned – like that used by LinkedIn, where basic membership is free to all, but a few cosmetic features are available for a small fee. It could work, but most users are unlikely to pay for something they could get for free elsewhere. A social network is only as good as its user base, and the best way to bring individuals on board is to offer the service for free – the exact opposite of Ello’s proposal. Advertising is used on platforms simply because it is a tried and tested stream of revenue that does not necessarily come at the cost of the consumer.
Budnitz denies any financial motivation at this point, but as the site grows in popularity, costs will quickly add up. Facebook expects to spend $2bn-$2.5bn on capital expenditure this year alone. If sufficient network infrastructure funding is not available, the quality of the service will begin to deteriorate, and its members will return to what they know. Support currently comes from venture capital – specifically Vermont firm FreshTracks, whose partners usually expect to see a fairly swift return on their investment.
If Budnitz and his team were to successfully derail Facebook, the next concern is what this would mean for the brands currently capitalising on social media advertising. In terms of results, it’s more lucrative than its backlash might suggest – Edison Research’s 2014 Social Habit survey revealed that 76 percent of respondents considered Facebook to be the only social network they would use to connect with brands and products.
An ad-free network does not have to be a brand-free network, and could result in a forced shift to less conventional methods of connecting with target audiences beyond the banner ad. Plenty of brands have already signed up, including Netflix, Sonos and Budnitz’s own cycle company, but the difference is that Ello users are given the choice. If you don’t want to see updates from a brand, don’t follow it, and you won’t have to.
“It will help brands to think of their customers in co-creative and empowerment terms with greater urgency”, said Presi. “It starts with listening. I mean really listening. And then it is giving tools to your customers to express themselves and their love for your brand more meaningfully through social media. Give them something to stimulate their creativity, something that is difficult enough to feel challenged by, let them make it their own.”
Advertisers needn’t lose sleep over Ello. Like so many before it, its perceived exclusivity has stimulated the curiosity of the public, and that novelty will inevitably wear off. Instead, they should take it as an opportunity to immerse themselves in the conversation, rather than interrupt it.
After seven years spent on table tops and store shelves as the ‘latest thing’, the last generation of gaming consoles was left to grow a little too long in the tooth. Any sniff of significance the PlayStation 3 and Xbox 360 might have had at the beginning of the cycle waned as the years wore on, as diversification, dematerialisation and the migration of gaming from the lounge chair to the mobile device reshaped the console business.
$28.9bn
Value of console gaming in 2008
$18.3bn
Value of console gaming in 2013
It was to an all-too-familiar sigh of exasperation, then, that industry rivals Sony and Microsoft revealed their next generation of consoles would arrive without backwards compatibility, condemning an entire back catalogue of games to bargain bins and basements the world over. Citing technical limitations, the console-makers explained to disappointed gamers across the globe that the new PlayStation 4 and Xbox One would not be the all-in-one entertainment hubs they’d so desperately hoped for.
Industry commentators nudged shoulders and wore all-too-knowing smiles, safe in the knowledge their predictions about console gaming’s imminent demise were spot on. Already riled by the rising popularity of mobile gaming, waning video game sales and an increasingly uninterested consumer, the market for console gaming looked to be on its last legs. The release of a new machine, it seemed, could bring nothing but disastrous consequences for all involved.
In an era when path-breaking advances have been made in the mobile sector, and PC gamers have gravitated towards digital distribution channels such as Steam, consoles have by and large failed to keep pace with the transformation. “The console market isn’t dying, but it isn’t growing either, which might as well mean the same thing,” says James McQuivey, Vice President and Principal Analyst at Forrester. Down from an all-time high of $28.9bn back in 2008, the console gaming industry was last year valued at $18.3bn, and sales throughout even the busiest spells were slow. Once a household staple, the games console – and what it represents – has turned from a family-orientated affair into an enthusiast’s paradise.
“Game consoles haven’t fallen yet, but we’ll see that this most recent generation of consoles will never sell as much as the generation before it,” says McQuivey. “50 million each for Sony and Microsoft – or more like 60 million for Sony and 40 million for Microsoft – is still a business, and a worthwhile one. But not one that Microsoft or Sony will value enough to care about building the generation that follows it.”
Manufacture and development take up a large chunk of spending on consoles
Digital shift
However, there is one figure that stands apart from the rest and represents a sign of more positive things to come for consoles. Amid the written proof of falling profits and dwindling customer numbers is one hopeful statistic: annual sales of digitally distributed console games exceeded those of physical copies for the first time. No longer the disc-dominated marketplace it once was, the console gaming industry has thrust its way firmly into the age of digital distribution, and it is in this area that console players will focus their attentions.
One report, compiled by the Entertainment Software Association, shows digitally distributed games accounted for 53 percent of global sales in 2013, comparing favourably with the 41/59 and 32/68 digital/disc splits of 2012 and 2011 respectively. And while the drive to digital has long been a feature of the PC gaming market, it is only in recent years that console gamers have opted for digital downloads ahead of discs, leading to a seismic shift in industry models.
For one, the emergence of digital distribution means the opportunities for smaller developers are far greater than they have been historically, and studios need no longer plough millions of dollars into big budget titles in order to turn a profit. Small, independent developers have become a staple of the new digital-centric console gaming scene, calling into question the ‘go big or go home’ mentality of the industry.
The effects of this new digital-first strategy could be seen at work towards the beginning of the year, when Ken Levine, creative director and co-founder of the award-winning and multi-million-selling studio Irrational Games, disbanded the company to capitalise on all things digital. “I am winding down Irrational Games as you know it,” wrote Levine in a public letter, marking the end of his 17-year spell at the company. “To meet the challenge ahead, I need to refocus my energy on a smaller team with a flatter structure and a more direct relationship with gamers. In many ways, it will be a return to how we started: a small team making games for the core gaming audience.”
Levine represents just one in a long line of major industry names inspired by the new opportunities afforded by the drive to digital. “We will focus exclusively on content delivered digitally,” wrote Levine, and so too should the rest of the industry if it is to weather the throes of a digital revolution in console gaming.
Cloud gaming
The dematerialisation of console gaming has led many to speculate about what the future may hold for the sector, and, indeed, whether there may be a future for dedicated machines at all. Perhaps the biggest change – and in many ways threat – to befall consoles is cloud gaming.
Although both Sony and Microsoft insisted their consoles would not be backwards compatible at launch, a solution to the shortfall has since presented itself: Sony will soon offer an entire back catalogue of games on a new streaming service it calls PlayStation Now. The company’s new service makes available a host of selected PlayStation 3, PlayStation 2 and even PlayStation 1 titles on either a per-game or subscription basis. Not only has this development appeased previously disappointed gamers, it has also addressed a number of lingering inefficiencies in the market and – perhaps inadvertently – paved the way for a console-free future. Should streaming services such as PlayStation Now really take off, we could see a situation where the cloud supersedes physical and even digital purchases as the next gaming frontier.
Digital/disc split in video game spending (%)
32|68
2011
41|59
2012
53|47
2013
It’s no secret that manufacture and development take up a large chunk of spending on consoles: it’s usually some way into a console’s life cycle that its creators begin to turn a profit. The concept of cloud gaming is an attractive one for developers in that it allows them to update their games on the fly, instead of reconfiguring (and in many instances overhauling) game engines and mechanics for a new generation of consoles every few years. The potential cost savings of the cloud could even align the costs of console gaming with its cheaper mobile and PC counterparts, broadening its commercial appeal and freeing the sector from too-big-to-fail projects. Cloud gaming could bring a greater number of consumers to console gaming, as the financial barriers to entry – which have, in recent years, made it an enthusiast’s paradise – are broken down.
The development is not all plain sailing, however, and there are still a number of challenges for the industry to overcome. For one, much as with the music industry’s shift to digital, buying a digital copy of a game does not mean the user necessarily owns it, so ‘owners’ cannot sell or share any purchases they make.
By far the largest obstacle, however, is the reliance on broadband speed ahead of hardware. The connection speed required to stream a game far outstrips what is required to run a music track or film. The sad fact is that, without a reliable broadband connection, users will be unable to get on board with cloud gaming. The success of the development hinges on internet infrastructure, especially in rural areas and developing markets, where capacity often falls short of demand. The data usage associated with streaming – not just with games but any media – poses another potential problem for consumers, in that bandwidth caps could impose additional fees on those using streaming services such as PlayStation Now, Spotify and Netflix.
Should users overcome these issues, however, the emergence of cloud gaming will mark a significant turning point for the industry; gaming will no longer be a product, but a service.
The Netflix model
The so-called ‘Netflix model’ has inspired admiring observers to try their hand at trimming costs by signing deals with developers and offering a subscription price. However, the vast majority of those attempting to replicate the model – Sony included – fall down in one key area: whereas Netflix succeeded early in putting pen to paper with a number of major industry names and then pumped its profits into new and more expensive deals, Sony, put plainly, does not have the financial clout to bring developers on board for a similarly low-cost service.
The subscription price for PlayStation Now has not yet been confirmed, and the marketplace at present offers only limited rentals on a per title basis. In all likelihood, the subscription price will come on top of PlayStation’s online service, PlayStation Plus, and, if current rental prices are anything to go by, the fee is likely to be on the steep side.
Critics of PlayStation’s cloud gaming endeavour have been quick to point out that the rental rates offer little in the way of value when compared with a physical copy of the game. Popular first-person shooter Killzone 3, for example, is available on PlayStation Now at a price of $2.99 for four hours, while a physical copy costs just $4.99 on GameStop. This pricing imbalance is consistent across almost the entire catalogue.
Claims that PlayStation Now equates to a ‘Netflix of games’ are both premature and ill-advised. At present, the streaming service succeeds only in offering gamers a chance to experience titles they otherwise lack the hardware to play. Streaming represents a departure from discs but, without the cost benefits, the service benefits only a narrow cross-section of consumers.
Running out of lives
The rise of cloud gaming poses a threat not only to retailers whose business model rests on selling used products, but also to more traditional game sellers, whose margins have been squeezed in recent years and whose demise has been spared only by the additional profits that come by way of used game sales. Clearly, retailers still retain a distinct advantage over their cloud-based competitors for the time being – though for how long remains to be seen.
“Cloud will win – it’s winning in every other medium as well, from Spotify to Kindle to Netflix,” says McQuivey. “Mobile gaming has already proven that cloud wins and consoles will follow soon, once the network is fast enough to compensate for the extra burden console-quality games bring in terms of storage and bandwidth. But because cloud-based interactions are already integral to gaming, we can already see where gaming will be a few years from now: in the cloud. But remember, people still spend billions each year on CDs and even DVDs; the same will be true for games, at least for a few years. Until, suddenly, it isn’t. Second-hand retailers have at least three years to adapt, then they’ll find themselves about as relevant as Barnes & Noble.”
The fact remains that cloud gaming is still in the very early stages of development, and offers only a glimpse of where console gaming is headed. However, should latency cease to be an issue, the industry will be left with no option but to abandon dedicated machines and embrace the many benefits that accompany streaming. Then, and only then, will the issue of inadequate infrastructure reveal itself in full, leaving connectivity, or lack thereof, as by far the biggest obstacle to growth in a much-changed gaming market.
The devastation caused by freak weather phenomena such as Typhoon Haiyan and Hurricane Sandy has made authorities focus on the urgency of improving climate resiliency in major coastal and low-lying cities. With communities exposed to the fury of extreme weather and rising seas, officials are faced with the task of overhauling centuries-old infrastructure. Many have, up until now, assumed our sewers and technological developments would be enough to keep urban populations safe from extreme weather. But, as recent years’ catastrophic weather has proven, the majority of cities are in need of serious upgrades.
On July 2, 2011, a cloudburst inundated Denmark’s capital, Copenhagen, with 135mm of rain in less than three hours, flooding basements, streets and major roads. According to the City of Copenhagen, the deluge caused $1.04bn worth of damage. To make matters worse, the downpour came less than a year after a similar cloudburst flooded a large part of Copenhagen’s suburbs and caused irreversible damage to homes and infrastructure.
The experiences have left a mark, with 45 percent of Danes fearing damage from future downpours, and 61 percent of Copenhageners having experienced water damage to their property. Having witnessed millions of dollars’ worth of cars, shop contents and personal effects float around in lakes of sewage water – formed by the city’s low-lying roads, parks and pavements – Danish politicians have been forced to address one of the world’s major challenges: climate change and the havoc it could wreak on our way of life.
Sewers are normally not visible, but that will change – Lykke Leonardsen, Head of the City of Copenhagen’s Climate Unit
Climate management
The City of Copenhagen created a Climate Adaptation Plan, which will bolster the city’s defences against water, wind and temperature. “We know Copenhagen is going to experience climate changes and we conducted a series of studies that could give us an overview of the impact they will have on our city,” says Lykke Leonardsen, Head of the City of Copenhagen’s Climate Unit. “We did a risk assessment so we could match our city’s development with a plan to handle these issues.”
The intense flooding in 2011 prompted the city’s politicians and authorities to approve a cloudburst management plan, which, together with a water catchment plan, makes up the main part of the climate adaptation strategy. Changes will include the building of dikes and more localised management of storm water. There will also be warning systems for rain, as well as waterproof cellars, while urban areas will be adapted to store rainwater with minimal damage.
“The water catchment plans have been subject to public hearings because, in some areas, we’re creating water boulevards that can carry water away when there’s excess rain,” says Leonardsen. “We’re putting in a new layer of infrastructure and changing the way the city looks. For instance, sewers are normally not visible, but that will change, and obviously we needed to discuss that with the local community.”
As part of the plan, Copenhagen will be supplementing existing sewers with an overground water system that combines landscape architecture with water boulevards, and massive water storage in parks and football fields when necessary.
“We don’t want to fill up our sewers with rainwater and we need to keep dirty, polluted water in the sewerage system,” says Leonardsen. “By leading water into lakes, streams or the harbour, we’re trying to delay the flood and make sure the water above ground is clean. We had one fatality as a result of the cloudburst in 2011 and it was caused by a disease stemming from rat urine in the sewage water that had reached our cellars and roads. This is why it’s necessary for us to implement a citywide cloudburst management system.”
The downpours that flooded Copenhagen in recent summers are likely to be repeated across Europe. Meteorologists expect precipitation will increase 25 to 55 percent in the winter, and summer downpours will be 30 to 40 percent heavier and spaced further apart. Current sewers cannot meet such future increases in water, and storm surges from the sea are a growing threat for coastal cities such as Rotterdam, Hamburg and New York. The Copenhagen City Council has therefore decreed the new infrastructure must limit floods to an average of once every 10 years.
$1.04bn
Damage to Copenhagen from a single storm in 2011
Learning lessons
The cost of installing new drains throughout Copenhagen is estimated to be $1.73-2.6bn, and an additional $521-868m to separate rain and wastewater in individual dwellings. For this reason, planners have recommended managing rainwater locally instead of guiding it to sewers. The estimated investment for the overground solution is $868m and includes simple measures such as replacing concrete or tiles in courtyards and squares with grass and trees. Without action, damage to buildings and infrastructure, combined with lost earnings from storm surges and floods, could total as much as $3.5bn over the next 100 years.
“What we’re adapting to is something that happens every 10 years now, but will become an annual event in the future,” says Leonardsen. “The frequency and intensity of cloudbursts will only go up, and we need to spare the city from a lot of costs and damages related to that. We can’t gauge how bad future climate change will be, so we need flexible solutions for an unknown future.”
Leonardsen adds that other cities can learn from the Danish capital: “Study what happened to Copenhagen during the cloudburst or New York during Hurricane Sandy. You think your city is well functioning, but it is actually quite vulnerable to climate change and life can quickly grind to a halt.”
Copenhagen’s green development system to manage water is the first to incorporate an entire city. It will take up to 30 years to implement all the measures outlined, but it is clear this is an investment worth making. It will not only secure the city for years to come, but also provide its inhabitants with a slew of green, open spaces.
Clean water is something most of us take for granted yet, for many – particularly those living in developing countries – it is one of the most sought-after resources. Concerns about the depletion of global water resources have grown rapidly in the past decade; it has been reported that almost half of the world’s population risks a life without clean water by the year 2025. In addition, a major part of the developing world suffers from cataract blindness and does not receive basic immunisation against common diseases. This has prompted a strengthening focus on how innovations can provide clean water and basic health treatment to the poorest parts of the world.
Since only 2.5 percent of the Earth’s water is fresh, research efforts have been focused on the desalination of seawater, and the reuse and cleansing of wastewater to meet demands from agriculture, industry and domestic sectors. A key actor in this arena is Ingenuity Lab in Alberta. The organisation is Canada’s leading nanotechnology accelerator; a government-funded initiative working to create a bio-enabled, globally competitive and value-added industry while training the next generation of researchers and innovators.
While much progress has been made with the invention and development of membrane desalination techniques in the past 50 years, it is clear further advancements are necessary. In particular, the needs for better desalination and purification performance, as well as lower operating costs, are high on the agendas of researchers and consumers alike. Organisations such as Ingenuity Lab are working on developing state-of-the-art technologies that will address these challenges.
“Ingenuity Lab is developing technology that looks at not only purifying water resources that aren’t currently accessible for human and agricultural consumption,” says Dr Carlo Montemagno, Professor in the Department of Chemical and Materials Engineering at the University of Alberta, Canada Research Chair in Intelligent Nanosystems, and Director of Ingenuity Lab. “If you were able to remove or partially remove the salt and contamination, you’d be able to irrigate large areas of land that currently can’t be used for food production. In addition to making low-cost, clean water for people to consume, we’re looking at making the entire water resource base economically available, so we can improve the prosperity of individuals in developing nations where water resources are a significant challenge. The idea is also to purify contaminated downstream water from farms and make it economically viable for crop production.”
Clean water
Agriculture is the dominant means of land use in several countries, and the contamination of soil and water by pesticides and ammonia compounds from fertilisers and livestock waste is an ongoing problem. Ingenuity Lab is developing enhanced aquaporin-based water purification membranes with the power to reclaim nitrogen from soil runoff and thereby increase the environmental efficiency of the agricultural industry.
“The current purification technology, or filters, have holes of a given size that filter out contaminants based on the size difference between water molecules and contaminants,” explains Dr Montemagno. “This has a core limitation that defines the energy needed for moving materials through holes of a given size. Our technology has holes which, in addition to being smaller than many contaminants, are also mapped to the water molecule itself, so it selects the water molecule and can separate much smaller molecules, such as ions and protons. The end result is that you have a much more efficient mechanism of getting pure water than you would using other techniques. Less energy, higher through-put, smaller footprint.”
Because this new technology is more efficient and cost-effective, it will be possible to implement it in developing countries with a smaller capital investment. It is crucial to create technology that is cost-effective for the developing world, where the ready availability of drinking water could significantly reduce sickness, eliminate the long journeys people must make on foot to find water sources, and provide dry and unfertile lands with irrigation.
Fighting blindness
Ingenuity Lab is also focusing on improving global health standards by rethinking approaches to large-scale issues such as cataracts. Leading to a clouding of the lens of the eye, cataracts affect more than 20 million people worldwide and are responsible for 51 percent of cases of blindness. What’s more, the World Health Organisation (WHO) expects the number of people with severely reduced vision (of 3/60 or worse) as a result of cataracts to reach 40 million by 2020. Cataracts are the leading cause of blindness in developing nations and have a profound impact on the quality of life of patients and their families.
Cataract formation also carries a significant economic burden, as it can only be treated through expensive surgery and often results in the loss of work. For people in developing countries – where eye diseases occur more often due to malnutrition, limited health and education services, poor water quality and lack of sanitation – surgical cataract treatment is often far too costly for an individual.
“Cataract lenses cost in excess of a year’s salary for those in developing nations and, as a result, a lot of people don’t get surgery and become dependent on their family due to a disease that is actually treatable,” says Dr Montemagno.
“Our technology will treat the disease chemically instead of surgically, through a simple eye drop, so we believe that we can make the disease treatable and at a cost appropriate for the developing world.”
Easy vaccines
For the majority of people in the developed world, something like the flu is not necessarily considered a killer and we rarely have to fear diseases such as hepatitis or polio because widespread immunisation programmes have eliminated their spread. However, worldwide, influenza epidemics cause three to five million cases of severe illness each year, and about 250,000 to 500,000 deaths, costing more than $80bn a year in the US alone. Problematically, the vaccines the developed world uses against the flu are even more costly to administer in the developing nations where they are most needed. This is why WHO is looking at making influenza vaccines part of the recommended worldwide immunisation programmes. It is also why the development of a cheaper and easier alternative to standard vaccination is necessary.
“Normal vaccines need a significant infrastructure in order to be administered. You need a refrigerator for storing the vaccine, needles and syringes must be kept sterile, and they have to be administered by a healthcare professional,” explains Dr Montemagno. “This process is extremely costly for developing nations, so we’re developing a room temperature vaccine that can be administered via an easy-to-take pill.”
$80bn
Annual cost of influenza in the US
In developed countries, influenza has also been treated via nasal spray vaccines, which have a better immune response. However, such nasal vaccines have been taken off the market following safety concerns – including a possible link to Bell’s palsy, a form of facial paralysis. This has increased the need for a simple, cheap and safe alternative to vaccination via needle.
Being able to ship pills via normal shipping channels means oral vaccines will be easier to deliver, and could be stored and produced like any other major product. Dr Montemagno expects the oral vaccine to undergo animal studies in the next three to four months, with human trials and rollout to follow in the near term once regulatory approvals are in place.
Such innovation is why The New Economy is proud to announce Ingenuity Lab as the winner of its award for Best Nanotechnology Research Organisation, 2014. It continues to forge the way for inventions that could have a significant impact on the developing and developed world, cutting healthcare costs, preventing the further spread of diseases and supporting a generally healthier population.
Following a bout of sustained pressure from activist investor Carl Icahn at the turn of 2014, eBay and PayPal have announced plans to separate and form two “independent publicly traded companies” in the second half of 2015. The split will see eBay’s current CEO John Donahoe step down and make way for Devin Wenig, President of eBay Marketplaces, with American Express executive Dan Schulman joining PayPal as President and CEO once the deal takes place.
“eBay and PayPal are two great businesses with leading global positions in commerce and payments,” wrote Donahoe in a statement. “For more than a decade eBay and PayPal have mutually benefited from being part of one company, creating substantial shareholder value.”
Whereas eBay’s revenue increased 10 percent this past year, PayPal’s increased almost twice that
The decision to strengthen PayPal’s focus in the standalone e-payments field has arrived shortly after the announcement of Apple Pay and Alibaba’s Alipay systems, which together represent a potent threat to PayPal’s market dominance. With an active user base of 152 million, PayPal boasts a presence in 203 markets and this year expects to process one billion payments. Whereas eBay’s revenue increased 10 percent this past year, PayPal’s increased almost twice that, at 19 percent.
The rate at which PayPal is growing ahead of eBay was the principal reason cited by Icahn earlier this year as to why the two entities should consider parting ways. “We are happy that eBay’s board and management have acted responsibly concerning the separation – perhaps a little later than they should have, but earlier than we expected,” wrote Icahn in a statement. “As I have said in the past and have continued to maintain, it is almost a ‘no brainer’ that these companies should be separated to increase the value of these great assets and thus to meaningfully enhance value for all shareholders.”
eBay’s board and management echoed Icahn’s stance on the matter, with Donahoe writing: “putting eBay and PayPal on independent paths in 2015 is best for each business and will create additional value for our shareholders.” Post-split, both companies will be more focused and able to engage with new market opportunities and challenges in each of their respective fields.
When Jill Whalen, one of the most respected commentators in search engine optimisation (SEO), announced she was stepping away from SEO, it shocked many in the business. Whalen, a veteran of almost 20 years, had been one of the pioneers of SEO. She had previously self-identified as “the voice of reason” in the SEO industry, and had been one of its most active developers. And the reason this SEO behemoth decided to give it all up? “Google now works,” she said.
“When I first started writing and speaking about SEO back in the 20th century, there was no voice of reason in the industry,” she wrote in her “Leaving SEO” post on her blog What Did You Do with Jill?. “Everything written about SEO was based on the latest techniques for tricking the search engines into ranking your site above your competitors’.” Whalen has always maintained SEO was more than a business for her: it was a passion. She enjoyed finding ways around the problems posed by Google algorithms that stubbornly refused to cooperate. “Yet, I knew from experience that the real secret to SEO was not about tricks but about making your site the best it could be for your users while keeping the search engines in mind.”
90%
Google searches affected by the Panda updates
88%
Of businesses combine SEO with content marketing
74%
Of businesses integrate SEO with social media marketing
According to Whalen, recent updates to Google mean link building and content marketing now work instinctively within the search engine, rendering external SEO pointless. “Google put their money where their mouth was with their Panda and Penguin updates,” she wrote. “At last the only real way to do SEO was what I had been espousing all along. Today’s SEO blogs and conferences are bursting with SEO consultants talking about how, when you create amazing websites and content for your users, the search engines will follow.”
Google steps up
Panda and Penguin work by filtering sites by their quality. The Panda algorithm, launched in 2011, prevents low quality sites from ranking highly in search engine results despite the employment of clever SEO tricks – and, in practice, sites with huge amounts of advertising tend to rank lower. On its Webmaster Central Blog, Google described the process of developing these updates as a fight against the “black hat webspam” often concealed within sites to boost search engine rankings. According to Search Engine Watch, the algorithms affect up to 90 percent of all online searches, which means site owners have to learn how to play Google’s game in order to achieve results. “To that end, we’ve launched Panda changes that successfully returned higher quality sites in search results,” Google wrote on its blog when announcing the webspam update in 2012. “And earlier this year we launched a page layout algorithm that reduces rankings for sites that don’t make much content available ‘above the fold’. The change will decrease rankings for sites that we believe are violating Google’s existing quality guidelines. We’ve always targeted webspam in our rankings, and this algorithm represents another improvement in our efforts to reduce webspam and promote high quality content.”
However, while most SEO experts agree Panda and Penguin have indeed revolutionised the industry, they have by no means killed it off. Since Whalen’s shock departure, SEO experts everywhere have been asking themselves once again: ‘Is SEO dead?’
“The answer really depends on how you define SEO,” Sam McRoberts, CEO of VUDU Marketing and an outspoken proponent of SEO, told Forbes. “If, when you say ‘SEO’, what you really mean is manipulating search engines to place sites that don’t really deserve to rank well at the top of the SERPs [search engine ranking pages], then yes, I’d say that’s dead (or dying at least, as some manipulative tactics still work quite well).
“However, even though some SEOs work to game the system, I’ve never really felt like that was the correct definition of SEO. Because we so often use the SEO acronym, we forget sometimes that it stands for Search Engine Optimisation. SEO, at its heart, is the process of making websites more accessible and understandable to search engines. It shouldn’t be, and really doesn’t need to be, manipulative.”
A split discipline
Not everyone agrees with McRoberts, though. The industry has always been split between those who believed it was about ‘gaming’ the system in order to achieve high rankings, and others who believed it was about working with the tools Google provides. “SEO as it stands is dead and buried because it’s becoming impossible to ‘game’ the system,” Rhys Williams, Managing Partner of digital agency agenda21, told marketing magazine The Drum. “No longer can you simply implement tactics such as link buying, content distribution or buying low quality content and expect them to deliver results. Google has put an end to that, which means that many companies are struggling to adapt.”
A brief history of SEO
1998
DMOZ, a directory of website links, is launched and its listings become an important cornerstone of early SEO. Also launched this year is plucky search startup ‘Google’…
1999
The first SEO conference, Search Engine Strategies (SES), is held. Multiple SES conferences are now held around the world every year.
2001
Until now, search rankings have relied heavily on keyword ‘meta tags’, but these are easy to game and lose value as search engines refine their methods.
2002
Google begins to penalise sites that ‘sell links’ – that is, agree to host links to other sites in exchange for payment – by lowering their positions in search results.
2003
Google’s Florida update penalises websites it feels have been overly manipulated by SEO experts. Such hammer blows will reoccur over the next decade.
2005
Google begins to work with SEOs, launching its Analytics platform and introducing the ‘nofollow’ HTML value, which allows site owners to disown links they feel may hurt their rankings (illustrated in the example at the end of this timeline).
2008
You’ll never guess what SEO experts are relying on now… Linkbait. As Twitter takes off, enticing, cryptic and quite often annoying headlines, designed to attract clicks, become a key strategy.
2011
Google unleashes its deadly Panda update, penalising sites that offer a poor user experience – those with ´thin´ content, or which host an excessive amount of advertising.
2012
Google Penguin is released, penalising websites engaged in ‘black hat’ SEO tactics, particularly link bombing – causing a website to rank highly by creating a large number of low-value links to it.
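To illustrate the ‘nofollow’ value mentioned in the 2005 entry above, here is a generic sketch of the markup – the URL and link text are invented for the example rather than taken from any site discussed here:

<!-- The rel="nofollow" attribute asks search engines not to pass ranking credit to the linked page -->
<a href="https://example.com" rel="nofollow">A link the site owner does not vouch for</a>

Omitting the attribute leaves an ordinary link, which search engines may treat as an editorial endorsement of the target page – exactly the signal link sellers and link bombers sought to exploit.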
For McRoberts, Panda and Penguin have changed the way SEO experts should approach Google, rather than eliminating the need for experts altogether. However, it is true that certain techniques, such as article spinning (“the process of automatically generating ‘rewritten’ versions of an article and submitting them to as many low quality article directories as possible,” according to search expert Pratik Dholakiya), are dead. Other techniques, such as link buying, might still work, but have become costly and inefficient. “The fact of the matter is Google’s algorithms just aren’t smart enough to identify link buying in every single circumstance. How could [they] be? Not even humans are that smart,” wrote Dholakiya. “The real reason this isn’t worth it for brands is because it’s actually more costly to buy links than to attract them naturally. The content has to be just as good as if you were doing it completely above-board, or over time it will become obvious that you are buying the links.”
Though this is not the first time industry leaders have suggested traditional SEO techniques are dead, Whalen’s decision to back out of the industry has added some gravitas to the claim. What is more likely, however, is that SEO has simply evolved beyond its original capabilities to encompass a much broader set of competencies. It is vital to remember that achieving high rankings is still a worthy and worthwhile goal for businesses of any type – but it is also important to note that SEO as we have practised it in the past will no longer be effective. Businesses and SEO experts must shed their former approach, and even their old definitions of what SEO should stand for, and start afresh. The death of SEO is “an eye-catching headline that has been churned out with regularity for years. And it’s deliberately misleading,” explains Joel Coppersmith, Head of Search and Affiliates at digital marketing firm Profero. “Achieving high rankings in search engines for business-critical searches remains a worthwhile goal because it still provides a long-term boost to the bottom line. The methods (and practitioners) involved in achieving this have changed to cover a far broader set of competencies, but the goal remains the same. Maybe the term ‘SEO’ has its own associations that no longer quite match the current skills and activities being employed, but there’s enough there that someone calling themselves an SEO five years ago would recognise.”
Optimisation v marketing
With the success of the Panda and Penguin algorithm updates, the industry has evolved to push SEO experts and media marketers closer together. Recent research by Econsultancy has found that up to 88 percent of businesses now combine SEO with content marketing in a bid to streamline processes, and 74 percent actively integrate SEO with social media marketing strategies. However, this has posed new challenges for SEO experts. “Social media requires a human touch, something that a lot of SEO professionals aren’t equipped to deal with,” Dane Cobin, a social media specialist at FST Group, told The Guardian. “SEO has always been tied to the performance of metrics, but you can’t carry out a social media campaign if you look at people as numbers instead of individuals.”
Alex Postance, Head of Earned Media at Epiphany, agrees SEO and marketing are no longer mutually exclusive and should not be treated as individual disciplines: “SEO described as a process is dead. When we say ‘search engine optimisation’, what we really mean is the improvement in search engine rankings of our websites; this is an outcome, not a process. And to achieve this outcome, you do not ‘do’ SEO. You do marketing, lots of different types of marketing.”
One thing is clear: SEO will never truly ‘die’. It will just continue to morph into a new discipline that will eventually be utterly different from its original format. Dholakiya thinks companies will be forced to redefine content – as they are already doing by uniting SEO and digital marketing – in order to successfully target users. “The top sites on the web like Facebook, Amazon and YouTube aren’t what we typically think of as content sites,” he says. “Yet sites like these completely dominate top-notch content sites like Mashable or The New York Times. The most successful sites on the web are built on a foundation of applications, tools and communities. Most people can go without ‘content’ as we currently define it. Most of us can’t go without online tools and communities.
“Certainly, we can expect more traditional forms of content like blog posts, videos, infographics, ebooks and whitepapers to continue to play a huge role in content marketing. In fact, these can only be expected to grow over the next several years. Just be warned, it’s going to be harder to shine through. Interactive experiences, on the other hand, attract attention like nothing else.”
Search engines will always have to identify certain factors on a page in order to rank it appropriately and, as such, will always be open to manoeuvring by content experts. Increasingly, however, shortcuts and black hat techniques – such as hidden links, link spinning and the like – fail to yield results. SEO remains fundamentally about earning exposure, but SEO specialists and digital marketers will have to diversify their skills in order to achieve results in an increasingly competitive online environment. For McRoberts, this is a good thing, as it has led to the democratisation of what was once an expert field: “SEO, the art of making content more accessible and understandable to search engines, will exist and thrive for as long as search engines exist,” he told Forbes. “That said, SEO is no longer a silo. It has massive dependencies in other departments, from social and content to PR and advertising. If anything, I’d say that the role of [a search engine optimiser] has changed from specialist or technician to more of a project manager and strategist role. Search engine optimisers are exceptional at understanding how all the pieces of the online marketing puzzle fit together.”
These are exciting times in the industry, despite Whalen’s assertion that SEO has evolved beyond needing to be managed manually. Search engines, equally, will continue to evolve in order to remain one step ahead of malicious SEOs who employ questionable techniques to raise their meritless pages in the rankings. Change is the nature of the business, and the foundation upon which SEO as a concept was created.