JPMorgan to enter mobile wallet market

JPMorgan Chase announced on October 26 that it is set to enter the crowded mobile wallet market and into direct competition with the likes of Apple, Samsung and Google. Speaking at the Money 20/20 conference, the bank’s head of consumer and community banking, Gordon Smith, said of Chase Pay: “We have tried to build it around the principles of being simple, being rewarding and being secure both for cardholders and for merchants.”

Chase is something of a latecomer to the party

Chase is something of a latecomer to the party, in that the service won't launch until mid-2016 – although the bank's influence among American consumers, Smith says, should make up the ground. Smith stressed in his presentation that half of all American households are JPMorgan customers, and cited the impressive statistic that 34 million transactions are conducted on Chase-branded credit and debit cards per day.

Chase has also struck a deal with the Merchant Customer Exchange – comprising Walmart, Best Buy, Kmart and others – to boost the technology's adoption, and the partnership means the service will be available in over 100,000 American stores. With no clear market leader having emerged so far, this association with big-ticket retailers is a key point of differentiation between Chase Pay and its closest rivals.

The announcement comes at an important time for the emerging mobile payments market, as Apple Pay tries to convince consumers of the technology’s superiority over plastic. According to Phoenix Marketing International, a disappointing 14 percent of credit card holders have adopted Apple Pay, and growth has tailed off after an initial surge last year. Bryan Yeager of eMarketer, meanwhile, believes next year will be a landmark year for mobile payments.

Dell to buy EMC in billion-dollar, record tech deal

On October 12, computer giant Dell signed a record $67bn deal with EMC Corporation. The cash-and-stock deal marks the transition of Dell, once the global market leader in PCs, into a data management and storage firm. The combined company will have hubs in Texas, Massachusetts and San Francisco.

Dell's growing need for diversification follows the exponential rise in popularity of mobile devices, which has caused its sales in the PC market to decline for some time now. EMC, a firm specialising in software for computer storage, has also faced falling sales as more companies switch to cloud-based storage. Although EMC now offers cloud storage solutions alongside its core products, it has not yet experienced commercial success in the market.

The bold bid was made possible as a result of a decision by CEO and founder Michael Dell to go private in 2013

Among its assets, EMC owns the security analytics firm RSA and has a majority stake in VMware, a cloud and virtualisation software and services provider.

“The coming together of EMC and Dell will create a powerhouse in the technology industry with more than $80B in revenue. The combined company will be a leader in a number of the most attractive high-growth areas of the $2 trillion information technology market,” states a press release published on EMC’s website. EMC shares jumped following the announcement, rising 4.3 percent to $28.36, according to Reuters.

The bold bid was made possible as a result of a decision by CEO and founder Michael Dell to go private in 2013. While it came at a price of $25bn, privatisation has alleviated growing shareholder pressure and has allowed Dell to operate away from the public eye in order to re-strategise the company’s core business.

Although the combined company is a necessary move for both Dell and EMC to secure their place in an increasingly competitive market, numerous challenges lie ahead. As well as a combined debt of $50bn, the merger itself will be lengthy and complicated, and is not expected to close until sometime between May and October next year. According to various sources, Hewlett-Packard sees the merger as an opportunity: its two biggest rivals will be distracted by the completion of the deal and the difficult process of combining systems and product lines, while also having less capital to spend on R&D.

Blockchain comes to the financial services industry

The banking community met Bitcoin with an air of hostility when the digital currency first made headlines a few years ago. Since then, sceptics have come around to the fact that Bitcoin’s underlying technology, blockchain, could revolutionise financial services. Alex Batlin, head of UBS’ new innovation lab, spoke to The New Economy about why the bank has partnered with technology companies to explore the benefits of blockchain.

How does blockchain technology work?
The easiest way to think of it is as a way of distributing decentralised data. So if you think about what banks do today, they have private ledgers for accounts, using their own proprietary software to check the business logic of any messages passed along. That task is undertaken, on average, once a day.

What blockchain – essentially the technology under the hood of virtual currency – does is speed that whole process up immensely. So first of all you have a common fabric, because you have the same software, and this allows you to effectively reconcile ledgers and synchronise them. Instead of doing this once a day you're doing it – in the case of Bitcoin – once every 10 minutes, and new technologies could make the process faster still.
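The reconciliation Batlin describes rests on one idea: each block commits to the hash of the block before it, so two participants can compare whole histories by comparing hashes. A toy sketch of that idea (this is an illustration of hash-chaining in general, not the Bitcoin protocol or UBS's software):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (including the previous block's hash),
    # so tampering anywhere in the chain changes every later hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    block = {"prev_hash": prev, "transactions": transactions}
    chain.append(block)
    return block

def chains_agree(chain_a, chain_b):
    # Two participants' ledgers are reconciled when their chains hash identically.
    return [block_hash(b) for b in chain_a] == [block_hash(b) for b in chain_b]

ledger_a, ledger_b = [], []
for ledger in (ledger_a, ledger_b):
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 10}])

print(chains_agree(ledger_a, ledger_b))  # True: identical histories reconcile
```

Because agreement reduces to comparing a short list of hashes, synchronising every 10 minutes (or faster) is cheap compared with the once-a-day proprietary reconciliation banks do today.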

Instead of doing this once a day you’re doing it once every 10 minutes

On top of that, it’s much more than just a distributed database, because, unlike today’s world where we send what are called ‘dumb messages’, we can now send ‘smart messages’, where you embed the business logic needed to validate the transaction. So it’s almost like a distributed application where I can send a message out not only containing the details of who you want the money paid to and how much, but also the conditions under which it should be paid.
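The 'dumb message' versus 'smart message' distinction can be made concrete: a smart message carries not just payee and amount but the validation logic itself, which every node re-runs. A minimal sketch, with a hypothetical delivery-and-deadline condition invented purely for illustration:

```python
from datetime import date

# A 'dumb' message carries only payee and amount; a 'smart' message also
# embeds the condition under which the payment is valid.
smart_message = {
    "to": "bob",
    "amount": 100,
    # Hypothetical condition: pay only if goods were delivered by end of January 2016.
    "condition": lambda ctx: ctx["goods_delivered"] and ctx["today"] <= date(2016, 1, 31),
}

def validate(message, context):
    # Every participant evaluates the same embedded logic before accepting the payment.
    return message["condition"](context)

print(validate(smart_message, {"goods_delivered": True, "today": date(2015, 12, 1)}))   # True
print(validate(smart_message, {"goods_delivered": False, "today": date(2015, 12, 1)}))  # False
```

Real smart-contract platforms express the condition in a shared, deterministic language rather than a local function, precisely so that every node computes the same answer.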

The final really interesting part of blockchain is that just because you send a message, it doesn't mean the instructions will be followed – as is so often the case with kids, recipients can ignore the instructions they're given. Someone might choose, for instance, to accept and confirm an invalid transaction, such as double spending. In order to prevent this from happening, rather than resorting to a central authority, you ask everyone participating in the network to effectively re-run the same checks. Everyone votes on the result and, as long as 51 percent of the voters – so to speak – are honest, malicious intentions can be eliminated.

The problem with that model is that, because there's no single authority registering voters, it's really easy for someone with a fast computer to issue lots of votes. To prevent that, the community uses what is known as proof of work: if you don't have a registration-to-vote mechanism or central authority, then you effectively pay to vote. Voters are asked to solve a complex mathematical problem, which on average takes 10 minutes, so you need enough computing power and enough electricity to have a chance of winning this lottery.
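The 'complex mathematical problem' is a brute-force search: find a nonce that makes the block's hash fall below a target (equivalently, start with enough zeros). A toy sketch with a deliberately easy difficulty – real networks tune difficulty so the search takes about 10 minutes across all participants:

```python
import hashlib

def proof_of_work(block_data, difficulty=4):
    # Search for a nonce whose SHA-256 digest starts with `difficulty` zero
    # hex digits. Finding it is expensive; verifying it is a single hash.
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("block-42")
print(digest.startswith("0000"))  # True: the winning ticket is trivially cheap to verify
```

The asymmetry is the point: issuing a 'vote' costs real electricity, so flooding the network with votes from one fast computer becomes prohibitively expensive, while honest nodes can check any claimed solution instantly.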

There’s no central monetary control because it’s a peer-to-peer network, meaning that, if one participant drops out, then another will make up the crowd. This means that there’s no central point of failure, and it’s actually a pretty robust system.

What benefits can blockchain bring to the banking sector?
We've got a number of high-level benefits that we talk about. First is that we've now got a new model of trust, away from central utilities and on to this distributed ledger world. Another is close to real-time clearance and settlement, meaning our clients get much faster access to their operating assets, with reduced settlement risk and a reduced cost of mitigating that risk – resulting in cheaper transfers.

Banks could potentially move towards a cheaper infrastructure by adopting this common fabric where business and regulatory logic is processed by all network participants. This gives us the opportunity to move towards shared business logic and a shared platform. Some of the intermediaries we have today could potentially move towards providing those facilities on blockchain and at lower price points, which again means greater operating efficiencies.

At the same time, we could potentially look at the effect on regulation, and the ability to monitor transactions in real-time means that we can move to reporting as a by-product of doing business, and reduce a lot of the costs associated with regulatory compliance.

The other thing for banks is that we reduce operational risks where they’re no longer relying on a single solution or single utility, so there is no single point of failure.

What about the challenges?
It’s extremely nascent technology. The development tools are really fresh and there’s very little expertise in the market. What’s more, the technology only supports a limited number of transactions and voting takes time compared to centralised systems.

You also have issues with finality, because if somebody were to take over 51 percent of the network they could in theory rewrite the ledger. Now, you can mitigate that, but ultimately it’s part of the network. However, regulators could detect what appears to be a malicious attack on the network, and, if you incorporate the right circuit breakers into this new network, you could control it.

Can you tell us about UBS’ innovation lab and the work you’re doing there?
We’ve been engaged with fintech innovation for a long time. We’ve built up a lot of experience and hopefully respect in this ecosystem, so we decided that we wanted to take our agenda further.

With Level 39 we saw an opportunity to be a part of the system and send a clear message that we want to collaborate with fintech companies. We knew that it had to be a cross-market initiative, and it felt right to position what is an inherent cross-market technology in an environment that promotes collaboration. So that’s Level 39.

We hope this will promote financial inclusion, because of the reduced costs associated with the blockchain. We’re doing a bunch of experiments; a lot of internal experiments, and some experiments we talk about publicly when we believe it helps the industry to move forwards in the right direction.

One thing that has resulted from what we’ve been doing is that we’re very much encouraging firms to take a strategic view towards the architecture based on blockchain. By that I don’t mean it takes years and years. Some technologists today, for instance, are really good at working on specific solutions that support one asset class. However, certain technologies allow you to model near enough any type of asset, and those are more interesting to us because we can potentially put many assets on the same chain. The added cost of putting these assets on the same platform is insignificant compared to the months of work it takes to create a dedicated solution for each. This way there’s so much more scope and this stops us going into a technology cul-de-sac, so we really would encourage folks to take this approach. This technology could be a game changer, so let’s try and get it right.

Why have banks decided to partner, rather than compete, with tech start-ups?
First of all, we’re doing just that. We’re partnering with fintech companies, and the reason we do that is that I think startups have the right kind of mentality.

For us, as a large bank, we cannot take the same risks that a startup can, because if we get it wrong then the impact on our clients is significant. Startups can take greater risks, but at the same time they often hit scaling issues once they get to a certain point in their product maturity. This is where we can help as well. I think them having the ability to take risks and us having the skill to take them to the next stage is a good mix. For the foreseeable future I can see this relationship working really well.

Why automation investment does not create more productive workforces

It seems obvious that if a business invests in automation, its workforce – though possibly reduced – will be more productive. So why do the statistics tell a different story?

In advanced economies, where plenty of sectors have both the money and the will to invest in automation, growth in productivity (measured by value added per employee or hours worked) has been low for at least 15 years. And, in the years since the 2008 global financial crisis, these countries’ overall economic growth has been meagre, too – just four percent or less on average.

One explanation is that the advanced economies had taken on too much debt and needed to deleverage, contributing to a pattern of public-sector underinvestment and depressing consumption and private investment as well. But deleveraging is a temporary process, not one that limits growth indefinitely. In the long term, overall economic growth depends on growth in the labour force and its productivity.

Hence the question on the minds of politicians and economists alike: is the productivity slowdown a permanent condition and constraint on growth, or is it a transitional phenomenon?

4%

Average growth in advanced economies since 2008

There is no easy answer – not least because of the wide range of factors contributing to the trend. Beyond public-sector underinvestment, there is monetary policy, which, whatever its benefits and costs, has shifted corporate use of cash towards stock buy-backs, while real investment has remained subdued.

Meanwhile, information technology and digital networks have automated a range of white- and blue-collar jobs. One might have expected this transition, which reached its pivotal year in the United States in 2000, to cause unemployment (at least until the economy adjusted), accompanied by a rise in productivity. But, in the years leading up to the 2008 crisis, US data shows that productivity trended downwards; and, until the crisis, unemployment did not rise significantly.

One explanation is that employment in the years before the crisis was being propped up by credit-fuelled demand. Only when the credit bubble burst – triggering an abrupt adjustment, rather than the gradual adaptation of skills and human capital that would have occurred in more normal times – did millions of workers suddenly find themselves unemployed. The implication is that the economic logic equating automation with increased productivity has not been invalidated; its proof has merely been delayed.

Lower value added jobs
But there is more to the productivity conundrum than the 2008 crisis. In the two decades that preceded the crisis, the sector of the US economy that produces internationally tradable goods and services – one-third of overall output – failed to generate any increase in jobs, even though it was growing faster than the non-tradable sector in terms of value added.

Most of the job losses in the tradable sector were in manufacturing industries, especially after the year 2000. Although some of the losses may have resulted from productivity gains from information technology and digitisation, many occurred when companies shifted segments of their supply chains to other parts of the global economy, particularly China.

By contrast, the US non-tradable sector – two-thirds of the economy – recorded large increases in employment in the years before 2008. However, these jobs – often in domestic services – usually generated lower value added than the manufacturing jobs that had disappeared. This is partly because the tradable sector was shifting towards employees with high levels of skill and education. In that sense, productivity rose in the tradable sector, although structural shifts in the global economy were surely as important as employees becoming more efficient at doing the same things.

Unfortunately for advanced economies, the gains in per capita value added in the tradable sector were not large enough to overcome the effect of moving labour from manufacturing jobs to non-tradable service jobs (many of which existed only because of credit-fuelled domestic demand in the halcyon days before 2008). Hence the muted overall productivity gains.

Meanwhile, as developing economies become richer, they, too, will invest in technology in order to cope with rising labour costs (a trend already evident in China). As a result, the high-water mark for global productivity and GDP growth may have been reached.

A question of timing
The organising principle of global supply chains for most of the post-war period has been to move production towards low-cost pools of labour, because labour was and is the least mobile of economic factors (labour, capital and knowledge). That will remain true for high-value-added services that defy automation. But for capital-intensive digital technologies, the organising principle will change: production will move towards final markets, which will increasingly be found not just in advanced countries, but also in emerging economies as their middle classes expand.

Martin Baily and James Manyika recently pointed out that we have seen this movie before. In the 1980s, Robert Solow and Stephen Roach separately argued that IT investment was showing no impact on productivity. Then the internet became generally available, businesses reorganised themselves and their global supply chains, and productivity accelerated.

The dotcom bubble of the late 1990s was a misestimate of the timing, not the magnitude, of the digital revolution. Likewise, Manyika and Baily argue that the much-discussed ‘Internet of Things’ is probably some years away from showing up in aggregate productivity data.

Organisations, businesses and people all have to adapt to the technologically driven shifts in our economies’ structure. These transitions will be lengthy, rewarding some and forcing difficult adjustments on others, and their productivity effects will not appear in aggregate data for some time. But those who move first are likely to benefit.

Michael Spence is a Nobel laureate in economics

© Project Syndicate, 2015

Is there any point sending whisky into space?

For: Tom

Sending casks of whisky into space not only makes commercial sense, it could also provide an improved method of maturation

It is easy to mock outlandish attempts at preparing food or drink. Whether it’s the £1,000 burger covered in real gold flakes by a restaurant in Chelsea, or coffee made from weasel droppings priced at £325 a cup, a market for apparently obscenely priced food and drink with a certain novelty attached to it exists. Likewise, the geographical location of food and drink has always been deeply entwined with its perceived value.

Sir Francis Bacon encouraged us to put nature on the rack and expose its secrets

Food and drink have often functioned as indicators of prestige and, since the start of trade, where certain foods and drinks have originated has been a component and indicator of that prestige. In early modern Britain, food brought from France functioned as such an indicator. According to the British Library, the famous 17th century diarist Samuel Pepys “was extremely impressed to learn that his friend the Earl of Sandwich intended to employ a French chef, writing in his diary that the Earl had ‘become a perfect courtier’”.

This continues today, with fizzy wines from the Champagne region costing more than those from other regions primarily due to the fact they come from that part of France, or caviar from the Caspian Sea fetching more on the market than the same product from other bodies of water. Add the prefix of a certain region or country before a product’s name, or add the caveat of “imported from X”, and someone, somewhere will be willing to part with a larger part of their income for that product.

With extravagant novelty or exotic origins meaning a product can lead those flush with cash to pay higher prices, sending their product into space makes perfect commercial sense for a whisky company. It is hard to think of anything more novel than consuming something that has safely left for and returned from outer space, and, geographically, there is nowhere more distant or exotic.

But this isn’t just about cold, hard economic calculations, made by those with a mind to exploit the odd excesses of the well heeled; the experiment also offers a chance to see how the ageing process of whisky is affected in different environments. Improvement in the product, not just profitable novelty, could be the outcome. Sir Francis Bacon (who enjoyed a dram or two himself) encouraged us to put nature on the rack and expose its secrets.

Whisky itself is prepared by keeping it in wooden casks for years. When the drink was invented, the idea of storing fermented grain in wooden barrels for long periods of time must have seemed outlandish, especially considering the precariousness of existence and low life expectancy: why wait seven years to drink something? Yet someone decided the delayed gratification of storing this brown liquid in a wooden vessel was worth it. And of course they did not regret it.

The results of sending these casks into the outer orbit of the Earth are unpredictable – which is exactly why it should be done. All human advances have formed slowly over time, through trial and error. A new, improved version of the drink might be created on this 13-month voyage, and, if not, we can add it to the long list of experiments that have returned no results and start looking for the next frontier.

Against: Aaran

Not only is Suntory’s decision an insult to lovers of a good single malt, but it also prevents important research being conducted

Construction of the International Space Station (ISS) started on November 20, 1998, with 16 countries working together to launch the Zarya module into orbit around the Earth. Since then, a further 13 pressurised modules have been launched and successfully connected to the structure, with an additional one scheduled for docking in February 2017.

Not only does this represent the pinnacle of human ingenuity and embody what can be achieved when mankind works with, rather than against, itself, the ISS has a number of practical uses too. Serving as an orbital laboratory, it offers researchers from all over the world the chance to conduct various experiments within a microgravity environment.

They’d be better off grabbing a bottle of Highland Spring

Some of these tests are extremely useful for the development of our species, offering insights that help us improve our understanding of the universe and ourselves. Others, like sending the best whisky in the world into space, are nothing more than expensive gimmicks that waste valuable time, energy and above all space aboard the ISS that could be put towards far nobler causes.

After all, one day, a long time from now – between five and seven billion years in fact – we will have to venture off this planet and attempt to colonise another. In order to do this, man will have to explore deep space. One of the major hurdles to long-distance space travel is that, without the protection of the Earth’s ozone layer, humans will eventually feel the effects of long-term exposure to space radiation.

In order to learn more about how such radiation affects us over time, NASA plans to send a number of frozen mice embryos to the ISS for a prolonged period of time. They will then be implanted into living mice once they return to Earth, allowing scientists to study them for longevity, cancer development, and even possible mutations.

Yet, the only justification the Japanese beverage company Suntory can muster for why its whisky deserves to take up space on the ISS is to find out if microgravity will help it to produce a mellower mouthful of its signature drink.

And if the whole thing seems beyond stupid to you, then you are not alone, as Steve Ury, an American whisky expert and author of the popular blog Sku’s Recent Eats, apparently feels the same. “To most people, what ‘smooth’ means is reducing the alcoholic burn”, he told the Los Angeles Times. “What produces that is more water.”

If Suntory are really interested in making a mellower tasting spirit, then they’d be better off following in their Scottish cousins’ footsteps and grabbing a bottle of Highland Spring with which to dilute their beverage, rather than whacking a load of whisky onto the top of a rocket.

“To me, space barrel ageing is sort of an absurdity”, said Ury. “I don’t know what the effect will be at zero gravity on ageing, but I’m not sure it matters. There’s no practical way to keep ageing your barrels in space.”

Worst of all, when the world’s best whisky finally returns from its little stint outside our atmosphere, we won’t even get the opportunity to try a tipple. The Japanese distillery is apparently refusing to put the drink on sale, denying whisky lovers from all over the world a chance to sample a wee space dram.

Phones are about to get a whole lot smarter

The increasingly bitter battle to dominate the mobile computing market has led to its two leaders, Google and Apple, making frequent pronouncements over their latest software advancements – no matter how minor – in an effort to gain an advantage over each other. This one-upmanship might seem petty and inconsequential sometimes, with each firm copying or modifying the other's designs and claiming them as revolutionary changes in software, but sometimes a breakthrough is made by one firm that dramatically shakes up the way the other operates, sending a wave of inspiration through the industry.

It seems such a development is about to occur in the form of apparently self-aware, intelligent operating systems. When Google Now was launched in 2012, it was the first attempt at offering users a system that would try and predict what they needed, before they even typed anything into a search bar. It did this based on a number of contextual parameters – mostly around location and time.

Google knew something we didn’t. It knew that Apple’s taste was a temporary advantage

Google the pioneer
Beginning as a mere intelligent personal assistant built into Google’s Android operating system, Google Now has evolved into an integral part of how people interact with their phones. As part of its Search application, Google Now recognises repeated actions made by a user, such as frequently travelling to a certain place or repeated calendar appointments, and offers up card-based suggestions for the user. These come in the form of directions, appointment cards, and other useful information. They include weather, event reminders, concert information, boarding passes, flights, birthdays, nearby events, public transit information, translation, restaurant reservations and even parking locations.

Google has since announced a further evolution of its Google Now service: Now On Tap. Unveiled at its I/O event in May, Now On Tap will attempt to fully understand what a user wants based on what they are doing with their phone. For example, a text message mentioning going for a drink will bring up information about nearby bars for both the receiver and the sender, as well as automatically creating a calendar reminder. It will also work with other apps, bringing up information based on what a user is doing with them. Some of these third party apps that will use Now On Tap include Airbnb, Fandango, Lyft, Pandora, The Guardian and Duolingo.

At the launch in May, Google Now’s lead developer Aparna Chennapragada, said Now On Tap would quickly learn what users need based on the text on the screen they’re reading. “The article you’re reading or the message you’re replying to is the key to understanding the context of the moment. Once it has that understanding, it’s able to get you quick answers and quick actions”, she said.

The opposition fights back
Such has been Google Now’s influence that Apple has now announced its own version of the contextually aware personal assistant, dubbed Proactive, which works within its Siri voice assistant platform on iOS. Proactive, announced in June as part of Apple’s iOS 9 operating system, was released to the public in September.

Working in much the same way as Google Now, Proactive brings up information for a user based on the time of day: from traffic information on their route to work early in the morning to film suggestions at the cinema in the evening. At the same time, it will offer weather information, remind the user of repeat events, and many other useful services. It marks a big evolution for iOS, which has for many years taken a more incremental approach to its upgrades, with Apple focusing more on aesthetics than dazzling new capabilities.

According to tech writer John Brownlee, Apple has been particularly slow to see the potential of AI in its software, instead choosing to focus on superficial user interface (UI) designs. “The thing is, Google knew something we didn’t”, he said. “It knew that Apple’s taste was a temporary advantage. It knew that designing a host of functional, universally integrated services was harder than designing pixels. And in the protracted thermonuclear war between Apple and Google, which first started when the search giant launched Android in 2008, Google knew that, ultimately, it would be AI, not UI, that would win the war.”

All the big players in the smartphone business are clamouring to get a piece of the intelligent assistant pie

Today, all the big players in the smartphone business are clamouring to get a piece of the intelligent assistant pie. Microsoft has launched its Cortana platform: an intelligent personal assistant, much like Apple's Siri, available on its Windows Phone platform as well as Android and iOS. Amazon has also announced its own assistant, Echo: a voice-command device that sits in the home and can give search information and news, as well as play music from a number of other services.

Yahoo chief Marissa Mayer has also recognised the importance of AI platforms within the smartphone business. In 2014, Yahoo acquired New York-based startup Wander (whose program acts as a visual diary) and the firm Aviate (whose service organises apps on a phone’s home screen depending on context). Around the time of the deals, Mayer spoke of how tying all these services into an intelligent platform was the future. “Think about how much your phone understands about you – your contacts, calendar, emails and images – and what happens when that context becomes part of the search experience. The future of search is contextual knowledge, and we are investing in the future.”

Writing in Time in July, leading consumer technology analyst and futurist Tim Bajarin said he expected all the big tech firms to invest considerable amounts of money in developing their intelligent software platforms. “The consumerisation of AI is set to be the next major battle in mobile as Google, Apple, Microsoft and more duke it out to offer shoppers the smartest smartphones. This fight will drive differentiation between devices, especially in mobile, where hands-free use is often critical.”

While these new AI assistants are set to transform smartphones into far more useful organisational devices than before, the technology won't merely end there. Apple has incorporated Siri into its new Apple TV box, which will eventually act as a hub for people's home automation, enabling users to control heating, lighting and various other home services with their voices. At the same time, Proactive will learn its owner's habits, taking away even the hassle of flicking a light switch. Google is already in this market, having acquired Nest Labs, the smart thermostat company, for $3.2bn.

With computers now able to predict exactly what a user needs before they can even ask for it, the possibilities seem endless – and potentially terrifying.

LinkedIn and its role within the recruitment industry

In theory, the recruiter is the facilitator of the seamless, frictionless labour market that only really exists in textbooks or the dreams of economists. However, with the increasing popularity of the professional social networking website LinkedIn, the smooth-talking job-shufflers may be under threat.

The concept of recruiting a candidate for a specific job has been around since the start of abstract labour roles, dating back thousands of years to primarily military roles in ancient Rome, Egypt and Greece. Since then, there have been attempts to set up more specialised agencies to match employers with employees. The earliest recorded attempt was Henry Robinson’s 1650 proposal to the British Parliament for an “Office of Addresses and Encounters”, with the goal of matchmaking employees and employers. Across the Atlantic and at around the same time, Ralph Fogg petitioned a court in Salem, New England to set up a similar scheme. Both ideas were rejected. A number of similar attempts were made during the 19th century with slightly more success, although on a small scale.

The modern recruitment industry has only existed for around 70 years. In the US during the Second World War, with the most talented workers siphoned off to play important roles in the war effort, private agencies acting on behalf of employers made appeals for men with desired skills. The industry continued after the war and, influenced by economic change, picked up in the 1970s.

Expansion of the recruitment industry, 2015:
7% – UK
9% – Germany
9% – Japan

Parallel to the recruitment industry as an intermediary between job- and labour-seekers, there has also existed the classified advert. When people started to work outside local kinship networks and the mass media emerged, employers would pay for adverts in newspapers. This business has now largely been added to the list of ‘rise of the internet’ victims, with job board websites such as Mashable outcompeting print on cost, customisability and specificity. But the internet scarcely affected the recruitment industry itself. Until 2003, that is, when LinkedIn was founded.

Complement or curse
LinkedIn can be seen as both a boon and a challenge to the recruitment industry: on the one hand, it is a helpful tool; on the other, it snatches away part of the business, opening up competition from in-house recruitment by employers.

When LinkedIn was launched, some firms put up resistance to its use. To a large extent this was due to perceptions of work ethic: certain agencies would only consider a recruiter to be truly working if they were picking up the phone and calling potential candidates or clients. To them, spending hours on LinkedIn profiles appeared to be a waste of time; it is far harder to measure successes on LinkedIn than to log successful or attempted calls to potential candidates. But the platform has now been adopted almost wholesale within the industry.

Cold calling was often a feature (and to some extent still is) of headhunting. Now, with the aid of LinkedIn, the calls can become less cold; recruiters are able to liaise with prospective candidates online first. As Roberto Sordillo, an employee of a financial services recruitment firm based in London, told The New Economy: “You can now ‘warm up’ the call to a certain extent, as you are able to do more background research online”.

Recruitment also requires certain skills that cannot be replicated online. As Sordillo said: “LinkedIn doesn’t capture the ‘soft skills’ that employers are looking for.” While low-skilled jobs can easily be filled through a LinkedIn search, for the higher-level roles recruiters are often tasked with filling, the skill of the recruiter comes into play. A good headhunter will be able to identify the more subtle skills and experiences a candidate has, rather than merely looking at endorsements and self-described skills on a LinkedIn profile.

Executive appointments often depend on more than just skill sets listed online or on a paper CV. Recruiters for these roles are trained to determine whether the candidate can get on with the client, the potential team and the potential organisation. All this is highly tailored to the company that has hired the recruitment agency’s service. Headhunters who attempt to fill executive roles are required to understand the markets their clients work within, understand their specific challenges and their work environments.

In the house
LinkedIn, however, has allowed the growth of in-house recruitment. As Simon Hearn, Chief Executive of Per Ardua Associates, told The Telegraph, there has been a “significant growth in the use of in-house recruiters”. A lot of this has come from the financial sector. “The majority of the main UK banks”, he continued, “have their own in-house recruitment teams and have had for a number of years. What is changing is the level at which they operate. One major UK bank is currently looking for a UK retail chief executive using an internal resource. Five years ago, this would have been unheard of.”

However, it can also be argued that in-house recruitment is more a product of the recession years: as much an attempt to cut costs by paying headhunters less as anything else. Whether or not in-house teams pose an actual threat depends on how they compare to professional recruiters. In its heyday, according to an article on the website Bizcommunity, “recruitment depended extensively on word-of-mouth and face-to-face applications. The storage of information was also challenging as the agency would have to store files and archives of masses of CVs written on paper, making applications difficult to access and sort through”. A recruiter not only knew how to assess a client and the industry inside out, but was also a professional networker. Often, in-house teams do not maintain an active pool of contacts as recruitment agencies do, nor do they actively consult with their companies to propose how an individual with a particular skill set could be of benefit.

This could, of course, change. The professional recruiter – able to place talent in roles tailored to whatever preferences the client specifies – could just as easily be replaced by employers growing their own recruitment teams and replicating the skills found in specialist firms. LinkedIn has taken the professional networking aspect away; the other components – finding the right candidate and knowing the industry – could all be replicated by in-house recruiters.

At the same time, despite this potential threat, the recruitment industry is set to expand by seven percent in the UK in 2015 and by nine percent in Germany and Japan, according to Staffing Industry Analysts. The statistics do not lie: the industry has life in it yet.

Fuel cell energy brings new life to businesses

A growing number of large companies are turning to fuel cells as a source of affordable energy, won over by the idea of a steady supply of clean power that lacks the drawbacks of other forms of green energy. Fuel cells are less expensive than solar and wind, and take up far less space as well. These mini power stations can be placed discreetly on site, and work by efficiently converting the energy released from a chemical reaction into electricity. Costs are lower than those of grid-generated electricity, while emissions are around half.

Adding to the growing status of fuel cells in the corporate world are the generous government tax credits and subsidies offered to encourage their proliferation, together with preferable financing terms. In the US, projects can receive a 30 percent federal tax credit, as well as grants. As a quick return on investment is not possible, many businesses are leasing the technology or engaging in power purchase agreements instead.

Powerful potential
Fuel cells can generate electricity around the clock, unbound by weather conditions (unlike solar and wind power) or shortages in the grid. “With the growing demand for constant, uninterrupted power, plus the risk of power outages due to more frequent and stronger storms, many companies are finding that fuel cells can provide resilient and reliable power while helping meet sustainability goals”, said Jennifer Gangi, Director of Communications and Outreach at the Fuel Cell and Hydrogen Energy Association.

Gills Onions’ annual savings from fuel cells:
$700k – in power costs
$400k – in waste disposal

The use of fuel cells as back-up generators also has a lot of potential in the public sector: hospitals and schools, for example, could stand to benefit immensely from such a dependable and economical power alternative. What’s more, through fuel cells, a decentralised power supply could be realised, offering cities far greater resilience – a crucial aspect in making populated areas truly sustainable.

“Fuel cells are rugged, durable and quiet, so they can be sited inside or outside in tandem with conventional and renewable technologies and fuels”, said Gangi. “They can also connect with the electric grid, or operate on their own, giving fuel cell customers peace of mind to trust that valuable data and networks will keep running no matter what.”

Furthermore, as there are fewer mechanisms and moving parts within a unit than there are in, for example, a battery, less maintenance is required in the long term.

In emerging economies, fuel cells could be revolutionary, particularly for countries in which the supply of electricity is unpredictable or even unavailable in remote areas. Growing awareness about the technology has resulted in various multinational companies introducing fuel cells in order to extend their reach and tap into challenging markets more effectively. One such group is Vodafone, which has recently introduced fuel cell generators to its operations in South Africa in order to relieve recent issues with power shortages. Given the potential for businesses with global ambitions, the use of fuel cell energy is on course to grow significantly in the near future.

Getting greener
Some argue fuel-cell-generated electricity is not as green as other alternative power sources, as it uses hydrogen-rich fuel, but the technology has a number of ecological benefits that may not be immediately obvious. Gills Onions, the biggest onion processor in the US, recently installed two DFC300 fuel cell mini plants to provide all the base load power needed for its processing operations. In order to make the energy ‘cleaner’, the generators use biogas that is made from onion waste produced on site. This means the company’s power is carbon-neutral; it is the world’s first food processing plant to use clean energy generated entirely from its own waste.

Aside from earning the company this respectable title, the savings are also significant. FuelCell Energy (FCE), a company that designs, manufactures, installs, operates and services stationary fuel cell power plants, states on its website that the process saves Gills Onions approximately $700,000 each year in terms of power costs, and a further $400,000 per annum as waste disposal is no longer needed. “The plants can utilise a variety of fuels, including renewable biogas from wastewater treatment and food processing, as well as clean natural gas, directed biogas and propane”, said Kurt Goddard, Vice President of Investor Relations at FCE.

Heat can be captured for heating, or even cooling

As energy is released from fuel cells electro-catalytically, rather than through combustion, the apparatus is extremely energy efficient. Moreover, the chemical reaction releases heat energy as a by-product, which can be used for other purposes, such as heating. “Heat can be captured for heating (called combined heat and power, or CHP), or even cooling, resulting in system efficiencies of 90 percent or greater. CHP allows users to reduce or eliminate the need for boilers or water heaters and their associated costs and emissions”, said Gangi.

Connecticut-based Pepperidge Farm uses the heat energy produced by its fuel cell power plant to support around 70 percent of the operations of its on-site bakery. Not only has the system lessened the company’s vulnerability to electrical outages, it has significantly reduced its carbon footprint by producing both electricity and heat from the same unit. “Thus far, our plants have generated more than three billion kilowatt hours of ultra-clean electricity, equivalent to powering more than 342,500 average-size US homes for one year”, said Goddard.

Resilience and sustainability
Although not a new technology, fuel cells are gaining momentum in terms of sales and international attention. “Part of what makes fuel cell technology so exciting is the way businesses and others have adapted them to new needs and purposes”, said Gangi. “Fuel cells can power small electronics or large buses and trains. Several utilities have installed large-scale fuel cell power parks to provide power directly to the electric grid.”

That being said, a lot still needs to be done within the sector. “The primary challenge is educating the public, lawmakers, stakeholders and potential end users about fuel cell and hydrogen energy technologies, and how they both are critical to achieve a zero-emission, low carbon future”, said Gangi.

Furthermore, the cost of a unit is considerable, meaning most SMEs are denied the technology at present. For this reason, it is particularly important for governments to continue supporting the adoption of fuel cell energy – even more so than they do already. If banks, investors and manufacturers themselves are in the position to lease the equipment, it will not only add a page to their corporate social responsibility portfolio, but also secure a reliable stream of revenue.

Perhaps the most exciting aspect of fuel cell energy is the fact it enables businesses and states to decentralise their power supply, limiting potential shortfalls drastically. “States in storm-prone areas are investing heavily in resiliency and micro-grid projects to ensure ongoing power despite weather-caused grid outages”, said Gangi. “Several projects currently underway involve fuel cells as part of a micro-grid in order to ensure constant power through inclement weather.”

Given businesses are increasingly reliant on computer systems for communication and operational purposes, ensuring consistent electricity supply through any eventuality is simply invaluable in a hi-tech world that hopes to become more sustainable.

Tidal power develops thanks to technological advancements

Over the last couple of years, the global energy market has suffered considerable turmoil, not least because of a plunging oil price and indecision from governments over how to tackle climate change. With fossil fuel sources proving both limited and polluting, there has been a frantic chase to find a sustainable and renewable source of energy that won’t harm the planet, while at the same time being cost efficient. While solar energy has surged ahead of others in recent years, many people have sought to harness another of the world’s natural forces: the ocean.

The seas cover around 71 percent of the world’s surface, greatly outweighing the landmass (which is where the majority of the world’s energy consumption occurs). However, deriving energy from the powerful ocean tides has proven far more difficult than many scientists predicted.

Tidal power converts the powerful ocean tides into electricity. With tides being more predictable than wind or solar, many people believe they could provide a much more reliable source of power than existing renewable technologies. However, tidal power has yet to take off as a reliable source of renewable energy because of the prohibitively high costs of constructing and maintaining underwater generators. Another concern has been that there are relatively few regions where tidal ranges can produce currents strong enough to generate power.

1.7MW – capacity of Kislaya Guba Tidal Power Station in Russia
320MW – potential capacity of Tidal Lagoon Swansea Bay

In 1924, the US Federal Power Commission conducted a study into a potential tidal power plant on the northern border of Maine and the southern edge of the Canadian province of New Brunswick. This would have involved a number of dams, locks and powerhouses around the Bay of Fundy and Passamaquoddy Bay, but the project was eventually scrapped due to a lack of sufficiently advanced technology and the Great Depression hampering investment.

Another scheme was proposed in the same area in 1956 – specifically in the Nova Scotia part of the Bay of Fundy. While the report said millions of horsepower would be generated by the scheme, it proved far too costly to make commercial sense.

Potential projects and problems
While it is a seemingly renewable and non-polluting source of energy, tidal power generators can have a devastating impact on marine life, with turbines killing sea life and barrages damaging the flow of water into estuaries and the ecologies around them. There is also the issue of the metal equipment corroding due to its constant contact with salt water.

Despite these problems, there remain a number of proposed tidal power schemes. These include a 50-megawatt tidal farm in the Indian state of Gujarat, which began construction in 2012, and a 1.05MW farm of 30 tidal turbines in the East River of New York City.

There are a number of different methods by which tides can be turned into sources of energy. Tidal stream generators utilise the kinetic energy that comes from water passing through power turbines, in much the same way wind turbines operate. In some cases, tidal stream generators have been built into existing structures, such as bridges. The cheapest form of tidal power, they also have the smallest ecological footprint.

Working much like a dam, tidal barrages are some of the oldest forms of tidal power generator, with large-scale projects constructed during the 1960s, including the 1.7MW Kislaya Guba Tidal Power Station in Russia. However, they have been criticised for both their impact on the water that passes through them, and the surrounding environment.

Dynamic tidal power is a method yet to be utilised, but one with a number of proponents. Exploiting a mixture of both potential and kinetic energy, the method would use extremely long dams – almost 50km in length – built far out to sea. Because they would not enclose any area, their ecological impact would be less than that of a traditional tidal barrage.

Another proposed option is a tidal lagoon, which would involve circular walls being constructed in the water to capture potential energy from the tides. The Tidal Lagoon Swansea Bay has been proposed in Wales and would have a capacity of 320MW. It was granted planning permission by the UK Government in 2015. However, the scheme has received considerable opposition from local campaigners who fear the ecological impact of the lagoon, as well as its relatively high cost – set to be £168 per MWh, almost double the cost of nuclear power generation.

New tech
One group of scientists claims to have devised an alternative method of tidal power generation that would be both cheap and sustainable. Kepler Energy says its technology, which it is working on in collaboration with Oxford University’s Department of Engineering Science, could radically transform the way in which power is generated from the sea. By using a horizontal axis turbine (dubbed ‘a tidal fence’), Kepler hopes it will be able to avoid the use of expensive and large dams and barrages.

The fence could be deployed in shallower water than traditional turbines. The company hopes to deploy it in the UK initially, with a proposed one-kilometre fence in the Bristol Channel potentially operational by 2021, at a cost of £143m.

Speaking to Reuters at the unveiling of the tidal fence research, Kepler Energy Chairman Peter Dixon said the potential output of the technology is considerable. “The design we have at the moment and the proposition we have at the moment is to put a tidal fence, which is a chain of these turbines, in the Bristol Channel, and if we can build up to, say, 10km worth, which is a very extended fence, you’re looking at power outputs of 500 or 600MW. And just to visualise that, it’s like one small nuclear reactor’s worth of electricity being generated from the tides in the Bristol Channel.”

With scarce natural resources and collapsing oil prices shaking up the world’s energy markets, new forms of renewable energy are going to be explored by many companies. While solar has certainly jumped ahead of the pack, the potential for tidal power to play an important and sustained role is something many countries that don’t enjoy endless sunshine – such as the UK – will be hoping can be realised.

Google becomes ‘Alphabet’ in massive rebranding

Over the last decade and a half, Google has evolved from being merely a company known for delivering online search results into an unwieldy behemoth of a tech giant that offers all manner of online services, health technology, smart homes, energy sources, and forms of transport. While the transformation has made Google one of the most influential businesses in the world, some feel it has become a sprawling mess of a firm that suffers from a lack of focus.

In an effort to counter such charges, Google’s founders announced an exhaustive restructuring of the business: creating a parent company called Alphabet, while allowing many of its units to live as distinct entities away from the umbrella title of ‘Google’.

In a blog post on the Google website, Larry Page outlined the reasons for the change in structure: “We’ve long believed that over time companies tend to get comfortable doing the same thing, just making incremental changes. But in the technology industry, where revolutionary ideas drive the next big growth areas, you need to be a bit uncomfortable to stay relevant. Our company is operating well today, but we think we can make it cleaner and more accountable.”

He added the main goals for the restructuring were to be “more ambitious”, to form a longer-term strategy, to empower “great entrepreneurs”, to be more transparent, and to provide the company with “greater focus”.

The surprising news caused a considerable amount of debate within the tech community. Speculation over the company’s motives ranged from whether Page and cofounder Sergey Brin wanted to further shift former CEO Eric Schmidt away from running the business, to a simple desire to give the various companies within Google’s stable space to breathe.

Google’s revenues, 2015 Q2:
$17.7bn – total revenue
$16bn – advertising revenue

While investors who have sought greater clarity from the firm over what it offers have welcomed the news, others are concerned this new openness will expose the lack of profitability of Google’s many subsidiaries. Indeed, as part of the announcement it was revealed that 90 percent of the group’s revenues come from Google’s paid search and advertising. For a company that creates a whole host of other products – from the Android smartphone operating system to the automated thermostat Nest – advertising remains an overwhelming share of its revenues.

New structure
The formation of the new umbrella company means Page will focus his efforts on innovation and creating the next revolutionary technologies as CEO of Alphabet. Brin will become the new firm’s president. Schmidt, whose suitability as a CEO was questioned by many during his decade in charge of Google between 2001 and 2011, will become group chairman. Google – which will now focus on search, YouTube, Gmail, Android and Google Maps – will officially get a new CEO in the form of Sundar Pichai, who was previously the company’s Senior Vice-President of Android, Chrome and Apps.

Pichai joined Google in 2004, having graduated from the Indian Institute of Technology Kharagpur. In 2014, there was speculation the 43-year-old was in the running to succeed Steve Ballmer as Microsoft CEO. However, he has now secured the top job at the internet’s most influential company. He is expected to refocus Google towards its core services, many of which are underpinned by the company’s advertising data.

As if to solidify the change of structure in people’s minds, Google announced in September that it would be changing its logo. Retiring the small blue ‘g’ that had become commonplace over the company’s 17-year history, Google has replaced it with a capitalised, four-coloured ‘G’ that matches a new four-dot logo. The move emphasises the new focus Google will have on its core businesses of search, maps, mail and its Chrome browser, according to the company.

“This isn’t the first time we’ve changed our look and it probably won’t be the last, but we think today’s update is a great reflection of all the ways Google works for you across Search, Maps, Gmail, Chrome and many others”, according to a post on Google’s official blog. “We think we’ve taken the best of Google (simple, uncluttered, colourful, friendly), and recast it not just for the Google of today, but for the Google of the future.”

Alphabet the conglomerate
By creating a conglomerate, Google has followed the path of a number of other big businesses. In particular, it seems the company has been eager to employ a similar strategy to iconic Wall Street investor Warren Buffett, a man the founders have long admired. His Berkshire Hathaway investment vehicle has been steadily acquiring firms from across a broad range of industries for years now, including his recent $37.2bn purchase of aerospace firm Precision Castparts.

Last year, Page spoke of his admiration for Buffett’s investment strategy in an interview with the Financial Times, echoing his outlook by saying: “One thing we’re doing is providing long-term, patient capital.” The company even included in its 2004 IPO shareholder letter that it had been greatly inspired by Buffett’s writing in Berkshire Hathaway’s annual reports.

Alphabet will form a similar structure, with Google acting as the largest player within a conglomerate of firms that includes the life sciences operations of its secretive Google X research facility, as well as its health research firm Calico. Each of these individual firms will have its own CEO, who will be ultimately responsible for the operations and success of the business.

In a press release, Page wrote: “Alphabet is mostly a collection of companies. The largest of which, of course, is Google. This newer Google is a bit slimmed down, with the companies that are pretty far afield of our main internet products contained in Alphabet instead. What do we mean by far afield? Good examples are our health efforts: Life Sciences (that works on the glucose-sensing contact lens) and Calico (focused on longevity). Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related.”

The X laboratory will continue to invest in radical new technologies, such as the drone delivery firm Wing that it is developing. It will likely include other areas in which the founders are eager to make breakthroughs, such as self-driving electric cars and smartglasses. Alphabet will also include a Ventures and Capital division, which will allow it to use its considerable wealth to invest in a range of other companies and help acquire the latest start-ups.

Alphabet will also include the home automation platform Nest, which Google acquired for $3.2bn in 2014, alongside its health-focused divisions. The health research company Calico, launched in 2013, will focus on researching cures and treatments for some of the world’s deadliest diseases. Its efforts have included partnerships with the pharmaceutical firm AbbVie, the University of Texas Southwestern Medical Center, 2M Companies, and the Broad Institute of MIT and Harvard.

90% – of Alphabet’s revenues come from Google’s paid search and advertising
$8bn – Google’s total debt prior to the restructuring

While this seems like a distinct group of companies all acting independently of each other, the reality is likely to be quite different. Google’s user data, which it harnesses to sell online advertising, will likely be shared among all these divisions, meaning the search business will remain the key component of the group. It is because of this that many within the industry remain sceptical of Google’s ambitions.

Fleeing the regulators
Many are speculating that Google’s transformation into Alphabet is merely a cynical ploy to avoid the attentions of regulators around the world, who have been growing increasingly concerned about the breadth and influence of the company.

Google has been the target of a number of regulators in recent years, mostly as a result of competition concerns. The European Union is already taking Google to court over antitrust issues, having received 19 complaints from companies in both Europe and the US. These reportedly include Microsoft, as well as many smaller businesses that feel Google has an unfair advantage within the digital marketplace. Other firms include (according to a report by Reuters) French legal search engine eJustice, German business listing firm VfT, and British comparison site Foundem. Other bigger firms include travel sites Expedia and TripAdvisor.

The change in structure, with distinct companies operating under the Alphabet umbrella, could be seen as a way of forcing each of them to fend for themselves within their specific markets. In so doing, Google would hope regulators and competitors would be placated. However, the chances are that nobody will be fooled by this strategy and regulators will increase their pressure on the company to either properly spin off some of its operations or make some form of concession to rivals.

For the group as a whole, the move makes financial sense, and investors on Wall Street have welcomed it. Google’s total debt prior to the restructuring was roughly $8bn, which represents a very small amount for such a large company, but many expect the group to soon begin issuing more debt in order to invest in new products. While lenders would happily help out a profitable business such as Google, the rest of its operations – such as self-driving cars and smartglasses – might receive less enthusiasm. With the new structure, Alphabet can borrow against the profitable Google business at a lower rate than it would as a group.

Lassoing the Moon
Perhaps one of the main reasons for the move is the ambition of Google’s science-based founders; Page and Brin are both innovators, and they have surrounded themselves with some of the smartest brains in the scientific world. But up until now, Google’s business has been based on advertising, which doesn’t represent much in the way of the exciting ‘moonshots’ the founders like to fantasise about. Indeed, in the second quarter of this year, Google announced revenues of $17.7bn, and the vast majority of that – over $16bn – came from advertising revenue. Many think both Page and Brin want to focus their attention on the sort of revolutionary ideas X Labs works on, while allowing others to run the profitable, more staid business that is Google.

While the new Alphabet structure will undoubtedly still revolve around Google’s profitable online business, it will be intriguing to see how the group’s subsidiaries fare with the spotlight now on their operations. With visible CEOs and financial statements that aren’t buried among those of their parent company, many of these firms will have to prove their products make commercial sense.

If these subsidiaries are to fend for themselves in the future, a number of areas could be revealed as more significantly loss-making than previously thought. While Nest seems to be relatively successful, other subsidiaries might not be. The driverless electric car business has had years of research and investment without a product launch. Perhaps even worse is Google Glass, which got a consumer trial but had development “paused” after much public derision.

However, what regulators, investors and competitors are likely to welcome is this greater transparency from Google. For their part, the pioneering founders of the firm might be able to spend less time talking about the financial success of online advertising and instead focus on their passion for innovation.

Bill Gates: we must prepare world’s poorest farmers for climate change

A few years ago, Melinda and I visited a group of rice farmers in Bihar, India, one of the most flood-prone regions of the country. All of them were extremely poor and depended on the rice they grew to feed and support their families. When the monsoon rains arrived each year, the rivers would swell, threatening to flood their farms and ruin their crops. Still, they were willing to bet everything on the chance that their farm would be spared. It was a gamble they often lost. Their crops ruined, they would flee to the cities in search of odd jobs to feed their families. By the next year, however, they would return – often poorer than when they left – ready to plant again.

Our visit was a powerful reminder that, for the world’s poorest farmers, life is a high-wire act – without safety nets. They don’t have access to improved seeds, fertiliser, irrigation systems and other beneficial technologies, as farmers in rich countries do – and no crop insurance, either, to protect themselves against losses. Just one stroke of bad fortune – a drought, a flood or an illness – is enough for them to tumble deeper into poverty and hunger.

Facing climate change empty-handed
Now, climate change is set to add a fresh layer of risk to their lives. Rising temperatures in the decades ahead will lead to major disruptions in agriculture, particularly in tropical zones. Crops won’t grow because of too little rain or too much rain. Pests will thrive in the warmer climate and destroy crops.

For the world’s poorest farmers, life is a high-wire act – without safety nets

Farmers in wealthier countries will experience changes, too. But they have the tools and supports to manage these risks. The world’s poorest farmers show up for work each day for the most part empty-handed. That’s why, of all the people who will suffer from climate change, they are likely to suffer the most.

Poor farmers will feel the sting of these changes at the same time the world needs their help to feed a growing population. By 2050, global food demand is expected to increase by 60 percent. Declining harvests would strain the global food system, increasing hunger and eroding the tremendous progress the world has made against poverty over the last half-century.

I’m optimistic that we can avoid the worst impacts of climate change and feed the world – if we act now. There’s an urgent need for governments to invest in new, clean-energy innovations that will dramatically reduce greenhouse gas emissions and halt rising temperatures. At the same time, we need to recognise that it’s already too late to stop all of the impacts of hotter temperatures. Even if the world discovered a cheap, clean energy source next week, it would take time for it to kick its fossil-fuel-powered habits and shift to a carbon-free future. That’s why it’s critical for the world to invest in efforts to help the poorest adapt.

Many of the tools they’ll need are quite basic, things that they need anyway to grow more food and earn more income: access to financing, better seeds, fertiliser, training, and markets where they can sell what they grow.

Other tools are new and tailored to the demands of a changing climate. The Gates Foundation and its partners have worked together to develop new varieties of seeds that grow even during times of drought or flooding. The rice farmers I met in Bihar, for instance, are now growing a new variety of flood-tolerant rice – nicknamed “scuba” rice – that can survive two weeks underwater. They are already prepared if shifts in the weather pattern bring more flooding to their region. Other rice varieties are being developed that can withstand drought, heat, cold, and soil problems like high salt contamination.

All of these efforts have the power to transform lives. It’s quite common to see these farmers double or triple their harvests and their incomes when they have access to the advances farmers in the rich world take for granted. This new prosperity allows them to improve their diets, invest in their farms, and send their children to school. It also pulls their lives back from the razor’s edge, giving them a sense of security even if they have a bad harvest.

Going further, being prepared
There will also be threats from climate change that we can’t foresee. To be prepared, the world needs to accelerate research into seeds and supports for smallholder farmers. One of the most exciting innovations to help farmers is satellite technology. In Africa, researchers are using satellite images to create detailed soil maps, which can inform farmers about what varieties will thrive on their land.

Still, a better seed or a new technology can’t transform the lives of farming families until it’s in their hands. A number of organisations, including a non-profit group called One Acre Fund, are finding ways to ensure that farmers take advantage of these solutions. One Acre Fund works closely with more than 200,000 African farmers, providing access to financing, tools and training. By 2020, they aim to reach one million farmers.

In this year’s Annual Letter, Melinda and I made a bet that Africa will be able to feed itself in the next 15 years. Even with the risks of climate change, that’s a bet I stand by.

Yes, poor farmers have it tough. Their lives are puzzles with so many pieces to get right – from planting the right seeds and using the correct fertiliser to getting training and having a place to sell their harvest. If just one piece falls out of place, their lives can fall apart.

I know the world has what it takes to help put those pieces in place for both the challenges they face today and the ones they’ll face tomorrow. Most importantly, I know the farmers do, too.

Bill Gates is Co-Chair of the Bill & Melinda Gates Foundation

© Project Syndicate 2015

Java forever: why this programming language will not die out

When it comes to programming languages, few garner as much attention or are the subject of as much debate as Java. Over its 20-year history, the controversial programming language has been pronounced dead numerous times; many believed the final nail in the coffin had been delivered when Apple – a company that had traditionally been a big proponent of Java – chose to boycott it after a series of security vulnerabilities plagued the language a few years back. In response, Oracle (which acquired Java along with Sun Microsystems in 2010) invested heavily in closing the zero-day exploits that left it susceptible to attack, along with strengthening the language in general.

However, despite its developers’ best efforts, Java has never been short of critics who question its relevance. Merely typing “Java is dead” into a search engine returns a plethora of articles and opinion pieces arguing the merits and (more often) the drawbacks of the language. But those who believe it is a relic, soon to be discarded on the scrap heap of history, are simply short-sighted: its longevity is a testament to its developers’ ability to take criticism and use it to refine the language’s features, ensuring it remains a staple for programmers.

Old dog
“Because it is old, I think that people assume that, in this modern field of computing, with smartphones and the like, in terms of languages, Java is somehow less relevant due to its age”, said Gregory Thomas, an Android developer at Zinconix. “But Java isn’t going anywhere, at least not anytime soon.”

Thomas believes the language is likely to stick around because, over the course of its life, it has become one of the most stable languages on the market, a quality that is highly sought after in a number of software applications.

1995 – the year Java launched

>9m – developers using Java

Unsurprisingly, one place where stability is very important is financial services and, as Java has it in abundance, the language has grown rather popular within the industry. In fact, investment banks such as Goldman Sachs, Barclays and Citigroup use Oracle’s language for many front and back-office applications, including the development of their electronic trading systems.

“Lots of banks still use Java, and financial institutions are not, on a whim, just going to change the language they use”, said Thomas. “They’re going to keep it and ensure that what they have already continues to benefit from the stability Java offers.”

Java’s success with the financial services industry ensures it will continue to be used for years to come, especially given the sector is famed for its use of – and reluctance to update – legacy systems. The stability Java offers has not only made it popular with banks, but has also made it the language of choice for many consumer and enterprise applications.

Java enterprises
Shortly after Oracle took the helm in 2010, one of the first things it did was release a new version of Java (JDK 7), the first new incarnation in more than five years. This year, Oracle announced it would release Java 9 in September 2016, which is meant to break up the language’s many functions into a number of bite-sized components, making it quicker and more accessible for users. Such a level of investment over the last five years has helped revive the language and will ensure the many enterprises that rely on it continue to do so.
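Concretely, the “bite-sized components” Oracle has described are the modules proposed under Project Jigsaw. As a rough sketch – the module and package names below are invented purely for illustration – a Java 9 module declaration would look something like this:

```java
// module-info.java – declares what the module needs and what it exposes
module com.example.trading {
    requires java.sql;                // platform modules are pulled in explicitly
    exports com.example.trading.api;  // only this package is visible to other modules
}
```

The idea is that applications no longer drag in the entire runtime: they declare exactly which pieces they depend on, and expose exactly which pieces others may use.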

Apache’s Hadoop, an open-source software framework, is written almost entirely in Java and is used for the distributed storage and processing of massive data sets. When asked by Datamation Magazine why the developers chose to use Java as its platform, Hadoop’s creator Doug Cutting said: “Java offers a good compromise between developer productivity and runtime performance. Developers benefit from a simple, powerful, type-safe language with a wide range of high quality libraries. Performance is generally good enough. When it falls short, native code has been used to keep overall performance in line with [the] C and C++ [programming languages].

“Some hate Java and complain that another language would have been a better choice, but every language has its detractors. The fact that Hadoop has been widely adopted shows that Java was a reasonable choice.”
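Cutting’s point about a “simple, powerful, type-safe language” is easy to illustrate. The sketch below is not Hadoop’s actual API – it is a single-process toy with the same map/group/reduce shape that Hadoop distributes across a cluster:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Toy word count: split each line into words ("map"), then
    // group identical words and tally them ("reduce").
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(Arrays.asList("to be or", "not to be"));
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("or")); // 1
    }
}
```

The compiler guarantees the pipeline produces a `Map<String, Long>` – the sort of type safety Cutting credits for developer productivity on a codebase Hadoop’s size.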

Even companies such as Netflix and the mobile payment processor Square use Java for the bulk of their coding; Oracle’s language provides high levels of performance, as well as the ability to scale up services with relative ease.

“We have thousands of Java processes running all the time, yet as we grow we don’t have huge infrastructure challenges”, explained Netflix’s delivery engineering manager, Andrew Glover, in an interview with European development consultancy Silvae Technologies. “We also have a lot of open source tools that are Java-based, which makes it easy to monitor, upgrade and scale our services.”

Web weary
One area where Java is not so strong, however, is the front end of web development. According to Thomas, Java’s memory footprint is just a “little too heavy” for many developers, while the likes of Python and Ruby simply beat Java in terms of speed. After all, speed is a massive part of the web browsing experience: users want their browsing to be quick; they want it to be snappy; they want data now.

“I think that Java is a bit too clunky, which is why it is used less by web developers”, said Thomas. “So Java is pretty much dead in web terms, but only up to a point – that point being the amount of processing the site has to do.”

Big web companies such as Amazon, Google, eBay and Kayak all use Java for their backend processing (e-commerce, users and other things that require a lot of communication with their databases). They use it because it’s tried and tested, and is scalable (any backend could deal with one user, but Java can deal with 200m or more quite easily).
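That backend shape – many independent requests handled concurrently – can be sketched with the standard library’s thread pools. The numbers here are arbitrary stand-ins; a real service would tune its pool size to the workload:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackendSketch {
    // Fan many independent "requests" out over a small thread pool
    // and gather the results as they complete.
    static long handleAll(int requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            final int id = i;
            results.add(pool.submit(() -> id * 2)); // stand-in for real work
        }
        long total = 0;
        for (Future<Integer> f : results) {
            total += f.get(); // blocks until that task has finished
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleAll(1000)); // 2 * (0 + 1 + ... + 999) = 999000
    }
}
```

The same code serves one request or a million; only the pool size and hardware change – which is what “scalable” means in practice for these backends.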

The big boys still use Java for its compatibility, reliability and scalability

Smaller sites, on the other hand, do not tend to need to do so much processing and therefore can use newer, faster languages such as Python and Ruby. An example of such a site is the “front page of the internet”, Reddit. The site may boast over 36m users worldwide, but it’s mainly sending a lot of text back and forth and storing it, which doesn’t require the processing power offered by languages such as Java. As a result, the site benefits from using Python – a language that is much faster, more readable and easier to write in than Java, according to the site’s co-founder and former CEO, Steve Huffman.

Java evolved
So Java is perhaps dying in one realm, but very slowly. Websites that have light to middling numbers of users and a similar amount of required processing tend to veer away from Oracle’s language. But the big boys still very much use it for its compatibility, reliability and scalability.

Not only that, but, according to the TIOBE Programming Community Index, which ranks programming languages by popularity based on the volume of search engine results each generates, Java 8 – the most recent version – is a massive hit. So much so that the index rates Java as the world’s most popular programming language, with an almost 4.5 percent lead over the rest of the competition. The cause of this massive rise in the language’s popularity, according to TIOBE’s report, is the added functionality recent updates have provided.

Prior to Oracle’s acquisition of Sun Microsystems, a number of the language’s key developers jumped ship, with many fearing the end was nigh. “But the doomsayers appeared to be wrong”, said the report. “The release of Java 8 in 2014 is a big leap forward. It is now possible to write Java programs in a functional and concise way. Surprisingly or maybe not, Java is consuming a large part of the market share that [rival language] Objective-C is losing.”
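The “functional and concise way” the report refers to comes from Java 8’s lambdas and streams. A minimal before-and-after, using an invented example, shows the difference:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Style {
    // Pre-Java 8: an explicit loop with mutable state.
    static List<String> shoutLoop(List<String> words) {
        List<String> out = new ArrayList<>();
        for (String w : words) {
            if (w.length() > 3) {
                out.add(w.toUpperCase());
            }
        }
        return out;
    }

    // Java 8: the same logic as a declarative stream pipeline.
    static List<String> shoutStream(List<String> words) {
        return words.stream()
                .filter(w -> w.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("ant", "bison", "cat", "dingo");
        System.out.println(shoutStream(words)); // [BISON, DINGO]
    }
}
```

Both methods return the same result; the stream version simply states *what* to compute rather than *how* to loop – the concision TIOBE’s report credits for Java 8’s popularity.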

Rather than being close to death, Java is very much alive, and as long as Oracle continues to focus on the language’s strengths rather than trying to be all things to all people, it is likely to be around for many more years to come.