Digital Marketing 2025: Disruptive technologies that will commodify digital advertising

 


I have named this section “The Innovator’s Dilemma” after the highly influential book by Harvard professor Clayton M. Christensen, The Innovator’s Dilemma (Christensen, 2013). The book investigates the awkward situation of well-managed companies that fail even though their management is sound and does exactly what is expected of it. Most business books approach this subject by reviewing the personal qualities of a company’s leaders and its culture; Christensen, however, views the problem from a completely different angle. His studies found that these companies do not fail because they are poorly managed. Rather, the pressure of maintaining and improving profit margins leads them down a path of listening to and giving people more of what they want, improving their “purple cow” products in line with customer feedback, and focusing on total customer satisfaction. As such a company grows, the pressure to deliver revenues and profit forces its management to move away from the culture of innovation that defined it when it was smaller toward a more practical, revenue-oriented culture able to keep delivering the margins its shareholders expect. After all, as you grow, your costs grow, and a five percent increase on profits of a million dollars is easier to achieve than a five percent increase on 100 million. It is beyond the scope of this book to discuss The Innovator’s Dilemma in detail, although I strongly recommend that any entrepreneur or business owner read it—you and your business will greatly benefit from its excellent insights. However, Wall Street’s concern over Twitter’s lack of profit comes to mind when considering the actions the company needs to take to please its shareholders.
Similarly, at the time of writing, there has been a lot of talk about Facebook’s share value, due to the perception that its social ads struggle to deliver commercial goals such as sales or leads. The truth is that investors will not forever back a company that does not turn a profit, and the pressure is on companies like Google and Facebook to meet revenue targets. And how do they approach this challenge? They approach it by reducing the reach of organic posts in your feeds, forcing you to spend more money to achieve the same reach, as the cases of Facebook and LinkedIn show. Or they do so by handing the top four positions on the first page of Google results over to paid ads; or by introducing paid ads even after previously refusing to accept them, knowing the audience will not like it—as the cases of Reddit and Snapchat indicate. The point is, these companies need to make money, and for the time being the only way to make money is through more paid ads, to the detriment of organic reach. And the more capable they become at recognizing great content, the better the paid ad results and the happier their users are. In a nutshell: happy advertisers mean more money for publishing platforms; thus happy shareholders, and less and less organic reach and impact—a highly predictable future. This has already started, as the Facebook example shows. An article on HubSpot explains the decline of organic reach on Facebook, with some great statistics to back up its conclusion that Facebook is quickly moving toward a paid social model at the expense of organic reach. For example, a study from EdgeRank Checker found that the organic reach of the average Facebook page fell from 16 percent to 6.5 percent between February 2012 and March 2014. Another study, by SocialFlow, found a drop of 42 percent between January and May 2016, with the decline continuing into July 2016, when the estimate was revised from 42 percent to 52 percent.
(Bernazzani, 2017) Thus, you can see how this trend supports the uncomfortable but inevitable conclusion that organic traffic is dying a slow but certain death, with the consequent impact on your business. The article ends with a brilliant quote by James Del, head of Gawker’s content studio at the time:

Facebook may be pulling off one of the most lucrative grifts of all time; first, they convinced brands they needed to purchase all their Fans and Likes—even though everyone knows you can’t buy love; then, Facebook continues to charge those same brands money to speak to the Fans they just bought. (Bernazzani, 2017)

And Facebook’s ads strategy appears to have paid off if we consider that its advertising revenue was forecast to grow from $5.3 billion in 2014 to over $10 billion in 2017. (Marr, 2016) Oh, and if you are looking for any further indication of where this is going, consider that Google made its keyword tool available to everyone until 2016, when it decided to restrict access to active advertisers only. After all, why would Google help webmasters achieve better organic rankings? I like the simple conclusion of Paul P, a contributor to an advertising-themed forum, who sarcastically observes that “Keyword Planner used to be a free tool, it no longer is as of September 2016. Google are short of money so they want yours…” (P. Paul, 2016)

The Future of Search Engine Optimization

2.1 Introduction


Those early days when Google took the Internet by storm seem far away now. From the very beginning, marketers vied for the commercial advantage of “owning” the first position in Google search results. Indeed, entrepreneurs, business owners, brand owners, and marketers all jumped at the opportunity of leveraging the power of this new and exciting publicity channel. The golden era of SEO (search engine optimization) had begun. As always, entrepreneurs and companies had plenty of help at hand from a myriad of SEO geeks who single-handedly ranked websites on the first page of Google results, making and breaking businesses almost overnight. And how would many of these great SEOs go about it? Well, the most common SEO practices included building thousands of links from unreliable sources such as link farms, press releases, and directories; stuffing meta titles, meta descriptions, and meta keywords; stuffing URL descriptions; and obsessively stuffing keywords into webpages. It did work for a time, mainly due to three factors. First, disciplines such as machine learning and AI were still at an emerging stage; despite having been around for a while, AI had gone through its ups and downs as a field. An initial exuberance over its potential faded between 1960 and 1973, when it became apparent that the early hype was not being matched by significant advances, and AI researchers consequently struggled to get funding between 1974 and 1980. However, AI was put back on the map when Japan launched its Fifth Generation Computer System project. The interest continued, and since 1990 funds have increasingly been invested in addressing various AI challenges. (Chace, 2015) Second, the capability of computers to store and process data was a fraction of what it is today. Third, the amount of data available in digital form was very low compared with the amount available now.
In brief, Google’s access to data was limited, and the processing power of computers was low. For these reasons, to determine the relevancy of a website Google unwillingly had to rely on ranking signals that were mainly under the control of webmasters. Needless to say, SEOs employed various questionable, albeit creative, practices, and the websites Google ranked high were often not the ones best suited to the requirements of the user, which impacted negatively on user experience. The advantage of a high ranking meant big money for entrepreneurs and companies, who realized that in order to beat the competition they did not have to produce the best product; they just had to employ the best SEO services. Anyone who was exposed to SEO in its early days remembers the aura of wizardry surrounding the profession, due to the control SEOs had over the ranking factors. Indeed, the myth was born that good SEOs could wave a magic wand and magically rank your website on the first page in Google. One digital marketer known to this author recalls becoming a star overnight within his company after ranking the company website on the first page in Google. All he had to do was change several meta titles, meta descriptions, and headings in a not-very-competitive industry. Yes, times were good for an SEO profession that was flourishing, and with it many mediocre businesses flourished as well. Enter Google Ads, served in the right-hand column of the results page, with organic search results placed in the middle column. The privileged position in the center of the page ensured that the benefits of ranking organically in Google were largely unaffected by the introduction of Google ads. For example, a 2006 heat map study found that search results low down the page received very little attention compared with results higher up the page.
(Enge, Spencer, Stricchiola & Fishkin, 2013) Furthermore, the right-hand paid ads received even less attention. This finding, of course, meant lost revenue potential for Google, which was already starting to test various elements of its ads with a view to maximizing revenues. Fast forward to today and the power balance has changed; SEOs have far less control over the factors impacting the ranking position of websites within Google. In fact, most ranking signals considered essential in the past now play very little or no direct role in ranking websites. For example, backlinks are still important, but long gone are the days when you submitted your mediocre website to hundreds of directories and link farms and got well ranked within Google. In fact, this practice would now do more harm than good to a website. Or let’s consider the advice offered by SEOs for optimizing images on a page. In The Art of SEO the authors provide the following advice: “A descriptive caption underneath the image is helpful…Make sure the image filename or image src string contains your primary keyword…Always use the image alt attribute”. (Enge, Spencer, Stricchiola & Fishkin, 2013, p. 415) And all SEO tools point to the lack of alt tags as a technical issue. Some of them, like Yoast’s plugin, even go as far as reporting whether the images on a page include your keyword within their alt text descriptions. The idea behind this is that images will further improve usability and enhance a website by providing Google with information about the image and, implicitly, about the theme of the page. And yes, Google cannot read images but relies on webmasters to inform its algorithms about the content of images via alt text, text around the images, captions, and so forth. Thus alt tags are yet another signal that can be easily manipulated.
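To see just how mechanical this signal is, here is a minimal sketch (my own toy example, not the code of any real SEO plugin) of the kind of alt-text audit tools like Yoast perform, using only Python’s standard library:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects the src of every <img> tag lacking a descriptive alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # alt attribute absent or empty
                self.missing_alt.append(attr_map.get("src", "(no src)"))

page = '<img src="cat.jpg" alt="a cat on a sofa"><img src="logo.png">'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing_alt)  # -> ['logo.png']
```

A real plugin goes a step further and checks whether the alt text contains the page’s focus keyword, which is exactly why the signal is so easy to game.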
As an example, look no further than the images you purchase via services like Shutterstock, where an image acquired by 1,000 webmasters is optimized in 1,000 different ways. But what can Google do? After all, Google can’t read images, right? Or can it? Actually, as early as 2012 Google’s algorithms had taught themselves to recognize cats within images, without humans ever teaching the algorithms about cats! (Chace, 2015) Other examples include Google correctly classifying a picture of a boy riding a motorbike as “boy riding a motorbike on a dirt road”, and an image of two pizzas on a stove as “two pizzas on a stove”. And both Google and Facebook are now able to glance at an image and correctly identify the name of the person in the image. (Kelly, 2016) In fact, Facebook has developed a system formed of nine levels of artificial neurons that can determine with an accuracy of 97.25% whether two images show the same person, only slightly lower than the 97.53% accuracy achieved by humans. (Ford, 2016) Finally, as early as 2011 a deep learning neural network designed at the University of Lugano performed better than humans when it correctly identified 99% of the images in a database of traffic signs. (Ford, 2016) Looking for more hints? Improvements in face recognition technology have given the US Department of State the confidence to implement a facial recognition system for visa processing, and many advanced surveillance systems already employ machine learning algorithms and data mining technologies to analyze large quantities of voice, video, or text. (Bostrom, 2016) My conclusion is that image optimization will soon be a thing of the past, just like many other traditional ranking signals that have been overtaken by advancements in AI technology. But how about more complex digital assets, like audio recordings? Surely practices such as optimizing tags will be around in that area for a while? Actually, no.
Recordings of conversations can already be analyzed by topic, voice tonality, and fluency versus silence. (Marr, 2015) Given the improvements in speech-to-text technology, recordings can also be converted to text and sentiment analysis performed on the result. You only need to think of the great advances achieved by Google Translate’s algorithms: simply point a device toward the speaker of another language and have that language translated to your own in real time. Sure, the system is not perfect; however, it is only a matter of time until it is. For example, Google Translate has already achieved almost perfect accuracy between English and Portuguese. Or maybe video optimization will have a longer life? This will not be the case either. Technology is already available to recognize faces, behavior, situations, and even words within videos. (Marr, 2015) In the past Google applied for a patent on a face recognition process, while Facebook confirmed that its face recognition algorithms now recognize faces almost as accurately as humans do. (Marr, 2015) And we have already seen that Google’s algorithms can recognize cats in images with no optimization of alt tags, image names, or descriptions. Granted, the technology is still underdeveloped, and yes, you can still manipulate rankings by mentioning your keyword at the beginning of a video and a couple more times during it. However, make no mistake: video optimization is not here to stay. Let us continue our examples and consider another important ranking factor, the almighty keyword. Keywords started as an essential ranking factor in Google. Every experienced SEO remembers the good old days when repeating the same keyword on a page would significantly improve your chances of getting a first-page ranking for a website in a Google search result. And keyword stuffing became common practice for many creative SEOs.
Fast forward to the present: within the SEO community the debate is moving toward questioning whether keywords still have an impact on the ranking of a website. For the time being keywords still have a part to play. However, the days of keyword stuffing, or of optimizing a page for one keyword alone, are long gone. Nowadays the conversation focuses on optimizing the page for a theme rather than for keywords per se. Indeed, when performing keyword research SEOs now need to consider variations of the targeted term, semantics, user intent, industry type, and so forth. And a quick SEMrush analysis of one of your webpages will reveal that your page is being indexed for a large variety of keywords, many of which may not even be mentioned on the page. The popular explanation is that Google is getting smarter and places you within an “ecosystem” relevant to the theme of your page. For example, Google can determine that an SEO consultant is also an SEO expert, SEO specialist, or SEO professional even when these keywords are not mentioned on the page in question. We will stick with this explanation for the time being. In brief, we can see how the impact of keywords per se as a ranking signal has evolved over time, as has their privileged status within the SEO ranking factors pyramid. If you are seeking further evidence, look no further than the spinning technique (rewriting existing content through synonym substitution) that used to be very popular with many grey hat and black hat marketers. Spinning is still used in some industries, but its use has waned and will slowly disappear over time. This is mainly due to improvements in Google’s RankBrain, which can now perform semantic analysis of keywords, keyword associations, synonyms, and the whole “ecosystem” around the keyword. In the not-too-distant future RankBrain will succeed where tools like Copyscape fail: identifying duplicated content even when the words are 100% different.
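The shift from exact keywords to themes can be made concrete with a toy sketch (purely illustrative; it bears no resemblance to how RankBrain actually works) in which a crude, hand-made synonym map lets “SEO consultant” and “SEO expert” match as one theme even though the strings differ:

```python
# Hand-made synonym map for two-word phrases only; a real system would use
# learned semantic representations, not a lookup table like this.
THEME_SYNONYMS = {
    "seo": {"consultant", "expert", "specialist", "professional"},
}

def same_theme(query: str, page_term: str) -> bool:
    """True if both phrases share a head word and their roles are known synonyms."""
    q_head, q_role = query.lower().split()
    p_head, p_role = page_term.lower().split()
    return q_head == p_head and {q_role, p_role} <= THEME_SYNONYMS.get(q_head, set())

print("SEO consultant" == "SEO expert")            # exact matching: False
print(same_theme("SEO consultant", "SEO expert"))  # theme matching: True
```

Exact string comparison fails where the theme lookup succeeds, which is the essence of why pages now rank for keywords they never mention.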
Similarly, traditional practices such as optimizing meta titles, meta descriptions, and H1s nowadays have very little direct impact, if any, on a website’s ranking. Of course, these factors are still linked indirectly to ranking position because they improve user experience and make it easier for users to find their way around the page. The argument is that more productive time on a site equates to a better user experience, sending positive ranking signals to Google. However, the impact is limited and, as discussed, Google now places little value on most traditional ranking factors.

But how did it happen, how did SEOs lose influence with Google? More importantly, who will win the battle of ranking your website: the creative, technical SEO professional, or Google? And equally important, is SEO dead?

To answer these questions, let us dig a bit deeper into the three main factors that have driven the advancement of Google’s machine learning algorithms: big data, computational improvements, and advancements in AI.

2.2 Big Data

We started this chapter by recalling the low bargaining power Google initially held over SEOs. Indeed, Google relied on webmasters themselves to provide as much information as possible to help determine the relevancy of their websites. Besides the underdeveloped state of the machine learning arm of AI, the matter was made even worse by the fact that at the time most data had yet to be digitized. Indeed, only 25% of the world’s data was stored in digital form. (Ross, 2017) Hence, Google’s ability to learn from that relatively small set of digital data was limited, on top of its reliance on unreliable ranking signals easily manipulated by webmasters. But the new digital era had begun, an era marked by an abundance of old data appearing in digital format, massive amounts of data being produced daily, and computing power reaching levels never before thought possible. From 25% in 2000, the proportion of data in digital format grew to 94% by 2007. (Ross, 2017) Consider a 2013 IDC Digital Universe study referred to by Marr, which stated that of the 22% of information “ready for analysis” in the entire digital universe, only 5% was actually being analyzed; IDC predicted that the “ready for analysis” proportion will increase to over 35% by 2020, with 10% of the data actively used for analysis. (Marr, 2015) In a nutshell, big data is going to become even bigger, and by 2020 an estimated 1.7 megabytes of data will be created every second for every inhabitant of our planet. (Marr, 2016) Google is well placed to take advantage of our willingness to share more and more data, if you consider that its current web index alone is believed to exceed 100 petabytes of data, comprising details of an estimated 35 trillion webpages. (Marr, 2015) Let us conclude by saying that big data is a “catchall phrase used to describe how these large amounts of data can now be used to understand, analyze, and forecast trends in real time.” (Ross, 2017, p. 154)
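As a back-of-the-envelope check on the scale those figures imply (the world population used below is my own round assumption, not a number from the sources cited):

```python
# Rough arithmetic behind "1.7 MB created every second for every inhabitant".
MB_PER_SECOND_PER_PERSON = 1.7
POPULATION = 7.5e9            # assumed world population around 2020
SECONDS_PER_DAY = 86_400
MB_PER_PETABYTE = 1e9         # decimal units: 1 PB = 10^9 MB

daily_petabytes = (MB_PER_SECOND_PER_PERSON * POPULATION
                   * SECONDS_PER_DAY / MB_PER_PETABYTE)
print(f"{daily_petabytes:,.0f} petabytes per day")  # roughly 1.1 million PB
```

Whatever the exact population figure, the result is a zettabyte-scale daily stream, which goes a long way toward explaining why these platforms compete so hard for our data.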

But where is Google getting all its data from?

Let me start by pointing out that it is not my intention to analyze every single source used by Google to build someone’s personal digital file. And even if I wanted to do that, realistically it would be impossible for anyone but Google to identify all such sources. My intention is simply to draw attention to the increasingly large amount, and the variety, of data fed into Google’s RankBrain algorithm. This will help us gain a better understanding of the progress Google has made over the years. Most importantly, it will provide us with an understanding of Google’s strategic direction and its direct impact on the profession of digital marketing. Time for a first hint: if you take one point from this book, it should be that Google is not in the business of search at all. Nor is Google in the business of social media and blogging (Google+, Blogger, Feedly, Hangouts, and so forth), email (Gmail), TV (Chromecast), maps (Google Maps), cellphones (Pixel), books (Google Books), images (Google Images, Google Photos), news (Google News), translations (Google Translate), videos (YouTube), contacts and calendars (Google Contacts, Google Calendar), documents (Google Docs), an app store (Google Play), or any other similar applications of its technology. And Google is far from disinterested when providing developers with API access to most of its applications. We use Google’s tools and applications day in, day out because they do make our lives better. The one question we do not ask is: why is Google investing in these applications just to offer them for free? The same question is pertinent for many other companies, be it Facebook, LinkedIn, Microsoft, or any other. As users and marketers we have got the whole thing wrong. We think of Gmail as a competitive response to Microsoft’s Outlook, of Google+ as a response to Facebook, of Chromecast as a response to Samsung’s smart TVs, and of Contacts, Gmail, and Google Calendar as productivity tools.
Similarly, we think of Google Maps as a tool helping us navigate around town, while news feeds via Google News or Feedly allow us to personalize our stream. APIs are a godsend, and thousands of digital marketing tools are being built by both white hat and black hat marketers leveraging Google’s APIs. And of course Google’s search engine needs no introduction; if you are anything like me, you cannot survive without it. For Google, though, its search engine, APIs, and all the other applications we have been discussing are simply a means of achieving its real goal. Kevin Kelly got this right when he pointed out that in assessing the relationship between search and AI we seem to have got it backwards. We believe that AI is being used to improve our search results when in fact Google is using its search engine to train its AI (Kelly, 2016). Yes, we are talking about RankBrain, which has already found its way into the top three ranking factors in Google. This should come as no surprise, as Larry Page, one of the founders of Google, stated as early as May 2002: “Google will fulfill its mission only when its search engine is AI-complete. You guys know what that means? That’s Artificial Intelligence” (Chace, 2015, p. 18).

To emphasize: Google is not in the business of search at all. Search is simply a means rather than an end; in this case the end is AI capability. Similarly, Google has no intention of becoming a cellphone, productivity, book, or TV company. Data and only data is at the heart of every initiative taken by companies like Google and Facebook. With this in mind, I hope I have convinced you that the vast amount of information collected via free services and applications represents the Holy Grail for Google, Facebook, and the like. And, if we go by the extensive statistics provided by Marr in his wonderful book Big Data, the ROI from these initiatives is impressive by any standard. Indeed, consider that as long ago as 2013, more than a billion tweets were being sent every 48 hours, one million accounts were being added to Twitter every day, 293,000 updates were being posted on Facebook every minute, and 172,800 new members were joining LinkedIn every day. Furthermore, the average Facebook user created 90 pieces of content, including links, news, photos, notes, and videos, every day. Every minute an estimated 571 websites were being created and Tumblr owners published approximately 27,778 new blog posts, while three million new blogs were being created every month. (Marr, 2015) And, of course, 350 million photos were uploaded to Facebook each day, three and a half million photos were uploaded to Flickr every day, 100 hours of video were uploaded to YouTube every minute, over 45 million pictures were uploaded to Instagram every day, and since June 2013 Instagram users have shared more than 16 billion pictures. The cherry on the cake is that 72% of adults online use social networking websites with little or no privacy control over their activity. For example, 25% of Facebook users never bother with any kind of privacy control. (Marr, 2015) Furthermore, six of the ten most popular websites rely on user-generated content for their popularity (Brynjolfsson & McAfee, 2016).

By now, I am hoping you have already started to reflect on the extent of your contribution to the feeding and training of algorithms such as Google’s RankBrain. With all this in mind, let us now turn to some of the applications Google makes available to us, and our role in training RankBrain. Again, my intention is to invite self-reflection, as this will help us to better understand the future.

Google+, Blogger, and Hangouts

Google tracks data related to webpage visits and mainly infers personal characteristics and behavior from browsing habits. Its main competitor, Facebook, has access to a more varied palette of personal data, such as where we live, work, and play, how many friends we have, what we do in our spare time, and so forth. (Marr, 2016) Of course, Google also wanted a piece of the pie; hence the launch of services such as Google+, Blogger, and Hangouts.

Chromecast

Could Google compete with established players such as JVC or Samsung in the TV arena? Probably not, nor did Google have any intention of moving in this direction. However, Google somehow had to get its hands on all that juicy data: what you watch, when you watch, for how long you watch, how often you stop watching, and so forth. What were Google’s options? Start building TVs to compete with Samsung? Or perhaps another business model to compete with Netflix? Either option would have been a distraction from its core business, in a competitive industry where Google would struggle against Samsung, Panasonic, Netflix, NOW TV, Amazon Movies, and other major players. The solution was Chromecast, described in Next Tech magazine’s “Google Tips & Tricks” as “an incredibly cheap device which can completely change the way you watch online content…. Apps like YouTube, iPlayer, or Play Movies are supported, and the process is as simple as tapping one icon” (Next Tech, 2017). Many other partners are supported too, including NOW TV, ITV, Channel 4, and, yes, even Netflix. The same magazine goes further, explaining that Chromecast is Google’s answer to Apple TV, a misconception we have already discussed. Indeed, Google is competing with Apple, but the prize is the rich data insight rather than the TV business per se. This move provides Google with further insight into your behavior: which icons you click on, what shows and movies you stream, what channels you prefer, how long you watch for, demographic information, and probably much more. To leverage the trend of mobile overtaking desktop use, Chromecast also integrates with your Android or Apple phone or tablet. In a similar fashion, Google’s Wi-Fi home system is not competing with BT, Virgin, or other Internet providers for the delivery of Internet services.
In fact, Google Wi-Fi leverages your modem and Internet provider services to amplify the Internet connection throughout your house, and once connected, more data is being collected about you, your behavior, habits, and preferences. Yep, Google tries hard to make it as easy as possible for you to feed its RankBrain algorithm.

Gmail

Gmail has become a part of daily life for many of us, and we are often grateful to Google for providing us with free access to this tool. However, did you know that, with the free version of Gmail, Google reads and analyzes all the emails you send and receive from your Gmail account? (Marr, 2015) You probably didn’t. Hence, next time you send an email trying to buy some links or guest blog posts for your website, you should really think twice about the wording.
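To make that warning tangible, here is a toy sketch (illustrative only; Google’s actual analysis is far more sophisticated and not public) of the sort of phrase scan that could flag link-buying outreach in the body of an email:

```python
# Hypothetical phrase list of my own; any real system would use trained
# classifiers rather than literal substring matching.
RISKY_PHRASES = ("buy links", "paid links", "sponsored guest post")

def link_scheme_flags(email_body: str) -> list:
    """Return the risky phrases found in the email text."""
    body = email_body.lower()
    return [phrase for phrase in RISKY_PHRASES if phrase in body]

print(link_scheme_flags("Hi! We would love to buy links on your homepage."))
# -> ['buy links']
print(link_scheme_flags("Are we still on for lunch on Friday?"))
# -> []
```

Even this crude scan catches the clumsy outreach email; a trained model reading every message you send would catch far subtler wording.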

Waze

Waze was acquired by Google in 2013 for over a billion dollars and, at its core, it offers traffic information and recommendations to users based on information collected from other Waze users. In brief, you feed data such as your location or how fast you are moving into Waze’s algorithms. Again, a great opportunity for Google to collect information on your daily behavior.

Google Pixel phone

Mobile is big: of the world population of over seven billion, six billion have cellphones. (Ross, 2017) In fact, mobile is so big that Google has started to move its search index to mobile, and nowadays a website that is not mobile-friendly will find it nearly impossible to rank on the first page in Google. Is the company trying to enter the mobile business? By now, you know that Google’s decision to move into the mobile market was driven by the massive volume of data generated via mobile. A similar approach was taken by Microsoft, which tried to break into the mobile business by acquiring Nokia. After failing to win a significant share of the mobile market, Microsoft developed Windows 10 as a common platform serving desktop, tablet, and phone. (Marr, 2016) Of course, Windows 10 provides Microsoft with data about users, their online activities, and their behavior as users. Back to Google: we can now reflect on some of the data insights Pixel might be collecting about us. Eye-tracking data, what we click on, when we click, our browsing history, the applications we download, the games we play, and so on. A more robust example is that of the mobile company Sprint, which is using data from over 55 million mobile devices to better customize the ads served to users’ phones. In addition to data about the actual device, the company uses data about how the device is used: the text messages you send, phone calls, app usage, emails, and so forth. This option is available on an opt-in-only basis; however, Jason Delker, Sprint’s chief of technology, acknowledges that most other mobile companies include everyone automatically. (Marr, 2016)

Google Wallet

From the same Next Tech magazine “Google Tips & Tricks” article: “Set up Google Wallet in order to pay for purchases…It has never been easier to shop online, and Google is making strides to ensure that it gets easier by the minute” (Next Tech, 2017). Yes indeed, Google is making it as easy as possible for itself to learn about your purchasing behavior. The size of the prize? Google can now collect information on your earnings, how you spend your money, where you spend it, variations in your income, and more.

Google Books

A wealth of data is being generated about what we read, how long we read for, whether we skip pages, what pages we annotate, which ones we highlight, and any other details you can think of. (Marr, 2015)

Application programming interface (API)

Google, Amazon, eBay, and Facebook openly encourage all sorts of communities to interact with their platforms. Developers, marketers, and vendors are examples of people abandoning control over the use of their data in return for access to the capabilities developed by those companies. Consider, for example, that over half a million apps have been built leveraging access to Facebook’s APIs. (Marr, 2016) Regardless of the purpose of any of these integrations, data companies such as Google learn a great deal about the way you use their platforms. One example is something we will talk about later: the social automation industry, where countless tools link to social media APIs, enabling users to automatically post content, grow a follower base using methods like follow/unfollow, automatically like posts, invite friends, join groups, and so on. I will introduce you to many of these tools in the practical part of the book, and I use quite a few of them myself. What developers and users may not consciously be aware of, though, is that the more those tools are used, the more they train social media algorithms to make those same tools redundant.

Nest

Google acquired Nest in 2014 in what was signaled by the press as an attempt at creating an operating system for the home; one can draw a parallel with Windows 10, but for the home. Whatever the reason for the acquisition, one thing is clear: Google now has access to a wealth of information about your temperature preferences throughout the day, when you arrive at or leave home, and your overall habits. And the Google tradition of making it as easy as possible for you to provide data continues: Nest easily integrates with third-party Internet of Things (IoT) devices such as washing machines, smart wall plugs, fitness trackers, and smart watches. (Marr, 2016)

Google’s other stuff

  • Google Calendar: Google has access to data on your habits, schedule, acquaintances, and much more.
  • Google Maps: At the very minimum, Google will know where you are, when you are there, and how often you are in that place.
  • YouTube: Google can access a wealth of data on what you watch, how often you watch, how long you watch for, and so forth.
  • Google Docs: The easiest type of digital data to analyze is text.
  • Google Search: Need I say anything?

At the time of this writing, Google's products page lists 104 items, including Google Search, Pixel phones, Daydream VR, the Google+ social network, the Google Duo video-calling app, the Google Docs suite, and Google Scholar (Google Products, 2017). The truth is, we could go on and on almost without end about the ways Google collects data and feeds it to its algorithms.

But what types of data are you feeding into Google’s machine learning algorithms? Well, what better source for that information than Google’s own Privacy policy?

Google’s privacy policy

“Here are the three main types of data that we collect:

Things that you do

When you use our services—for example, carry out a search on Google, get directions on Google Maps, or watch a video on YouTube—we collect data to make these services work for you. This can include:

  • Things that you search for
  • Websites that you visit
  • Videos that you watch
  • Ads that you click on or tap
  • Your location
  • Device information
  • IP address and cookie data

Things that you create

If you are signed in with your Google Account, we store and protect what you create using our services. This can include:

  • Emails that you send and receive on Gmail
  • Contacts that you add
  • Calendar events
  • Photos and videos that you upload
  • Docs, Sheets, and Slides on Drive

Things that make you “you”

When you sign up for a Google account, we keep the basic information that you give us. This can include your:

  • Name
  • Email address and password
  • Date of birth
  • Gender
  • Telephone number
  • Country

(Google Data Privacy Policy, 2017)

Note the expressions "we collect data to make these services work for you" and "Things that make you "you"", which basically translate to "the data we collect about you trains our algorithms to know as much as possible about you."

Third Party Data

Let's now turn our attention to another type of data source feeding into Google, Facebook, Twitter, and other companies leveraging big data: data purchased from third-party providers. Consider, for example, that in 2015 almost 42 million smart wearable devices were expected to be manufactured around the world, generating personal data about their wearers' fitness, sports activity, weight, body mass index, lean mass, body fat percentage, steps taken, floors climbed, distance walked or run, calorie intake, calories burned, active minutes a day, and sleep patterns. (Marr, 2015; Marr, 2016) And in 2016 it was expected that a further billion wearable devices would be produced over the following five years. (Kelly, 2016) Furthermore, 3 billion appliances, such as the Nest thermostat, are expected to find their way onto our cellphones, and 100 billion chips will be embedded into the goods on Walmart's shelves. (Kelly, 2016) Smart TVs count the number of people watching. (Marr, 2015) The Up band created by Jawbone collects 60 years' worth of sleep data every night, and that data can be sold to interested third parties. (Marr, 2015) The mobile app Good2Go is marketed as an "educational app for sexual consent", enabling couples to consent to sexual activity prior to the act itself. What couples may not be aware of is that the company formulated its privacy policy in a way that allows it to sell data such as who you had sex with and at what time. Of course, it may choose not to do so, but the point is that the policy clearly states that the company "may not be able to control how your personal information is treated, transferred or used" (Ross, 2017, p. 176). Probably the biggest third-party provider of data is a company called Acxiom, which claims to hold data on "all but a small percentage" of US households.
(Marr, 2016) Acxiom collects data from credit agencies about most US citizens: historical and current data on their domicile, how many children there are in the family, what magazines they subscribe to, public social media activity, public records such as electoral rolls and marriage and birth certificates, surveys completed, and much more. (Marr, 2015) I think we can agree that this is a lot of data, particularly when you consider that a credit rating agency such as Experian holds over 30 petabytes of information about people's credit history, age, location, and income status, and measures over 282 other credit rating attributes to help financial companies with fraud detection. (Marr, 2015) Other examples of data sources include the now defunct inBloom database, which shared confidential student records with marketers, and companies selling lists of families with illnesses such as AIDS or gonorrhea, and even lists of rape victims. (Ross, 2017)

We can now see that data companies such as Google develop your digital file by making their way into every area of your life, whether you're using your mobile device, watching TV, browsing the Internet, or heating your house. This takes us to the next point, which is that algorithms will soon be able to make sense of all this data and may even end up knowing you better than you know yourself. Harari proposes that humans are nothing but a collection of algorithms, and that an external algorithm could in fact learn to "manage" these algorithms better than we humans can. Many people may disagree, but Harari emphatically concludes that "attributing free will to humans is not an ethical judgment" (Harari, 2015, p. 283). The idea that humans consciously make most of their decisions has been proven wrong over and over again by scientists and psychologists, in thousands of experiments demonstrating that human decision-making sits mainly beyond our awareness. Genes and the environment work together behind the scenes, pulling the strings and determining every action we perform. We see ourselves as being in charge of our decisions, but we are puppets operated by forces outside our awareness. More on this is discussed in the chapter on psychology. For the time being, consider this disturbing thought:

if you are presented with a choice of two switches, simply by looking at your neural activity scientists can tell which switch you will press before you consciously decide to press it. (Harari, 2015) Yes, your inner algorithms have already decided which switch you will press. So much for your free will and conscious decision-making. In fact, researchers have implanted electrodes into the sensory and reward areas of rats' brains and built a remote control system allowing them to steer the rats' movements. Press left, and a rat turns left; press another button, and the rat climbs a ladder. Keep in mind that this all apparently happens beyond the rat's level of consciousness: the rat does not think it is being controlled but rather feels a desire to turn left, and it turns left. (Harari, 2015) And an algorithm with sufficient information to know our inner workings will most often make far better decisions than we would make ourselves. Harari suggests that if we gave Google and its competitors access to our biometric devices, DNA information, medical records, fitness information, and so forth, their algorithms would prevent many of the bad decisions we humans make. He concludes that, "unlike the narrating self that controls us today, Google will not make decisions on the basis of cooked-up stories, and will not be misled by cognitive short cuts. Google will actually remember every step we took…Google will advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date or marry". (Harari, 2015, p. 337)

If you believe this to be a sci-fi scenario, consider that a study conducted by Facebook on 86,220 volunteers found that its algorithms needed only 10 likes performed by a user to judge that user's personality better than their work colleagues could. Yes, 10 likes! In the words of marketer Seth Godin, "A vote is a statement about the voter not about the candidate" (Godin, 2012, p. 42). And it doesn't stop there. It took only 70 likes for Facebook to know volunteers better than their friends did, 150 likes to know them better than family members did, and 300 likes to predict their opinions and desires better than their spouses could. The conclusion of the research was that humans would be better off ceding important life decisions to algorithms. (Harari, 2015) Harari goes as far as proposing that we should replace the old "Listen to your feelings" dogma with a completely new one: "Listen to the algorithms! They know how you feel" (Harari, 2015, p. 392). In fact, allowing a Google RankBrain-powered assistant to take over your everyday decisions, and various important ones, would make sense; in the end, its strong focus on user experience could only translate into a better life for you, the user. The idea is not new; Kelly, for example, describes filtering as one of the top digital trends of the future. (Kelly, 2016) To understand the importance of allowing Google to filter our choices, imagine yourself searching Amazon for a management book. I have just typed "management" in the books search bar, and Amazon returned 1,326,652 books. Given that our attention spans are now as low as 8 seconds, versus 12 seconds in 2008 (Jones, 2014), and given the massive increase in choices, a lack of filtering could simply paralyze our ability to choose. Thus, without realizing it, we have already ceded power to algorithms, and we trust that all the data we've fed into them over the years has trained them to make the right choices for us.
On reflection, 95% of my Amazon book purchases are made via the recommendation system. You may not consciously realize it, but you have already ceded control to Google for many of your choices. Simply by typing a search query in Google's search box and clicking on one of its results, you allow Google to decide what information is relevant for you. As Pedro Domingos puts it, "Google's algorithms largely determine what information you find…the last mile is still yours, choosing from the options the algorithm presents you with, but 99.9% of the selection was done for them". (Domingos, 2017, p. 12) Still not convinced? Imagine Google altered its algorithms tomorrow, and all of today's first-page search results were replaced with new ones. The studies I have presented clearly indicate that you would click almost exclusively on the new first-page results. Google would have chosen the content you read and influenced your opinions, all with a simple tweak to its algorithms. So, how could this level of confidence in our smart assistants look in the future? Let's consider a couple of examples. Need to dress for an occasion? You may allow your personal assistant to scan your wardrobe and pick the outfit best matched to the profile of the person you are seeing. We are already ceding to the algorithms control over choosing the person we will spend our life with, if we consider the statistic that one-third of marriages in the US start with online dating. (Ross, 2017) Similarly, forty percent of Americans use online dating, and twenty percent of current committed relationships began online. (Totham, 2017)

2.3 Computational power improvements

We have been discussing the evolution of big data and how companies like Google use it to gain knowledge about us and our behavior. However, RankBrain would never have been possible without Google's ability to store, process, and analyze large amounts of data, such as videos, emails, online behavior, social media activity, and photos. The rise of cloud computing and the growth of computing power over time have given Google the ability to store and manipulate the huge volumes of data that will eventually enable RankBrain to know you better than you are known by your colleagues, friends, relatives, and even yourself. To understand the growth of cloud computing, consider that the number of files stored on Microsoft's Azure cloud network grew from four trillion in 2012 to ten trillion at the time of writing. (Marr, 2016) However, the power of big data resides not in the amount of data owned but in the ability to store and process that data, something President Obama acknowledged in 2012 by including in his budget a $126 million fund to develop exascale computing; Intel set itself a target of achieving this by 2018. (Chace, 2015) For a more glaring example of the increase in computing power, consider China's Tianhe-2 supercomputer, which has 32,000 CPUs in 125 cabinets (Chace, 2015), or the fact that about 1,000 computers contribute to answering every one of your Google Search queries in less than 0.2 seconds. (Marr, 2016) Another example underlining the importance of data-handling capacity comes from Acxiom, the data collection company we encountered in the section on big data. The company's growth soared following a partnership with Citibank in 1983, at which point a new issue emerged: according to Acxiom's founder, Charles Morgan, the challenge became managing growth with a lack of computing capacity.
(Marr, 2016) Finally, if you are familiar with Moore's law, you will not be surprised to hear that the rule still holds: roughly every 18 months, the computing power of microprocessors doubles. (Brynjolfsson & McAfee, 2016)
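Doubling every 18 months compounds quickly. A one-line Python sketch, assuming the 18-month doubling figure quoted above, shows the scale:

```python
def moores_law_growth(years: float, doubling_months: float = 18.0) -> float:
    """Relative increase in computing power after `years`,
    assuming one doubling every `doubling_months` months."""
    return 2 ** (years * 12.0 / doubling_months)
```

For example, `moores_law_growth(15)` gives 1,024: over a typical 15-year span, ten doublings multiply available computing power about a thousandfold, which is why the data-handling bottlenecks of the 1980s and 1990s kept dissolving.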

2.4 Advancements in AI

Google is getting smarter; that's what SEOs like to tell their clients in an attempt to maintain the magic veil surrounding their profession. Google's RankBrain is often invoked to imbue the search engine with an ability to learn in a way similar to the far more complex human brain. And Google's misleading name for its RankBrain algorithm does nothing to dispel the aura of humanness surrounding RankBrain's learning and processing capabilities. Let's try to get this right and clarify the AI learning process that actually makes Google smarter. By understanding how it learns, we will better understand both the myths and the possible future capability of RankBrain. We will also be in a far better position to judge the role of SEOs and the status of their profession in the future. There are two main paths to RankBrain achieving AI status. To describe the first, let me relate a conversation between me, one of my clients, and the client's digital marketing agency. In an attempt to bedazzle us, the head of SEO at the agency claimed that SEO is more complicated than it used to be, as Google was "getting smarter". Out of curiosity, I asked her what she meant by "getting smarter". After launching into more waffle about the effects of Google "getting smarter", she responded to my question by associating Google's RankBrain with the human brain. She explained how the Google "Brain" crawls our websites, reads the content, and takes in the theme of the website, the keywords, and so forth. What this head of SEO was portraying was a scenario where Google gets smarter by emulating the workings of the human brain. As another example, we recently had a talk delivered by a well-known social media figure and seasoned head of SEO. I was amazed at how little understanding this person had of the workings of Google's RankBrain.
In addition to making no reference to RankBrain as a ranking factor at all, he went on to advise us to create as many pages as possible targeting specific keywords, and attributed the increase in the number of keywords measured via SEMrush solely to this strategy. Even though I am a very polite person, I could hardly refrain from telling him that during the past two years most of my personal websites have seen significant increases in the volume of keywords recognized within SEMrush, despite no SEO being performed and no new content being added to them at all. I could not help but conclude that this highly paid, well-seasoned SEO did not understand the AI developments Google has seen over the years. Back to our conversation: brain emulation is one path described by Bostrom (2016) for reaching human-level intelligence. So, will RankBrain be able to achieve AI via this route? In principle, Bostrom points out, the problem is not necessarily knowing how to achieve this goal per se, but having the technology required to achieve it. Consider that your brain generates over a billion billion floating point operations per second. (Chace, 2015) The computing power needed to achieve this kind of performance is well beyond realistic reach. For example, a team led by Markus Diesmann and Abigail Morrison managed to create a neural network of 1.73 billion nerve cells connected by 10.4 trillion synapses. This sounds impressive until we remember that a human brain contains around 80 billion nerve cells. Diesmann and Morrison were able to simulate only one second of real brain activity, and it took 82,944 processors and 1 PB of system memory to achieve this outcome. (Whitwam, 2013)
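The back-of-the-envelope arithmetic behind these figures is worth making explicit. The sketch below uses only the numbers quoted above; the linear extrapolation is my own simplifying assumption and, if anything, understates the difficulty, since synapse counts grow faster than neuron counts:

```python
# Figures quoted above (Whitwam, 2013; Chace, 2015)
simulated_neurons = 1.73e9    # nerve cells in the Diesmann/Morrison simulation
human_neurons = 80e9          # nerve cells in a human brain, as cited
processors_used = 82_944      # processors needed for one second of activity

# Fraction of a human brain the simulation covered: roughly 2.2%
neuron_fraction = simulated_neurons / human_neurons

# Naive linear extrapolation: processors needed to cover all 80 billion
# neurons for that same single second of activity (about 3.8 million)
processors_full_brain = processors_used / neuron_fraction
```

Even this crude estimate, nearly four million processors for one second of whole-brain activity, makes Bostrom's skepticism about the emulation path easy to appreciate.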

This attempt suggests that achieving AI by emulating the human brain is mission impossible. Instead of simulation in software, there are several technologies that could be used to try to create a copy of the brain, 3D printing being the one most often mentioned. Another method involves sending nano-robots into a brain to survey neurons and return sufficient data to create a 3D map. (Chace, 2015) However, these technologies, like many others in the field, are still underdeveloped and incapable of coping with the complexity of the brain; none is considered a realistic means of emulating it. In addition to the matter of mapping all the elements of the brain, there is the fact that all those elements continuously interact with each other with no specific pattern. This means that any scanning technology would also need to capture and replicate these interactions in real time. The truth is that most scientists in the field do not believe that human-level AI will first be achieved through brain emulation. Bostrom concludes: "the emulation path will not succeed in the near future (within the next fifteen years, say) because we know that several challenging precursor technologies have not yet been developed." (Bostrom, 2016, p. 43) Hence, thinking of Google's RankBrain as a brain per se will not tell us much about the path Google will take to achieve its AI goals. We are left with the second path: machine learning. Machine learning is the process of creating algorithms that are able to gain insights from data fed into them without being programmed to do so in detail. Recognizing lions in images, even when the image has not been optimized, is one example. As opposed to the human brain, which can quickly learn what a lion looks like, an algorithm needs to be fed millions of images of lions to build a category of what a lion is.
Once it has acquired a concept of the characteristics of a lion, the algorithm starts a continuous process of trial and error until it reaches a conclusion about what is and what is not a lion: in other words, what a lion looks like. Take the example of Google feeding its algorithms millions of images of what it believes to be lions. Google generated these images from various sources, including the alt text and other optimization information provided by SEOs. Suppose you have optimized a Shutterstock image of a lion, while another webmaster has purchased the same image but done nothing to optimize it. RankBrain now monitors responses such as click-through rates, time on site, and so forth against user queries for lion images. Should the metrics indicate that your image does have a lion in it, the algorithm will assume the same for the identical, un-optimized image. You now have an image that is categorized by RankBrain as a lion in spite of having no alt tag or other optimization performed on it. Of course, this is a gross oversimplification. In fact, image recognition is becoming more and more of a commodity, which is testimony to the advancements in this field. Amazon's Rekognition service, for example, "makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases". (Amazon Rekognition, 2018)
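The learn-from-labeled-examples idea can be sketched as a toy "nearest centroid" classifier in Python. Everything here (the two-dimensional features, the numbers) is invented for illustration and is emphatically not Google's actual method; it only shows the principle that a model trained on labeled examples can then categorize an unlabeled one:

```python
from statistics import mean

def train_centroids(labeled):
    """labeled: list of (feature_vector, label) pairs.
    Returns a dict mapping each label to the average (centroid) of its examples."""
    by_label = {}
    for vec, label in labeled:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(mean(dim) for dim in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, vec):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

Train it on a few labeled feature vectors (say, made-up "mane-ness" and "stripe-ness" scores for lion and zebra images) and it will label a brand-new, never-annotated vector, just as the un-optimized Shutterstock image gets categorized without any alt text.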

Machine learning is the path RankBrain is most likely to take to achieve AI status. Algorithms have in fact already managed to surpass humans in many tasks requiring skills that were previously thought to be restricted to humans. Consider the well-publicized example of IBM Watson, which managed to beat the champions of the TV quiz show Jeopardy! as early as 2011, and which is now being used in various industries, including as a SaaS business model for AI. IBM Watson's machine learning algorithms apply "more than 100 different techniques to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses". (Wikipedia, 2018) IBM Watson's "brain" is made up of 90 IBM Power 750 servers processing 500 gigabytes of data per second (Marr, 2016), another example of improvements in computational power directly supporting advancements in the big data and machine learning fields. In fact, despite the deceiving name, most so-called artificial intelligence entities currently employ machine learning technologies to achieve AI status. Further examples of AI outperforming humans include the CHINOOK algorithm drawing games with the checkers champion as early as 1994, the Eurisko program winning the US championship in Traveller TCS (a futuristic naval war game), Deep Blue beating the reigning world chess champion in 1997, and many similar achievements in crosswords and in games such as Scrabble, bridge, poker, and Go. (Bostrom, 2016) Just like IBM, and in spite of the deceiving RankBrain name, Google uses the machine learning path to achieve its AI goals. This is no secret: Peter Norvig, director of research at Google, confirmed in a chat with Pedro Domingos that Google uses machine learning in literally everything it does. (Domingos, 2017) Understanding this idea is essential in helping you to understand the end point and the effects of a fully developed RankBrain algorithm.
To sum up, in spite of the human aura created by the "brain" association in its name, Google's RankBrain algorithm works nothing like a brain. Throughout the book, whenever I refer to AI capability, keep in mind that I am referring to the machine learning arm of the AI field.

2.5 Time to cede control to Google

Kevin Kelly puts forward the very interesting idea of a future in which we leverage the power of AI to improve and simplify our increasingly complex and demanding lives. This idea made me think of an article I once read in which Facebook cofounder Mark Zuckerberg was asked why he wears the same type of grey shirt every day. Zuckerberg's answer:

I really want to clear my life so that I have to make as few decisions as possible about anything except how to best serve this community. I’m in this really lucky position where I get to wake up every day and help serve more than a billion people, and I feel like I’m not doing my job if I spend any of my energy on things that are silly or frivolous about my life, so that way I can dedicate all of my energy toward just building the best products and services

(Trotman, 2014)

Zuckerberg's answer provides a glimpse into the complexity and multitude of choices we will face in the not-too-distant future, although not all of these choices will be related to following our passions. Burdened with these choices, we will welcome any opportunity to simplify our lives and focus on things that are not "silly or frivolous". But how will we achieve this? Yuval Noah Harari proposes that AI advancements will lead us to cede control to intelligent assistants that will know us better than our colleagues, friends, or relatives do. (Harari, 2015) We have already spoken about the insights algorithms gain into our character based on what we "like" on Facebook. Complementing Facebook's findings, another study, performed at Cambridge University, found that your Facebook likes enable algorithms to predict personal characteristics such as sexual orientation, satisfaction with life, intelligence, emotional stability, religion, alcohol use, relationship status, age, gender, race, and political views. (Marr, 2015) You should not be surprised, then, to find out that Facebook's algorithms have improved to a degree where Facebook can now predict when you will change your relationship status from "single" to "in a relationship". (Marr, 2015) Similarly, online dating websites such as eHarmony are already taking the guesswork and personal biases out of the equation by profiling clients on around thirty attributes. (Marr, 2015) I dare say that in the near future your Google intelligent assistant will gain access to real-time information such as your heart rate, breathing rate, steps taken, and calories burned. It will hear who you talk to in a board meeting, analyze your voice and language, and determine your appropriate course of action.
If you consider this a sci-fi scenario, the well-known fashion company Ralph Lauren has already been testing shirts that include devices collecting all the data I have just mentioned. (Marr, 2016) Similarly, Squid, a smart shirt tested at Northeastern University in the USA, can determine your posture and activate parts of the shirt to bring you back to an optimal posture. (Kelly, 2016) Google also funded Project Jacquard, which included experiments with smart fabrics that both collect information and return information on a screen when you simply swipe the sleeve of your shirt with your fingers. The examples could continue; the point, however, is that your intelligent assistant will have so much data about you that it will succeed in monitoring you right down to psychological levels you have little conscious access to, well below your level of awareness. Yes, I am saying that your intelligent assistant will get to know you better than you know yourself. To recap: improvements in AI, particularly in its machine learning arm, have enabled Google to better assess the quality of your website while also reducing reliance on traditional ranking factors that are open to manipulation. Agreed, Google is not quite there yet, and at present quality backlinks, content, and RankBrain represent the main signals contributing to ranking your website. However, make no mistake: Google will gradually reduce the importance of backlinks and keywords as ranking factors, to the point where they will have little or no impact on ranking your website. This is almost a repeat of the past; it closely resembles the dismissal of factors such as meta descriptions, URLs, H1s, and page titles. In fact, the dismissal of ranking factors is an ongoing mechanism in the AI evolutionary process.
Bostrom's explanation of the general AI process offers a very vivid picture of Google's progress from reliance on human input to its launch of, and subsequent dependence on, RankBrain. Bostrom explains that in the early stages of AI, improvements may occur through a trial-and-error process, with the system acquiring data and assistance from programmers (in Google's case, the webmasters). However, at later stages the system should be able to improve and learn ways of working to the point where it can self-improve with no human input. This means that it could create new algorithms and structures boosting its cognitive performance. Furthermore, the system will be able to apply "recursive self-improvement, which means that it could improve or create a better version of itself, which then creates a better version, and so forth". (Bostrom, 2016, p. 34) Or, in the words of Pedro Domingos: "…the more data they have the better they get. Now we don't have to program computers; they program themselves." (Domingos, 2017, p. xi) In a nutshell, SEOs are naïve and guilty of wishful thinking if they expect to maintain a role in ranking your website once RankBrain develops into a fully reliable AI "brain". In fact, most SEOs seem to ignore the reality that Google's mission has always been to create a fully developed AI entity, capable of accurately assessing the quality of your website with no external input. If you have any doubts, revisit Larry Page's 2003 statement regarding the future of search: "Google will fulfill its mission only when its search engine is AI complete…That's artificial intelligence". (Chace, 2015, p. 18)

2.6 The curve of value

To further understand the future of SEO, let us now consider the curve of value provided by SEO over time. We talked about eye-tracking studies discovering that most people concentrate on the top of Google's search results page. Thus, in the past, being ranked in the first positions in Google really paid off, as suggested by several statistics from the landmark SEO book The Art of SEO. (Enge, Spencer, Stricchiola & Fishkin, 2013) For example, an eye-tracking study by AOL found that the first three search results received 100% visibility, the fourth position 85%, the fifth position 60%, the sixth and seventh positions 50%, and the eighth position only 30%. (Enge, Spencer, Stricchiola & Fishkin, 2013) More important, regardless of the levels of visibility, a Cornell University study found that 72% of searchers clicked on the first result that seemed to answer their query, and only 25.5% read all the listings on the first page before deciding which result to click on. AOL studies also found that first-page search results received 89.71% of all search traffic, while the second page received only 4.37%. (Enge, Spencer, Stricchiola & Fishkin, 2013) But what are these figures actually telling us? Being ranked in the top four positions of the first page in Google commands the greatest search visibility for your website; anything from the fifth position downwards receives a far smaller share of visibility. However, being ranked first within the search results will get you 72% of total clicks, despite the fact that the first three positions all receive 100% visibility. We can conclude that in the past, SEO as a digital marketing strategy used to pay off, given the prominent position of organic results and the low amount of attention captured by the right-hand-side paid ads.
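To see what these statistics imply when combined, here is a quick sketch. Treating the two rates as independent and multiplicative is my own simplifying assumption, not something the studies claim, but it conveys the order of magnitude:

```python
first_page_share = 0.8971  # AOL: share of all search traffic landing on page one
first_result_ctr = 0.72    # Cornell: share of searchers clicking the first result

# Under the simplifying assumption that the two rates compose
# multiplicatively, the single top result alone captures roughly
# 65% of all search clicks.
top_result_share = first_page_share * first_result_ctr
```

In other words, by these figures, nearly two-thirds of all search clicks flow through whatever occupies the very first position, which is exactly why the ownership of that position matters so much in what follows.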
Of course, Google would have jumped at the commercial opportunity of leveraging the high visibility of the first four organic positions if it could. However, the obsession with great user experience prevailed for some time, particularly given that Google's AdWords algorithms were also easily manipulated by webmasters. In short, Google was aware that serving ads in the four most visible positions would impact negatively on the quality of the user experience. Of course, anyone searching for a keyword in Google today can see that Google has now taken that step: the right-hand-side paid ads column has disappeared, and the first four organic search positions have been replaced by Google Ads. This move was made possible by advancements in the machine learning field, improvements that also reduced reliance on traditional ranking factors. Specifically, Google currently ranks paid ads based on a variety of signals, keyword quality being only one of them, albeit an important one. Nowadays, when reviewing the quality of your ads, Google assesses three main elements: your bid, your ad, and your landing page. I will discuss in more detail how this is applied in the Paid Search section of my second book. However, for the time being the important point to take in is that Google's algorithms can assess with a high degree of accuracy the relevance of your ads and websites to users' queries. In fact, Google's paid ads will often provide the same or better value to users than organic results do, by matching users' wants much more closely. Moreover, Google provides a wide range of tools to help improve click-through rates for your ads, such as adding various types of extensions to improve your ad's share of screen.
This nicely balances the need for a quality user experience with improving Google’s revenue streams; in fact, it seems to me that Google is discouraging users from clicking on organic search results altogether. This takes us a step closer to understanding the future of SEO. First, I will argue that at present SEO has very little commercial impact on your business. More importantly, I will argue that while SEO is not dead at this moment, it is certainly a dead man walking: a profession on the verge of disappearing. To fully understand my argument, let us recap what we have learned so far. We know that 72% of users click on the first, most relevant search result, which is now a paid ad rather than an organic result. Also, the first positions, which used to be profitable from an organic point of view, are now taken entirely by paid ads. We are left with position five and lower. However, as we have seen, these positions have low visibility, not to mention that the largest share of clicks goes to the first position, and the next three ranked positions most likely mop up the remaining clicks. We also know that only 25.5% of people inspect all results on the first page before clicking, reinforcing the point that the lower you rank on the page, the less value you receive for your business. Furthermore, given Google’s efforts to enhance the attention given to paid ads, it is very likely that the 25.5% share has dropped even lower by now. To conclude, SEO already provides little or no advantage for your business, even when your website ranks on the first page in Google.

Let us now consider a future where Google’s algorithms have improved to the point where RankBrain has become the only signal Google needs to determine the quality of your webpage. The first obvious point to make is that Google will assess the quality of your website with a high degree of accuracy. We discussed earlier the amount of information that Google holds about you. I have also argued that you may voluntarily cede control of your information to Google, which may end up knowing you better than you will ever know yourself. It hardly needs stating that at this point Google will supply you with the most relevant ads matching your specific user intent. In search, you will be so used to getting the best results for your queries that you will look no further than the first four positions for an answer. And remember, Google has full bargaining power over advertisers. This means that Google will aggressively drive advertisers to further improve the quality and content of their ads. Thus, in a full AI scenario, Google will have gained the ability to deliver ads that provide better answers to your queries than organic search results do. And, if paid ads deliver a better user experience, why not increase the number of paid ads at the top of the page? Why not six, eight, or even ten paid ads? User experience will be better, advertisers will significantly raise their game, and second-page advertisers will welcome the opportunity to rank among the top paid results on the first page, perhaps even paying a bit more for the privilege. It may sound like an apocalyptic idea; however, once RankBrain achieves Google’s AI mission, a first page in Google with no organic results served at all is a very realistic scenario. You may of course argue that people will not click on ads.
Sure, they may not click on today’s ads, but as we get more comfortable with the results provided by Google’s RankBrain, we will increasingly accept ads as the norm and put our trust in RankBrain to make the best decisions for us. You may also argue that people will be disappointed by the lack of natural results. Ultimately, as long as Google tailors search results to my query accurately, I do not really mind whether a result is an ad or a natural result. If the truth be told, most people would not even notice the difference. Not convinced? Look no further than the long queue at your local Sainsbury’s, Morrisons, or Tesco. How many self-service checkout points have replaced people’s jobs? The arguments before these changes were that we would reject the idea because people would lose jobs, because machines could not replace the quality of human interaction, and so forth. And yet you use self-service checkouts as often as you can, to improve the quality of your checkout experience. The reality is that people will adapt to the new reality of fully paid first pages, and they will in fact end up valuing the improved user experience. It is a process psychologists call habituation: the tendency for people to adapt to whatever comes their way. A classic example is that of lottery winners, who quickly adapt to their new life after a first rush of adrenaline. (Laham, 2012) To introduce the change, Google could initially take a softer approach by adding two extra ads at the top of the page, bringing the total number of ads to six. Your organic search results would then appear in the 7th position or lower, resulting in little or no search visibility, and no clicks to those websites. Kelly proposes another interesting scenario, in which people will in the future get paid for paying attention to ads.
As Kelly brilliantly points out, we currently give our attention to ads for free, and it makes sense that advertisers should pay for it. (Kelly, 2016) And we may need to allow our RankBrain-powered intelligent assistant to decide which ads we will find most useful: a win–win situation. We enjoy an improved user experience by being matched to ads we are genuinely interested in, while also monetizing our attention time. Contrary to what you may assume, advertisers will also jump at the opportunity of matching their ads to the right people, whether those people are influencers, certain segments of the population, people displaying certain behaviors, or others. Conversion rates will be higher, as advertisers trust RankBrain to match them to people who have given permission to be advertised to and have a genuine interest in their product. Most likely, the cost per click will also be higher, depending on your target market, given the high competition for the attention of an influencer. Of course, when setting up their bidding strategy, marketers may specify the settings and target audiences themselves; however, this would limit RankBrain’s ability to match them to the most relevant recipients.

Many SEOs and digital marketers may try to dismiss the “ads only” outcome I foresee. As humans, we have a tendency to normalize advancements in technology and take innovation for granted. Indeed, over time we have experienced great advancements in technology, such as smartphones, hearing aids with algorithms filtering out noise, systems offering product recommendations, speech recognition software, machine translation programs, and much more. And yet, we soon consider these advancements to be the norm and pay little attention to what they represent, which in reality is a great improvement in AI technologies. Yes, given the smooth transition from one technology to another, we soon forget that change happens right here, right now, right in front of us. Indeed, innovation is a work in progress, and it is certainly not one big aha moment. To make this point, I particularly like to quote Walton’s statement in his Made in America biography: “…some folks have gotten the impression that…it was just this great idea that turned into an over-night success…And like most over-night successes, it was about twenty years in the making”. (Walton & Huey, p. 35) In this context, you can be sure that Google is well on track to becoming an AI entity. At the moment, RankBrain is a top-three ranking factor, and sometime in the very near future you may be shocked to realize that you missed all the signs pointing to its eventual exclusive use in ranking your website. And, as Walton emphasized, this achievement will not have occurred in one big aha moment, but rather through the gradual accumulation of smaller and bigger innovations. Kelly named this continuous flow of change “a state of becoming”, and pointed out that “unceasing change can blind us to its incremental change…we tend to see new things from the frame of old”. (Kelly, 2016, p. 14) To remind myself of this point, I always find it useful to think about a situation described by Chace. (Chace, 2015) You are in a stadium.
One drop of water falls onto the pitch, and the amount then doubles every 60 seconds: 2 drops, 4 drops, 8 drops, 16 drops, 32 drops, and so forth. You may or may not be surprised to find out that it takes only 49 minutes to fill the stadium. What should surprise you, though, is that after 45 minutes the stadium is still about 93% empty of water; hence the real progress occurs only within the last 4 minutes. And, to conclude my argument, consider the following statement from Elon Musk: “The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential…This is not a case of crying wolf about something I don’t understand”. (Chace, 2015, p. 89) Oh, and one more thing: Google has bought DeepMind for half a billion dollars. (Domingos, 2017)
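Chace’s stadium can be checked with a few lines of Python. This is a toy calculation of my own, taking “full” to mean the volume reached at minute 49:

```python
# Chace's stadium: one drop at minute zero, doubling every minute thereafter.
history = [1]                       # drops present, starting at minute 0
for minute in range(1, 50):         # minutes 1 through 49
    history.append(history[-1] * 2)

capacity = history[49]              # "full" is the amount at minute 49
fill_at_45 = history[45] / capacity
print(f"Fill level after 45 minutes: {fill_at_45:.2%}")   # 6.25% full
```

Four doublings short of the end, the stadium holds only one sixteenth of its final volume, i.e. it is 93.75% empty: exponential curves always look flat until the final moments.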

A more important idea, though, is that in the future you may not be able to buy paid ads at all. Futurist Kevin Kelly suggests that by 2026 Google will not be in the paid search business at all. Kelly argues that “by 2026 Google’s main product will not be search but AI” and quotes Sundar Pichai, Google’s CEO, who referred to AI as “a core transformative way by which we are re-thinking everything we are doing…we are applying it across all our products, be it Search, YouTube and Play etc”. (Kelly, 2016, p. 37) Hence, you should not be surprised to learn that Google’s core business will most likely come to consist of supplying AI through the meter, in the same way energy companies supply electricity. Kelly in fact takes the idea further and proposes that by 2025 most businesses will operate on a model of “Take X and apply AI”. (Kelly, 2014)

This scenario is by no means a sci-fi fantasy; all you need to do is reflect again on that co-founder’s statement of Google’s AI mission. On reflection, Google, Microsoft, and Amazon are already selling machine learning through the meter, via their Google Cloud, Microsoft Azure, and Amazon AWS platforms. For example, Amazon already offers machine-powered capabilities to find relationships and insights within text (Amazon Comprehend), convert speech to text and build conversational applications (Amazon Lex), convert text to speech (Amazon Polly), recognize and analyze images and videos (Amazon Rekognition), translate text (Amazon Translate), and transcribe audio to text (Amazon Transcribe), while AWS DeepLens enables developers to run deep learning models locally on a camera, analyze their environment, perform sentiment analysis, and build algorithms in line with their goals. (Amazon Comprehend, 2018)

As the saying goes…the future is here. But what are the implications? To begin with, quality websites and advertisers will be incentivized with higher rankings. As its core business shifts to AI, Google will no longer be selling paid ads at all; instead, you will purchase AI units to power your ads. This answers the rhetorical question posed by one top digital marketer in his presentation at a world SEO conference in 2016. The marketer was unsure how the dismissal of desktop search and the move to intelligent assistants would affect SEO results or paid ads; on reflection, he had few answers as to how Google would generate revenue in a non-desktop world of intelligent assistants. Putting aside the recent launch of Amazon Alexa’s screen version, which could ultimately provide an alternative to Google search, I believe the question stems from a misunderstanding of Google’s mission. As discussed previously, Google is in the business of AI rather than search. This leads to the simple conclusion that Google’s main revenue stream in the future will be AI, not paid ads. In fact, paid search will be reduced to one of many streams of revenue generated by Google. Following this line of thought, Google’s core business will include leveraging AI capabilities to dominate traditional industries that are currently well out of its reach. The automotive industry is one example; Google’s driverless car project is well known to most people with an interest in the future. And why stop there? Google could leverage its AI capabilities to progress to so much more than building cars. As we have already seen, Kevin Kelly made the very interesting prediction that the next 10,000 startups will in some shape or form employ a business model of “Take X and apply AI”. (Chace, 2015) If Kelly is correct, car manufacturers could end up having to acquire AI units to power their cars.
In an attempt to reduce operational costs, car manufacturers could sell cars as “AI ready,” passing the recurring cost of purchasing AI units on to buyers. They could partner with Google to tailor specific car models to specific AI types. This would set up a cycle of recurring purchases, as buyers would regularly buy AI units from the car manufacturer, in the same way they buy tires. The possibilities are endless, and the same line of thought can be applied to most industries of the future. As far as online advertising is concerned, I can easily envision a world in which advertisers buy an AI unit, apply it to their website, and allow RankBrain to decide the paid keywords and user intent most relevant to their business targets. This would also displace the need for keyword research or optimization, a positive development that would refocus webmasters on creating great content on topics they are passionate about. Of course, those same webmasters would feel significantly increased pressure to improve the quality of their websites, resulting in an even better user experience.

To conclude this section, I believe that in the long term any form of SEO input will be dismissed, with RankBrain becoming the sole ranking factor Google will ever need.

The Future of Local SEO

Google’s RankBrain is getting smarter and will undoubtedly reach AI status sometime in the future. AI status will be achieved not through the more popular brain-emulation method but rather through good old machine learning. Progress in the machine learning field will continue to be driven by the explosion of big data and improved computing resources. This new level of intelligence will enable Google to accurately assess the quality of websites, including the quality of advertisers’ websites. Google will initially add a further two or three ads at the top of the page; however, a full first page of quality paid ads is not unrealistic. And in the future you will be paying for AI units rather than ads and targeted keywords. You may be purchasing units of AI in the same way you now purchase electricity. RankBrain will scan your website, determine the keywords, and decide where you will be ranked in the paid ad set by considering user intent. This will improve user experience, and Google’s revenue streams will flourish as well.

But where does this leave local SEO? There is currently a lot of talk about the power of local, and an entirely new local SEO discipline is forming as a distinct arm of global SEO. New businesses are being founded that make local SEO central to their digital growth. New platforms are being built that enable you to take control of the submission and management of your business’s local listings. The argument is that the more local signals you generate, the better your local SEO rankings. Books are being written on local SEO, and an entire local link-building industry is being constructed at a very fast pace. In September 2017 I attended a top SEO conference. While I was wandering around and reviewing the various stands, the CEO of one of the many local SEO platforms boldly suggested to me that the Google My Business listing was the weakest ranking factor at a local level. Factors such as local citations, local links, and local reviews provided Google with signals more relevant than the Google My Business listing. This may well have been a sales pitch; however, it points yet again to the widely held belief that ranking signals that can be easily manipulated will be around for a long while yet. The truth is that sometime in the future you will wake up in the morning, have your coffee, open your laptop, and freak out at the realization that all your local citations, reviews, and links were not able to keep you on the first page in Google. You will finally acknowledge something that many SEOs have long known would happen: RankBrain has just become the only ranking signal Google will ever need to rank your website. And, contrary to the bold statements of the CEO we met earlier, Google My Business will be the only listing Google needs to determine the relevance of your website at the local level, combined of course with the number of RankBrain units you have purchased.
Needless to say, the past will repeat itself, and local SEO agencies and platforms will once more find themselves powerless against Google. That being said, local SEO will thrive and continue to gain popularity, and many local businesses will leverage the newfound power of local. With RankBrain connected to a more fit-for-purpose Google My Business console, Google will seize the opportunity to develop yet another stream of revenue. I believe Google will replicate the global SEO model for local, and go “all ads” on its first page or pages of search results. Foursquare already leverages the power of local by monetizing its technology and user base via Promoted Places and place-based ads. As the name indicates, Foursquare ads enable local businesses to move up the rankings and leapfrog competitors that are better ranked organically. Let’s simply call it “paid local SEO”. I doubt that you need convincing that Google is best placed to take this business model mainstream. Simply purchase the number of RankBrain units you can afford, and allow RankBrain to rank you locally for relevant keywords. Suggestions on improving your website will of course be made, in the same manner Google Search Console provides suggestions for improving your website’s visibility.

The Future of Data Analytics and Conversion Optimization

Various tools are available to provide data on literally every performance metric of your website. Google Analytics, heat maps, live visitor recording, A/B split testing, keyword analysis, competitor analysis, backlink analysis, and pay-per-click (PPC) competitor analysis are just a few examples of the range of data at your fingertips. And these tools of the trade are weapons SEOs still use in a determined attempt to maintain the aura of magic around the SEO profession. As an entrepreneur, you can be excused for being bamboozled by the depth of information and expertise showcased by experienced SEOs. We will review some of the tools available in the tools section; however, for the time being let’s have a go at predicting what the future holds for data analytics and the relevant SEO tools. I have already argued that RankBrain is going to become the sole ranking signal used by Google. You will purchase RankBrain AI units, apply AI to your website, and allow the algorithm to decide the keywords and their variations your business will be ranked for within Google’s paid search results. The option to apply the algorithm to your organic search results will be available as well. However, as we have seen, it would make no commercial sense to waste AI budget on natural results that will get your website no clicks. This point takes us to the next question: what type of data-reporting processes and practices should we expect in the future? How will SEO data practices and tools evolve? I believe that SEO tools will become redundant at some point in the future. After all, as RankBrain makes SEO services redundant, SEO tools will implicitly become redundant as well. Let’s take leading backlink tools like Ahrefs, Moz, and Majestic as examples. There is no serious marketer who does not instinctively know that backlinks as a ranking signal will ultimately be displaced by RankBrain; hence the need for backlink analysis tools will disappear as well.
Analysis of your competitors’ paid search campaigns or organic search rankings will also be unnecessary, as RankBrain will automatically determine the keywords your website gets ranked for. Maybe A/B split testing, live user behavior recordings, or heat maps will stay? This will not be the case. In fact, from the moment you allow RankBrain to take over, the algorithm will optimize your ads or website on the go for the best conversion rates. This will be an ongoing rather than a once-a-week process, and RankBrain may optimize your website as often as every couple of minutes. The impressive A/B split-testing efforts of team Obama during the 2012 campaign, when the team tested over 10,000 variations of email messages (Ross, 2017), will become counterproductive, and will negatively impact Google’s ability to optimize your website or your email campaign.
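For readers curious what “testing variations” means statistically, here is a minimal Python sketch of the classic two-proportion z-test behind an A/B email split test. All numbers are invented for illustration; real campaign tooling is far more elaborate:

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical email variants: A converts 120/2000, B converts 160/2000.
z, p = ab_test_z(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")   # B wins if p is below, say, 0.05
```

Run by hand, this is exactly the sort of decision an analyst makes once per test; the point of the paragraph above is that a machine could make it continuously, on every tiny variation, at a pace no human team can match.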

Chace best imagines and describes this scenario in his wonderful book Surviving AI: “Back in the office, Julia set about re-configuring the settings on one of the company’s lead generation websites. The site used evolutionary algorithms, which continuously tested the impact of tiny changes in language, color, and layout, adjusting the site every few seconds to optimize its performance according to the results. This was a never-ending process, because the web itself was changing all the time, and a site which was perfectly optimized at one moment would be outmoded within minutes unless it was re-updated. The Internet was a relentless Darwinian environment, where the survival of the fittest demanded constant vigilance.” (Chace, 2015, p. 35)
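In miniature, the evolutionary process Chace describes is just a mutate-and-keep-if-better loop. The Python sketch below simulates it with an invented “conversion” function standing in for real measured performance; every name and number here is hypothetical:

```python
import random

random.seed(42)   # fixed seed so the toy run is reproducible

# Toy simulation of evolutionary website optimization: mutate one page
# setting at a time, keep the mutation only if the (simulated) metric
# improves. All settings and the fitness function are invented.
page = {"headline_len": 40, "cta_size": 12, "hero_brightness": 0.5}

def fitness(p):
    # Stand-in for a measured conversion rate: peaks at an (unknown) optimum.
    return -((p["headline_len"] - 55) ** 2
             + (p["cta_size"] - 18) ** 2
             + 100 * (p["hero_brightness"] - 0.7) ** 2)

score = fitness(page)
for _ in range(500):                      # "every few seconds", in miniature
    key = random.choice(list(page))
    mutated = dict(page)
    step = 0.05 if key == "hero_brightness" else 1
    mutated[key] += random.choice((-step, step))
    if fitness(mutated) > score:          # keep only improvements
        page, score = mutated, fitness(mutated)

print(page)   # drifts toward the optimum over many tiny changes
```

Each accepted mutation is tiny, but over hundreds of iterations the page drifts toward whatever the metric rewards, which is precisely the relentless Darwinian environment Chace evokes.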

Of course, the evolutionary algorithms Chace imagines are nothing other than our good old friend, an AI-super-powered RankBrain, and your AI credits budget will make all the difference between a quickly outdated website and a continuously optimized one. For Google, AI units will become an inexhaustible flow of revenue. For you, the cost will constantly creep up, and the “free” option of switching your ads on or off will disappear. Indeed, you will want to ensure your AI budget does not run out, even for one hour, as recovery will most likely involve additional AI costs. You will also be provided with the option to approve changes suggested by Google as an alternative to ceding full control of your website to RankBrain. However, most often you will prefer to cede control to RankBrain, both for convenience and for timely reaction to competitors’ optimization changes. The only tool you will ever need in this future is a significantly upgraded Google Analytics account, incorporating all the functions currently provided by third-party tools. Google already has the capability to provide most of the features offered by tools such as Crazy Egg; at present it simply chooses to focus on its core business, which is AI. This makes perfect sense, as Google operates in full awareness of the fact that once AI is achieved, the company will seize control of most industries of the future. And certainly, adding heat maps to its analytics product is far from an immediate priority. However, from the moment AI status is achieved, Google Analytics’ capabilities will explode well beyond its existing features. I suspect the analytics of the future will incorporate a wide range of advanced data that is currently the province of data scientists. In fact, advanced data science analytics will become a central part of Google Analytics.
It is not hard to imagine a future where Google has a significant impact on the data science profession and on the analytics tools we are now familiar with, by making machine learning a commodity. Consider, for example, the fact that data scientists spend up to seventy percent of their time cleaning up data they have collected in-house, found online, or obtained via public APIs. Furthermore, data science job descriptions now routinely expect proficiency in building programs in R or Python (McKinney, 2016), and, having taken many courses in data science myself, I acknowledge that data science is far more difficult to grasp than digital marketing.

However, even though data scientists may claim that strong programming skills are required in their profession, this skill will provide no advantage over Google. For example, we are already seeing machines writing simple programs, and with time they will become better at it. Take the case of Appligenics, the British company we met earlier, which has been building software that writes software. Journalist Daniel Pink explains the gap between human programmers and Appligenics’s software: “Where a typical human being…can write about four hundred lines of code in one day, Appligenics applications can do the same work in less than a second.” (Pink, 2008, p. 44) Let us also remember that Google Cloud Platform, Amazon Web Services, IBM Watson, and Microsoft Azure already offer various machine learning capabilities and services on a pay-as-you-go basis.

Now, let’s consider Google’s RankBrain as an alternative source of data science models. Google does not spend seventy percent of its time cleaning up data as data scientists do; it has access to massive amounts of data well beyond what is accessible via public APIs or already available online, not to mention its own IP, processes, and expertise. The truth is that once AI is achieved, Google will be developing predictive models down to the most granular niche or industry level. The service could be offered as a free option. Alternatively, this could become another stream of revenue for Google, with options to buy AI units and apply them to prebuilt or drag-and-drop predictive models within Google Analytics. I will risk repeating myself yet again: Google, Amazon, Microsoft, and IBM are in a sense already selling machine learning and AI services “through the meter” via their Google Cloud, Amazon Web Services, Azure, and Watson platforms. Many data science, SEO, and conversion analytics professionals will argue against the idea of data science being sold as a commodity, and many popular futurists may do so as well. For example, Brynjolfsson and McAfee point out that in the future people will have to work with machines, run with machines rather than against them. (Brynjolfsson & McAfee, 2016) Kevin Kelly, one of my favorite futurists, also believes that in the future we will earn our wages based on how well we work with machines, and forecasts that 90% of our coworkers will be unseen machines. (Kelly, 2016) Similarly, white hat SEO has become the only sensible road to follow within the SEO community. The idea behind it is that trying to deceive Google—black hat in popular SEO terminology—no longer works. Hence, by working together with Google and being honest with RankBrain, you will achieve longevity through your efforts and protect your website against future updates.
However, the idea that people will work with machines, particularly in the digital age, is at best a combination of wishful thinking, a large dose of naïvety, and cognitive dissonance. You only need to consider again the progress made by Google in dismissing traditional ranking factors. As we discussed, you are feeding, consciously or unconsciously, a massive amount of data into Google’s algorithms, which allows Google to further refine them. Yes, you are training RankBrain to replace you.

This leads us to another point, which is that in the future Google Analytics will, depending on the settings you choose, recommend or perform optimizations of your website live. Yes, live. Predictive data science models within your analytics dashboard will help you visualize, forecast, and plan various improvements to your website. You may shift between various metrics to understand the impact of specific metrics on improving the performance of your site versus the competition, and tailor your strategy to beat each of your individual competitors. You may need to increase your AI budget to leapfrog competitor A, and write a more comprehensive piece of content to further improve an article that RankBrain deems relevant to your topic. And while you are at it, you might purchase extra AI units to promote your article to all the people who linked to, shared, or liked the original article. A predictive model will inform you that by writing the article, your paid ads will improve their ranking position by one place. I know this sounds a bit sci-fi, right? Let me ask you to consider how BuzzSumo, one leading outreach tool, works. Input a keyword into BuzzSumo’s search bar, and the tool will return the most popular articles by number of shares on the main social media channels. A list of websites linking to each article, and a list of people who shared it, will also be available. BuzzSumo also includes a module allowing you to search for influencers in your industry or niche. One basic idea behind tools like BuzzSumo is that by finding the most popular content you can rewrite it, make it better, and promote it to the influencers, sharers, and webmasters linking to the original article. Yes, but re-marketing to influencers, sharers, and people linking to an article? Well, Google has access to the same data, and to data far beyond what any third-party tool will ever dream of.
Backlinks to your website are one simple example of Google being best placed to beat tools like Ahrefs, BuzzSumo, or Majestic, given the inexhaustible number of websites indexed by its algorithms. And, if Yuval Noah Harari’s prediction proves correct, you will willingly cede control to intelligent assistants such as Google Assistant for better optimization of your increasingly complex and demanding life.

Of course, Google Assistant may also be powered by RankBrain. In fact, you can easily imagine a time when smart assistants take over the organization of your schedule, communicating with each other directly and arranging meetings, all with no interaction between the users. That being said, even at present Google holds a vast amount of information about you, whether you are a sharer, an influencer, or a webmaster linking to a piece of content. Google crawls your free Gmail address, social profiles, web mentions, IP address, and so forth. In this context, re-marketing to you as a sharer, influencer, or webmaster is a realistic scenario even at the present time. As for the prediction that improving an original article will increase your paid search rankings: as an SEO you are always told that Google is looking for original content, so why would Google encourage making improvements to an already existing article? This is simply a matter of further improving user experience; think no further than the depth of articles on Wikipedia. Google’s predictive models could also inform you that writing articles on particular themes will improve the overall authority of your website and your rankings, and ultimately forecast the revenue generated by the actions you have taken. One basic example is research by Google in 2011 showing that every $1 spent online drives between $4 and $15 in offline sales. (Enge, Spencer, Stricchiola & Fishkin, 2013) However, RankBrain’s capability will go well beyond general case studies and will link your AI ROI forecast to specific industries, locations, and even footfall. You may perform a break-even analysis, assess your ROI, and decide the number of AI units to purchase. Moreover, in optimizing your website, RankBrain will consider real-time live data in addition to historical, retrospective information. In fact, past performance will have little or no value for webmasters, given the fluid nature of the Internet.
In the words of Yuval Noah Harari: This is the paradox of historical knowledge. Knowledge that does not change behavior is useless. But knowledge that changes behavior loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes updated. (Harari, 2015, p. 58)

Indeed, RankBrain will perform so many optimizations every couple of minutes that it will make historical data useless for future optimization purposes. This is a big shift from your current Google Analytics account, which generates primarily historical data, leaving it to webmasters to make sense of it and decide the best course of action. And for the entrepreneurs and marketers who currently struggle with the immense amount of analytics provided via the Google Analytics tool, I have only good news. As Google Analytics provides mainly numerical data, developing a Narrative Science type of business (more on this later) will make perfect sense for Google. You will be able to wave goodbye to those complicated reports and say hello to narratives generated from your analytics data.

The Future of Content

There is no SEO study that does not place quality of content within the top three ranking factors used by Google in determining the relevance of your website. And as I have argued in the previous sections, Google's primary goal in Search is to eliminate reliance on factors open to manipulation, backlinks being one example. Google has always pointed out that great content represents the only way webmasters will maintain and improve rankings in the long run. In fact, gaining the ability to rank your website based on the quality of your content and user intent is the very reason RankBrain exists. Of course, serious digital marketers have taken note, and "content is king" is on everyone's agenda. In brief, there really is no debate over the importance of great content as a ranking factor. Many tools are available to help you decide your topic, title, length, or the angle of your article. And even more tools are available to help you bring your content to life, with little or no programming or design experience. Content writers have become an integral part of serious SEO teams, sometimes commanding fees higher than the SEO experts themselves. And, as the SEO profession lives out its final days, great content will be required more than ever. The only notable difference is that paid search rather than organic search will benefit. Indeed, as I discussed earlier, RankBrain will have gained the ability to assess the quality of a website and rank it within paid search based on three factors alone: AI budget, quality of content, and user intent. There will be no need for keywords, just great content. Daniel Pink is one journalist who passionately argues that the future of human professions revolves around jobs requiring skills such as creativity that machines will supposedly be unable to take on. Pink supports his position with some great statistics.
For example, he shows that in the US the number of web designers has increased by a factor of ten in a decade, the number of content writers has increased by 30 percent since 1970, 50 percent more people earn their living from composing music or singing, and the number of universities offering Master of Fine Arts programs went up from 20 a couple of decades ago to over 240. He concludes by stating that: More Americans work today in arts, entertainment and design than work as lawyers, accountants and auditors…in a world enriched by abundance but disrupted by automation and outsourcing of white collar work, everyone, regardless of profession, must cultivate an artistic sensibility…we must all be designers. (Pink, 2008, pp. 55, 69)

Keeping this in mind, we cannot avoid the conclusion that a great future lies ahead for content writers. Or does it? A very important distinction has to be made, which is that while Google will be able to rank your website based on its great content, the algorithm will care less and less about the provenance of the content. Yes, I am stating that automated content platforms will make content writers redundant. This may seem counterintuitive, as Google places very little value on automated content at present. And, as a good SEO, you have learned that original content is at the top of Google's agenda. Hence, the argument is: give Google what it wants, automated content is a no-no. However, let me point out that the main disadvantage of automated content is not the process per se but the quality of content produced via automation. Yes, tools like Kontent Machine and SEO Content Machine will produce average content, which may suffice for a blog post generating backlinks, or for enhancing the theme of your own blog. However, for the time being, the quality of automated content is well below that of the content human writers produce. Hence, let us conclude that the problem is not that content is being generated automatically, but that the content produced is of an inferior quality to human-generated content.

Let's now imagine a future of automated content matching and surpassing the quality of content generated by humans. Moreover, imagine that machines could generate 1,000, or why not 10,000, great quality articles per minute. Would Google still place more value on human content? Would Google place more authority on a website with ten great human-generated articles vs. a website with ten thousand machine-generated articles of the same or better quality? I think not. It helps to think of a more practical example to better understand the issue at hand. Let's assume that RankBrain consistently finds that time-on-site metrics are higher for the automated content website; people click more, read more, bounce less, and appreciate the website more overall. RankBrain also knows that the depth of the website is better, and will continue to be better, given the human limitation of producing content at the rate of automated content generating machines. Would Google give users what they want, or direct them to the lower value human website? By now we know Google is literally obsessed with great user experience, and if automated content provides it, then so be it.
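The comparison above can be sketched as a toy ranking exercise: score two sites purely on engagement signals (time on site, bounce rate, content depth), ignoring whether the content was written by humans or machines. All metrics, site names, and weights below are invented for illustration; they are not Google's actual ranking formula.

```python
# Toy engagement-based ranking: higher time on site and more depth,
# lower bounce rate -> higher score. Weights are invented.

def engagement_score(site):
    """Combine three invented engagement signals into one score."""
    return (site["avg_time_on_site_sec"] / 60          # minutes on site
            + site["articles"] ** 0.5 / 10             # content depth
            - site["bounce_rate"] * 2)                 # bounce penalty

sites = [
    {"name": "human-written", "avg_time_on_site_sec": 95,
     "bounce_rate": 0.62, "articles": 10},
    {"name": "machine-written", "avg_time_on_site_sec": 140,
     "bounce_rate": 0.41, "articles": 10_000},
]

# Rank purely on engagement, blind to content provenance.
ranked = sorted(sites, key=engagement_score, reverse=True)
print(ranked[0]["name"])  # machine-written
```

With these invented numbers, the deeper, more engaging machine-written site wins, which is exactly the argument being made: an engagement-driven ranker has no reason to care who wrote the content.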

However, as a human, I feel compelled to make my final stand. My whole argument is based on the idea that machines will gain the ability to generate content equal to or better than that created by humans. Critics may argue that this may never happen. This line of thought, though, represents yet another case of wishful thinking and cognitive dissonance. First, content automation technologies are already available and are generating content that humans cannot distinguish from human-generated content. One often-published example is a company named Narrative Science. Narrative Science started initially by using its proprietary machine-learning algorithms to generate sports articles from statistical information such as charts and game stats. It is widely acknowledged that Quill, Narrative Science's algorithm, produces stories that humans cannot identify as machine generated. In fact, the algorithm is so human-like that the clients displayed by the company on their website include serious organizations like Deloitte, Credit Suisse, Mastercard, PwC, the UK's National Health Service, and Forbes. (Marr, 2016) And all you need to do is upload your stats in a format such as CSV, JSON, or XML, and allow the algorithm to create your great article or presentation; it is that simple.
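The stats-to-story idea is easy to grasp with a heavily simplified sketch: take a dictionary of game stats and fill narrative templates from it. Real systems like Quill use far richer language models and data pipelines; the team names, stats, and templates below are invented for illustration.

```python
# Toy template-based sports recap generator, in the spirit of
# stats-to-article systems such as Narrative Science's Quill.

def generate_recap(stats):
    """Turn a dict of game stats into a one-sentence narrative recap."""
    margin = abs(stats["home_score"] - stats["away_score"])
    if stats["home_score"] > stats["away_score"]:
        winner, loser = stats["home_team"], stats["away_team"]
    else:
        winner, loser = stats["away_team"], stats["home_team"]
    # Pick a verb that matches the closeness of the game.
    verb = "edged past" if margin <= 3 else "cruised past"
    return (f"{winner} {verb} {loser} "
            f"{max(stats['home_score'], stats['away_score'])}-"
            f"{min(stats['home_score'], stats['away_score'])}, "
            f"led by {stats['top_scorer']} with {stats['top_points']} points.")

game = {"home_team": "Lakers", "away_team": "Bulls",
        "home_score": 101, "away_score": 99,
        "top_scorer": "J. Smith", "top_points": 31}
print(generate_recap(game))
# Lakers edged past Bulls 101-99, led by J. Smith with 31 points.
```

Feed it a CSV or JSON of game records and a loop over rows gives you an article per game, which is the essence of the "upload your stats, get your article" workflow described above.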

Quill currently generates articles for a variety of industries, including sports, healthcare, financial services, and commercial businesses. And, as early as 2011, in an interview with Wired writer Stephen Levy, the co-founder of Narrative Science predicted that within 15 years 90 percent of news articles would be written by algorithms. (Ford, 2016) You may dismiss his prediction as a marketing exercise; however, you should not engage in this kind of self-deception: highly respected publications such as Forbes are already using this technology with no complaints from their unaware readership. Marr (2016) concludes that more and more algorithms are creeping into professions and performing tasks that were previously the reserve of humans, including producing stories and reports in natural language. Companies such as Narrative Science achieve this in spite of lacking the massive technology, IP, data insights, and expertise Google has at its disposal. So an obvious question is: what would stop Google from taking full control of content by allowing RankBrain to produce content as well? Please keep in mind that we are not debating the moral issues, but simply considering the possibility of Google becoming the ultimate content automation platform as well as everything else. And, looking a bit closer into it, this would make perfect sense for several reasons. First, an AI-powered RankBrain has already gained the ability to accurately determine quality of content. Google has access to an almost infinite database of content in literally every aspect of human life, as discussed in the big data section. Google has also gained the ability to read images and listen to the words accompanying videos, not to mention the fact that it has intimate access to your agenda, emails, and all the other digital information you may generate. In this context it is hard to find a reason why Google would be unable to deliver a significantly improved Narrative Science-type project.
Of course, there is also the option of buying companies like Narrative Science, deploying RankBrain to them, and significantly improving the output. The truth is that Amazon and Google have already developed extensive machine learning algorithms, which are provided on their Google Cloud and Amazon Web Services platforms, and they offer the same capability as a service too. Simply use Amazon Comprehend to identify the relevance of the text, and then apply Amazon Polly to it to turn the text into lifelike speech. Let us take the argument to its extreme and assume that, as human content loses ground to automated content, all humans stop creating content. No content created at all; period. This would potentially present a problem, as Google would now have no access to fresh information to use for generating relevant content. How would Google compile articles related to an international conference or a local meeting without receiving any human input? In fact, this is a problem that has already been solved. To begin with, voice-to-text technology gets better every day, and a conference could be automatically converted into a text format in real time and subsequently used by RankBrain and other algorithms to generate great content. For example, Enable Talk, a robotic glove, uses flex sensors in the fingers to recognize sign language and convert it to text. (Ross, 2017) In yet another demonstration of how written data can be converted to digital form, Apixio, a cognitive computing firm operating in healthcare, uses optical character recognition (OCR) technology to convert data such as doctors' notes, charts, consultants' notes, radiology notes, pathology results, and discharge notes from hospital records. (Marr, 2015) Now, anyone who has ever tried to decipher a doctor's note will agree with me that Apixio's software must be nothing short of a miracle.
Similarly, Amazon Lex already "provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text." (Amazon Lex, 2018) And why not use Amazon Transcribe for transcribing the audio file or video commentary as well? Or, even better, simply allow AWS DeepLens to have a look around and listen to the conference, analyze the information it gathers, recognize participants, perform sentiment analysis, relay it back in text form, and even develop predictive models and forecasts?

Of course, you may want to process the data in order to gain insights before it is fed into the content generation algorithm. As we have seen, Amazon Comprehend is one example of a natural language processing service that uses machine learning to find insights and relationships in text, while Amazon Rekognition (Amazon Rekognition, 2018) will look after your image and video analysis needs. In addition, Amazon Translate can ensure that inputs from foreign delegates are accurately interpreted. And once the text has been prepared, why not deploy it in the form of an article using Narrative Science-type algorithms, or in a video format using Amazon Polly? OK, but what about content that requires human creativity: books, musical compositions, research papers, and so forth? Progress has already been made in many areas we believed to be the reserve of human creativity. Some human input will most likely still be required; however, these positions for humans will be very few and only open to the very best people. Most of us believe creativity is the reserve of the human species, consider it our competitive advantage, and argue passionately against signs indicating that this may not be quite true. But, as one example of such signs, in 2012 the London Symphony Orchestra performed a piece named Transits – Into an Abyss, which was written by Iamus, and, as you may have already guessed, Iamus is an AI algorithm. Similarly, in his research on machines' ability to innovate, Stanford professor John Koza found that across a variety of industries algorithms produced designs competitive with work produced by humans. Moreover, on two occasions the work produced by algorithms resulted in new patentable inventions. (Ford, 2016) OK, you may argue, but these algorithms will never be able to replicate the human emotions that are so essential to the originality of a piece of art.
However, we have already seen how technologies have developed the ability to accurately identify human emotions at a deeper level than we perceive them ourselves consciously. The only challenge left for algorithms is simulating the creative desire and translating it into art. Again, we are not quite there; however, some progress has been made in this direction. For example, Simon Colton of the University of London has created software that can accurately read emotions in photographs of people and paint abstract portraits that convey their emotional state. But what about assessing quality of content? Shouldn't we humans play some part in checking and editing the article, the piece of content, before it is indexed by Google, particularly given the pressure to get out the highest quality content? Unfortunately, algorithms are already on track to fill this gap as well, if we are to accept the results of a study carried out at the University of Akron's College of Education in 2012. The study examined 16,000 marked student essays and found that machine-learning algorithms performed as well as, and at times better than, human markers at the task of marking the essays. (Ford, 2016) At this point I feel the need to point out that we are talking about algorithms having the capability of marking essays, hence determining the quality of the essays and the creativity and knowledge of the students. We now have a Narrative Science-type algorithm that automatically writes quality articles for Forbes, and another algorithm acting as a second editor. Hence the idea that creativity is an exclusively human attribute is a fallacy; algorithms such as RankBrain will prove us wrong.
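To make the essay-marking idea concrete, here is a deliberately crude sketch that scores an essay from simple surface features (length, vocabulary richness, average sentence length). The systems in the Akron study are trained on thousands of human-marked essays and use far richer features; the weights below are entirely invented.

```python
# Toy automated essay scorer based on invented surface features.

def essay_features(text):
    """Extract a few simple surface features from an essay."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "word_count": len(words),
        # Share of distinct words: a crude proxy for vocabulary richness.
        "vocab_richness": len(set(w.lower() for w in words)) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

def score_essay(text):
    """Combine the features into a crude 0-10 score (weights are invented)."""
    f = essay_features(text)
    score = ((min(f["word_count"], 300) / 300) * 4
             + f["vocab_richness"] * 4
             + min(f["avg_sentence_len"] / 20, 1) * 2)
    return round(score, 1)
```

Even this toy rewards longer, more varied writing over short repetitive text, hinting at how a trained model can approximate a human marker's judgments at scale.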

Ford (2016) identified a very important trend that brings us one step closer to understanding the process that will ultimately lead to the dismissal of content creation by humans. This trend features two important aspects of the process leading to the automation of human tasks. The obvious one is that jobs created via offshoring are temporary and represent only a transitional step toward automation. The second aspect is that, while not yet ready to take on humans fully in some industries, algorithms enable the offshoring of the relevant jobs through the "automation funnel". McAfee and Brynjolfsson looked at the tasks that have been offshored within the previous 20 years and discovered that those tasks were routine, well-structured tasks that were the easiest to automate. McAfee concludes: "in other words, offshoring is often only a way station on the road to automation." (McAfee & Brynjolfsson, 2016, p. 184) As for the copywriting profession, its decline has already started, if you consider the variety of platforms and outsourcing services offering copywriting at far lower cost than employing the services of copywriters directly. For example, you can now use Copify to order quality articles, blogs, webpages, or email templates at a significantly lower cost than when the same copywriters are contracted directly. In fact, as a client you are often not aware that many digital marketing agencies use platforms such as Copify or Freelancer to outsource content you are paying premium fees for. Clearly, Copify and Freelancer outsourcing services are simply a transitional step to the full automation of content production. It is quite realistic to believe that Narrative Science-like algorithms will replace human writers, who are already commanding lower fees on these platforms.

The Future of Content Amplification

In the "tools of the trade" section I will discuss various content amplification tools and practices. Beyond driving traffic to your website, the other primary goal of these tools and practices is building backlinks to your content. I recently attended one leading SEO conference, Brighton SEO, and registered for several presentations. Topics included the importance of great content for link-building purposes, various innovative practices for developing content strategies that attract backlinks on little to no budget, and quite a few others. One point comes up again and again: most agencies use backlinks, rather than the quality of the post per se, as a metric for assessing the performance of content writers. The assumption is that if an article has gained a high number of links it must be of high quality, or at least resonate better with the target audience. However, we know, and Google also knows, that this logic is flawed, as in most cases links are built by digital marketers rather than attracted naturally by quality of content. In essence, Google is being manipulated again.

As discussed, it is only a matter of time before backlinks are dismissed as a ranking factor. Implicitly, content writing for link-building purposes will be dismissed as well. This will be a welcome change both for content writers and for users. On one side, content writers will be free to write passionately about their favorite subjects, with no pressure to adapt content for link building or to use keywords, H1s, and other recommended SEO practices. Users will also benefit from better quality content. There is, however, another consequence to the dismissal of content writing for link building. In the "content creation" section I painted a rather apocalyptic image of the future of content writing as a profession, and of the impact of human-generated content on ranking your website. On a more positive note, the best content writers will most likely operate within particular niches, such as brand awareness campaigns or reputation management. I believe that while the bulk of content production will be automated, brands looking to develop distinct identities will still benefit from human input from a creativity perspective. At the same time, you should not allow yourself to be too optimistic. First, the number of these positions will be small, and only the very best writers will occupy them. Second, any belief you may have that a machine cannot create brilliant concepts, such as Nike's "Just Do It," is flawed. You only need to input a keyword into free tools like Portent or HubSpot's Blog Ideas Generator to get a sense of the wealth of content ideas generated by these tools. Keep in mind that these tools are rudimentary compared with a fully developed RankBrain, which will hold every bit of information about your brand, down to, in the above example, what the shoes are made of. Hence, in this scenario you may simply purchase AI units, apply the AI to a tool such as Portent, input your brand keyword and your brief, and click Start.
Because human content will still play a role in niche markets, we may also assume that content amplification tools will continue to be put to good use amplifying the content produced. However, such an assumption would not hold. We have already discussed the manic pace of content production, with much content currently being re-shared on various social media channels. For example, you can use highly affordable tools such as Social Pilot or Post Planner to curate content, add comments, and schedule the content in advance for as many months as you like. You can go so far as to set up automated posting from your favorite RSS feeds. However, like Google, social media platforms are obsessed with providing a great user experience. And, regardless of whether it is original or curated, the flood of content automatically posted on their platforms impacts user experience negatively. On the one hand, we humans have limited attention; we simply cannot cope with the amount of information and stimuli thrown our way daily. On the other hand, the platforms themselves have different brand identities and user bases that will often appreciate different types of content. Hence, amplifying a standard article on Facebook and Reddit may meet the quality standards of a Facebook audience but would be far off target with the highly passionate and engaged Reddit audiences. In brief, a one-size-fits-all content amplification approach is not in the interest of any platform.
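The scheduling workflow these tools offer can be sketched in a few lines: queue a post once, and it is adapted and time-staggered per platform. The platform rules, character limits, and staggering logic below are invented for illustration and are not Social Pilot's or Post Planner's actual behavior.

```python
# Toy cross-platform content scheduler, in the spirit of tools
# like Social Pilot. Platform rules below are invented.
from datetime import datetime, timedelta

PLATFORM_RULES = {
    "twitter":  {"max_len": 280, "hashtags": True},
    "facebook": {"max_len": 5000, "hashtags": False},
}

def schedule_post(text, tags, platforms, start, gap_hours=2):
    """Return one adapted, time-staggered post per platform."""
    queue = []
    for i, name in enumerate(platforms):
        rules = PLATFORM_RULES[name]
        body = text
        if rules["hashtags"]:
            body += " " + " ".join("#" + t for t in tags)
        queue.append({
            "platform": name,
            "body": body[: rules["max_len"]],   # truncate to platform limit
            "publish_at": start + timedelta(hours=i * gap_hours),
        })
    return queue

queue = schedule_post("New article on the future of content.",
                      ["SEO", "AI"], ["twitter", "facebook"],
                      datetime(2025, 1, 6, 9, 0))
```

Note that even this toy adapts the same article per platform (hashtags, length), which is exactly the one-size-fits-all problem the paragraph above describes when the adaptation stops at formatting rather than substance.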

From yet another perspective, social platforms are looking to attract users who are committed to their platform, and who actively contribute and engage with the community. Simply amplifying articles that may or may not resonate with their audiences provides little value to the network. You may ask yourself: why are social networks not making a stand against automated content? It may seem hard to believe, but machine learning technology is one primary reason. Similarly to Google with RankBrain, most platforms have advanced the quality of the algorithms powering their networks; however, they are not quite there yet. For example, if you have ever used a tool to grow your follower base on the follow/unfollow principle, you will at some point have experienced the frustration of having some of your accounts "caught" and banned, while other accounts slipped through and performed well. Remember our earlier discussion of APIs? As you use these tools, which are built by freely accessing the APIs of these platforms, you are also training the social networks' algorithms to gradually dismiss your tools. You may ask yourself why the social media giants have not taken the fight into the legal arena, given that their AI algorithms cannot yet provide the full answer to automation practices. In fact, they did. The most recent example is Instagram's stand against several major automation giants, which led to well-known companies such as Instagress and Mass Planner closing down. Did it work? Not really; most such companies have reopened as closed-door communities and continue to operate. And for every Mass Planner there are dozens of similar tools providing automation services. Another of my favorite tools, Social Pilot, has taken a different approach by offering an Instagram posting calendar and reminder alerts, rather than actually enabling webmasters to automate their content plan.
This is in fact the best illustration of the future of automation: what will happen when all the social media giants catch up with Instagram? Surely, for a user of Social Pilot, a calendar reminding you to log in to your social accounts to post content will have little or no value.

To sum up: social media giants like Instagram, Facebook, Twitter, and LinkedIn are better off focusing on the cause of automation rather than on the effect. Indeed, improving their algorithms will ultimately tackle the cause of the problem, as opposed to inefficiently waging a legal war against the automation industry. Thus, similarly to RankBrain, as the social media giants improve their AI capabilities and algorithms, all practices impeding user experience will gradually be displaced. Some of the practices I refer to include content amplification on multiple social media channels, follow/unfollow practices, automatic subscription and posting to groups, automated and spun comments, automated likes and follows, and so forth. We discussed at length the consequences of RankBrain achieving AI status in terms of content, search results, and user experience; the same benefits will follow for social media companies. To recap, the benefits include better content, better brand identity, more engaged audiences, and the dismissal of black hat practices.

One point where RankBrain and the social media networks will differ is in the automation of content production. Given the social nature of social media, most content will continue to be user produced, with algorithms supporting and improving the social interaction between users. For example, machine-generated articles may be used to initiate conversations, which will be shared by users, generating likes, opening conversations, and sparking debates. Of course, other content-sharing platforms such as StumbleUpon or Scoop.it may welcome the opportunity to provide users with high quality content, regardless of whether it is machine-generated or not. And, provided that technology allows, they may well follow in RankBrain's footsteps and leverage AI capabilities and algorithms to generate their own quality content. Another area where social media platforms will emulate Google's business model is the increased prominence of sponsored content. We discussed at length the innovator's dilemma and the reasons why companies must grow, and have seen how growth concerns over Twitter impacted the network and perceptions of its ability to become profitable. We have seen how Facebook has already significantly limited the reach of your organic posts and skyrocketed its revenue from paid ads. And how will the social networks meet the growth need? The answer is simple: more sponsored content within your stream. In fact, I see no reason why social media companies should not replicate Google's model, testing sponsored content in the first positions of your stream. Of course, sponsored content will be tailored to the individual, and a smart Facebook AI algorithm will already have plenty of data about your previous behavior, your characteristics, the topics and articles you have been reading, third-party information, databases, and so forth.
Given the depth of information held about you, AI algorithms may take a more aggressive approach to improving their commercial value to advertisers. Past information collected within your digital file will be complemented by real-time information about your behavior and state of mind. While you gaze at your Google Pixel phone, the device will gaze back at you, classify your state of mind, and tailor content or ads to match your mood. (Kelly, 2016) If that sounds too futuristic for you, consider that the technology is already available, thanks to the efforts of Rosalind Picard and Rana el Kaliouby, two MIT researchers who developed software capable of detecting human emotions via smartphone cameras. The software can identify whether you are depressed, perplexed, bored, or experiencing some other emotion while gazing at the phone. (Kelly, 2016) Or consider the machine learning service offered by Amazon's AWS DeepLens, which "lets you run deep learning models locally on the camera to analyze and take action on what it sees". (AWS DeepLens, 2018) Yes, you heard me: the AWS camera, looking at you, analyzes your environment and begins to build algorithms based on the goals you set. Similarly, Amazon Rekognition will find you in the crowd, access your digital file, and perform sentiment analysis on your face. In fact, it is likely that social algorithms will hold sufficient information to congratulate you on your pregnancy even before you or your family receive the news. This would be possible by aggregating data such as images you posted, comments such as feeling sick in the morning, medicine purchases, location and habit changes, data from your Fitbit device, and voice searches, to name just a few. While browsing, you will find yourself staring at ads that display explicit messages such as "Morning sickness? Feeling tired? Hate coffee?
Buy a pregnancy test from XXXX!" Of course, I am no professional copywriter; however, the point is that social algorithms will have the capacity to make inferences, provide personalized ads created in real time, and nudge you into a more commercial buying mindset. Not convinced? Well, you may be surprised to hear that this has already been achieved at a more macro level. Marr describes the case of a 15-year-old girl receiving maternity coupons from the supermarket Target. Her father filed a complaint against the store but later apologized after finding out his daughter was indeed pregnant. In this case Target's algorithms accurately predicted pregnancy based on changes in purchasing behaviors. (Marr, 2016) Target's extensive use of algorithms was popularized by Charles Duhigg, who has spoken at length about Target's ability to predict pregnancy based on changes in buying behavior, such as purchasing an unusually high level of unscented lotion, vitamins, scent-free soap, cotton balls, hand sanitizers, washcloths, and so on. In fact, Target got so good at it that it could go as far as identifying the trimester of pregnancy. (Duhigg, 2012)
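The kind of prediction Duhigg describes can be sketched as a simple change-detection score over "signal" purchases. The product list and weights below are invented for illustration; Target's actual model reportedly tracked around 25 products and was trained on real purchase histories.

```python
# Toy Target-style prediction: score a shopper by increases in
# purchases of invented "signal" products versus their baseline.

SIGNAL_WEIGHTS = {"unscented lotion": 0.3, "vitamins": 0.2,
                  "cotton balls": 0.15, "scent-free soap": 0.2,
                  "hand sanitizer": 0.15}

def pregnancy_score(recent, baseline):
    """Sum the weights of signal products bought more often than usual."""
    score = 0.0
    for product, weight in SIGNAL_WEIGHTS.items():
        if recent.get(product, 0) > baseline.get(product, 0):
            score += weight
    return round(score, 2)

baseline = {"vitamins": 1}
recent = {"unscented lotion": 3, "vitamins": 4, "cotton balls": 2}
print(pregnancy_score(recent, baseline))  # 0.65
```

The insight is that no single purchase is revealing; it is the *change* across several mundane products at once that triggers the inference, which is exactly why the prediction felt uncanny to the family involved.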

It may be a step too far, but there is no reason why AI-powered Facebook algorithms could not gain the capability of generating unlimited ads targeted at you, with no input from the advertiser. Retargeting would particularly benefit from this type of approach. For example, a social media company or RankBrain may retarget individual users with different ads and funnels depending on how far they have progressed in their search journey, historical data, and real-time behavior or searches. User context and intent will become particularly important if social media advertising is to gain market share from Google. Indeed, most advertisers use Google for driving commercial results, and social media for branding or awareness-raising campaigns. The Foursquare business model mentioned in the "Local SEO" section will become a particular strength for social media companies, given the high amount of time users spend interacting on these platforms. Indeed, of every three minutes spent on the Internet, one is spent on social media accounts. (York, 2018)

You may find yourself walking down the street, two months into your pregnancy, and a mobile ad would nudge you toward visiting your local Mothercare store, located 50 meters away. Or a great lunch offer from your local shop may be served on your mobile around the time you contemplate your options for lunch. Of course, you have just laughed; to an experienced marketer my examples are already old news, and various technologies already make this possible. As a simple example, Google Maps already generates real-time information on your mobile about the place you find yourself in at this very moment. You may assume that Google traces your location via a magical feature of the Google Maps app. However, if this were the case, Google would not ask for access to your location when you requested directions to another location. The most likely explanation is your very own mobile phone, Fitbit bracelet, tablet, and so forth. Yes, smartphones contain various sensors such as GPS trackers, accelerometers, gyroscopes, and proximity, ambient light, and NFC sensors. These sensors inform Google and other advertisers about your location, how fast you are moving, the temperature around you, the position of your mobile phone, and whether you are transferring funds. (Marr, 2015) Needless to say, this information will enhance Google's ability to profile your user intent, mood, or readiness to buy throughout the day. Furthermore, social media algorithms, RankBrain, and other leading algorithms will learn about you to the point where they can predict your lunch order and have it delivered at just the right time, before the 1 p.m. meeting set up on your calendar. Of course, your Fitbit bracelet and loyalty card will have trained the algorithm to know that, before meetings, anxiety most often causes a drop in your glucose levels.
And, as your purchasing behavior indicates you generally buy a more sugary snack thirty minutes before a meeting, the meal will include a slightly more sugary snack, helping you re-balance your system. You may laugh off my statement; however, I assure you that retailers are very confident about their ability to predict your buying behavior. One example is Amazon, which patented a system called “anticipatory shipping”. (Marr, 2015, p. 205) In effect, the patent underlines Amazon’s belief that it can predict and dispatch your purchases before you have actually decided to buy them. Still not convinced? Consider the number of targeting options already available to advertisers via Facebook’s Audience Insights platform. As an advertiser, you may choose to show your Facebook ad to people based on location, age, interests, connections, and more advanced options. Not very convincing, right? Things change when you drill down within the various categories. For example, the Interests category includes very granular targeting options such as interests in various industries (e.g. advertising, agriculture, banking), the entertainment you consume (e.g. types of movies, books you read, TV shows, music, games), family and relationships (e.g. dating, family make-up, fatherhood, friendship, marriage, parenting), and fitness and wellness (e.g. bodybuilding, dieting, gym membership, meditation, nutrition, yoga). You can choose to deliver your ads to people based on their behavior, language, relationship status, education, work, financial situation, home type and value, parenting, politics, life events, and much more. Yes, I can deliver my ad to single dads with two children, MBA educated, earning over £75,000, owning a £500,000 house, who practice yoga, have certain political views, and so forth. All of this is possible because Facebook already has this information about you. After all, remember that over 25% of us do not even bother with any kind of privacy settings!
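To give a feel for how such granular targeting boils down to simple attribute filtering, here is an illustrative sketch. All profiles and field names below are invented for the example; they are not Facebook’s actual data model or API.

```python
# Illustrative only: filtering an audience on granular attributes.
# The profiles and field names are hypothetical.

profiles = [
    {"id": 1, "relationship": "single", "children": 2, "education": "MBA",
     "income": 80_000, "home_value": 500_000, "interests": {"yoga", "politics"}},
    {"id": 2, "relationship": "married", "children": 0, "education": "BA",
     "income": 45_000, "home_value": 250_000, "interests": {"gaming"}},
]

def matches(p):
    """Single parents with two children, MBA educated, earning over 75,000,
    owning a 500,000 home, who practice yoga."""
    return (p["relationship"] == "single"
            and p["children"] == 2
            and p["education"] == "MBA"
            and p["income"] > 75_000
            and p["home_value"] >= 500_000
            and "yoga" in p["interests"])

audience = [p["id"] for p in profiles if matches(p)]
print(audience)  # -> [1]
```

The point is not the code itself but how trivially a rich profile store turns into an arbitrarily narrow audience definition.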

Perhaps the Cambridge Analytica scandal of 2018 best indicates the amount of data about us that companies like Facebook have in their possession.

Aleksandr Kogan, Cambridge Analytica’s chief researcher, is reportedly proud of scraping the Facebook profiles of “50+ million individuals for whom we have the capacity to predict virtually any trait.” (Romano, 2018) Most people, and public opinion at large, condemned this mischievous act of harvesting our personal data, ignoring the reality that WE provided access to our data by mismanaging our privacy settings and by uploading massive amounts of information to our Facebook profiles, adding to it daily. As Aja Romano concludes: “The Facebook data breach wasn’t a hack. It was a wake-up call.” (Romano, 2018)

The Future of Native Advertising

Native advertising will mostly follow the trajectory of social advertising, and increased pressure to maintain share of attention will drive the customization and improvement of ads down to individual users. Devices such as your Fitbit will signal your mood, your readiness to take action, and the step you are at within the sales funnel. A tailored ad will then be served, directing the audience through the funnel. Given the prominence of automated content, the advertiser may be better off leaving ad creation to AI algorithms. I expect that these algorithms will become more competent than humans at creating as many ads as are needed to match individual user characteristics and intent. In fact, Google AdWords has recently introduced, and is testing, automatically launched machine-generated ads that augment human-generated ads. (Marvin, 2017) Moreover, as algorithms update content in real time based on ongoing user behavior, characteristics, mood, and the weather, fixed conversion optimization tactics such as A/B split testing will become redundant. Following RankBrain’s model, newspapers may decide to leverage the cloud to build a high-quality, natural sponsored content ecosystem, as opposed to sending traffic to your website. This would increase time-on-site metrics and improve revenues, given the control newspapers would have over the funnel. A Scoop.it type of model would deliver a great user experience, tailor quality sponsored content to the individual, and keep people on the advertiser’s website.
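As a rough illustration of why fixed A/B splits could give way to continuous optimization, here is a hedged sketch of an epsilon-greedy bandit, a standard technique that keeps shifting impressions toward the better-performing ad variant as data arrives. The click-through rates are invented for the example; no platform’s actual system is implied.

```python
import random

# Hedged sketch: an epsilon-greedy bandit re-optimizing ad variants in real
# time, in contrast to a fixed A/B split test. Click rates are hypothetical.
random.seed(42)

TRUE_CTR = {"ad_a": 0.02, "ad_b": 0.05}    # invented true click rates
counts = {v: 0 for v in TRUE_CTR}          # impressions served per variant
clicks = {v: 0 for v in TRUE_CTR}          # clicks observed per variant

def observed_rate(v):
    return clicks[v] / counts[v] if counts[v] else 0.0

def choose(epsilon=0.1):
    if random.random() < epsilon:              # explore occasionally
        return random.choice(list(counts))
    return max(counts, key=observed_rate)      # otherwise exploit the leader

for _ in range(10_000):
    ad = choose()
    counts[ad] += 1
    clicks[ad] += random.random() < TRUE_CTR[ad]   # simulate a click

best = max(counts, key=observed_rate)
print(best, {v: round(observed_rate(v), 3) for v in counts})
```

Unlike a split test that stops and declares a winner, the loop never stops adapting, which is the property the paragraph above predicts will make fixed testing redundant.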

On Email Marketing

Ceding control to AI-powered intelligent assistants for organizing our lives will lead to the end of email marketing. As algorithms become more and more accurate, spam and unsolicited email will not even reach your inbox. And even if it does, it will be automatically deleted before you see it. You may not think about it this way, but spam filters are nothing other than algorithms that are continuously trained to improve the email experience. Email algorithms, though, will go much further than just managing spam. One example I can think of is an algorithm understanding that you have downloaded a “lead magnet” but have not given permission for your address to be added to an email marketing database; hence, any subsequent emails from that sender will be deleted. But then, you may not have to provide your email address to download lead magnets at all. Simply input your keywords into the algorithm’s content generation bar, press Enter, and the very best content will be created or curated for you on demand. In fact, you may find that email marketing companies will develop AI algorithms as a core business, as opposed to email marketing per se. Following the customer service mindset employed by Google, email algorithms will shift focus from the advertiser to the user, if they are to survive as a viable business model. We may find email marketing companies shifting attention from cluttering your inbox to cleaning it, making it more relevant, and customizing it down to your individual needs. A very timid start is being made by Mailchimp, a market leader that recently introduced its “brain”. The purpose of the brain is to help advertisers by taking email marketing strategy out of their hands. In brief, Mailchimp says that by ceding control to its brain, marketers will improve sales.
Some of the services that can be automated by Mailchimp’s brain are: welcoming new subscribers, creating emails, retargeting website visitors with emails and ads, recovering abandoned carts, finding new audiences with ads, thanking first-time customers, sending order notifications, following up on purchases, and rewarding your best customers. (Mailchimp, 2018)

In this context, ceding control to the algorithm does not sound that bad anymore, does it?
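To make the earlier point concrete, a spam filter of the kind described above can be sketched as a tiny trained classifier in the spirit of naive Bayes. The training corpus below is entirely made up; a real filter would train continuously on millions of labeled messages.

```python
from collections import Counter

# Minimal sketch of a spam filter as a trainable algorithm (naive Bayes
# style). The toy corpus is invented purely for illustration.

train = [
    ("win free money now", "spam"),
    ("free offer click now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at one tomorrow", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def score(text, label):
    # prior probability times per-word likelihoods, with add-one smoothing
    total = sum(word_counts[label].values())
    p = label_counts[label] / sum(label_counts.values())
    for w in text.split():
        p *= (word_counts[label][w] + 1) / (total + len(vocab))
    return p

def classify(text):
    return max(("spam", "ham"), key=lambda lab: score(text, lab))

print(classify("free money offer"))          # -> spam
print(classify("agenda for lunch meeting"))  # -> ham
```

Retraining is just re-running the counting step on new labeled mail, which is why the text can fairly describe spam filters as algorithms that are "continuously trained."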

Another interesting approach to email marketing has been proposed by Esther Dyson, who has suggested that people should be paid for reading emails. Indeed, Esther could charge different fees for reading the emails she receives, based on the type of email and on her influence. For example, an email from an entrepreneur could be charged more than one from a student. (Kelly, 2016) This approach would, first, clean up your inbox, as companies would think twice before buying and deploying email lists of users who haven’t given their permission to be marketed to. Second, you gain control and manage the amount of attention you pay to your emails. And, just as proposed in Google’s case, you might purchase AI units, apply them to your email inbox, and allow algorithms to match marketing emails to your profile. And why bother replying to the sender when your Google assistant can generate personalized answers based on the digital profile of the sender, including his or her social media activity, previous emails, blog posts, and so forth? In fact, as early as 2013 Google applied for a patent on a system that could automatically generate personalized email replies and social media responses based on the digital profile of the target, while applying the writing style of the sender. (Ford, 2016) Advertisers, for their part, will also jump at the opportunity to purchase AI units, deploy AI in their email marketing tool of choice, and choose various email-opening bidding strategies based on the level of influence or characteristics of audiences. This will be a great advancement in advertising, as marketers will now know for sure that their emails are being read by the right people, people who have given permission to be targeted by relevant email marketing campaigns. This is in contrast to the blind and increasingly inefficient email marketing approach of today. In the end, this will be a win–win situation. The Mailchimp brain will manage the whole process for you.
After all, its purpose will be to displace the inefficient practice of direct mass email marketing; Mailchimp sells AI units, not emails. Another consequence of smarter algorithms will be the streamlining of the email production process, making it less of a dreaded and time-consuming task. While listening to clothing recommendations provided by your intelligent assistant, you may decide to write an email to a potential client, a task that would normally be a bit of a chore. Luckily, your intelligent assistant, powered by RankBrain, can also collect data on your target, including blog posts, social media profiles, statements, emails, Nest inputs, and so forth. Armed with this information, RankBrain can draft, write, and optimize your message to match your target’s digital profile. Of course, the same goes for sending SMSs, VR messages, calls, or social media messages. And all you had to do was provide your intelligent assistant with the name of your target, your keywords, and a context. You may of course jump in and suggest corrections; your intelligent assistant will dutifully adjust the message. Or, going back to Kelly’s prediction, you may find that email marketing dies a quick death as well, and your intelligent assistant will communicate directly with the Google, Siri, Cortana, or Echo assistant of your target. If you want evidence of the abilities of Google Assistant, all you need do is watch a demo presented by Google’s CEO, Sundar Pichai, which shows the Assistant making a hairdresser appointment for a client. The amazing part is that when the hairdresser doesn’t have a vacancy on the date and at the time requested by the caller, the Assistant recalibrates, continues the conversation by offering other times on that date, and arranges an appointment. (Welch, 2018)

On VR and Augmented Reality

Kevin Kelly went to great lengths in reviewing the most disruptive technological platforms that will impact us within our lifetimes. He referred to computers as the first disruption and mobile phones as the second, and concluded that “the next disrupting platform now arriving is VR.” (Kelly, 2016, p. 231) VR and augmented reality represent great advertising opportunities that will not go unnoticed by marketers looking to develop new distribution channels. You will wear your smart Google Glass everywhere you go, whether running, walking, driving, or having a coffee in your local coffee shop. Glass, powered by RankBrain, will greatly enhance your productivity. You will simultaneously have your coffee and watch a movie, read a book, or read your emails. Of course, many other uses will make Google Glass an indispensable part of your wardrobe. Obviously, Google will apply AI technology to developing this new stream of revenue. As I mentioned earlier in the book, Glass will watch you, accurately identify your mood, and tailor content or ads to your current situation. Moreover, given that you have already ceded control to Google Assistant, algorithms will most likely know you better than you know yourself, further customizing the content and ads you are being served. Furthermore, Google will be monitoring every action you take while wearing Glass: what games you play, what movies you watch, what books you read, the meetings planned in your calendar, and even your attention span during a movie, book, or game. This leads yet again to further customization of your content or ads, and significantly improved conversion for advertisers. But how will it look in practice? You may be watching an action movie and be served an ad for a relevant product, or for a product matching the feelings generated by the movie. In terms of augmented reality, while driving your car, ads will be served as overlays on buildings throughout your journey.
Augmented ads will be personalized to your current mood, the time you spend in traffic, your earlier behavior, and so forth. While looking at the very same landscape, the driver behind you will be served totally different augmented ads, personalized to his own characteristics, mood, and digital profile. This is great progress compared with the current one-size-fits-all ads you see on buses, bus stops, or buildings. And, given the digital form of VR technologies, every move you make is monitored, analyzed, and further used to build your Google digital file. Some of the data already being collected and analyzed via VR includes eye tracking, body motion, sound, data input, heartbeat, head movement, control inputs, communications, and much more. VR analytics will provide valuable information, given the findings of a Stanford University study that people behave very similarly in VR to how they do in real life. (Sky, 2015)

In sports and games, the high level of interaction between players during a game will provide further opportunities for advertising at an even more granular level, down to communities of players showing similar behaviors, playing styles, or performance. Finally, as smart assistants establish themselves as the de facto “connector” of your household appliances, augmented paid ads will start following you around the house, knowing when you switch on your TV, open the fridge, or adjust your thermostat. Indeed, consider the estimate that between 2015 and 2020 the number of wirelessly connected devices will grow from 16 billion to 40 billion. Similarly, a report by Juniper estimated that smart home services will reach a global market value of $71 billion by 2025. Yes, the internet of things (IoT) will represent yet another advertising opportunity for the likes of Google and Facebook, and not a small one, I must say. Google is already working hard on leveraging this opportunity, encouraging people to connect their smart houses through various Google products, such as Google Home, Google Home Mini, Google Wifi, Chromecast, Chromecast Audio, Nest Learning Thermostat, Nest Protect, Nest Cam IQ Indoor, Nest Cam Indoor, and Nest Cam Outdoor. Yes, Google is making it as easy as possible for you to digitize your life. Incidentally, this also allows its algorithms to continuously build your digital profile. Google will become the biggest corporation on the planet, its profits will explode, and the alternative of not advertising on Google, besides being more expensive, will amount to commercial suicide. But then the same goes for Amazon, Microsoft, and Facebook. And who knows what other innovative companies may enter the game?

The race is on.

REFERENCES

  1. Ash, Tim. Ginty, Maura. Page, Rich (2012). Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions, 2nd Edition, US : John Wiley & Sons
  2. Amazon Comprehend (2018). Amazon Comprehend. Discover Insights and relationships in text, Retrieved from https://aws.amazon.com/comprehend/?nc2=h_a1
  3. Amazon Lex (2018), Amazon Lex. Conversational Interfaces for Your Applications. Powered by the same deep learning technologies as Alexa, Retrieved from https://aws.amazon.com/lex/?nc2=h_a1
  4. Amazon Rekognition (2018), Deep learning-based image and video analysis, Retrieved from https://aws.amazon.com/rekognition/?nc2=h_a1
  5. AWS DeepLens (2018), The world’s first deep learning enabled video camera for developers, Retrieved from https://aws.amazon.com/deeplens/?nc2=h_m1
  6. Anderson, Chris (2009). The Longer Tail. How Endless choice is creating unlimited demand, GB : Random House Business Books
  7. Ariely, Dan (2012). The (Honest) Truth about Dishonesty, UK: Harper Collins Publishers
  8. Ariely, Dan (2009). Predictably Irrational. The Hidden Forces that Shape Our Decisions, UK: Harper
  9. Baron-Cohen, Simon (2012). Zero Degrees of Empathy. A New Theory Of Human Cruelty And Kindness, GB: Penguin Books
  10. Barnatt, Christopher (2013). 3D Printing. The Next Industrial Revolution, GB : ExplainingTheFuture.com
  11. Baumeister, Roy F & Tierney, John (2012). Willpower. Rediscovering Our Greatest Strength, UK: Allen Lane
  12. Bernazzani, Sophia (2017). The Decline of Organic Facebook Reach & How to Outsmart the Algorithm, Retrieved from https://blog.hubspot.com/marketing/facebook-declining-organic-reach
  13. Bloom, Paul (2011). How Pleasure Works. Why we like what we like, GB : Vintage Books
  14. Bostrom, Nick (2016). Superintelligence. Paths, Dangers, Strategies, GB: Oxford University Press
  15. Brafman, Ori & Brafman, Rom (2011). Click. The Power Of Instant Connections, GB: Virgin Books
  16. Brooks, David (2012). The Social Animal. A Story of How Success Happens, GB : Short Books
  17. Brynjolfsson, Erik & McAfee, Andrew (2016). The Second Machine Age. Work, Progress, And Prosperity in a Time of Brilliant Technologies, US: Norton
  18. Cabane, Olivia Fox (2012). The Charisma Myth. How Anyone Can Master the Art and Science of Personal Magnetism, GB: Penguin Group
  19. Caldwell, Leigh (2012). The psychology of price. How to use price to increase demand, profit and customer satisfaction, GB: Crimson Publishing Ltd
  20. Carnegie, Dale (2006). How to Win Friends and Influence People, GB: Vermilion
  21. Chabris, Christopher & Simons, Daniel (2010). The Invisible Gorilla And Other Ways Our Intuition Deceives Us, GB: Harper Collins Publishers
  22. Chace, Calum (2015). Surviving AI. The promise and peril of artificial intelligence, GB: Three Cs
  23. Christensen, Clayton M (2013). The Innovator’s Dilemma. When New Technologies Cause Great Firms To Fail, US: Harvard Business Review Press
  24. Christakis, Nicholas & Fowler, James (2011). Connected. The Amazing Power of Social Networks and How They Shape Our Lives, UK: Harper Collins Publishers
  25. Cialdini, Robert B (2007). Influence. The Psychology of Persuasion, US: Collins Business Essentials
  26. Dolan, Paul (2015). Happiness by Design. Finding Pleasure and purpose in everyday life, GB: Penguin Books
  27. Domingos, Pedro (2017). The Master Algorithm. How The Quest For The Ultimate Learning Machine Will Remake Our World, UK: Penguin Books
  28. Duhigg, Charles (2012). The Power of Habit. Why we do what we do and how to change, UK: Random House
  29. Dweck, Carol S. (2008). Mindset. The New Psychology of Success. How We Can learn To Fulfill Our Potential, US: Ballantine Books
  30. Enge, Eric. Spencer, Stephan. Stricchiola, Jessie & Fishkin, Rand (2013). The Art of SEO. Mastering Search Engine Optimization, US: O’Reilly
  31. Ford, Martin (2016). The Rise of the Robots, GB: Oneworld Publications
  32. Next Tech Magazine (2017). Google Tips & Tricks. Unlock the power of the world’s most amazing free apps, UK: Future Publishing Ltd
  33. Gardner, Dan (2009). Risk. The Science and Politics of Fear, GB: Virgin Books
  34. Gilbert, Daniel (2007). Stumbling on Happiness, GB: Harper Perennial
  35. Gladwell, Malcolm (2001). The Tipping Point. How little things can make a big difference, GB : Abacus
  36. Gladwell, Malcolm (2006). Blink. The Power of Thinking without Thinking, GB: Penguin Books
  37. Godin, Seth (2005). Purple Cow. Transform Your Business by Being Remarkable, UK: Penguin Business
  38. Godin, Seth (2012). All Marketers Are Liars, US: Penguin Books
  39. Google (2017), Google Products, retrieved from https://www.google.com/intl/en/about/products
  40. Google (2017), Data Privacy Policy, Retrieved from https://privacy.google.com/intl/en-GB/your-data.html
  41. Harari, Yuval Noah (2015). Homo Deus. A Brief History of Tomorrow, GB: Harvill Secker
  42. Heath, Chip & Heath, Dan (2008). Made to Stick. Why some ideas take hold and others come unstuck, GB: Arrow Books
  43. Hood, Bruce (2009). Supersense. From Superstition to Religion- the Brain Science of Belief, GB: Constable
  44. Johnson, Spencer (1999). Who Moved My Cheese?, GB: Vermillion
  45. Jones, Graham (2014). Click.ology. What works in online shopping and how your business can use consumer psychology to succeed, GB: Nicholas Brealey Publishing
  46. Klein, Gary (1999). Sources of Power. How People make Decisions, US : MIT Press
  47. Laham, Simon (2012). The Joy Of Sin, GB: Constable
  48. Lipson, Hod & Kurman, Melba (2013). Fabricated. The New World Of 3D Printing, US : Wiley
  49. Kahneman, Daniel (2011). Thinking, fast and slow, GB: Allen Lane
  50. Kahneman, Daniel & Deaton, Angus (2010), High income improves evaluation of life but not emotional well-being, Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2944762/
  51. Kelly, Kevin (2016). The Inevitable. Understanding the 12 Technological Forces That Will Shape Our Future, US: Viking
  52. Kelly, Kevin (2014). The Three Breakthroughs That Have Finally Unleashed AI on the World, Retrieved from https://www.wired.com/2014/10/future-of-artificial-intelligence/
  53. Kim, Larry (2018). 10 Remarketing Facts that Will Make You Rethink PPC, Retrieved from http://www.wordstream.com/blog/ws/2015/10/01/remarketing-facts
  54. Krug, Steve (2014). Don’t Make Me Think. A common Sense Approach To Web and Mobile Usability, US: New Riders
  55. Mailchimp (2018). Marketing Automation is like a second brain for your business, Retrieved from https://mailchimp.com/brain/#/
  56. Marr, Bernard (2016). Big Data in Practice. How 45 Successful Companies Used Big Data Analytics To Deliver Extraordinary Results, GB: Wiley
  57. Marr, Bernard (2015). Big Data. Using Smart Big Data Analytics And Metrics To Make Better Decisions And Improve Performance, GB: Wiley
  58. Marvin, Ginny (2017). Google AdWords’ automated ad suggestions test is getting a reboot, Retrieved from https://searchengineland.com/google-adwords-automated-ad-suggestions-beta-281924
  59. McKinney, Wes (2016). Python for Data Analysis, US: O’Reilly Media
  60. Milgram, Stanley (2010). Obedience to Authority, GB: Pinter and Martin
  61. P, Paul (2016), Forum conversation, Retrieved from https://www.en.advertisercommunity.com/t5/AdWords-Account-Issues/Keyword-Planner-says-it-is-a-free-tool-but/td-p/586519
  62. Pink, Daniel H. (2011). Drive. The Surprising Truth About What Motivates Us, GB: Canongate
  63. Pink, Daniel H. (2013). To Sell Is Human. The Surprising Truth About Persuading, Convincing And Influencing Others, GB : Canongate Books
  64. Pink, Daniel H. (2008). A Whole New Mind. Why Right-Brainers Will Rule the Future, GB : Marshall Cavendish International
  65. Pinker, Steven (2003). The Blank Slate. The modern denial of human nature, GB: Penguin Books
  66. Priestley, Dan (2014). Entrepreneur Revolution. How to develop your entrepreneurial mindset and start a business that works, GB: Capstone Publishing
  67. Robertson, Ian (2012). The Winner Effect. How Power Affects Your Brain, GB : Bloomsbury
  68. Romano, Aja (2018). The Facebook data breach wasn’t a hack. It was a wake-up call, Retrieved from https://www.vox.com/2018/3/20/17138756/facebook-data-breach-cambridge-analytica-explained
  69. Ross, Alec (2017). The Industries of the Future, UK: Simon & Schuster UK
  70. Ross, Lee & Nisbett, Richard E. (2011). The Person And The Situation. Perspectives of Social Psychology, GB : Pinter and Martin
  71. Schmidt, Eric & Rosenberg, Jonathan (2014). How Google Works, GB: John Murray
  72. Searchmetrics (2017), Rebooting Ranking Factors – Google.com, Retrieved from http://pages.searchmetrics.com/rs/656-KWJ-035/images/Searchmetrics-Rebooting-Ranking-Factors-US_whitepaper.PDF?mkt_tok=eyJpIjoiTmpaak9EQTVaV0kzTm1ZMiIsInQiOiJjQVlaZnlCSmtveU5UZWc3NEM2ZlwvbkRMSDRRcnVUWlZRNFlQNnRlYVRFTEszak5EaHdraDd4eWFGUXhjNmdSOTJVS21BTFwveHVSa2J0dXIzUkZ5V2Z3SUE4ZHU4cm5uV0JweU8zcW4wRU5SU0hHUjJOOVJoSndianUyenVVZFVYIn0%3D
  73. Sharot, Tali (2012), The Optimism Bias. Why we’re wired to look on the bright side, GB: Robinson
  74. Sharp, Byron (2010). How brands Grow. What marketers don’t know, Australia: Oxford University Press
  75. Sky, Nite (2015). Virtual Reality Insider, GB: Amazon
  76. Slater, Laura (2005). Opening Skinner’s Box, GB: Bloomsbury Publishing
  77. Susskind, Richard & Susskind, Daniel (2017). The Future Of Professions. How Technologies Will Transform The Work Of Human Experts, GB: Oxford University Press
  78. Sutherland, Stuart (2007). Irrationality, GB: Pinter and Martin Ltd
  79. Steele, Claude M. (2011). Whistling Vivaldi. How stereotypes affect us and what we can do, US: W. Norton
  80. Stone, Brad (2013). The Everything Store, GB: Transworld Publishers
  81. Taleb, Nassim Nicholas (2007). Fooled by Randomness. The Hidden Role of Chance in Life and in the Markets, US : Penguin Books
  82. Tapscott, Don & Williams, Anthony D. (2006). Wikinomics, How Mass Collaboration Changes Everything, US : Portfolio
  83. Tavris, Carol & Aronson, Elliot (2008). Mistakes Were Made (but not by me). Why we justify foolish beliefs, bad decisions and hurtful acts, GB: Pinter and Martin
  84. Totham, Isabel (2017). 10 Online Dating Statistics You Should Know, Retrieved from https://www.eharmony.com/online-dating-statistics/
  85. Trivers, Robert (2011). Deceit and Self-Deception. Fooling Yourself the Better to Fool Others, GB: Allen Lane
  86. Trotman, Andrew (2014), Facebook’s Mark Zuckerberg: Why I wear the same T-shirt every day, Retrieved from https://www.telegraph.co.uk/technology/facebook/11217273/Facebooks-Mark-Zuckerberg-Why-I-wear-the-same-T-shirt-every-day.html
  87. Walton, Sam & Huey, John (1993). Made In America, US : Bantam
  88. Weinberg, Gabriel & Mares, Justin (2015). Traction. How Any Startup Can Achieve Explosive Customer Growth, UK: Portfolio Penguin
  89. Welch, Chris (2018). Google just gave a stunning demo of Assistant making an actual phone call, Retrieved from https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018
  90. Whitwam, Ryan (2013), Simulating 1 second of human brain activity takes 82,944 processors, Retrieved from https://www.extremetech.com/extreme/163051-simulating-1-second-of-human-brain-activity-takes-82944-processors
  91. Wiseman, Richard (2010). :59 Seconds. Think a little, Change a lot, GB: Pan Books
  92. Wiseman, Richard (2007). Quirkology. The Curious Science Of Everyday Lives, GB: Macmillan
  93. Wikipedia (2018). Watson (computer). Retrieved from https://en.wikipedia.org/wiki/Watson_(computer)
  94. Wilson, D. Timothy (2011), Redirect. The Surprising New Science of Psychological Change, GB : Allen Lane
  95. Wilson, D. Timothy (2002), Strangers to Ourselves. Discovering the adaptive unconscious, GB: The Belknap Press of Harvard University Press
  96. York, Alex (2018). 61 Social Media Statistics to Bookmark for 2018. Retrieved from https://sproutsocial.com/insights/social-media-statistics/
  97. Zimbardo, Philip (2009). The Lucifer Effect. How Good People Turn Evil, GB: Rider