Written with help from Nir Kabessa and Professor Dan Rubenstein.
When Larry Page and Sergey Brin received funding for Google in 1998, they had to figure out how their academic project could make money. Initially the two were against advertising because it would incentivize displaying paid results over quality search results. Concluding their discussion of advertising in their 1998 paper, they wrote, “we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Later that year, Page and Brin would nonetheless decide to experiment with small ads on their platform.
Fast forward to 2019 and Google controls over a third of the digital advertising market, followed by Facebook with over 20%. Additionally, estimates predict that this year digital advertising spending will finally surpass spending on traditional ads. There is a fundamental problem with the advertising business model, though: the mixed incentives that Page and Brin identified in 1998. Users want useful services and quality content, but advertising platforms want users to view ads. Marc Andreessen, founder of Netscape, has argued that advertising became the dominant business model of the internet because there was no way to spend money on the early web. But now growing mistrust of online advertising may have created an opportunity for startups to realign online incentives. First, we will look at some intrinsic issues with the advertising business model, and then we will examine how new platforms may be able to realign incentives.
Attention and Data Extraction
Fundamentally, businesses advertise to get users to buy their products. The value of any advertising platform is therefore proportional to the number of potential customers who will see the ad. There are two ways that platforms can maximize the value of their ad space: get users to view more ads and better target users with ads. Unfortunately, the platform’s incentive to increase these two metrics conflicts with providing users better services.
In many ways, platforms’ efforts to attract more attention from users go hand in hand with extracting their data. If sites can recommend content that keeps users engaged longer, the platform gets more data. The more data platforms get, the better they can recommend content to grab users’ attention in the future. All the while, more attention and data mean that users will see more ads specifically tailored to them. Users want quality content, though, not attention-grabbing content. Mark Zuckerberg acknowledged issues with attention-grabbing content on Facebook in a 2018 note. He found that people are naturally attracted to “sensationalist and provocative content,” and that “people will engage with it more on average — even when they tell us afterwards they don’t like the content.” Zuckerberg admits that borderline content makes Facebook worse, yet does not acknowledge that Facebook is incentivized to show extreme content. It is very likely that Facebook promotes content based on the attention it attracts, especially since other platforms do.
This past June, Tristan Harris, an ex-Google employee and the founder of the Center for Humane Technology, testified before the Senate Commerce Committee about how content recommendation algorithms use personal data to determine what content will keep users engaged the longest. Unfortunately, these algorithms have uncovered a flaw in human nature: people are naturally attracted to content slightly more extreme than their current views. In his testimony, Harris describes how teenage girls watching dieting videos on YouTube would be recommended anorexia videos, and how watching a moon landing video would later lead to recommendations for flat earth conspiracy videos. YouTube’s recommendations get more and more extreme in order to keep users engaged longer. Its algorithms use people’s natural inclinations to influence them to give the platform more attention. Most importantly, though, as Zuckerberg admits, users do not necessarily like extreme content.
If YouTube recommends content in order to attract more attention, Facebook and other social media sites almost certainly do too. All companies seeking ad revenue online are competing for attention. Since attention is finite, they are competing in a zero-sum game. If one platform gets more attention, it is likely taking attention away from another. If one app engages in manipulative practices to get more attention from its users, then the other apps must follow or fall behind. It is a modern example of the prisoner’s dilemma, except its outcome affects the entire internet.
Facebook and Google modified their algorithms as public awareness of these problems grew. Tristan Harris, who helped the issues gain attention, brags on his personal website about influencing initiatives for digital wellbeing at Facebook and Google. But even with steps in the right direction, Google and Facebook are not changing their business model. Zuckerberg has proposed changes in response to problems on Facebook, but as Chris Hughes, co-founder of Facebook, discusses in his May 2019 op-ed calling to break up Facebook, we cannot take Zuckerberg at his word because of Facebook’s lack of transparency. (1) Facebook’s algorithms are proprietary; nobody can audit them. (2) Facebook’s engineers have often had trouble implementing Zuckerberg’s vague promises for better news feeds. (3) We do not even know if Zuckerberg intends to change Facebook as he proposes; his proposals run contrary to his business model and would likely hurt profits. More likely, Facebook is only making minimal changes to calm public mistrust.
Facebook’s opacity is also why we cannot trust Zuckerberg when he discusses privacy. Platforms ultimately want user data to better target users with content and ads, but their vast stores of data have begun to cause concern. Consider the Cambridge Analytica scandal. Leading up to the 2016 election, Cambridge Analytica, a political consulting firm working with Donald Trump, paid people to fill out a personality test on their app. Users logged into the app through Facebook and agreed to share their data for academic use. Cambridge Analytica then managed to collect data from users’ Facebook friends as well, and with data from over 87 million Facebook accounts, it was able to determine people’s political personalities and send ads appealing to specific users. Since it was exposed in 2018, the scandal has raised many questions about online data. How many other companies may be using our data to try to influence our decisions? Can organizations effectively use our data to manipulate us? And do Facebook and Google have the power to influence national elections? Many of these questions remain unanswered, but trust in the companies that aggregate our data has certainly dropped.
In the aftermath of the scandal, Zuckerberg has announced plans to make his social networks privacy-focused. WhatsApp is end-to-end encrypted, Messenger is slated to follow, and Facebook is surely storing user data more securely, but these small upgrades in privacy are unlikely to fix any real issues. Facebook still has massive amounts of data about every user, and more importantly, sites can collect other data to continue targeting users with ads and content. According to Tristan Harris, advances in AI will allow algorithms to determine people’s political personalities based on their mouse movements and click patterns alone. While Facebook may have recently increased user privacy in some areas, it is still capable of finding the same information about users to send them targeted ads and content. Again, Facebook is only making minimal changes to calm public mistrust. Facebook knows it can work around these self-imposed limitations to collect user data. Fundamentally, nothing is changing.
Mistrust in Facebook increased greatly after the Cambridge Analytica scandal. Many joined the ‘Delete Facebook’ movement, but because users did not have any viable alternatives, it was largely ineffective. Even Chris Hughes says the scandal made him acknowledge issues with the network. Hughes is among many proponents of breaking up Facebook, and their campaign has certainly helped increase overall mistrust in tech companies. At the beginning of 2019, a report from Hootsuite and We Are Social found that 40% of Americans were concerned about misuse of their personal data. Their report for the third quarter shows that number has increased to 66%. More and more people want alternatives.
Disrupt Instead of Break Up
In response to growing mistrust of large internet companies, many are calling to break up Facebook and Google. While breaking up large tech companies would create more competition in the short term, it likely would not have any major effect on the problems inherent in online advertising. Platforms would still maximize user data and attention to increase the value of their advertising space. Instead, the solution should be to realign incentives online. Clayton Christensen’s theory of disruptive innovation suggests that realigning incentives may be an opportunity for startups.
Clayton Christensen, a professor at the Harvard Business School, first defined disruptive innovation in a 1995 article and discusses it at length in his book The Innovator’s Dilemma. A disruptive innovation brings a new value proposition to a market. In the short term, disruptive technology creates less profit but offers new benefits to users. It is a big enough departure from the dominant business model in an industry that the largest companies will not adopt it; they cannot justify investing time and money into an unproven technology that will decrease their profits in the short term. Christensen’s theory helps explain why Google, Facebook, and other platforms with an advertising business model are unlikely to change. They may make minimal changes to calm public fears, but truly fixing these issues would mean sacrificing short-term profits to overhaul their business around an unproven, new business model. For a startup, though, finding new business models is an open problem that could disrupt the tech industry.
Marc Andreessen is among many who believe blockchain technology could help realign incentives between platforms and users. For one, blockchains allow for digital ownership of data. Currently, most applications use a client-server model. When a user logs into Facebook, for example, Facebook’s servers send data to the user’s computer, called the client. All Facebook data is hosted on Facebook’s servers, which makes it easy for a company like Cambridge Analytica to access millions of users’ data, or even for one hacker to steal data on over 100 million credit cards. But if users owned their application data, they could keep it in their own personal storage. If the application or third parties wanted user data in order to send targeted content and ads, each user would have to give consent. With this technology, the Cambridge Analytica scandal and many downstream issues of the advertising business model would not be possible.
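As a rough illustration, here is a minimal TypeScript sketch of that consent model. Every name in it (PersonalDataStore, ConsentGrant, and so on) is hypothetical rather than a real API; the point is only that a third party cannot pull data from a central server, and instead must present a grant the user explicitly issued.

```ts
// Hypothetical sketch: data access when users own their storage.

interface ConsentGrant {
  appId: string;     // which app or third party is asking
  scope: string[];   // e.g. ["profile", "posts"]
  expiresAt: Date;   // consent is time-limited, not permanent
}

interface PersonalDataStore {
  read(path: string, grant: ConsentGrant): Promise<string>;
}

// An advertiser cannot read a user's data without a valid, unexpired
// grant; there is no central database to scrape as a fallback.
async function fetchTargetingData(
  store: PersonalDataStore,
  grant: ConsentGrant
): Promise<string | null> {
  if (grant.expiresAt.getTime() < Date.now()) {
    return null; // expired consent: access denied
  }
  return store.read("profile.json", grant);
}
```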
Blockstack, a decentralized computing platform, is building the infrastructure to allow app users to own their data (disclosure: I hold Blockstack's token). One of Blockstack’s core features is its decentralized storage system, called Gaia. Gaia is designed to store each user’s app data individually. User data is encrypted in the user’s own personal storage, and when users sign into applications, their application data is decrypted client-side. Another advantage for users is Blockstack’s single sign-in for all Blockstack apps. It is just as easy as using Facebook authentication on services like Instagram, Spotify, or the app Cambridge Analytica used to collect user data, but apps and third parties never have access to user data. Blockstack founder Muneeb Ali compares the shift from centralized storage to personal storage to the shift from mainframes to desktop computers during the 1980s and 1990s. The shift would give users control over their data and prevent apps from abusing their platforms.
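In practice, the flow looks roughly like this with the blockstack.js library (a sketch based on the 2019 API; exact option names like encrypt and decrypt may differ across versions):

```ts
import { AppConfig, UserSession } from 'blockstack';

// Request permission to write to the user's Gaia storage.
const appConfig = new AppConfig(['store_write']);
const userSession = new UserSession({ appConfig });

async function demo() {
  // One universal login: the app never sees a password or private key.
  if (!userSession.isUserSignedIn()) {
    userSession.redirectToSignIn();
    return;
  }

  // Write app data to the user's own Gaia hub, encrypted client-side
  // with the user's key before it ever leaves the browser.
  await userSession.putFile(
    'notes.json',
    JSON.stringify({ text: 'hello' }),
    { encrypt: true }
  );

  // Read it back; decryption also happens on the client.
  const raw = await userSession.getFile('notes.json', { decrypt: true });
  console.log(raw);
}

demo();
```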
Where Blockstack falls short is that it is only creating incentives for developers to create apps that respect privacy. Gaia storage abstracts away the application backend so developers can build apps faster and more cheaply. Blockstack also has a developer incentive program called ‘App Mining.’ Each month, Blockstack apps are rated and awarded money according to their ranking. Eventually the App Mining program will distribute $1 million monthly until the funds run out in about 10 years. App Mining is not meant to be the only source of revenue for apps, but along with faster development time, it is a good incentive for developers to start building in the ecosystem.
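To make the mechanics concrete, here is a hypothetical sketch of a rank-based payout. Blockstack’s actual App Mining formula is not reproduced here; this only shows the general shape, where each rank earns a fixed fraction of the rank above it:

```ts
// Hypothetical rank-based payout: geometric weights 1, decay, decay^2, ...
function appMiningPayouts(pool: number, numApps: number, decay = 0.8): number[] {
  const weights = Array.from({ length: numApps }, (_, i) => decay ** i);
  const total = weights.reduce((a, b) => a + b, 0);
  // Each app's payout is its weight's share of the monthly pool.
  return weights.map((w) => (pool * w) / total);
}

// e.g. a $100,000 pool split across 10 ranked apps
console.log(appMiningPayouts(100_000, 10).map((x) => x.toFixed(0)));
```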
While Blockstack is creating incentives for developers to build apps that respect user privacy, there is also an opportunity to realign incentives around online content. Instead of incentivizing content that attracts attention, blockchain technology can be used to reward user contributions to a network. This is what the decentralized blogging platform Steemit is building. Every day, the Steem blockchain distributes Steem, its native cryptocurrency, to individuals based on their subjective contribution to the network. User contribution is determined by user votes, so creators are rewarded when others like their content. Instead of rewarding content for the attention it receives, content is rewarded because people actually like it.
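A minimal sketch of vote-proportional rewards, in the spirit of Steem’s daily distribution (the real protocol also weights votes by stake and timing, among other factors):

```ts
// Each author earns a share of the daily pool proportional to the
// votes their posts received.
interface Post { author: string; votes: number; }

function distributeRewards(posts: Post[], dailyPool: number): Map<string, number> {
  const totalVotes = posts.reduce((sum, p) => sum + p.votes, 0);
  const rewards = new Map<string, number>();
  for (const p of posts) {
    const share = totalVotes === 0 ? 0 : (p.votes / totalVotes) * dailyPool;
    rewards.set(p.author, (rewards.get(p.author) ?? 0) + share);
  }
  return rewards;
}
```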
This all sounds great in theory, but unfortunately Steemit has not worked as well in practice. Many users abuse the system by paying others for support. The Steemit whitepaper notes, “Eliminating ‘abuse’ is not possible and shouldn’t be the goal. Even those who are attempting to ‘abuse’ the system are still doing work.” Steemit’s only solution is to limit the number of daily user votes, but this has not stopped fake activity. Users can pay for ‘vote bots’ to automatically support their content. Just as Google and Facebook have found shortcuts to win more attention without providing better content, members on Steemit are finding shortcuts to gain more support.
The Steemit whitepaper is correct that it is impossible to stop abuse in a completely decentralized network. John R. Douceur proves this in his influential 2002 paper “The Sybil Attack,” where he concludes that it is impossible for a decentralized network to stop fake identities from gaining disproportionate control over the system. It is possible to disincentivize abuse, though. Bitcoin has already solved this problem with its consensus mechanism. The genius of Bitcoin’s consensus is that voting power is relative to computational power. Miners pay to contribute to the Bitcoin network with computation, so attempts to abuse the system and undermine the cryptocurrency ultimately just waste the attacker’s computational power. Bitcoin works because the incentives of its miners and users are aligned; both groups want to maintain the integrity of the network so that Bitcoin maintains its value. With that said, there could be potential for a platform like Steemit that put a cost on contributions. Cryptocurrency rewards would still be distributed based on user votes, but voting on content could cost some amount of tokens. If the cost of malicious or fraudulent activity is greater than its reward, bad actors will only be wasting their resources. This mechanism is very similar to the proof-of-burn consensus algorithm that Blockstack’s blockchain uses.
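Extending the earlier reward sketch, the fix is a one-line economic change: every vote burns tokens. All names here are hypothetical; the point is that a vote bot now pays for every vote it casts, so self-promotion that earns less than it burns is a net loss:

```ts
// Hypothetical vote-with-cost mechanism, proof-of-burn flavored.
interface Post { author: string; votes: number; }
interface Account { balance: number; }

const VOTE_COST = 1; // tokens destroyed per vote

function castVote(voter: Account, post: Post): boolean {
  if (voter.balance < VOTE_COST) return false; // cannot afford to vote
  voter.balance -= VOTE_COST; // burned, not transferred: no one collects it
  post.votes += 1;
  return true;
}

// A bot farming its own posts burns VOTE_COST per vote; unless the posts'
// share of the reward pool exceeds what was burned, the attack loses money.
```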
Another solution could be a partially decentralized social media platform with some sort of governance structure. Consider Wikipedia: anyone can contribute, but its administrators have special privileges, including the ability to block users, protect pages from editing, and edit protected pages. This governance structure has allowed Wikipedia to maintain trustworthy information while remaining open for anyone to contribute. A network with cryptocurrency rewards could use a similar governance structure. Administrators could be voted in and required to put tokens at stake; only their votes would determine the rewards. Their votes would be entirely transparent so that any fraudulent activity could be reviewed. Malicious admins could then be voted out, causing them to lose their stake. Just as the previous proposal resembles proof-of-burn, this proposal resembles the delegated proof-of-stake consensus algorithm that the EOS blockchain uses.
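Again as a hypothetical sketch, the core of such a scheme is just a public vote log plus a slashing rule:

```ts
// Hypothetical staked-governance sketch (delegated proof-of-stake flavored).
interface AdminVote { postId: string; weight: number; }

interface Admin {
  id: string;
  stake: number;       // tokens locked while serving as an admin
  votes: AdminVote[];  // fully public, so anyone can audit them
}

// If the community finds a fraudulent vote in the log, the admin can be
// voted out and forfeits a fraction of the locked stake.
function slash(admin: Admin, fraction: number): number {
  const penalty = admin.stake * fraction;
  admin.stake -= penalty;
  return penalty; // could be burned or returned to the reward pool
}
```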
New, Decentralized Layers for the Internet
Blockstack and a social cryptocurrency are both decentralized solutions to the issues with existing, centralized internet companies. As Chris Dixon notes in his article ‘Why Decentralization Matters,’ a fundamental issue with centralized platforms is that once they saturate their market, the only way for them to continue growing is to extract more data and attention from their existing users. Centralized platforms inevitably become zero sum, but decentralized platforms remain positive sum. Anyone can continue to build services on top of them without fear that their access will be revoked, or create content without fear that it will be censored. Anyone can build on top of Blockstack, and on a blockchain-based social protocol, other developers could build frontends that give users a custom experience. Some apps could choose not to show, or to show only, specific types of content like photos or videos. Apps could even charge for premium features like an algorithmic news feed focused on finding content users like instead of content that grabs their attention. Developers will likely find many other ways to profit from building frontends on top of decentralized protocols.
It is impossible to know whether new, decentralized alternatives will be able to overtake centralized networks by realigning incentives. People want alternatives, and decentralized networks would ultimately give users more control: they can allow users to own their data, get rewarded for creating content others like, and determine the standard for good content. More user control is ultimately the benefit that could make decentralized networks a disruptive innovation and allow them to realign online incentives. Probably their largest drawback is that decentralized networks are not free. If users owned their own data, they would have to pay for their own storage, and a social cryptocurrency would need some cost to disincentivize fake activity.
There are many other areas online where incentives could be realigned, too. This article began by discussing the mixed incentives of Google's search engine, but it is unclear how a search engine could realistically realign incentives; most people are probably fine exchanging their data and attention to use Google search. At the inception of the web, though, nobody predicted a platform like Facebook where any user can contribute. Early web companies primarily created online alternatives to services that already existed, like selling books. Blockchain technology will likely create many opportunities to realign incentives that nobody has predicted yet. Ultimately, if developers create viable, decentralized alternatives to centralized networks, many people will likely switch.
We would like to clarify that the potential of decentralized platforms does not mean it is time to start shorting Facebook and Google. At least based on Lindy’s Law, legacy companies will likely be around for another 20 years. Lindy’s Law, first described by Albert Goldman in 1964, says that the life expectancy of a non-perishable thing is typically proportional to its current age. So based on Lindy’s Law, Google, now 21, has a total life expectancy of 42 years. Most ideas, companies, and technologies do not survive for long, so when a company is good enough to last for 21 years, it is typically good enough to last another 21. Facebook and Google’s advertising platforms give users enough value and have a large enough network effect that they are unlikely to go away any time soon. But there is certainly an opportunity for startups to improve upon their flawed business model.
If decentralized platforms do outcompete their centralized alternatives, a good parallel is how desktop computers disrupted the computer industry in the 1980s. As previously stated, Muneeb Ali compares digital ownership of data to desktop computers, and Marc Andreessen has compared the development of bitcoin to that of desktops. When everyone was using mainframes in the 1960s and 1970s, IBM was the biggest computer company. Then IBM fell behind when it did not start selling desktops. Creating desktops would have taken a sizeable amount of time and money, and IBM would have lost money in the short term had it chosen to invest in them, so it would not have made sense. Then Apple came along and built one of the first successful desktops in 1976. Desktops ultimately gave users more control, just as decentralized networks could give users control over their data and content. Today Apple is consistently one of the most valuable companies in the world. IBM is certainly still a massive company, but it is no Apple.