Student Challenge Quadriptych – “Shaping Competition Policy in the Era of Digitisation”
July 2019
CPI Europe Column edited by Anna Tzanaki (Competition Policy International) & Juan Delgado (Global Economics Group) presents:
CPI Europe Column: Student Challenge Quadriptych – “Shaping Competition Policy in the Era of Digitisation”
Competition Policy International is pleased to present our readers with this Special Edition of the Europe Column:
Student Challenge Quadriptych – “Shaping competition policy in the era of digitisation”
Following DG Competition’s conference on January 17, 2019, “Shaping competition policy in the era of digitisation,” the European Commission organized a Student Challenge that “encourages university students with an interest in competition policy and enforcement to bring their views to the main topics of the Shaping Competition Policy in the Era of Digitisation conference.”
The students submitted 250-word abstracts related to one of the Conference’s panel discussions. The four panel categories were as follows:
- Competition, data, privacy, and artificial intelligence;
- Digital platforms’ market power;
- Competing with data – a business perspective;
- Preserving digital innovation through competition policy.
CPI has asked the winner from each panel to expand on her/his short abstract submission for this special “Quadriptych” edition of the CPI Europe Column.
The four winners (in order of the panels) are: Guillaume Thébaudin (Télécom ParisTech), Rossi Abi-Rafeh (Toulouse School of Economics), Oriane Limouzin (Toulouse School of Economics), and David Pérez de Lamo (College of Europe, Bruges).
As always, a big “Thank You” to our Europe Column editors Anna Tzanaki & Juan Delgado.
– Sam Sadden
CPI Editor in Chief
Software is Eating Competition – Rossi Abi-Rafeh (Toulouse School of Economics)
Competing with Data – A Business Perspective – Oriane Limouzin (Toulouse School of Economics)
Preserving Innovation Competition in the Digital Era: “Killer Acquisitions” – David Pérez de Lamo (College of Europe, Bruges)
Big Data, Consumers’ Privacy, and Competition in Online Markets – Guillaume Thébaudin (Télécom Paris)
Software is Eating Competition – Rossi Abi-Rafeh (Toulouse School of Economics)1
“Software is eating the world.”2 Software is also eating competitors and potential competitors. Nowadays, firms do not use software and the internet only to deliver products and services to their customers; they also use them to understand the market, crush the current competition, and erect potential barriers to entry. The software underlying digital products and services makes it possible to change prices and product attributes in a targeted way, at a quicker rate than ever, and at a fraction of the cost. This has brought a lot of value to consumers by making the platforms and apps they rely on more responsive and tailored to their needs, but it has also brought new ways to restrict consumer choice, increase consumer switching costs, and new avenues for predatory behavior.
The EU Commission opened an inquiry into Google’s software in the Google Shopping case in 2010. In the same vein, antitrust authorities ought to inquire more into the software features and changes that platforms implement in their apps and websites. However, the time frame of antitrust enforcement is not adapted to the time frame at which software changes take place. It used to be that firms had to engage in lengthy product changes or in negotiations and deal-making to restrict competition. Software, however, has increased the speed and scale of the pricing, design, and product-positioning changes that can restrict competition. Combined with the concentration of markets in Europe and the U.S., and the network effects in platform economies, anti-competitive software can seal the market power of large tech platforms before antitrust authorities take action. Competition authorities can learn from the tech firms themselves and adapt their own toolbox: A/B testing, online experiments, and open-data policies can be adopted and used more regularly by antitrust authorities.
What is Anti-Competitive Software?
Tech firms offer software-based products and alter them incrementally at a quick pace: Airbnb and sellers on Amazon Marketplace resort to algorithmic pricing to change listing prices without human input, Uber offers targeted discounts, and the value and timing of price discounts can be modified algorithmically. Automated changes in other product attributes are also possible: design, interface, search, ranking, diversion, salience… On top of that, the effects of these software changes can be tested online and quickly: companies such as Optimizely, Inc. offer automated online testing of design changes in websites and apps. Some of these algorithms can lead to anti-competitive outcomes: they increase consumers’ engagement or switching costs, or restrict their choice, steering them away from competitors and sealing the dominance of a platform.
The speed and scale of restricting competition through software have been driven by the prevalence of computer-mediated transactions, electronic record keeping, and improved computational power. Software features vary in their level of automation; however, in keeping with the touted culture of Silicon Valley, they are tested and implemented quickly. For instance, when a food-delivery app starts offering you discounts through phone notifications, company employees made the decision to give these discounts, but the targeting, the amount, and the timing are all automated. Or take a marketplace app that demotes the ranking of some products in its search engine in order to increase traffic towards its own affiliated products: teams at the company can implement the feature and test its effects online in a matter of weeks.3
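To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the kind of automated, targeted discounting logic described above. The rule, field names, signals, and thresholds are invented for illustration and are not drawn from any actual platform’s code; the point is only that humans decide to run a campaign while targeting, amount, and timing are computed automatically.

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    has_rival_app: bool        # hypothetical signal, e.g. inferred from device data
    days_since_last_order: int


BASE_PRICE = 10.0


def targeted_price(user: User) -> float:
    """Toy repricing rule: employees choose to run a discount campaign,
    but the targeting, the amount, and the timing are decided automatically."""
    discount = 0.0
    if user.has_rival_app:
        discount += 0.30       # aggressive discount aimed at multi-homing users
    if user.days_since_last_order > 14:
        discount += 0.10       # win-back discount for lapsed users
    return round(BASE_PRICE * (1 - min(discount, 0.40)), 2)


print(targeted_price(User("u1", has_rival_app=True, days_since_last_order=20)))   # 6.0
print(targeted_price(User("u2", has_rival_app=False, days_since_last_order=2)))   # 10.0
```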
Software features on a platform’s app and website can help regulate the platform: surge pricing on Uber matches demand and supply through automatic price increases for rides; Airbnb recommends personalized prices to its sellers to help them understand demand; and most platforms have a ranking algorithm that directs consumers to good matches for whatever they are looking for. These algorithms are innovative and create value for the consumer. Others, however, such as the ranking change Google implemented to promote Google Shopping, or the targeted discounts Uber offers when a user has Lyft installed on their phone, intentionally serve to undermine competition.4,5 These software changes also have outsized direct or indirect external effects on the product-shopping or ride-hailing markets. Take the following examples:
- The Google Shopping case, opened by the EU Commission in 2010, concerns a software change in Google’s search engine. Google entered the comparison-shopping space with Froogle (a play on the word “frugal”). Froogle was initially a separate tab on the general search page, and it did not take off.6 Google revamped it as Google Shopping and placed it in a top banner above the general search results. Google changed its Google Search ranking software to do two things: first, to show its own shopping results on top; and second, to demote competing comparison-shopping websites to lower rankings or to the second or third pages of search results. By doing so, Google leveraged its dominance in the search engine market to monopolize the comparison-shopping market. The EU Commission fined Google in 2017.
- Another example is a ride-sharing platform that recently started promoting entrant drivers by ranking them at the top of search results. This small change makes it easier for new drivers to build a reputation, all the while allowing them to sell at a higher price than they would have otherwise. Making it easier for new sellers to get started is essential for the survival of a platform, particularly given the high turnover and low retention rates on sharing-economy platforms.7
However, this same dynamic would result in a locked-in base of sellers. Sellers, like buyers, have switching costs. No viable competing ride-sharing platform exists now, and a locked-in driver base makes it less likely that one will emerge anytime soon: drivers have already established a reputation on the incumbent platform; it will take quite some innovation for a new platform to attract them, and even that is unlikely to attract investor money. Airbnb, the sharing-economy rental platform, also has a policy of promoting entrants by increasing their ranking; sharing-economy platforms generally have incentive schemes for new sellers, some of them informational, i.e. software-based.
Why Should Antitrust Authorities Care?
In the examples above, small software changes may have outsized anti-competitive effects on markets. Comparison-shopping websites suffered enormous losses due to Google’s demotion. The contrast between the speed of implementation and the scale of the effects of software changes is specific to the tech sector and does not show up – yet – in other industries such as drug development. Antitrust authorities should look deeper into these changes and adapt their own pace when they do.
First, the high concentration in the tech sector compounds the effects of the speed of software changes. In each business line, one or two large platforms dominate the market. If a platform changes its software to increase search or switching costs for its users, a large user base is directly affected. Second, these platforms are concentrated due to network effects: you want to lease your house in Toulouse, while you are away, on the platform that attracts the most buyers; you are more likely to sign up to the social network on which your friends are already signed up. A software change that increases users’ switching costs today acts as a barrier to entry for competitors and potential competitors tomorrow: not only will the change affect the platform’s current – already large – user base, but network effects will increase the size of that user base over time and make it harder for smaller competitors to compete or raise funds.
Restricting competition through software changes on platforms has a predatory flair. Pricing below cost for multi-homing consumers or aggressively demoting competitors of an affiliated product can drive less deep-pocketed competitors out of business, or delay competitors’ entry until network effects on the platform are strong enough to fend off entrants by themselves. The innovative value of these software changes for the consumer is also debatable. Classical predatory pricing is argued to be unlikely to arise because its cost is high and immediate, while the reward through future recoupment is highly uncertain and quickly dissipated by future entrants. Software-enabled predation, however, has lower current costs: targeted discounts allow below-cost pricing to be sustained for longer since they are offered to a subset of consumers; aggressively promoting entrant sellers on a platform only harms incumbent sellers, who are not likely to exit; demoting competitors of an affiliated brand does not seem to drive consumers away; and online experimentation allows companies to test and implement these changes gradually.
Additionally, the network effects and tipping dynamics in platform markets make the threat of entry less credible once a platform attains a critical mass. It is true that software makes adjusting product attributes quicker and less costly for new startups as well, but not to an extent that enables them to catch up with platforms that already dominate the market. In fact, changing a digital product successfully relies on having a large user base: larger platforms can draw more precise insights about consumer behavior and can experiment with new features on a very small fraction of their users, risking the loss of only those, whereas a startup platform has far less room to maneuver given its size.
Lastly, antitrust authorities should look into these software changes and features early and swiftly: even if anti-competitive software is retracted, its effects on market structure may not be reversible. The comparison-shopping market is now dominated by Google Shopping, and it is unclear whether it would revert to a more competitive one. Antitrust is law enforcement, and thus it tends to be slow. But restricting competition through software is systematized for speed, and speed is encouraged by tech company culture, e.g. “Move fast and break things.”8 Given the possible irreversibility of markets, the dangers that market power in digital sectors poses for competition, democratic processes, and labor markets, and the branching of the digital behemoths into more regulated sectors such as healthcare, the speed of regulators becomes all the more important.
How can Antitrust Authorities deal with Fast Software Changes?
Promoting entrant sellers on a platform is likely to increase welfare and ought not to be stopped, but enforcing cross-platform portability of reputation and tools to facilitate multi-homing for sellers would avoid the anti-competitive effects. Great! That, however, is only a potential solution for one specific software change. What can be done when these changes happen on a weekly basis? The regulator ought to learn from the regulated: court decisions may still take a long time, but antitrust watchdogs can open their toolbox to quicker processes and adopt not only the metrics the companies use, but also their tools. Two elements stand out as useful and rather easy to adopt: A/B testing and Open Data.
A/B Testing
In a nutshell, A/B testing consists of showing a randomly selected subset of users a version of the platform with an experimental feature and comparing their behavior to that of the remaining users, who serve as a control group. Tech companies routinely resort to these quick, scalable, and cheap experiments to test new features. Web publishers also have access to pre-packaged or bespoke services to implement online experimentation and testing.
Google, for instance, could have implemented the demotion of competing comparison-shopping services in its “organic” search results, tracked the behavior of searchers over 100, then 1,000, then 100,000 searches, and measured the effect on key metrics: e.g. the number of consumers who clicked on a Google Shopping product suggestion, the number who clicked on a competing comparison-shopping suggestion, and the number who were dissatisfied with the Google Shopping suggestions and exited. This information can be invaluable to competition authorities for two reasons: first, by analyzing the results of past A/B tests run by the company, the regulator can better identify effects; second, by looking at the metrics used in past A/B tests, it can also better identify intent.
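As a concrete illustration of the kind of evidence such a test generates, here is a minimal sketch of a two-proportion comparison between a control arm and a treatment arm. The click counts, sample sizes, and metric names are hypothetical and invented for illustration; they are not taken from any actual Google experiment.

```python
import math


def two_proportion_ztest(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Compare a click-through metric between the control ranking (A)
    and the experimental ranking that includes the demotion change (B)."""
    rate_a, rate_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return rate_a, rate_b, (rate_b - rate_a) / se


# Hypothetical counts: clicks on a rival comparison-shopping result out of
# 100,000 searches per arm, with and without the experimental demotion.
rate_a, rate_b, z = two_proportion_ztest(clicks_a=4_200, n_a=100_000,
                                         clicks_b=2_900, n_b=100_000)
print(f"control CTR {rate_a:.2%}, treatment CTR {rate_b:.2%}, z = {z:.1f}")
```

A strongly negative z-statistic on a metric such as “clicks on rival comparison-shopping results” would speak both to the effect of the change and, if that metric was what the firm chose to track, to its intent.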
Competition authorities ought to have the mandate to analyze the results of past A/B tests or to request that new ones be run. Imagine a world where the regulator asks Facebook to actually increase the price of ads randomly on its platform in an experiment and see how advertisers substitute towards alternative channels; or to actually increase the level of salient privacy protections and observe the effect on user behavior, in order to assess whether Facebook can successfully degrade the privacy quality of its product because of its monopoly power.9
Open Data
Open Data diversifies the sources of analysis authorities can rely on, tapping into the human capital of researchers capable of and willing to derive insights from the data. It is becoming commonplace for governments to publish aggregate datasets on their Open Data portals, and to allow access to anonymized samples or even administrative datasets to researchers under a review process. Competition authorities can follow suit.
Even in law enforcement, it is not uncommon to publish data related to lawsuits and settlements after they are concluded. In the U.S., the Enron Email Dataset was made public and posted to the web by the Federal Energy Regulatory Commission during its investigation. It contains half a million email records from Enron’s top management prior to the scandal. Several research projects in information economics, computer science, and machine learning were made possible thanks to that data being public.
Another example is the Open Payments dataset: since the Sunshine Act of 2010, pharmaceutical companies have been required by law to declare any payment to a healthcare provider. Initially, however, the data became public in the course of several legal actions against pharmaceutical firms: releasing the data was part of legal settlements with a number of firms, including Pfizer and AstraZeneca. Research based on this dataset has allowed us to better understand the effects of physician-industry payments and has sparked a debate about healthcare provider incentives and the welfare and competitive effects of these payments.10
The EU Commission could consider releasing aggregate or engineered data in the context of its digital sector-level inquiries, for instance. Privacy is an issue, but the authorities can follow existing best practices from governmental Open Data initiatives, or from restricted-access data initiatives such as healthcare records used for research.
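By way of illustration, the sketch below shows one common disclosure-control step an authority might apply before publication: aggregation with small-cell suppression. The dataset, field names, and suppression threshold are hypothetical and chosen purely for illustration; real releases would follow established statistical disclosure-control practice.

```python
import pandas as pd

# Hypothetical case-level records an authority might hold (fields are illustrative).
records = pd.DataFrame({
    "platform":  ["A", "A", "A", "B", "B", "C"],
    "quarter":   ["2019Q1"] * 6,
    "ad_spend":  [120.0, 95.5, 210.0, 40.0, 55.0, 12.5],
})

MIN_CELL_SIZE = 3  # suppress cells built from too few underlying observations

aggregated = (
    records.groupby(["platform", "quarter"])
    .agg(n_advertisers=("ad_spend", "size"), total_spend=("ad_spend", "sum"))
    .reset_index()
)

# Small-cell suppression: drop groups that could reveal individual advertisers.
publishable = aggregated[aggregated["n_advertisers"] >= MIN_CELL_SIZE]
print(publishable)
```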
Some Final Considerations
Antitrust authorities can leverage their unique position as the market watchdog and look into software changes that platforms use to impose monetary or non-monetary switching costs on users, thereby driving their competitors out of business and creating barriers to entry. The speed at which software makes these changes possible and the concentration in the tech sector make these inquiries important.
What to do in practice? Within DG COMP or the FTC, a team of computer scientists, software engineers, economists, and behavioral scientists could request access to platforms’ previous feature and design test results, and request that new one-time analyses be run by the platforms using their own digital tools. It could also establish Open Data policies for ongoing inquiries. This setup has the advantage of fitting within the existing institutional framework: in a sense, only new forms of evidence would be collected in antitrust cases. The downside is that the time frame of enforcement can still be long. Adopting online experimentation as evidence, together with open-data policies, could also be accommodated within the separate digital authority that some scholars are calling for,11 provided that this authority is given the mandate to request information from firms and to request that tests be run by them.
These changes are not without cost, but this is not a call to block every single software feature implemented by a tech platform: not all software changes on a platform restrict competition. It is a call to make it possible to assess whether a software change is anti-competitive, abusive, or predatory more quickly and at a lower cost for the regulator and the regulated. In 2011, Andreessen predicted that “Over the next 10 years, the battles between incumbents and software-powered insurgents will be epic.” If antitrust authorities do not pick up the pace, over the next 10 years the battles between software-powered incumbents and insurgents may be quite boring.
1 Toulouse School of Economics, University of Toulouse Capitole. E-mail: rossi.abi-rafeh@tse-fr.eu, Twitter: @rossihabibi. This article builds on my short essay, winner of the Student Challenge of the conference “Shaping competition policy in the era of digitisation,” hosted by Commissioner for Competition Margrethe Vestager in January 2019, http://ec.europa.eu/competition/information/digitisation_2018/challenge_en.html. I would like to thank Anna Tzanaki, Samuel Sadden, Luise Eisfeld, and Ralph Bourdoukan for their thorough comments.
2 Andreessen, M. (August 20, 2011), “Why Software Is Eating the World,” Wall Street Journal, available at https://www.wsj.com/articles/SB10001424053111903480904576512250915629460.
3 For anecdotal evidence: what became Google’s AdWords engine was created and implemented over a weekend by a team at Google, and Salar Kamangar, former head of YouTube, decided to launch high-definition playback on YouTube the next day rather than in weeks, once prototyping and preliminary testing were done; respectively pp. 47-50 and pp. 85-86 of Schmidt, E. & Rosenberg, J. (2014), How Google Works.
4 EUROPEAN COMMISSION – Competition. (2017), CASE AT.39740 Google Search (Shopping), ANTITRUST PROCEDURE, available at http://ec.europa.eu/competition/antitrust/cases/dec_docs/39740/39740_14996_3.pdf.
5 Zingales, L. & Waldock, K. (2019), Capitalisn’t Podcast – Regulating Facebook and Google Pt 1: Markets, available at https://simplecast.com/s/80db3816.
6 Duhigg, C. (2018, February 20), “The Case Against Google,” The New York Times Magazine, (February), 35, available at https://www.nytimes.com/2018/02/20/magazine/the-case-against-google.html.
7 According to preliminary results from ongoing project “The Price is Right!” with Emil Palikot. For a theoretical discussion of the effects of promotion of new sellers on a platform, see Hagiu, A., Wright, J. (In Press). “Platforms and the exploration of new products”, Management Science.
8 “Move fast and break things” was Facebook’s motto. “M0vefast” is the password to Facebook’s HQ guest wifi. Thompson, N. & Vogelstein, F. (April 16, 2019), “15 Months of Fresh Hell Inside Facebook,” WIRED, available at https://www.wired.com/story/facebook-mark-zuckerberg-15-months-of-fresh-hell/.
9 Srinivasan, D. (2019), “The Antitrust Case Against Facebook: A Monopolist’s Journey Towards Pervasive Surveillance in Spite of Consumers’ Preference for Privacy,” Berkeley Business Law Journal, 16(1), available at https://scholarship.law.berkeley.edu/bblj/vol16/iss1/2.
10 Grennan, M., Myers, K., Swanson, A. & Chatterji, A. (2018), “Physician-Industry Interactions: Persuasion and Welfare”, Cambridge, MA. https://doi.org/10.3386/w24864.
11 Scott Morton, F., Jullien, B., Katz, R. & Kimmelman, G. (2019), Subcommittee Report for the Study of Digital Platforms Market Structure and Antitrust, available at https://research.chicagobooth.edu/-/media/research/stigler/pdfs/market-structure-report.pdf
Competing with Data – A Business Perspective – Oriane Limouzin (Toulouse School of Economics)1
“Scientia potentia est,” or in English “Knowledge is power,” is a maxim that applies particularly well nowadays, when it has never been so easy to collect so much information directly from a multitude of people. Such an opportunity, and the benefits it can provide, have been particularly well understood and exploited by some companies.
Indeed, with digitization, websites and platforms are collecting a huge amount of data on consumers. These data have become a crucial element of business strategies, and carry high value in themselves: they make it possible to differentiate consumers, to offer targeted advertising, and even to discriminate. Considering the mass and diversity of information collected, and the lack of transparency of the platforms, it is difficult to know exactly what the value of data is. However, it certainly provides a competitive advantage that businesses are determined to gain.
Given the importance of data, one might think it could be worth putting a price on them. In reality, however, consumers give their data away for free. Is it because they are not aware of their value? Because they feel they have no alternative? Or because the opportunity to enjoy a free or better-quality product seems like a fair trade?
Giant online platforms and the use of data are definitely a hot topic and a major challenge for the future. Rethinking data exchange to move towards a more efficient system requires considering competition, consumers’ right to privacy, and the constant evolution that characterizes the digital world. Enabling consumers to gain a more informed and active position in the process must be the starting point for this reflection.
The Magic of Data – Good Fairy or to be Feared?
If we wish to go deeper into this, I think it is interesting to start by looking at the uses and benefits that data can provide, as they are not desired by companies and platforms only for the sake of having them or for some shady, greedy reasons. Acquiring data allows a firm to have a better knowledge of its consumers and their behavior, and can thus improve product quality, either through a personalized and more relevant offer for consumers, or through a wider use of data to improve processes and functionalities, or to enable machine learning. Data also make it possible to offer targeted advertising, which can be of interest to consumers in itself and provides an important source of revenue for digital companies, which can in turn enable them to improve their products and lower their prices.
Efficient data use therefore allows consumers to enjoy a better-quality product or experience, often personalized, which can go as far as becoming a kind of “life assistant.” Personally, I cannot count the times when I was happy to find that I could log in automatically to an account (whose password I had forgotten a long time ago), when I enjoyed a playlist composed of (eclectic) songs that I like and songs that I did not yet know I liked, or when a notification on my GPS let me know that I had better keep complying with the speed limit on that part of the road. Even harder to count are all the times when I have been able to enjoy a new feature or product made possible by the use of data without even realizing it.
However, it is not all rainbows and butterflies. Advertisements that do not stop popping up after an unfortunate click can be annoying. Starting to realize the extent of the data gathered, and the presence of the major digital groups across various platforms, applications, websites, etc., can moreover leave a true feeling of unease.
In short, data are an indispensable resource in the digital world, whether because they are at the heart of the companies’ business model, representing a fundamental source of income and (often) enabling them to offer a product for free or at a low price to consumers, or because they are necessary to improve the products or the experience that they provide. As a result, acquiring more, or at the very least enough, (relevant) data is the focus for most digital companies, without which they cannot effectively compete.
Before continuing, and even if the purpose here is not to go into too much detail, it is important to mention the distinctions between types of data, and the fact that “data” is indeed plural. First, data can be either volunteered (actively given), observed, or inferred. Then, there are individual-level data, either non-anonymous or anonymous, aggregated data, and contextual data. Obviously, the uses and benefits that can be derived from data depend on their type. Among these, individual-level data that are not anonymized, and can therefore be linked to a given individual, are the only ones that constitute so-called private data. They are the ones that raise most issues and concerns in terms of privacy, and also probably those whose uses and benefits are most directly perceived by the consumer. Given their sensitivity, they are subject to specific regulations, such as the General Data Protection Regulation in the EU, which strengthens and unifies data protection for the individual.
Data Certainly Gives a Competitive Advantage that Companies are Determined to Gain…But How Large is It?
Now, going back to competition, here is a brief overview of the main mechanisms at play in digital markets. Platforms, the central players of digitization, already benefit from direct network effects: the utility a consumer derives from using the platform increases as more users join. Data generate further network effects: the more data a firm has, the more it can improve the quality of its products (in the broad sense), thereby attracting more consumers, who provide it with more data, and so on. This snowball effect strengthens the position of a company, and in a “winner takes all” kind of market it can clearly reinforce market power and have anticompetitive effects. However, in markets in which consumers typically multi-home, the network effects of data are much less to be feared, and can even have a pro-competitive effect insofar as they encourage companies to compete on quality. The actual impact of data on competition probably lies somewhere in between, and a case-by-case analysis would obviously be necessary to get a more accurate answer. What is common to the digital world, however, is that a lot of multi-homing happens, as consumers typically use multiple platforms, applications, websites, etc. Indeed, it is quite easy to switch between them, as they are accessible from the same device (computer or smartphone), almost instantly, and often free of charge. Yet it would be false to say that consumers browse limitlessly between these different offers. This may be due to time constraints, lack of information, the existence of an offer already meeting their needs, etc.; a very simple illustration is the potentially discouraging prospect of having to create, yet again, a new account.
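To make the snowball mechanism concrete, here is a stylized simulation of the data feedback loop: two platforms compete for a flow of new users, each platform’s quality is a concave function of the data its users have generated, and new users join in proportion to relative quality. All functional forms, parameters, and starting figures are invented purely for illustration; this is a sketch of the mechanism, not an empirical claim.

```python
def simulate_shares(users_a: float, users_b: float, periods: int = 20,
                    new_users_per_period: float = 10_000.0, alpha: float = 0.5):
    """Stylized data feedback loop: users generate data, data raises quality
    (with diminishing returns), and new users join in proportion to quality."""
    data_a = data_b = 0.0
    for _ in range(periods):
        data_a += users_a                               # each user contributes data every period
        data_b += users_b
        quality_a, quality_b = data_a ** alpha, data_b ** alpha
        share_a = quality_a / (quality_a + quality_b)   # data-driven quality gap steers new users
        users_a += new_users_per_period * share_a
        users_b += new_users_per_period * (1 - share_a)
    return users_a, users_b


# Hypothetical starting points: an incumbent with 1,000,000 users, an entrant with 50,000.
incumbent, entrant = simulate_shares(1_000_000, 50_000)
print(f"incumbent: {incumbent:,.0f} users, entrant: {entrant:,.0f} users")
```

Under these assumptions, the incumbent’s data advantage channels the bulk of every new cohort of users towards it, which is the sense in which the loop can tip a “winner takes all” market; with pervasive multi-homing, by contrast, the same quality gap translates much less directly into exclusive user growth.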
Overall, there are both a large number of new apps and platforms emerging regularly, some with innovative concepts and/or rapid success, and a few unavoidable giants (the so-called “GAFAs”) that reach the majority of consumers, with vast and complex data-collection systems and a wide offer that adjusts quickly to consumer needs.
Thus, even if the competitive advantage given by data is real, it is still possible for an entrant to successfully enter a market and collect enough data to grow. As examples, consider the success of Instagram, WhatsApp, or Snapchat, which have flourished in the markets for social networking and consumer communication apps. The reaction of the incumbent, Facebook, is quite telling: seeking to acquire them (successfully in the cases of Instagram and WhatsApp), at prices well beyond what their turnover would have suggested at the time, shows both the fear of the competitive pressure these apps could generate and the value of the data to which they give access.
What Room for Consumer Choice, and Under What Conditions Should this Much-Desired Good be Shared?
On the one hand there are data, very valuable for companies for the functionalities they enable, the revenues they can generate, and more generally the competitive advantage they create. On the other hand, there are the users, the initial “creators” of this much desired good, whose behavior is in general quite ambiguous. Indeed, there is a growing mistrust of private data collection by large groups and the abuses it can generate (either legitimate or rather conspiracist), and privacy is often considered as a right of the utmost importance. At the same time, consumers are giving more and more data, through increased use of different networks, connected devices, etc., and for free. Why? Is it because users do not feel they have a choice to do otherwise? Because they do not realize the value of the data they hold? Or are privacy preferences ultimately offset by preferences for a free and/or high-quality product? Lastly, maybe in the end privacy is not so much of an issue for most users, either because they actually do not care that much, or because they trust companies enough not to use their data for questionable purposes, or they trust regulation to protect them.
To all these questions, there are no definite answers. It is clear, first, that consumers have heterogeneous preferences on a subject such as this, some valuing privacy more than others. Second, it is also clear that, at least to some extent, consumers often feel constrained to share their data. In the case of a dominant platform such as Facebook, this can be especially problematic. Giving up Facebook is not really an option for many users, since it is the only network of that size offering such wide functionalities. As a result, Facebook takes advantage of this position to extract “consent” from its users, going as far as collecting user data through other apps or websites (Instagram, WhatsApp, websites with a Facebook interface or relying on Facebook, etc.) without users realizing it.
In February 2019, the German Bundeskartellamt issued a much-awaited and possibly pioneering decision, concluding that Facebook was infringing competition law, and violating the GDPR, by abusing its dominant position. Contrary to previous cases, the exploitative abuse here consists not of an excessive price but of an excessive collection of data, to which the user cannot give free and informed consent. By stating that excessive data collection in violation of the GDPR may constitute an abuse of a dominant position, this decision also illustrates the blurred line and interconnection between users’ data protection and competition law.
Balancing Competition and Privacy…In Different Settings
Bearing in mind the challenges posed by these competition and privacy issues, what are the paths for improvement? First of all, it is necessary to distinguish between two cases, depending on the type of data.
Indeed, with regard to non-private data, in other words anonymized individual-level, aggregated, and contextual data, privacy issues are not really relevant (provided that the data truly cannot be linked to a specific individual). Consequently, and given the advantage they provide, the best means to support competition would be easy, global, and fair access to relevant data for companies. For more detail, the report “Competition policy for the digital era” by Jacques Crémer, Yves-Alexandre de Montjoye, and Heike Schweitzer discusses at length the benefits of greater data dissemination, the different possible mechanisms of data sharing, the question of access to data under Article 102 TFEU, and the potential anticompetitive effects that these practices can generate.
With regard to private data, the situation is even more complex. Indeed, even if, from a competition point of view, wider dissemination of data is also desirable, it is also a matter of respecting users’ privacy rights. As said previously, private data cannot (or should not) be collected or transferred without consent. The GDPR provides for that and, in order to enhance data exchange within the limits of privacy regulation, establishes the right to data portability: the right of users to receive the personal data they have provided to a “collector” (volunteered and observed data, but not inferred data), and the right to transfer their data from one controller to another.
Such a possibility is certainly a step forward, in that it allows users to be more aware of the data that companies hold about them and can allow them to switch between competitors more conveniently. However, it is far from obvious that users will exercise this right, at least on their own initiative.
Another alternative could be the idea of an intermediary to manage user data, which was raised, for example, during the conference “Shaping competition policy in the era of digitisation.” The idea is for a third party to collect the private data that an individual agrees to share and then use them efficiently. In particular, since such third parties would no longer be isolated and poorly informed individuals, they would have the opportunity to monetize the data and offer users a return on them. Although the concept is interesting, in that it would allow users to have real knowledge of the data they share and of their value, from which they would benefit directly, many questions arise. Indeed, it would be a very different system from the one currently in place, leading to major changes in the model used by digital companies. This raises questions of feasibility and efficiency, as well as competition issues: these third parties would hold very substantial power, raising problems of effective competition at their level, such as foreclosure and various anticompetitive behaviors or abuses.
Thinking about a framework that can effectively balance competition and privacy is complex, especially since it requires considering the plurality of data and company needs. This should often be reflected in an analysis on a case-by-case basis. In any event, it is clear that ensuring greater awareness and effective consent of users is the necessary starting point for moving towards a better, more competitive, and sustainable business ecosystem in the digital world.
1 Student in Master 2 “Economics and Competition Law” at Toulouse School of Economics – contact: limouzinoriane@gmail.com.
Preserving Innovation Competition in the Digital Era: “Killer Acquisitions” – David Pérez de Lamo (College of Europe, Bruges)1
A phenomenon that has lately raised increasing concern within the antitrust community is the issue of so-called “killer acquisitions.” By this term, the law and economics literature refers to acquisitions by incumbent firms of promising companies that could potentially and significantly threaten their position, carried out with the objective of eliminating future competition. Indeed, the topic has become quite in vogue, as can be seen from the numerous references to it at the conference organized by DG Competition in January 2019: “Shaping competition policy in the era of digitisation” (hereinafter, “the DG Competition conference”).2
However, even though this issue has lately become a recurring one, it is barely developed from a substantive standpoint, and commentators often base their reasoning on mere intuitions. In fact, there is only one notable contribution on killer acquisitions, written by academics in economics and management science.3 In addition, that contribution focuses only on the pharmaceutical industry, which means that the issue of killer acquisitions in the digital sector has not been addressed in the competition law literature. Typically, conferences and academic research have addressed the topic only incidentally, in the context of broader competition policy discussions.
Against this background, two expert panel reports have been published recently with the purpose of providing recommendations on whether to adapt the competition law rules to digitization: the Furman Report and the European Commission Special Advisers’ Report.4 Both reports have tried to shed some light on the debate on killer acquisitions, and their proposals will therefore be referred to frequently in the ensuing paragraphs.
The Majority of Incumbents’ Acquisitions are Pro-Innovative “Bolt-on Acquisitions” or, at the very Least, Neutral to Competition
The DG Competition conference left attendees with the impression that incumbent digital firms are systematically eliminating future innovation competition by acquiring and “killing” promising incipient companies; this view, however, greatly mischaracterizes a far different reality. In fact, it is safe to assume that the vast majority of incumbents’ acquisitions are pro-innovative or, at the very least, neutral to competition.
Most acquisitions by incumbents do not have the objective of eliminating competition but, on the contrary, greatly foster innovation by exploiting synergies and/or incorporating complementary technologies and capabilities. In this sense, numerous academics have found that if the acquirer owns a complementary technology, the merger will increase the innovation performance of the resulting undertaking,5 so long as the two are carefully integrated.6 In this way, companies often “bolt on” newly acquired complementary technologies and capabilities to their current offerings in order to enhance their value proposition. These “bolt-on acquisitions” pervade the merger record of big tech firms. For instance, Google acquired and integrated into its Google Maps offering a host of companies with complementary technologies and capabilities ranging from traffic and map analysis to location-based analytics and local recommendation/review apps – such as ZipDash, Where2, Keyhole Inc, Endoxon, ImageAmerica, Quiksee, Zagat, Clever Sense, Skybox Imaging, Urban Engines, etc. – which have allowed it to substantially improve its value proposition. Among Google’s mapping-related acquisitions, only Google/Waze generated any competition concerns.
In this respect, it was surprising to see some voices at the DG Competition conference call the process by which start-ups are launched for the very purpose of being bought (what economists call “entry for buyout”) “bad innovation.” This view greatly mischaracterizes the whole process: first, because it wrongly assumes that the acquired companies are in a position to challenge the incumbent’s position, whereas very frequently that is not the case; and second, because it overlooks the fact that most of these companies rely on the financial, reputational, and organizational support of the acquirer to be able to innovate successfully. As the EC Report acknowledged:
[…] mergers between established firms and start-ups may frequently bring about substantial synergies and efficiencies: while the start-up may contribute innovative ideas, products and services, the established firm may possess the skills, assets and financial resources needed to further deploy those products and commercialize them. Simultaneously, the chance for start-ups to be acquired by larger companies is an important element of venture capital markets: it is among the main exit routes for investors and it provides an incentive for the private financing of high-risk innovation.7
This process should be viewed through a different lens: the products developed by incumbent digital firms spur significant innovation around them, which is often later incorporated to make even better products. It is therefore clear that the majority of acquisitions by big tech companies are pro-innovative or, at the very least, neutral to competition. Nevertheless, it is equally reasonable to assume that some transactions would have raised serious concerns given that, as Jean Tirole said at the conference, “it is too easy for incumbents to buy out their future rivals” and they have every incentive to do so. Most such transactions, however, have inevitably gone unnoticed because they are not caught by the current EU merger regime.
How to Catch Killer Acquisitions under the Current EU Merger Regime?
An important impediment to the competitive assessment of these transactions is that they often escape the EU Merger Regulation (hereinafter, the “EUMR”) notification requirements. The main reason is that the notification thresholds only take into consideration the turnover of the merging parties. In contrast, “start-ups attempt [first] to build a successful product and attract a large user-base without much regard for short term profits: they hope either to be acquired or to begin monetizing their user base at a relatively late stage.”8
Until now, some of the transactions that escaped the EUMR thresholds were caught through the referral mechanism of the EUMR.9 This was, for instance, the case of the Apple/Shazam merger, which was referred to the Commission by the Austrian authority, together with some other national competition authorities, in accordance with Article 22(1) EUMR. However, this mechanism is limited in effect given that the Commission is only able to look at the implications of the concentration within the territory of the referring Member States. Other acquisitions, like Facebook/WhatsApp, were referred to the Commission by the notifying parties under Article 4(5) EUMR. In the latter case, unlike under Article 22(1) EUMR, the Commission acquires full jurisdiction over the transaction.
Nevertheless, the referral system has proven insufficient, considering that some very controversial transactions never reached the Commission’s hands, including Facebook/Instagram and Google/Waze. Both of these transactions were instead caught by the UK merger framework and scrutinized by the Office of Fair Trading.10 For this reason, and following Austria and Germany’s lead, many have called for a reform of the EUMR to adopt transaction value-based thresholds. This proposal presents several important problems, as highlighted by the EC Report,11 which is why the Special Advisers suggested taking stock of the Austrian and German reforms before drawing conclusions at the EU level. The position taken by the expert panel on this topic can be considered rather conservative, since no other alternatives were examined.
For its part, the Furman Report made the recommendation to “[require] digital companies that hold a ‘strategic market status’ to make the CMA aware of their intended acquisitions [to] allow the CMA to determine in a timely manner which cases warrant more detailed scrutiny.”12 According to the report, strategic market status would be granted to those companies holding market power over a strategic bottleneck market.13 However, in my view, this approach is somewhat deficient in that not all the firms whose transactions we should worry about operate as gatekeepers of a market. This may be true, for instance, for Google or even Amazon, but it is certainly not for Apple, Samsung, or Facebook, among others. Furthermore, Google may be a gatekeeper in general search, but it is definitely not in other relevant markets.14 For these reasons, I consider that a broader definition of “strategic market status,” based on a more comprehensive approach similar to the assessment of (super)dominance, would constitute a more viable alternative. This approach would thus also take into consideration other factors, such as particularly high market shares and substantial barriers to entry in the form of strong network effects, availability of large data sets, and intellectual property rights, inter alia. In this regard, even if this would force incumbents to notify all of their transactions, big corporations have more than enough resources to do so and, in any case, the burden could be minimized by establishing an ad hoc fast-track procedure.15 Lastly, by opting for this alternative, we would also be able to catch alleged killer acquisitions in other industries where transaction values are not that high, such as the pharmaceutical sector.
Finally, another – practically uncharted – alternative would be to apply Article 102 TFEU directly to these transactions, as the Commission did in Tetra Pak I. In that case, the General Court found that:
the acquisition by an undertaking in a dominant position of an exclusive patent license for a new industrial process constitutes an abuse of a dominant position where it has the effect of strengthening the undertaking’s already very considerable dominance of a market where very little competition is found and of preventing, or at least considerably delaying, the entry of a new competitor into that market, since it has the practical effect of precluding all competition in the relevant market.16
This case shares many traits with the killer acquisition scenario, and its rationale could readily be extrapolated here. Nothing would impede the application of Article 102 TFEU to these cases, and it would provide the Commission with a more complete enforcement toolbox. In this respect, it would be necessary, as proposed by the Furman Report for the UK, to establish a digital markets unit “with new powers available to impose solutions and to monitor, investigate and penalize non-compliance”17 that would enable the Commission to speed up enforcement and, therefore, achieve an adequate level of deterrence in an area where dynamism is key.18 The proposal to establish a dedicated unit for digital markets has been backed by officials of different competition authorities19 and has recently been endorsed by the UK government.20
In light of the above, a combination of an ex ante control mechanism requiring those undertakings holding a “strategic market status” to notify their transactions, in parallel with an ex post application of Article 102 TFEU by a dedicated digital markets unit, could constitute a solution to the problem of catching these transactions.
Substantive Assessment
Apart from establishing a system to catch these transactions, their competitive assessment should also be rethought. The analysis will vary depending on whether the acquirer and the target have directly overlapping products.
A. Horizontal Mergers: Transactions with Overlap
In these cases, the assessment will be relatively simple, since the acquisition would not pass the substantive test of the EUMR, provided that there are no relevant countervailing efficiencies, as it would lead to a significant impediment of effective competition (the “SIEC test”): acquiring a promising start-up would strengthen the incumbent’s dominant position by protecting it from a potential challenger. As noted above, the Commission used the same rationale in Tetra Pak I. In that case, the General Court upheld the Commission’s finding that Tetra Pak’s acquisition of the only relevant competing technology constituted an abuse of dominance, as it had the effect of strengthening the undertaking’s already very considerable position in a market where very little competition was to be found. In my view, this should have been the outcome in the Facebook/Instagram merger.21
B. Non-Horizontal Mergers: Transactions without Overlap
Conversely, when the target company has fringe products or services and operates in an adjacent market, it will be significantly more complicated to assess the competitive effects of the transaction. The problem arises because, in principle, the Commission will have to prove to the requisite legal standard that the target is a potential competitor in the core market of the acquirer.
- The Proposals from the Expert Panels in the Furman and EC Reports
In that sense, the Furman Report laid down a much-discussed proposal22 according to which the CMA should be bolder and “more economically oriented” by changing the evidentiary standard from a “balance of probabilities”23 to a “balance of harms.” In essence, the idea would be to relax the evidentiary standard in mergers with a “potentially very large scale of lost benefits.” That would mean that, when the magnitude of the harm is considerable, the evidentiary standard would be lowered from a “more likely than not” to a “realistic prospects”24 standard. According to the Furman Report, this should be amended in spite of some “occasional rare false positive along the way.” The latter is an inaccurate premise given that, as explained above, the vast majority of acquisitions of small firms by large digital incumbents are pro-innovative bolt-on acquisitions or, at the very least, neutral to competition. Most worryingly, such an evidentiary asymmetry25 would leave the competition authorities with an incommensurate level of unbacked (and thus, incontestable) discretion. As the famous astronomer Carl Sagan once put it, “extraordinary claims require extraordinary evidence” or, equally, “what can be asserted without evidence can be dismissed without evidence.”26
For its part, the EC Report circumvented the issue of establishing the requisite evidentiary standard by suggesting a novel theory of harm based on a “broader view of the position of the incumbent in a market for the digital ecosystem.”27 The harm would derive from the strengthening and enclosing of a particular “user space,” by expanding the network effects from one platform to another. However, this novel theory of harm displays some critical flaws. First, even if it might have worked for transactions such as Facebook/Instagram and Google/Waze,28 a range of cases would nevertheless escape it where (i) there is no extension of the network effects; or (ii) in the event of an extension, users are not locked in because the value derived from the network effects is not the primary reason to stay on the platform. Second, it is also difficult to grasp what the actual harm is in this theory: are users, as a consequence of the acquisition, paying a higher price, enjoying lower quality or less choice? If anything, it seems that users stay on the newly created platform because they derive more value from the strengthened network effects. It is for these reasons that, in my view, the proposal of the EC Report is equally unsatisfactory.
- A Sounder Alternative: Applying the Innovation Competition Approach
An “innovation competition” approach would provide the necessary tools to tackle the intricate problem at stake. In a series of cases ranging from Novartis/GlaxoSmithKline29 and GE/Alstom30 to Dow/DuPont31 and Bayer/Monsanto,32 the regulated framework of those industries (pharmaceutical, industrial manufacturing, and agro-chemical) allowed the Commission to capture restrictions of competition at an early stage, that is, before any anticompetitive effect on the relevant market could be predicted with enough certainty. This means that, if we managed to extrapolate the innovation competition methodology to digital transactions, it would not be necessary to establish a “potential competition” relationship to the (highly demanding) requisite legal standard. Instead, we would need to show that the target company is pursuing a discernible innovation objective, consisting in the creation of a potentially competing product from an adjacent market, and that it has the ability and incentive to carry it through. In this respect, it would not matter that it is still uncertain ex ante whether the product under development will end up actually competing with the existing product, or whether it will eventually reach the market at all: as established in the abovementioned cases, the object of protection would be the parties’ incentive to innovate, that is, the innovative process per se.33
The EC Report has explicitly rejected the application of the innovation spaces methodology to digital transactions on the ground that in the digital sector, as opposed to the heavily regulated pharmaceutical and agro-chemical industries, R&D does not take the form of a distinct and well-structured process with clearly identifiable research poles.34 In contrast with this statement, the Commission has managed to shift outside of the pipelines framework in the last agro-chemical cases, Dow/DuPont and Bayer/Monsanto, to define innovation spaces at the level of early R&D efforts. As shown in these cases, a holistic approach, including an analysis of (i) essential resources (e.g. large databases, specialized and expensive hardware, access to financing, engineering skills, and computation power, inter alia35); (ii) capabilities (as a function of the company’s skillset, strategy, governance structure, and past behavior36); (iii) patent overlaps; (iv) investment plans of both merging parties setting innovation targets; and (v) internal documents of the acquirer with post-merger divestment plans, should allow the Commission to define the relevant innovation space and perform an innovation competition assessment in digital transactions, despite the absence of pipelines.37 As introduced above, the underlying approach would entail a classic two-step test, where the Commission has to prove that the target company displays both (1) the ability;38 and (2) the incentive to pursue an innovative project capable of threatening the incumbent’s position.39 In this regard, instead of a classic innovation competition setup of overlapping pipeline products or early R&D efforts (as in Dow/DuPont), the situation would present an existing product that is being threatened by an incoming innovative product in the pipeline (as was the case in Medtronic/Covidien40).
In fact, the EC Report later accepted that this approach may “obviously” be relevant in some circumstances where essential resources or capabilities are present, only to add that, precisely because these are lacking at an early stage, the methodology would rarely be applicable to the acquisition of incipient start-ups.41 This point seems unconvincing because, in order to raise any competition concerns, early and targeted acquisitions must be triggered for a specific reason. There must be something particularly valuable about the target company, in terms of assets or capabilities, for the incumbent to find it worth acquiring (usually for an important sum) instead of simply replicating the technology or product in question. If no essential assets or capabilities are detected, on the contrary, the transaction should logically not raise any competition concerns at all. In that case, the acquisition by the incumbent firm would be merely speculative (or simply neutral to competition), and any competition concern raised by the authorities would be equally unsubstantiated. This should not, however, constitute an argument against the application of the innovation competition approach.
The innovation competition approach would provide the Commission with a more suitable methodology to deal with the killer acquisitions issue in situations where the target company operates in an adjacent market, as opposed to the proposals of the Furman Report, based on a “balance of harms” approach, and the EC Report, based on a novel theory of harm entailing a “broader view of the position of the incumbent in a market for the digital ecosystem.” The expert panel of the EC Report should have given more careful consideration to the innovation competition alternative and should not have dismissed it so promptly. By extrapolating this methodology to digital transactions, the Commission’s assessment of innovation concerns would also become consistent across the board in merger control.
Final Conclusions
The issue of “killer acquisitions” has recently attracted increasing concern within the antitrust community, in particular because of the significant harm they can cause to digital innovation. However, the topic has barely been developed from a substantive standpoint. This paper has taken the opportunity to explore the different problems raised by killer acquisitions and to propose solutions to them. In this regard, the main findings are:
- The majority of acquisitions of small firms by incumbents do not aim to eliminate competition but, on the contrary, greatly foster innovation by exploiting synergies and integrating complementary technologies. These are the so-called “bolt-on acquisitions.” However, there are good reasons to suspect that digital incumbents may at times have eliminated potential competition by means of “killer acquisitions.”
- The current enforcement system should be adapted to include a combination of (i) an ex ante control requiring undertakings designated with “strategic market status” to notify their transactions; and (ii) an ex post application of Article 102 TFEU by a dedicated digital markets unit.
- Finally, the Commission should apply the innovation competition approach, as developed in the line of cases Novartis/GlaxoSmithKline, GE/Alstom, Dow/DuPont, and Bayer/Monsanto, to the substantive assessment of alleged killer acquisitions.
1 Student of the College of Europe (Bruges), LL.M. Candidate in European Legal Studies, European Law and Economic Analysis Option, Promotion Manuel Marín (2018-19). This article would not have been possible without the very valuable contributions of my thesis supervisor and Professor of Competition Law at the College of Europe, Philip Marsden, my Professor of Economics of Competition Law, Lorenzo Coppi, and my colleagues Laura Somaini, Rubén Perea Molleda, Juan González-Moya, Nicolas Fafchamps, and Maxime Lambilliote, who generously took a moment of their time during the stressful exam period to read it and provide me with their best advice.
2 January 17, 2019: https://webcast.ec.europa.eu/shaping-competition-policy-in-the-era-of-digitisation.
3 C. Cunningham, F. Ederer & S. Ma, “Killer Acquisitions,” (2018), SSRN.
4 J. Furman et al., “Unlocking digital competition. Report of the Digital Competition Expert Panel,” March 2019 (hereinafter, “Furman Report”); J. Crémer, Y.-A. de Montjoye & H. Schweitzer, “Competition policy for the digital era,” April 2019 (hereinafter, “EC Report”).
5 B. Cassiman et al., “The Impact of M&A on the R&D Process: An Empirical Analysis of the Role of Technological and Market Relatedness,” (2005) 34, Research Policy, p. 197. Other contributions in the same sense include Gans & Stern (2003), Arora & Gambardella (2010), Arora et al. (2014).
6 https://hbr.org/2016/07/the-problem-of-bolt-on-acquisitions-in-a-digital-world, accessed 14 June 2019.
7 EC Report, p. 116. See, in the same vein, Furman Report, pp. 49-50.
8 Ibid, p. 116.
9 The referral mechanism of the EUMR consists, in essence, of a system which allows for transactions that would normally have to be assessed by the Commission to be transferred to the National Competition Authorities (“NCAs”) and vice versa. The relevant provisions of the EUMR are Arts. 4(5) and 22, for referrals from the NCAs to the Commission, and Arts. 4(4) and 9, for referrals from the Commission to the NCAs.
10 Since 2014, the OFT has been replaced by the Competition and Markets Authority (the “CMA”).
11 EC Report, p. 119.
12 Furman Report, p. 18.
13 Ibid, p. 16.
14 Why, then, should we scrutinise its acquisitions in other relevant markets where it does not hold that gatekeeping position?
15 This notification procedure would apply specifically to firms holding the “strategic market status” qualification; it should be more summary than the simplified procedure already foreseen by the Commission, and it should focus on providing the elements necessary for a preliminary substantive assessment, that is, information relating to the essential assets and capabilities of the target (in connection with this, read infra), as well as the rationale of the transaction, its expected results, pro-innovative effects, etc.
16 Judgment of July 10, 1990, Tetra Pak Rausing SA v. Commission, Case T-51/89, EU:T:1990:41, see summary.
17 Furman Report, p. 10. Albeit, this proposal was made in that report solely in relation to an ex ante control function.
18 This would also help the Commission to deal effectively with all the new notifications arriving from those companies holding “strategic market status” as suggested above.
19 https://globalcompetitionreview.com/article/1193812/ex-ante-regulation-is-crucial-in-digital-sector-uk-and-australian-enforcers-agree?utm_source=linkedin&utm_medium=social&utm_campaign=news, accessed June 13, 2019.
20 https://www.gov.uk/government/speeches/pm-speech-opening-london-tech-week-10-june-2019, accessed June 13, 2019. Theresa May: “And I am pleased that Professor Furman has today agreed that he will advise on the next phase of work on how we can implement his recommendation to create a new Digital Markets Unit.”
21 In that case, the OFT dismissed the potential competition issue in relation to the supply of social network services with a strikingly shallow level of analysis (see OFT Decision of August 22, 2012, Case ME/5525/12 – Facebook/Instagram, pars. 22-24, 29). The highlighted differences in functionalities should have been considered negligible in the eyes of users and, in that sense, a strong case could have been made for a broader market on the users’ side (or even for attention markets).
22 This approach was also discussed at the DG Competition conference by one of the Special Advisers who co-authored the EC Report (H. Schweitzer), but it was never included in the final version, probably because of the problems described next.
23 The Commission has a very similar legal standard: “significant likelihood” (Horizontal Merger Guidelines, par. 60).
24 That the negative effects are merely “likely to occur.”
25 It is a basic evidentiary principle that the size of one’s claims should be directly proportional to the evidence put forward.
26 The so-called Hitchens’s razor.
27 EC Report, p. 122.
28 https://digital.hbs.edu/platforms-crowds/google-maps-doubles-network-effects-stave-off-formidable-competition/, accessed March 15, 2019.
29 Commission Decision of January 28, 2015, Case M.7275 – Novartis/GlaxoSmithKline Oncology Business.
30 Commission Decision of September 8, 2015, Case M.7278 – General Electric/Alstom.
31 Commission Decision of March 27, 2017, Case M.7932 – Dow/DuPont.
32 Commission Decision of March 21, 2018, Case M.8084 – Bayer/Monsanto.
33 In the series of pharmaceutical and agro-chemical mergers, the Commission has repeatedly established that it is irrelevant to the innovation competition assessment that the innovative process is highly uncertain, that is, that ex ante the relevant developing products may still have a low probability of reaching the market or, even if they do, of ending up competing against each other in the future. Instead, what matters is that if two competing innovation projects fall into the same hands as a result of a merger, the incentive to innovate will disappear and, consequently, the projects will be discontinued. Unless the parties’ incentives to keep innovating are maintained, the potential innovative outcome at stake will never materialize (provided that the resulting company does not have other incentives to still carry it through).
34 EC Report, p. 125.
35 W. Kerber, “Competition, Innovation, and Competition Law: Dissecting the Interplay,” (2017) 42, MAGKS Joint Discussion Paper Series in Economics, pp. 15-16.
36 J. G. Sidak & D. J. Teece, “Dynamic Competition in Antitrust Law,” (2009) 5(4), Journal of Competition Law & Economics, pp. 614-617.
37 Similar suggestions have been made by M. Bourreau & A. De Streel, “Digital Conglomerates and EU Competition Policy,” (2019), pp. 27-28; W. Kerber, supra note 35, pp. 15-16.
38 Entailing the analysis of the first three elements.
39 Including, inter alia, the last two factors. This means that the Commission would not necessarily have to rely on finding explicit plans in the internal documents of the parties. If it can establish why it would be attractive for the target company to compete against the acquirer’s core business, the innovation competition relationship would be made out and the incumbent would indeed be constrained by the target.
40 Commission Decision of November 28, 2014, Case M.7326 – Medtronic/Covidien, pars. 247-250.
41 EC Report, p. 125.
Big Data, Consumers’ Privacy, and Competition in Online Markets – Guillaume Thébaudin (Télécom Paris)1
Click here for a PDF version of the article
At the dawn of the Internet of Things, consumers are increasingly required to disclose their private information to online firms. With the use of data analytics, these firms are able to increase their knowledge about the preferences and characteristics of their users. This knowledge is highly valuable to them: it generates revenues through disclosure to third parties (e.g. advertisers) as part of their business models and enables the delivery of more personalized and valuable products to users.
Online behavioral research pioneered by Miyazaki & Fernandez (2001) showed that consumers provide their private information to online firms to heterogeneous degrees, owing to different perceptions of the risk of privacy breaches.2 Recent scandals such as Cambridge Analytica, as well as the increasing number of cyberattacks, have shown that these concerns are justified. Barnes (2006) emphasized a “privacy paradox”: consumers concerned about their online privacy are nonetheless increasingly engaged in data disclosure activities.3 This privacy paradox is largely explained by the increasing personalization of online services. Chellappa & Sin (2005) pointed out a trade-off faced by users between the value they place on personalization and their concerns for privacy.4
Competition authorities (Autorité de la Concurrence and Bundeskartellamt, 2016) have argued that incumbent firms’ access to users’ data can represent a source of increasing market power if these data are difficult for potential entrants to replicate.5 Competition concerns arise because online markets are highly concentrated and, consequently, only a small number of firms are able to engage in such massive personal data collection. These data enable large online firms to offer valuable and personalized features of their service which are likely to increase consumer lock-in. Consumers may indeed find it too costly, in terms of both psychological effort and time, to re-enter the same amount of data to obtain a similar degree of personalization with another firm. The recent implementation of the General Data Protection Regulation (“GDPR”) in Europe illustrates attempts to regulate big data activities and moderate this ongoing process of ever-increasing market power.
This note develops a dynamic framework aimed at better understanding how online firms are able to incentivize consumers to disclose more of their data, despite their privacy concerns, and thereby gain market power. It focuses on the interactions between online firms offering “free-of-charge” services, which are able to collect, analyze, and sell data, and a continuum of consumers who heterogeneously care about their privacy. This modeling framework is relevant for assessing how difficult it is to introduce innovation in data-driven markets and to challenge current dominant players. It is also useful for evaluating the effects of a new regulatory instrument recently introduced by the GDPR: the right to data portability. It can further be used to rationalize the establishment of data sharing contracts between competitors, a growing phenomenon online.
Benchmark Model
To study the interactions between a firm and its user base, consider an online monopolist whose business model is based on revenues from the disclosure to third parties, such as advertisers or data brokers, of the consumer data it is able to collect. In order to subscribe to the service it offers, users are required to provide a “fixed” amount of basic information about themselves. Once they have provided these basic data, they can enjoy different features of the service. But in order to enjoy such features, they have to provide additional data in an amount that varies with the intensity of their usage of the service. A consumer who wishes to make deeper use of the service needs to provide more data than one who decides on more moderate usage. For instance, Facebook requires basic information such as gender and age in order to subscribe to the platform. Then, in order to consume the different personalized functionalities of the platform, such as sharing photos, today’s mood, or outside activities, consumers need to provide additional information about themselves to the platform. Therefore, a deep user of Facebook will have provided more of her data than a more moderate user. Viewing data as a currency, this feature is equivalent to the monopolist charging a two-part tariff to its consumers.
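One way to make this two-part tariff analogy concrete is the following minimal sketch; the notation is illustrative and not drawn from the underlying model:
\[
D_i = F + m \, q_i ,
\]
where \(D_i\) is the total amount of data disclosed by consumer \(i\), \(F\) is the fixed amount of basic information required at subscription, \(q_i\) is consumer \(i\)'s usage intensity, and \(m\) is the additional data required per unit of usage. Read as a “price” paid in data, \(F\) plays the role of the fixed fee and \(m\) that of the per-unit charge in a classic two-part tariff.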
The monopolist has some skill in data analytics, which gives it a better knowledge of its consumers. It uses this knowledge for two purposes. The first is to disclose meaningful information to third parties, enabling them, for instance, to engage in targeted advertising, so as to generate revenues. The second is to develop new personalized features which will be made available to users in the future.
On the demand side, consumers are heterogeneous in their willingness to disclose private information online. In other words, they have different perceptions of the risk of data breaches, such as cyberattacks, which could harm their privacy, irrespective of the websites they patronize. This risk perception increases with the amount of data online firms disclose to third parties. Users all have the same initial preference for the service but, depending on their degree of privacy concern and the amount of data the service requires and discloses to third parties, some will decide to subscribe and some will not. They may find it hard to anticipate that, by disclosing their information, the value of the service could increase for them in the next period through new personalized features, as online firms often maintain a culture of secrecy over their R&D activities.
The equilibrium in the initial period is characterized by a level of data required from users and disclosed to third parties, and by a privacy threshold above which consumers choose not to subscribe. Consumers balance the benefit they derive from subscription against the cost they incur from disclosing their data. Those who decided to provide the amount of data required for subscription, i.e. to pay the fixed cost, but who are located close to the privacy threshold, will make moderate use of the service compared to those who do not care much about their online privacy.
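A stylized way to express this equilibrium, again with purely illustrative notation and assuming privacy sensitivity \(\theta\) is distributed across the population, is a participation condition of the form
\[
v - \theta \, \delta \big( F + m \, q^{*}(\theta) \big) \geq 0 ,
\]
where \(v\) is the common initial valuation of the service, \(\delta\) is the share of collected data disclosed to third parties, and \(q^{*}(\theta)\) is the usage intensity chosen by a consumer of type \(\theta\). The privacy threshold \(\hat{\theta}\) is the type for which this condition holds with equality: consumers with \(\theta > \hat{\theta}\) stay out, while those just below \(\hat{\theta}\) subscribe but choose a low \(q^{*}(\theta)\).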
At the beginning of the second period, the monopolist has analyzed the data that consumers heterogeneously provided in the first period. If the amount of data collected is large enough, and provided consumers are relatively homogeneous apart from their privacy concerns, the monopolist is able to make inferences about each consumer, even those who did not provide much data initially. The monopolist is now able to offer additional personalized features that increase every consumer’s valuation of the service. In the second period, each consumer is therefore incentivized to disclose more data in order to enjoy the new features of the service. This could even induce consumers who decided not to subscribe in the first period to do so. The interesting feature is the externality that consumers with a low valuation for privacy exert on the others, incentivizing them to disclose more data than they would initially have delivered. This externality could be even stronger in the presence of direct network effects.
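Keeping the same illustrative notation, this externality can be summarized by letting the second-period valuation grow with the stock of data collected in the first period,
\[
v_{2}(\theta) = v + \phi\!\left( \int_{0}^{\hat{\theta}} \big( F + m \, q^{*}(t) \big) \, dG(t) \right),
\]
where \(G\) is the distribution of privacy types and \(\phi\) is an increasing function. The more data the low-\(\theta\) users disclosed in period one, the larger \(v_{2}\) becomes for everyone, so the participation condition is relaxed, the privacy threshold rises, and consumers who are more privacy-concerned than the original marginal subscriber are drawn in and disclose more data.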
Thereby, in a dynamic setting, consumers have heterogeneous and increasing valuations of the service due to personalization enabled by data analytics. Consumers find themselves increasingly locked in, as the cost of switching while preserving an equivalent level of personalization increases, and the monopolist earns escalating revenues from disclosing information as the amount of data it collects grows over time. This model could therefore explain the ever-increasing market power of online firms, the tendency for markets to tip, and the data disclosure behavior of the most privacy-concerned users.
Market Entry and the Right to Data Portability
Laid down in the European Union’s GDPR, adopted in April 2016, the right to data portability allows internet users to obtain personal data they have transmitted to an online service and transfer them to another data controller. The right to data portability aims at reducing consumer lock-in by lowering the switching costs of re-entering data users have already provided in order to obtain a similar degree of personalization, and therefore value, from the new service. As implemented by the GDPR, the right distinguishes between data provided by users, which are portable, and data derived and inferred by the firm through data analytics, which users are not able to obtain.
Consider an innovative entrant into the monopolistic market of the baseline model where, at first, consumers are not able to port their data. The entrant is able to offer all users a greater initial value at subscription than the incumbent, despite having no access to users’ data. However, not all consumers will switch, because some, who make deeper use of the incumbent’s service, derive from it a higher utility than they would initially obtain from the entrant. Some consumers may prefer the new service but have to incur switching costs related to getting used to it and re-entering their data, which deters them from switching. Only the most privacy-concerned users, who are less locked in and have a relatively lower valuation of the incumbent’s service, will end up switching and delivering the fixed amount of data required by the entrant. Since the users who switch are the ones who care most about their privacy, the entrant’s ability to collect data, and therefore to increase the value of its service and generate revenue via disclosure, appears limited compared to the incumbent’s. Imbalances between the incumbent and the entrant are likely to persist in a dynamic setting, as the incumbent may be able to increase the value of its service at a higher rate than the entrant owing to differences in the composition of their respective consumer bases. Thereby, some users who previously switched may decide to switch back if the value offered by the previous monopolist exceeds the entrant’s. Switching back is facilitated by lower switching costs, as the incumbent’s service is already familiar and the previous degree of personalization can be recovered if the incumbent has kept their data.
If users are able to port their data, switching costs are reduced, as they do not have to re-enter data they have already provided in order to obtain a similar level of personalization. More consumers, who on average care less about their privacy, will end up switching, which increases the entrant’s ability to collect data. However, not all consumers will switch, as the incumbent is able to deliver greater value to the least privacy-concerned users than the entrant, through the data it has inferred about them, which are not portable. A possible strategy for the entrant would be to pay these consumers in exchange for porting their data, so as to attract them and increase the probability of successful entry. In anticipation of entry, the incumbent also faces an ex ante trade-off between lowering its data collection, so as to limit the amount of data that will be available to the entrant, and increasing it, in order to raise consumers’ preference for its service and thereby deter them from switching.
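In the same spirit, and still with illustrative notation, the role of portability can be seen in a stylized switching condition: a consumer of type \(\theta\) switches to the entrant if and only if
\[
v^{E} - \theta \, \delta^{E} D^{E} - s(\theta) \;\geq\; v^{I}(\theta) - \theta \, \delta^{I} D^{I}(\theta),
\]
where the superscripts \(E\) and \(I\) denote the entrant and the incumbent, \(v^{I}(\theta)\) reflects the personalization the incumbent has already built for that user, and \(s(\theta)\) bundles the cost of learning the new service and the cost of re-entering data. Portability lowers the data re-entry component of \(s(\theta)\) but leaves \(v^{I}(\theta)\), which rests on non-portable inferred data, untouched. This is why switching expands mostly among the more privacy-concerned users, while the least concerned remain with the incumbent unless the entrant compensates them for porting their data.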
Data Sharing Contracts between Differentiated Competitors
Online platforms are observed to establish data sharing contracts with their competitors. The “Facebook login” API is an example of such contracts. It enables Facebook users to log in on other platforms, which then receive some of the data those users have provided to Facebook. In exchange, these platforms share with Facebook the data users provide on their websites. This conduct has recently been investigated by the Bundeskartellamt, which concluded that it constitutes an abuse of dominance, enabling Facebook to amass consumers’ data from other sources without limit. The framework developed above is useful for rationalizing such contracts in two respects, and for assessing whether they should be allowed under competition law.
First, a dominant firm could find it profitable to offer a data sharing contract to a smaller, differentiated competitor when some users multi-home across the two platforms. Consider a contract which grants the smaller firm access to the dominant firm’s database in exchange for the smaller firm continuously sharing some of the data it collects over time with the dominant one. This contract can be in the interest of the smaller competitor, which will be able to increase the value of its service and therefore attract more consumers. The dominant firm could also be interested in this contract, as it will be able to continuously amass consumer data of a different scope than it can collect itself if the two firms are horizontally differentiated, and consequently increase the output of its data analytics activities. From the smaller competitor’s perspective, the increase in its consumer base enabled by this contract comes at the cost of a less protective privacy policy. Such a contract may constitute an abuse of dominance, aimed at increasing the dominant player’s data collection possibilities and consequently its market power.
Second, still in a horizontally differentiated framework but with two or more competitors of similar size, the model makes it possible to study the establishment of data sharing contracts from a collusive perspective. Users, who can multi-home, provide data to each competitor individually, according to the disclosure strategy each of them has set. By contracting on continuous data sharing, along with increasing complementarity and interoperability between their services, firms could collectively acquire more data of different scopes, thereby increasing consumers’ valuation of all services. As data dissemination across entities increases the risk of privacy breaches, consumers highly concerned about their privacy could decide to stop patronizing the services; but new entry, which could better suit their privacy concerns, is likely to be deterred as consumers find themselves increasingly locked in to the existing services.
1 Upcoming PhD student at Télécom Paris.
2 Miyazaki, A. D. & Fernandez, A. (2001), “Consumer Perceptions of Privacy and Security Risks for Online Shopping,” Journal of Consumer Affairs, Vol. 35, 27-44.
3 Barnes, S. B. (2006), “A privacy paradox: Social networking in the United States,” First Monday.
4 Chellappa, R. K. & Sin, R. G. (2005), “Personalization versus Privacy: An Empirical Examination of the Online Consumer’s Dilemma,” Information Technology and Management, Vol. 6, 2-3.
5 Autorité de la Concurrence and Bundeskartellamt, (2016), “Competition law and data.”