Imagine you are watching your favourite TV show and its protagonist walks to the fridge and gets out a drink (or: gets into a car, answers their phone, walks past a billboard, pours breakfast cereal into a bowl – you get the picture). Unbeknownst to you, the actual drink that you see them take out of the fridge is dependent on factors about you, such as your geographic location: you see the character pull out a Diet Coke, for instance, while another person on the other side of the world watching the same episode of the same programme at the same time sees them take out a particular brand of iced tea.
You have (both) just been subject to a new type of advertising practice known as ‘digital insertion’ – a much more technologically sophisticated, targeted version of product placement. This process of inserting virtual, CGI products and logos into content occurs in the editing suite; so, with big-budget TV shows often selling to 200 territories worldwide, the insertions can be tailored to localised deals and altered over time to fit current trends. It is one of a number of new persuasive advertising practices, united by the idea of personalisation. For other prominent examples, think of the personalised adverts that you see when using Google, Facebook, and Amazon. These work via algorithms that are, even as you read this, busy refining models of who you are and the sort of lifestyle you (want to) lead, based on the data trails that you leave online. Companies buy this data as a means of matching their advertising to the particular market segment you occupy, as indicated by your data ‘profile’.
Like all advertising strategies, these emerging techniques raise interesting normative questions about the balance between manipulation and informed decision making. But is there anything distinctive about the concerns raised by these new practices, in contrast to more traditional advertising techniques? In this short post, I outline one way in which these personalised practices may differ normatively from more established techniques, and offer an explanation – based on what I call ‘the psychology of free’ – for why this transition has occurred (and why it’s probably here to stay).
Arguments that certain forms of advertising are morally wrong are far from new. Roger Crisp, for instance, in a 1987 paper (‘Persuasive advertising, autonomy, and the creation of desire’), argues that certain methods of persuasive advertising are morally problematic because they override the autonomy of consumers – and he shows this by reference to the manipulative and covert processes by which persuasive advertising seeks to create desires and preferences. These critiques clearly still apply to our normative assessment of the new advertising practices. But the targeted nature of these techniques brings another autonomy-overriding aspect to the fore. Joseph Raz (The Morality of Freedom, 1986) contends that, in order to lead an autonomous life, a person must have an adequate range of options to choose between. This fits with the standard free-market justification of advertising: a competitive market economy provides individuals with a large number of options, and the freedom to choose between them; the role of advertising is to inform individuals about this wide range of products, thereby allowing them to make informed choices that best satisfy their preferences. But the trend towards the personalised, algorithmic targeting of adverts looks to undermine both this justification and the ‘adequate options’ condition of autonomy.
Even on the charitable assumption that the information and data used in the targeting process were given via an adequate consent mechanism, their use in algorithmically selecting the products that you are and are not exposed to in online and TV content fails to provide you with an adequate range of options: it does not offer you options that have not, up to this point, been part of or close to your past searches or purchases. As a result, it fails to provide you with information about a wide range of products, some of which might in fact better satisfy your preferences but of which you are currently unaware.
What is interesting, though, is that these new persuasive practices are a response to a pervasive expectation in our generation that online services will be free of charge (‘the psychology of free’). Such services are, of course, not free; they are paid for by the selling of the personal data that we generate by using them. Companies such as Facebook make huge profits on the back of this ‘freely’ supplied data. But we do not process this as a payment per se – psychologically, the service is experienced as free, even though it is not free in real terms. And it is no real surprise that this ‘payment’ is now being used not only to profile and target consumers with highly personalised advertising, but that such advertising is increasingly becoming integrated into media content. TV content is, by and large (with a few exceptions, such as the BBC), paid for by advertising revenue; however, viewers – often with the help of technology such as online ad-blockers and fast-forward buttons – are getting better at evading this advertising (again, something that might be explained by the psychology of free). This creates a problem for advertisers, and one that they have responded to by digitally inserting advertising into the content itself. But what problems might this clever response cause for us?
“Interrupting programmes every 15 minutes with a commercial break was never a perfect system, but it at least drew a clear line between advertising and content. Now the boundaries have blurred. This is a defining decade for negotiating the balance between personal freedom and really cool free stuff. Just as we happily click “accept” for terms and conditions we never read in return for free services from the likes of Google, so we have signed a new compact with TV advertisers without quite knowing what it is.” (Guardian article, 24/06/14)
This quotation points to the idea that ‘freedom from X simply makes us subject to Y’, where X represents both traditional advertising and having to pay for certain services, and Y represents data collection, profiling, and the uses that can be made of these in a late capitalist economic system in which information is king. The psychology of free is so dangerous precisely because, in (being forced into) ‘freely’ giving information about ourselves in return for “really cool free stuff”, we become entangled in a system in which our personal freedom is further threatened. The problem is that, even if we know this, it is not clear how we ‘opt out’ of such a system in an effort to protect our personal freedom, without opting out of using these data-collecting services that have – for better or worse – become a central part of the logic of the world we inhabit.
Is this logic a necessary one? Jaron Lanier’s book, Who Owns the Future?, suggests not and presents us with an alternative logic based on a radical reorganisation of worth. But, for now, perhaps simply being aware of these practices is the best way of disabling their power over us.
Acknowledgement: This post came out of a discussion with David Yarrow.