On ‘negotiating the balance between personal freedom and really cool free stuff’
Imagine you are watching your favourite TV show and its protagonist walks to the fridge and gets out a drink (or: gets into a car, answers their phone, walks past a billboard, pours breakfast cereal into a bowl – you get the picture). Unbeknownst to you, the actual drink that you see them take out of the fridge is dependent on factors about you, such as your geographic location: you see the character pull out a Diet Coke, for instance, while another person on the other side of the world watching the same episode of the same programme at the same time sees them take out a particular brand of iced tea.
You have (both) just been subject to a new type of advertising practice known as ‘digital insertion’ – a much more technologically sophisticated, targeted version of product placement. This process of inserting virtual, CGI products and logos into content occurs in the editing suite; so, with big-budget TV shows often selling to 200 territories worldwide, this process can be tailored to localised deals and can be altered over time to fit current trends. It is one of a number of new persuasive advertising practices, united by the idea of personalisation. For other prominent examples, think of the personalised adverts that you see when using Google, Facebook, and Amazon. These work via algorithms that are, even as you read this, busy refining models of who you are and the sort of lifestyle you (want to) lead, based on data trails that you leave online. Companies buy this data as a means of matching their advertising to the particular market segment you occupy, as indicated by your data ‘profile’.
Like all advertising strategies, these emerging techniques raise interesting normative questions about the balance between manipulation and informed decision making. But is there anything distinctive about the concerns raised by these new practices, in contrast to more traditional advertising techniques? In this short post, I outline one way in which these personalised practices may differ normatively from more established techniques, and offer an explanation – based on what I call ‘the psychology of free’ – for why this transition has occurred (and why it’s probably here to stay).
Arguing against certain forms of advertising as morally wrong is far from new. Roger Crisp, for instance, in a 1987 paper (‘Persuasive advertising, autonomy, and the creation of desire’) argues that certain methods of persuasive advertising are morally problematic because they override the autonomy of consumers – and he shows this by reference to a set of arguments about the manipulative and covert processes by which persuasive advertising seeks to create desires and preferences. These critiques are clearly still applicable to our normative assessment of the new advertising practices. But in thinking about the targeted nature of these techniques, another autonomy-overriding aspect comes to the fore. Joseph Raz (The Morality of Freedom, 1986) contends that, in order to lead an autonomous life, a person must be given an adequate range of options to choose between. And this fits with the standard free market-based justification of advertising: a competitive market economy provides individuals with a large amount of options, and freedom to choose between these options: it is the role of advertising to provide information to individuals about this wide range of products, thereby allowing them to make informed choices that best satisfy their preferences. But the trend towards the personalised and algorithmic targeting of adverts looks to undermine both this justification and the ‘adequate options’ condition of autonomy.
Even on the charitable assumption that the information and data used in the targeting process was given via an adequate consent mechanism, its use in algorithmically selecting the products that you are and are not exposed to in online and TV content fails to provide you with an adequate range of options (i.e., it does not offer you the choice of options that have not, up to this point, been part of or close to your past searches or purchases) and, as a result, it fails to provide you with information about a wide range of products, some of which might in fact better satisfy your preferences but of which you are currently unaware.
What is interesting, though, is that these new persuasive practices are a response to a pervasive expectation in our generation that online services will be free of charge (‘the psychology of free’). Such services are, of course, not free; they are paid for by the selling of the personal data that we generate by using them. Companies such as Facebook make huge profits on the back of this ‘freely’ supplied data. But we do not process this psychologically as a payment per se – it is experienced psychologically as being free, even though it is not free in real terms. And it is no real surprise that this ‘payment’ is now being used not only to profile and target consumers with highly personalised advertising, but that this is increasingly becoming integrated into media content. TV content is, by and large (with a few exceptions, such as the BBC), paid for by advertising revenue; however, viewers – often with the help of technology such as online ad-blockers and fast-forward buttons – are getting better at evading this advertising (again, something that might be explained by the psychology of free). This creates a problem for advertisers; and one that they have responded to by digitally inserting advertising into the content itself. But what problems might this clever response cause for us?
“Interrupting programmes every 15 minutes with a commercial break was never a perfect system, but it at least drew a clear line between advertising and content. Now the boundaries have blurred. This is a defining decade for negotiating the balance between personal freedom and really cool free stuff. Just as we happily click “accept” for terms and conditions we never read in return for free services from the likes of Google, so we have signed a new compact with TV advertisers without quite knowing what it is.” (Guardian article, 24/06/14)
This quotation points to the idea that ‘freedom from X simply makes us subject to Y’, where X represents both traditional advertising and having to pay for certain services, and Y represents data collection, profiling, and the uses that can be made of these in a late capitalist economic system in which information is king. The psychology of free is so dangerous precisely because, in (being forced into) ‘freely’ giving information about ourselves in return for “really cool free stuff”, we become entangled in a system in which our personal freedom is further threatened. The problem is that, even if we know this, it is not clear how we ‘opt out’ of such a system in an effort to protect our personal freedom, without opting out of using these data-collecting services that have – for better or worse – become a central part of the logic of the world we inhabit.
Is this logic a necessary one? Jaron Lanier’s book, Who Owns the Future? (reviews here and here), suggests not and presents us with an alternative logic based on a radical reorganisation of worth. But, for now, perhaps simply being aware of these practices is the best way of disabling their power over us.
Acknowledgement: This post came out of a discussion with David Yarrow.
Hi Fay, thanks for this interesting post. It’s fascinating what advertising companies come up with – I had no idea!
While I agree that the “psychology of free” and advertising are linked, I would say that there is still a question about what kinds of advertising are legitimate. I find the point you draw from Raz very important: opportunity for choice. What’s so tricky about targeted product placement is not only that I don’t get a choice – I don’t even realize that I don’t have a choice. If a website tells me about “things you might also like”, I can at least know that these are personalized recommendations. I then have a meta-choice, as it were, about whether or not to take these recommendations into account. In other cases, however, I have no idea that there is personalization going on in the first place. This seems problematic in a deeper way: one can imagine a scenario in which I have no clue about which contents are shown to everyone, which ones are randomly generated, and which ones are personalized. And we can imagine that scenario not only for advertising of products and services, but maybe also for news items, and then it gets really scary.
So while I agree with you that the “psychology of free” can play a problematic role, I would hold that there are still important differences in HOW the advertising is done. I don’t know how much rational control we have over these things anyway – in the sense that we can “decide” to be more or less influenced by different items – but we should have a chance of finding out which game we are in, as it were. As a minimum, there could be a legally mandatory pop-up window saying “this website uses personalized product placement” (like the warning that a website uses cookies) – but maybe there would also be better ways of doing it. Your last point about awareness seems really important to me, but how difficult it is to be aware of these things depends also on how they are presented.
Lisa, I find your point that it is important to know which kind of advertising we are receiving intuitively appealing. And I’ve been trying to think of why it might be so. One reason I could think of is that personalised advertising is more likely to be successful in changing our behaviour, and that, knowing this, we can consciously exercise additional self-control. But did you have something else in mind?
Hi Siba, yes, it is in part about the effectiveness of our self-control. But it seems to me that there is also some kind of fairness issue going on there. It has to do with “the rules of the game”, as it were. If we all know what game is being played, we can adjust our levels of trust and awareness accordingly. There seems to be some kind of disrespect in not letting everyone know what game they are in. A consequentialist might say that the “badness” of this consists entirely in the negative effects on our ability to protect ourselves. I have an intuition that there is something more going on here – about not treating one another as moral equals, as it were – but I need to think more about it.
The tragic thing is, you don’t have to imagine it. The future is here!
A few years ago there was a minor scandal when the US supermarket chain Target was caught using complex computer algorithms to identify pregnant women and tailor their advertising to them – in some prominent cases before the women even knew they were pregnant. They were able to pick up on changing buying habits, such as buying more hand lotion and sanitisers than normal, which are prompted by physical changes and therefore didn’t require the women to be aware of the pregnancy. But rather than simply sending “Congratulations on your pregnancy!” brochures out — and this is where things get even creepier — they mailed out shopping catalogues that looked like regular ones, so as not to arouse suspicion, with a higher frequency of pregnancy- and baby-related products.
Of course, this is an extremely egregious case. But the fact that the technology is out there is in itself deeply problematic, and more than a little scary.
(An article about this: http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/)
Thanks Jesper! This is the kind of example that I had in mind when writing the post, actually. It’s a very good illustrative example, so it’s really helpful that you outlined it for us.
Lisa, thanks for your clarifying comment. You are correct – there is an extra epistemic/opaqueness concern, as it were, with respect to digital insertion that is not applicable in the case of the personalized (“things you might like, if you like X”) adverts used by e.g. Amazon. Imagining the scenario you suggest about personalized news items is super scary! I don’t think that I did enough to separate these two types of personalized persuasive advertising in the post; but I wholeheartedly agree with you that the ‘how’ matters very much. I’m interested in your idea about regulating this by making it mandatory that companies inform people that this practice is taking place. I hadn’t thought enough about this; but you’re right that something like this would be necessary to support the awareness that I point to in the final sentence. Of course, we might agree that much more than this would/should be required; but something like this might be a good place to start.
Thanks for this Fay. Very interesting!
I’m not sure I see the difference between traditional and personalised advertising as clearly as you do. Take the limited-choice argument: I would say that traditional advertising is just as problematic as personalised advertising in providing only a limited set of choices. True, personalised advertising shows you only those products some algorithm decided suit you, but traditional advertising likewise shows you only those products whose makers could afford to buy TV advertising time, or that someone decided were best suited to the audience of this or that TV show.
This is very helpful, Siba, thanks. It forces me to clarify further why I think that there may be something (normatively) distinctive about the type of personalized advertising that I’m making reference to. You are correct that my introduction of a Razian-style ‘inadequate range of options’ condition doesn’t do all the work that I want it to. As you point out, the ads that air on ITV between 9-10pm have been purchased for this slot with the demographics of the programme’s audience in mind; hence, there is a sense in which this more ‘traditional’ (i.e., non-algorithmic) advertising practice is just as targeted – and just as autonomy-infringing, in so far as it exposes viewers only to a particular set of ads – as the more personalized practices that I am interested in. I do think, however, that the latter type of persuasive practice is more morally problematic, largely because the targeting happens within the content of the programme, rather than in a specified advert break (where you are aware that the point of such a break is to try to influence your purchasing decisions). There is also a sense in which the targeting is (or at least could be) much more fine-grained: there might be, say, 2 million people watching the ITV programme from 9-10pm tonight, and it is not the case that these people fall into any particular category or market segment; whereas the more personalized, algorithmic ads work on an ever-expanding data set that we all leave on the web each day, and can be tailored accordingly. Does this go any way towards addressing the problem you raise?
A fascinating topic, and I was unaware of this kind of advertising! Reading the post, it occurred to me that there is a relation of direct proportionality between how objectionable this kind (and other kinds) of advertising is and the cost, to individuals, of avoiding the media channels in question – Google, TV, Facebook, etc. And the cost, in turn, depends on three factors:
a. how important it is to have access to such media
b. whether there are other, publicly subsidised and controlled, venues that serve the same purpose
and
c. whether people are knowledgeable enough to use them.
One can opt out of watching commercial TV by stopping watching TV altogether or by watching – say – the BBC. One can opt out of Facebook-like applications by cutting down on one’s sociability or by finding alternative places for virtual interaction. (I’ve lots of friends who’d never have an FB account or use Google Chrome, precisely for privacy reasons; but they either have much less need for social interaction than most of us or else are computer geeks who can use other, advertisement-free virtual media.) And so on. It seems to me that, if access to TV, the internet and virtual social media is important enough, what we need is a subsidised (maybe free) and easy-to-use public version of each.
Anca, thanks! The three factors concerning the relation between objectionableness and avoidance-costs are really interesting. I need to think more about this; but this was the kind of idea that I was trying to get at in the penultimate paragraph (though not with this level of precision!).
Really interesting topic and post, thanks Fay.
Talking about advertising as ‘autonomy-infringing’ as you do seems a bit overstated to me, or at least not intuitively obvious.
You say the role of advertising is to help people make informed choices, and so (I take the argument to be) if people are exposed to a narrower set of adverts, there are fewer informed choices they can make. My issue with this argument is that it seems to overstate the importance of advertising by making it seem as though being marketed to is a necessary condition for making an informed choice. But there are plenty of other sources of information – e.g. word of mouth, third-party reviews – and so plenty of other ways in which informed choice can be sustained.
It would be helpful to get a bit more detail on how you think the relationship between advertising and giving people an adequate range of options works (I appreciate this was too involved for the original blog post, but I would be interested to hear your thoughts now). Do you think *some* level of advertising is necessary to sustain enough options? Or is the issue with advertising that it ‘stacks the deck’ in favour of certain options? But in that case, how level does the playing field need to be?
Hope that’s clear!
A fascinating post! Thank you for shining a light on this issue, Fay – I think it’s very important. In particular, I think the nature of the transaction between users (not clients) and companies like Facebook, Google, etc. is very fertile terrain for ethical research. We don’t have a really good normative framework for evaluating these transactions, and our traditional ones fail here. I’ve heard it said that ‘if you are not paying for the service, you are not the client – you are the product’. And I think that’s true to a great extent – Facebook sells the information it infers from users, or uses it to sell ads, and it’s hard to see this as a transaction because there are none of the regular negotiation processes – making sure it’s like for like, getting consent, being clear about what’s exchanged, etc.
Like Lisa, I think the biggest part of the problem here is the lack of distinction between commercial content and other content. I think we have to know what is being paid for to be displayed, and by whom. There’s a podcast called StartUp, which discussed (among other things) the strategy of ads in their podcast start-up company, Gimlet – and there are some very interesting insights there. Specifically, they insist on clearly separating ads from content by using a specific background music for ads, though the hosts of the podcasts are themselves involved in recommending the products, which is a big problem. Next, they explore the issue of ‘branded content’ – which is exactly what you’re talking about. Turns out it already exists on news websites, where there are paid-for articles all over the place. Though there’s a small banner somewhere saying ‘paid content’, it still looks a lot like the regular Politico site, except it’s totally made to order. What they discovered by venturing into the branded-content sphere is that companies want, and won’t pay for anything less than, complete editorial control over something that bears the brand of the news/media company. They are explicitly after exactly that, which is very dangerous.
Lastly, though I think you’re absolutely right that there’s a great danger in the psychology of free, I’m not persuaded by the range-of-options argument. Partly for the reasons Aveek mentions – I’m not sure the ads on my Facebook/Amazon/Google really matter much for the range of options that I have. I can still do all the other things I’ve always done. The problem is much more pernicious with search results on a search engine, but targeted ads aren’t very important, I think. Moreover, I think you are too quick to conclude that the range of options provided by an algorithm would be insufficient. You say “it does not offer you the choice of options that have not, up to this point, been part of or close to your past searches or purchases”, but there’s no reason to think an algorithm would expose you to fewer options than any other form of advertising. In fact, Netflix and Amazon suggestions are often amazing at letting me know about great books/shows/board games that I wouldn’t have known about otherwise. Blanket advertising, like the Budweiser commercials I get bombarded with, is completely useless for me. I’m not saying there’s nothing there, but I think that a much more careful examination of the algorithms is warranted. In many ways, they act a lot like we do when we ask our friends what they like to read. I am not convinced they are autonomy-inhibiting – they might be autonomy-enhancing. Or, more probably, they don’t have a great influence over autonomy.
Tomer and Aveek, thanks a lot for these really insightful points. Particularly, thanks for pointing out the much more subtle relations (plural) that might be at play between algorithmic advertising and autonomy (enhancement, infringement, etc.), as well as the fact that there might not be any morally interesting relation between the two things.
Hey Fay,
I just re-read your very interesting post. I agree with the other commentators that it is, indeed, a very fascinating topic and that you engage with it very well.
One thing struck me, though (which may or may not be helpful for you to think about – but, at least, I find it interesting). I am wondering about the degree to which you rely on an account of preferences which is very similar to the general one underlying many contemporary theories of political philosophy, but which, I think, is often a bit of a mischaracterization. You bring up a very relevant continuum on which advertising schemes may be placed – from manipulation, which makes us think that something is in line with our preferences when, actually, it is not, to decision-information, which is meant to give us the information we need to act in line with our preferences.
You might have a different view of preferences, though. You could think that individuals do not really have preferences in the strong sense that Raz, Dworkin, Rawls, etc. talk about them. Regarding most things, in fact, people don’t really have any preferences. And regarding many others, these preferences are very malleable. On this view, then, there is no fixed “conception of the good life” or “set of preferences” that we can ascribe to an individual. Looking at it this way, of course, means that advertising is not a *response* to preferences – nor something that can be more or less in line with them. Rather, advertising is the deliberate creation, shaping, and altering of preferences. The question this raises is no less interesting than the one you raise – for it prompts the very important question: “what kind of preferences do we want people to have?” That is, of course, a relatively fundamental question that is important in many areas – education, future generations, democracy, virtue ethics, institutional design, etc. However, I think that the question of targeted and individualized advertising brings this out in a new and interesting way.
Now, you may of course think that the latter view of preferences is crazy – some people do. In its pure form, I think I do too. But maybe there is *some* truth to it – maybe the truth is somewhere in between the two ways of understanding preferences. And this would mean that you would, perhaps, be asking two different and both very interesting questions at the same time: what are the reasonable limits on individually targeted advertising *given* people’s current preferences, AND what kind of preferences does this kind of advertising create, and how does this relate to the kind of preferences that we *want* people to have?