Personalization of ads and services

Consumers are nowadays exposed to all kinds of personalized content – from personalized political messages, through social media posts and search results, to movie and music recommendations. Such personalization is made possible by the vast amounts of personal data that companies collect and use to train algorithms that predict consumers’ likes, clicks, purchases or voting decisions.

In the EU, for such data collection and processing to be lawful, a company needs to meet the GDPR requirements. When drafting the GDPR, the legislator tried to strike a balance between protecting consumers’ autonomy in decisions about their privacy and the interest of companies, as well as the public, in the ability to collect and use personal data to develop better products and services. This balance was achieved through various rules and principles, such as privacy by design or the list of legal bases for data collection and processing, consent being one of them.

When collecting and processing consumers’ personal data, businesses need to make sure that they rely on an appropriate legal basis. However, the choice of a legal basis for collecting and processing personal data for personalization purposes is not that straightforward. In principle, companies that implement personalization could rely either on consumer consent or on a contract with the consumer, if personalization is part of the service that the consumer agreed to receive. Differentiating between the two legal bases is particularly important for services offered to consumers at a zero price that rely on personalized (i.e., targeted or behavioral) advertising for their revenue. Obtaining consumer consent will most likely diminish the number of consumers whose data can be collected and who can be shown personalized advertisements, thus potentially decreasing the income of a company relying on such a business model. At the same time, relying on a contract as a legal basis deprives consumers of the control over their data that they can exercise when the collection and processing is based on their consent.

The question I address in this project, working together with Daria Baltag, an alumna of the European School of Law at Maastricht University, is whether this distinction matters for consumers: do consumers differentiate between the collection and processing of their data for personalized advertisements and for personalized services? And does the answer depend on whether the service is provided for free, so that advertising is needed for the service to be offered at a zero price?

To answer these questions, we conducted an experimental vignette study in which we presented participants with hypothetical scenarios describing an offer of a music streaming mobile application. Participants were assigned to one of four groups. One group read that the app is free and that users’ data are collected to personalize services. A second group read that the app is free but users’ data are collected to personalize advertisements. Importantly, the content of the personalized services and advertisements did not differ – in both cases users would receive suggestions of new artists and songs. The remaining two groups were informed that the app costs €9.99/month and includes either personalized advertisements or personalized services.

Scenario describing an offer of a music streaming application. In this scenario, participants read that the app is free and collects their data to personalize services.
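For concreteness, here is a minimal sketch of how such a 2×2 between-subjects assignment could be implemented; the condition labels are my own illustration, not the study’s actual materials:

```python
import random

# Hypothetical labels for the 2 (price) x 2 (personalization) design.
PRICES = ["free", "9.99 EUR/month"]
PERSONALIZATION = ["services", "advertisement"]

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Randomly assign one participant to one of the four vignette groups."""
    return rng.choice(PRICES), rng.choice(PERSONALIZATION)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
print([assign_condition(rng) for _ in range(4)])
```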

Once participants had familiarized themselves with the scenario, we asked them two main questions: how willing they were to use the app and how willing they were to share their data with it. The results revealed that the type of personalization (advertisement vs. service) affected neither participants’ willingness to use the app nor their willingness to share their data with it. We did not observe any differences between the two types of personalization regardless of the price of the app. This means that participants do not distinguish between the two types of personalization, even though they have different legal implications, and that this holds whether the app is free or paid. Importantly, the price itself does matter – participants are, in general, more willing to share their data with free apps than with paid ones. It is likely that participants perceive their data as counter-performance and are happy to share them for personalization purposes (both advertisements and services) as long as they receive a free service in exchange.
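For readers curious how such a 2×2 comparison could be analyzed, here is a minimal sketch of a two-way ANOVA; this is not the pre-registered analysis, and the file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with the assigned condition
# and the 1-7 willingness-to-share rating.
df = pd.read_csv("music_app_study.csv")  # columns: price, personalization, willingness

# Main effects of price and personalization plus their interaction.
model = smf.ols("willingness ~ C(price) * C(personalization)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

An ordinal model would arguably suit the 7-point scale better; the OLS version is only the simplest illustration.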

To test whether this generalizes across various types of mobile applications, we conducted two further experiments. This time participants read offers of a shopping mobile app and a news mobile app. The manipulation of price and type of personalization was similar to the previous scenario. We again found no impact of the type of personalization on the willingness to share personal information with the apps. As in the first study, this held for both free and paid apps. The results showed only a general effect of price on the willingness to share personal data – participants were more willing to share their data with a free app than with a paid one.

Main results from three studies – with music, shopping and news apps. The graph shows the distribution of responses to the question asking about the willingness to share personal information with the app on a scale from 1 (extremely unwilling) to 7 (extremely willing).

All studies (hypotheses, design and planned analyses) were pre-registered on the Open Science Framework. There, you can also find the scenarios for the news and shopping apps.

Do people reject free beneficial offers?

If misleading ‘free’ offers (i.e., offers at a zero price but imposing non-monetary costs) are so widespread, are consumers skeptical when offered something for free? Is this mistrust so strong that consumers are willing to reject a truly beneficial deal that would help them earn more money? If so, can we design interventions to help overcome this effect? I collaborated with Caroline Goukens from the Maastricht University School of Business and Economics and Vicki Morwitz from Columbia Business School to address these questions.

First, we wanted to test whether people are more willing to reject a beneficial offer when it is called ‘free’ than when it is described in a more neutral way. To this end, we adapted our previous experiment on the zero-price effect in which participants solved matrices with ‘d’ and ‘b’ letters. In the experiment, participants first worked on this task in a trial round. Next, they were offered a tool that would help them solve more tasks correctly and, thus, earn more money. One group of participants saw this tool described as a ‘free’ tool. The other group was told that this was a new version of the task they had performed in the trial round. Participants in both groups were asked to choose whether they wanted to perform the task as in the trial round or with the free tool/in the new version. The results revealed that people are more likely to reject the offer when it is described as a ‘free’ tool than when it is described as a new version of the task.

Overview of the results of Study 1
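One straightforward way to compare rejection rates between the two framings is a two-proportion z-test; the counts below are invented for illustration and are not the study’s data:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: participants rejecting the offer in the
# 'free tool' framing vs. the 'new version' framing.
rejections = [55, 30]     # number of rejections per group
group_sizes = [100, 100]  # participants per group

z_stat, p_value = proportions_ztest(rejections, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```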

In the second study, we wanted to test whether this effect decreases once we force participants to think more carefully about their choice. We relied on the same design as in the first study and only slightly changed the wording of the control offer. Now, some participants were deciding between doing the task as in the trial round or using a free tool, while others decided between doing the task as in the trial round or in an alternative version. Two further groups saw the same framings of the offer but were asked to think carefully about their choice and were required to spend at least a minute before making the decision. The results again showed more participants rejecting the offer when it was described as a free tool than when it was presented as an alternative version of the task. Instead of diminishing, this effect was even bigger in the treatments in which we asked participants to think carefully about their choice.

Overview of the results of Study 2

In the third study, we tested two additional framings and added an intervention. This time, two groups were presented with an offer of either an alternative version of the task or an alternative (free) version of the task. The other two groups saw the same offers but were additionally informed that the offer came with no strings attached. Again, participants were more likely to reject the offer when the word ‘free’ was mentioned. The offer of an alternative (free) version was also perceived as less trustworthy than the offer of an alternative version. The intervention, however, did not significantly decrease the rejection rates.

Overview of the results of Study 3

In Study 4, we tested another intervention: this time we let participants become familiar with the offeror. One group of participants took part in an unrelated, neutral experiment on Day 1; the other group directly took part in an experiment similar to the deliberate treatments of Study 2, in which some participants chose between doing the task as in the trial round or in an alternative version, while others decided between doing the task as in the trial round or using a free tool. Participants were prompted to think carefully about their choice. On Day 2, the group that had done the unrelated task on Day 1 was presented with our core experiment (an offer of a free tool or an alternative task), while the group that had done our core task on Day 1 did the unrelated task. We analyzed only the observations of participants who completed the experiment on both days. The results showed that the intervention is ineffective: the percentage of participants rejecting the offer of a free tool remained the same whether they were familiar or unfamiliar with the offeror.

Finally, in Study 5 we tested yet another intervention. This time, one group of participants, presented with a choice between an alternative version and a free tool, saw the Maastricht University logo in the upper right corner of the screen throughout the whole experiment (see picture below). The other group saw no logo on the screen.

Screenshot of a Welcome screen from Study 5

The intervention was successful. Participants who saw the university logo on the screen were less likely to reject a free tool than participants who saw no logo.

Main results of Study 5

The results are consistent across the five studies – people are more likely to reject an offer that mentions the word ‘free’ than a more neutral offer avoiding such phrasing. These reactions seem to be driven by participants perceiving the ‘free’ offer as less trustworthy than the offer of an alternative/new task. Finally, only the intervention that makes the offeror’s non-profit motivation salient succeeds in diminishing this effect.

All studies (hypotheses, design and planned analyses) were pre-registered on the Open Science Framework. There, you can also find the program we used to collect data in Studies 4 and 5.

Are offers of free digital content misleading?

Let’s have a look at two educational websites – Quill and Prodigy. Both offer free tools for children: Quill to develop writing skills, Prodigy to practice math tasks.

Quill is a non-profit. It does not display advertisements, but it does collect users’ personal information, such as email address, gender or age, and gathers data on users’ behavior when they interact with Quill. These data are processed to personalize and enhance users’ experience. Quill also offers a paid premium version that contains additional features.

Prodigy is a for-profit. It does not display third-party advertisements but, according to the Fairplay organization, it bombards its users with advertisements for a paid premium membership. It also collects users’ personal information (except for students) and data on users’ activity to provide them with a personalized experience (including personalized advertisements for the premium version).

Though both websites can be used for free, i.e., without paying any money, some may argue that they do impose non-monetary costs on their users. Courts in several countries (Germany, Italy and Hungary) have dealt with the question of whether businesses that collect their users’ personal information to provide them with personalized advertising can describe their services as “free”. The courts in Italy and Hungary found such offers misleading because they deemed the collection of personal data a counter-performance provided in exchange for the service.

This raises important normative questions: What type of counter-performance should be classified as a cost that makes a ‘free’ offer misleading? Is using personal data to personalize internal advertisements (as in the case of Prodigy) such a cost? Or do personal data need to be used to personalize third-party advertisements for their collection to count as a non-monetary cost? Is a freemium model (free basic version, paid premium version), in which the business hopes to convert a non-paying user into a paying one, misleading?

Before addressing those questions, I decided to first understand what types of non-monetary costs are imposed on the users of free digital content. How diverse are these offers? Do providers of free digital content impose higher non-monetary costs than providers of paid digital content?

I first ran a pilot study collecting data about music streaming services. I gathered the following information about 48 services (46% of them paid); a sketch of one way such records could be coded follows the list:

  • Do they contain ads?
  • Can users opt-out of newsletters?
  • Do users grant a license to the content they generate?
  • Is the scope of the license broad or narrow?
  • What types of personal data does the provider declare to collect?
  • What information is gathered from users upon registration?
  • What is the content of privacy policies and terms of use? Do they contain consumer-unfriendly provisions?
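To make this concrete, here is a sketch of how each service could be recorded; the field names and the example service are my own illustration, not the actual codebook:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    """One coded observation of a music streaming service (illustrative fields)."""
    name: str
    paid: bool                    # paid subscription vs. free service
    shows_ads: bool
    newsletter_opt_out: bool
    grants_content_license: bool  # does the user license the content they generate?
    license_scope_broad: bool
    personal_data_types: list[str] = field(default_factory=list)
    registration_fields: list[str] = field(default_factory=list)
    unfriendly_terms_count: int = 0

# A hypothetical, made-up entry:
example = ServiceRecord(
    name="ExampleStream",
    paid=False,
    shows_ads=True,
    newsletter_opt_out=True,
    grants_content_license=True,
    license_scope_broad=False,
    personal_data_types=["email", "age", "listening history"],
    registration_fields=["email", "password"],
    unfriendly_terms_count=3,
)
```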

The preliminary results revealed that free online music streaming services do not differ much from paid ones. The major difference is advertising – whereas the majority of free services contain ads, only 2 out of 22 paid services do. Paid services do collect more types of personal data, but this finding is easily explained by the fact that paid services need to collect payment data. When it comes to the content of terms of use and privacy policies, it does not differ between paid and free services, which include the same number of disadvantageous terms.

Summary of results comparing free and paid online music services
Number of consumer-unfriendly terms grouped in three categories

Further data have been collected to test whether these observations also hold for other types of digital services and products, such as navigation applications, personal finance applications and online news. The results will be posted here soon!

What’s next?

The results of the first two studies were surprising, but at the same time very reassuring. In contrast to previous behavioral research showing that consumers overreact to zero-price products, my studies demonstrate that this is not necessarily the case. When the decisions have a real impact on people’s money and the good offered is truly beneficial to them, the reaction to a price decrease to zero (an increase in demand for the zero-price good, a decrease in demand for the higher-priced good) seems to be explained by people perceiving a drop from 1 Penny to zero as bigger than a drop from 15 to 14 Pence. So maybe consumers are not as ‘irrational’ as previously thought?

If consumers do not overreact to zero-price products when those products are truly beneficial to them and have a real impact on their utility, it is unlikely they will do so once those products involve non-monetary costs such as the collection of personal data. There is, however, a different puzzling effect that I observed in my data – many participants (35-44%) rejected both offers, i.e., they decided to do the task without any help even when help was offered to them for free. This decision had an impact on their payments – those who decided not to use any tools earned less money than those who accepted one of my offers.

This puzzling effect triggered some further questions:

  • Are consumers exposed to free misleading offers, i.e., offers that are advertised as free but impose non-monetary costs on consumers?
  • How prevalent are such offers? Are they easy to distinguish from truly free beneficial offers?
  • Are consumers suspicious about free offers? Does this mistrust lead consumers to reject even truly beneficial free deals?

Zero-price effect with digital content

Previous behavioral research has shown that consumers overreact to zero-price offers of goods such as candies or chocolates (Shampanier, Mazar, Ariely (2007)). In these experiments, researchers offered participants two goods: a high-value, more expensive good and a low-value, cheaper good. Some participants saw both goods offered at a positive price, e.g., 15 and 1 Cent. Other participants saw the high-value good offered at 14 Cents and the low-value good at 0 Cents. Researchers observed that, in the group offered the low-value good at a zero price, the demand for the low-value good dramatically increased and the demand for the high-value good decreased as compared to the group offered the goods for 15 and 1 Cent. This effect was surprising because, when the price of the high-value good decreases, we should also observe an increase in demand for it, regardless of the price of the low-value good. This combination of an increase in demand for a zero-price low-value good coupled with a decrease in demand for a high-value good whose price decreased by the same amount as the price of the low-value good is called the zero-price effect.

Zero-price effect – the demand for the high-value good decreases and the demand for the low-value good increases when the prices of both goods drop by one Cent.

Later studies have replicated this effect with multicomponent tourism products (i.e., a hotel with breakfast; Nicolau and Sellers (2011)), investigated its neural mechanisms (Votinov, Aso, Fukuyama, & Mima (2016)) and tested it with different types of products (i.e., utilitarian vs. hedonic; Hossain and Saini (2015)). Recent research by Hüttel, Schumann, Mende, Scott, and Wagner (2018) suggests that the zero-price effect might also be observed when zero-price digital content involves non-monetary costs. Using hypothetical scenarios, Hüttel and colleagues showed that the zero-price effect is present also in the case of zero-price online services involving non-monetary costs in the form of exposure to advertisements. Importantly, they demonstrated that the lack of a price leads to both an overvaluation of benefits and an undervaluation of non-monetary costs.

There are two crucial limitations to these studies. First, they involve goods which may not necessarily be beneficial to consumers: some consumers may perceive a chocolate as having no or even negative utility to them (e.g., when they are on a diet). Second, later studies, including the one by Hüttel and co-authors, relied on hypothetical scenarios, i.e., participants were presented with scenarios describing the details of an offer and asked to imagine what they would do if they had seen such an offer in reality. Such studies provide valuable knowledge about consumers’ attitudes and beliefs, yet they raise the question of whether consumers’ choices in hypothetical scenarios reflect their actual decisions when real money is at stake.

The first study I conducted in collaboration with Caroline Goukens from the Maastricht University School of Business and Economics was designed to address these two limitations. In the experiment, participants performed a real-effort task. They were shown a series of matrices with ‘d’ and ‘b’ letters. Their task was to check the boxes next to all letters ‘d’, without checking any letter ‘b’ by mistake. They received a bonus for each correctly solved matrix. After performing the task in trial rounds, participants chose whether to use one of the tools offered to them. The tools could help them solve the real-effort task and, thus, earn more money. One tool was cheaper and had only basic features (the Basic tool); the other was more expensive but offered additional features (the Premium tool). To some participants (the Paid treatment), the Premium tool was offered for 15 Pence and the Basic tool for 1 Penny. To other participants (the Free treatment), the Premium tool was offered for 14 Pence and the Basic tool for free. This means that in the Free treatment both tools were cheaper by 1 Penny compared to their prices in the Paid treatment, and one of them (the Basic tool) was offered for free.

Basic tool: highlighting incorrectly checked letters ‘b’ and showing how many letters ‘d’ still need to be checked.
Premium tool: highlighting incorrectly checked letters ‘b’ and all letters ‘d’, and showing how many letters ‘d’ still need to be checked.
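To illustrate the task mechanics, here is a minimal sketch of how such a letter matrix could be generated and scored; the grid size and the code itself are my own assumptions, not the actual experimental software:

```python
import random

def make_matrix(rng: random.Random, rows: int = 5, cols: int = 10) -> list[list[str]]:
    """Generate a grid of 'd' and 'b' letters."""
    return [[rng.choice("db") for _ in range(cols)] for _ in range(rows)]

def solved_correctly(matrix: list[list[str]], checked: set[tuple[int, int]]) -> bool:
    """A matrix counts as solved only if all 'd' cells and no 'b' cells are checked."""
    d_cells = {(r, c) for r, row in enumerate(matrix)
               for c, letter in enumerate(row) if letter == "d"}
    return checked == d_cells

rng = random.Random(0)
matrix = make_matrix(rng)
perfect_answer = {(r, c) for r, row in enumerate(matrix)
                  for c, letter in enumerate(row) if letter == "d"}
print(solved_correctly(matrix, perfect_answer))  # True: all 'd's checked, no 'b's
```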

The results of the first study showed that the share of participants opting for the Premium tool was lower in the Free treatment (8%) than in the Paid treatment (17%), even though its price had decreased. At the same time, the demand for the Basic tool dramatically increased between the Paid and Free treatments (from 28% to 48%).
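One way to test whether the distribution of choices (Premium, Basic, no tool) differs between the treatments is a chi-squared contingency test; the cell counts below simply re-use the reported percentages as if each treatment had 100 participants, an assumption made purely for illustration:

```python
from scipy.stats import chi2_contingency

# Illustrative choice counts per treatment: [Premium, Basic, no tool].
paid_treatment = [17, 28, 55]
free_treatment = [8, 48, 44]

chi2, p_value, dof, expected = chi2_contingency([paid_treatment, free_treatment])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```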

In the second study, we wanted to test whether this effect is robust: would we observe it with different prices? In addition, we wanted to exclude a straightforward explanation of the results of the first study, namely that a decrease in price from 15 to 14 Pence seems smaller to consumers than a decrease from 1 Penny to zero (concave utility of money). We conducted an experiment in which we assigned participants to one of four groups. Each group saw a different combination of prices for the Basic and Premium tools.
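To see why concave utility of money could produce such a pattern, take a purely illustrative utility function u(x) = √x: the subjective size of a drop from 1 Penny to zero is u(1) − u(0) = 1, while a drop from 15 to 14 Pence amounts to only u(15) − u(14) ≈ 0.13. The same one-penny discount thus looms far larger when it brings the price down to zero. The four treatments below allow us to separate this explanation from a genuine zero-price effect.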

Treatment   Premium tool   Basic tool
15_2        15 Pence       2 Pence
14_1        14 Pence       1 Penny
13_0        13 Pence       Free
10_0        10 Pence       Free
Overview of the treatments in the second study

The results showed that the share of participants choosing the Basic tool again dramatically increased when its price dropped to zero. Yet, differently from the first study, the decrease in demand for the Premium tool was very small and statistically non-significant when comparing participants offered the Premium tool for 13 Pence with those offered it for 14 or 15 Pence. Moreover, the share of participants choosing the Premium tool when it was offered for 10 Pence increased, suggesting that the zero-price effect observed in the first study can indeed be explained by consumers perceiving a drop in price from 1 Penny to zero as a bigger decrease than a drop from 15 to 14 or 13 Pence.

Results of the second study

Both studies (hypotheses, design and planned analyses) were pre-registered on the Open Science Framework. There, you can also find a more detailed report of the results.