The Spillover of Spotlight:
Platform Recommendation in the Mobile App Market
Chen Liang, Zhan (Michael) Shi, T.S. Raghu
Abstract
Many e-commerce platforms offer editor-curated recommendation to help consumers overcome the
difficulty of discovering and evaluating new products. While it is well recognized that the spotlight
generated by editor recommendation has a positive effect on the sales of featured products, there is much
less understanding in the literature on whether the spotlight influences the sales of other products that are
not featured but are related to the featured products, and how this externality varies depending on the
specific relationship between the featured and non-featured products. By leveraging a novel dataset
collected from the mobile app market, this paper systematically investigates the spillover effect of
platform-provided editor recommendation on three groups of related apps: apps by the same developer,
apps with similar functionality, and the same app marketed on a different platform. We distinguish the
two mechanisms that potentially drive the externality: the exposure spillover and the quality endorsement
spillover. We find that the overall spillover effect is positive for all three groups, but the underlying
mechanisms are different. For the apps by the same developer and the same app on a different platform,
we find a significantly positive spillover of exposure, but the spillover of quality endorsement is weaker
across platforms than within the same platform; for the functionally similar apps, the evidence on
spillover of exposure is weaker than that for the other two groups and the spillover of quality endorsement
is statistically insignificant. In addition, we find that the strength of the spillover effect depends on
salient characteristics of the featured apps such as price and user rating.
Keywords: platform recommendation, spillover effect, product discovery, mobile app market
1. Introduction
The tremendous increase in the number of products available on e-commerce platforms makes it difficult
both for new products to stand out and for consumers to find new products that meet their specific needs.
Effective product discovery via “organic” channels such as sales rank or collaborative filtering based
recommender systems typically requires a sizable customer base which most new products lack. Thus, to
bootstrap growth and promote diversity, many platforms offer editor-curated recommendation to help
consumers overcome the difficulty of discovering and evaluating new products. For instance, Amazon
hires editors to review and recommend new books. Apple’s App Store actively maintains a list of “New
Apps We Love” and highlights the list at a prominent place on its homepage to attract user attention.
While it is well recognized that the spotlight generated by platforms’ editor recommendation can
boost the sales of featured products, there is much less understanding in the literature on the spillover of
platform spotlight, that is, whether and how platform-provided editor recommendation also influences the
commercial performance of other products that are not featured. Formally, this paper investigates the
following research questions:
(1) Does platform-provided editor recommendation influence the sales of other products that are not
featured but are related to the featured products?
(2) If so, how does this spillover effect vary depending on the specific relationship between the featured
and non-featured products? More importantly, what are the mechanisms that drive the spillover? How
does the spillover effect depend on the characteristics of the featured products (e.g., price and user
rating)?
Given the ubiquity of platform recommendation in large online markets, these research questions
have both practical and academic significance. Platform operators have recently ramped up investment in
optimizing store design and providing human-curated discovery mechanisms including editorial
recommendations to improve user experience and market efficiency.[1]

[1] One recent example is Apple's redesign of its iOS App Store in September 2017. With the goal of improving product discovery, the new store features more editor-curated recommendations, and provides sellers more tools to demonstrate the functionality of their apps. The emphasis on improving editorial recommendation is especially noteworthy: "Apple decided that the best way to ensure that valuable apps are getting discovered is through editorial content—shifting even more weight away from algorithm-based suggestions and moving toward human-curated lists of featured apps that Apple wants to push" (https://www.storemaven.com/ios-11-app-store-updates-and-its-impact-on-app-discovery/). For more discussion on the highlight of editorial recommendation in the App Store, see https://techcrunch.com/2017/06/05/apple-introduces-a-completely-redesigned-app-store/, http://www.reuters.com/article/us-apple-iphone/with-new-operating-system-apple-revamps-its-money-making-app-store-idUSKCN1BU0OO, and https://www.emarketer.com/Article/Featured-Status-Drives-App-Store-Downloads/1016674.

Given the size and significance of
platform markets, it is important to examine whether platform investments in discovery mechanisms are
indeed effective in aiding consumer search and improving product discovery, and which products benefit
and to what extent. From a theoretical point of view, scholars have studied not only the direct
effect of platform recommendation on the featured products but also the indirect effect on consumer
behavior (e.g., Adomavicius et al. 2011, 2017; Cosley et al. 2003) and market outcome (e.g., Fleder and
Hosanagar 2009). Exploring the spillover of platform spotlight and investigating the underlying
mechanisms that drive the spillover can inform us on the externality of platform recommendations and, as
a result, help us understand more completely their overall market implications.
In a large differentiated market, promotional activities in general have two potential effects on
product sales. First, they increase consumers’ awareness of the featured products. Promotions reduce the
consumer search cost, and products featured in promotions enjoy more exposure and are more likely to
enter the consideration set of consumers (e.g., Goeree 2008; Fleder and Hosanagar 2009). Second,
depending on the specific context, promotions may also directly influence consumers’ purchase decision
by altering their evaluations of products in their consideration set (e.g., Ackerberg 2001; Clark et al. 2009;
Adomavicius et al. 2017).
Both the awareness and choice effects have potential externality on other products that are not
featured but are related to the featured products. For example, promotional activities may not only attract
consumer attention to the featured products, but also prompt consumers to explore functionally similar
products that they would not otherwise consider. Moreover, in markets of experience goods where
consumers face considerable uncertainty about product quality prior to consumption, recommendation
from a credible source should increase consumers’ expectation of product quality, which may in turn
affect the ordering of other options in consumers’ consideration set. Therefore, to develop a deeper
understanding of the spillover mechanisms of the product promotional activities, we need to separate their
effects at the awareness and choice stages.
Our research specifically focuses on platform-provided editor recommendation, which has several
salient features that make its spillover effects at both the awareness and choice stages potentially different
from those of other forms of product promotion that have been studied in the literature. First, producer- or
seller-initiated promotions, such as advertising, are often costly and can be prohibitively so for small and
independent producers and sellers. Presumably, they are also perceived differently by consumers (Benlian
et al. 2012) since seller self-interest should play a bigger role in advertising than in platform-provided
recommendation. Moreover, whereas platform-provided editor recommendation typically focuses on the
featured product itself, there is much more heterogeneity in terms of the information provided in
advertisements, which can range from directing consumers to other products of the same seller to
comparing the advertised product to its rivals. Thus, the market effects of the two forms of promotion are
expected to be different. Second, platform-initiated, sales- or ratings-based promotional or
recommendation approaches such as best sellers list and collaborative filters require products to already
have a sizable user base. For example, a commonly-used collaborative filtering approach is to utilize
information about a product’s existing customers by correlating their purchase histories with those of
others who have not purchased the focal product yet. For such systems to work effectively, the existing
customer base needs to exceed a certain threshold, which can be difficult for new products to achieve
organically, hence the term “cold start problem.” As such, editor recommendation, curated by “experts,”
is less dependent on customer purchase data, and is used by many platforms to complement collaborative
filters to promote new products or underexplored niche products,[2] for which consumers face a greater
uncertainty in evaluating quality.

[2] In the mobile app context, for example see https://www.storemaven.com/ios-11-app-store-updates-and-its-impact-on-app-discovery/.

Another difference is that while collaborative filters leverage purchase
choices made by other consumers and employ personalization, editor-curated recommendation can be
seen by all platform users, and is provided by platform-recognized “experts,” who are supposed to be
more informed and experienced in the product domain and less biased in evaluating the quality of
products, and have test-used the products for an extensive period of time. Therefore, editor-curated
recommendation might serve as a stronger endorsement of quality than collaborative filtering based
recommendation for less-known products (Senecal and Nantel 2014).
The fact that editorial recommendation is curated by humans and may potentially aid
consumers' product choice decisions also makes it important to compare editorial recommendation with
expert review (also called professional review in the literature), which shares the same two characteristics.
A key distinction between editorial recommendation and expert review is that at the conceptual level,
expert reviews are better categorized as pull-type information cues while recommendations as push-type
information cues (Xu et al. 2009). The consumption of expert reviews generally requires consumers’
voluntary search, and consumers typically search for and read expert reviews after they become aware of
the product and when they need more information to resolve the uncertainty about the product. This
implies expert reviews would mainly affect consumers at the product choice stage but much less so at the
awareness stage. In contrast, platform recommendations are by nature highlighted at the platform level
and often “pushed” to consumers, so their influence on consumer decision-making is likely to take place
at both the awareness and choice stages. Second, the direction and extent of the influence on consumer
choice can be very different between platform recommendations and expert reviews. While platform
recommendation can simply be a curated list of products without any detailed product information (which
is the case in our empirical context), the essence of expert reviews is in their informational content---their
effect crucially depends on the informativeness and sentiment of the review.
The above discussions on the differences between editorial recommendation and other
promotional activities suggest that their market implications and specifically spillover effects can be quite
distinct. This paper empirically investigates the spillover effect of editor-curated recommendation in the
context of the mobile app market. The mobile app market has grown rapidly during the last decade, and
the total size of the “app economy” is projected to reach 101 billion dollars by 2020. While the market
becomes more competitive, the distribution of app downloads is highly skewed---one report suggests over
90 percent of the apps are downloaded less than 500 times per day.[3] As such, app discovery channels
including editorial recommendation are recognized to be crucial for both platform owners and app
developers.[4] Hence, studying the effects of editorial recommendation will help platforms understand
which apps benefit from the spotlight, how they benefit, and to what extent. Moreover, multi-
homing (an app is released on multiple app markets) and freemium pricing (versus paid pricing model)
are two interesting business models commonly adopted by app developers. Both phenomena provide a
unique angle for examining the nuances of the spillover of platform spotlight.[5]
To conduct the analysis, we construct a novel dataset on platform recommendation and product
sales by merging both platform-provided and hand-collected data of the two major mobile app
distribution platforms, Apple’s iOS App Store and Google’s Play Store. The specific editor
recommendation channel we choose to focus on is the “New Apps We Love” column on the iOS
platform, where Apple’s store editors select newly released or recently updated apps to feature at a
prominent place on its home page (see Figure A2 in Appendix B).[6] Using our dataset, we first
establish the fact that there is a positive impact of editor recommendation on the sales of the featured
apps. We then study the spillover effect of recommendations on three distinct groups of related apps: (1)
iOS apps by the same developer, (2) iOS apps with similar functionality, and (3) the same app’s Android
version marketed on the Google Play platform. We choose to focus on these three groups of related apps for two reasons.
[3] The statistics on app market growth can be found at http://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/, and the statistics on the skewness of the download distribution can be found at http://www.gartner.com/newsroom/id/2648515.
[4] Also refer to footnote 1 for related industry reports.
[5] We would like to thank an anonymous reviewer for the suggestion.
[6] Store-provided editor recommendation is a very important app discovery channel on the platform. One survey conducted by a mobile data service company shows roughly 9 percent of iPhone users discover the last app they downloaded from the list of apps featured on Apple's featured screen. See https://techcrunch.com/2014/10/03/roughly-half-of-users-are-finding-apps-via-app-store-search-says-study/. Another report on app discovery channels for users in the United States suggests that 13% of users find apps based on featured apps or the top charts in the app store. See https://www.statista.com/statistics/607170/smartphone-app-discovery-channels-usa/.
First, as will be elaborated in Section 3, our conceptual framework posits the likely paths
along which the spillover might take place for these groups of related apps. Second, there is evidence that
clearly shows the theoretical paths of spillover that we postulate indeed exist for the three groups of
related apps in our empirical context (see more discussion in Appendix B). In our analysis, we first use
reduced-form regressions to examine the overall spillover effect on each of the three groups of related
apps. The reduced-form regressions provide evidence on the existence and size of the spillover effects,
but they do not inform us on how the effects play out across the awareness-choice stages. To delve into
the underlying mechanism of spillover, we then propose and discuss a two-stage framework of the
product discovery and purchase process, and then use the framework to motivate a bivariate probit model
with partial observability (Poirier 1980; Shi et al. 2014) that allows us to separate the spillover effects at
the awareness and choice stages.
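As background for this approach, the general partial-observability setup of Poirier (1980) can be summarized as follows (this is our notation, added for clarity, not the paper's exact specification): a product's entry into the top chart requires that consumers both become aware of it and choose it, but only the joint outcome is observed, so with jointly normal errors the observed outcome probability is

$$ \Pr(y_{it}=1) = \Pr(a_{it}=1,\; c_{it}=1) = \Phi_2\big(x_{1,it}'\beta_1,\; x_{2,it}'\beta_2,\; \rho\big), $$

where $a_{it}$ and $c_{it}$ are the latent awareness and choice indicators, $\Phi_2$ is the bivariate standard normal CDF, and $\rho$ is the correlation between the two error terms.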
We find that the spillover effects at both the awareness and choice stages can drive the externality
of editor recommendation, but their strength and relative importance vary depending on the specific
relationship between the featured and non-featured products. Furthermore, our heterogeneity analysis
suggests that the strength of spillover effect also depends on the salient characteristics of the featured
products including price and user rating.
Our work complements and extends the extant literature in exploring the spillover mechanisms of
promotional activities. While the informational role of promotion has been recognized in prior studies, to
our knowledge no extant research has simultaneously examined the awareness and endorsement spillover
effects of product promotion activities. Moreover, our work systematically analyzes three distinct groups
of related products and shows how the externality is different across the three groups by disentangling the
spillover mechanisms in an empirical model. Our results regarding the heterogeneity in the spillover
mechanism also bring useful insights to similar contexts where the freemium pricing model and producer
multi-homing phenomenon are popular.
2. Related Literature
Our study is related to several strands of literature on e-commerce and platform economics. The
phenomenon of demand spillover has been studied in many different contexts, for example in brand
commitment (Ahluwalia et al. 2001), information policy (Gal-Or and Ghose 2005), sponsored keyword
strategy (Lu and Yang 2017), and advertisements (Garthwaite 2014; Lewis and Nguyen 2015). The
stream of research that is closest to this paper studies the spillover effect of product promotion activities
such as advertising and recommender systems. The spillover effect of promotions on products sold by the
same producer or similar products has been examined in several empirical studies. For example,
Garthwaite (2014) found that advertisements have a positive effect on the sales of other books written by
the endorsed authors. Lewis and Nguyen (2015) found that search-engine advertising increases
cumulative searches for competitors of the advertised product, which indicates a strong positive
spillover effect on similar products. Using Amazon’s co-purchase network data, Carmi et al. (2012) found
that an exogenous demand shock brought by an endorsement in the Oprah Winfrey show had a
significantly positive effect on the sales of other books, most of which were either books written by the
same author or books on a similar topic. Also using Amazon data, Oestreicher-Singer and Sundararajan
(2012) and Jabr and Zheng (2013) studied the effects of collaborative filters on similar products in a co-
purchase network. In addition, there is evidence for cross-platform spillover in the contexts of sales
ranking (Ghose and Han 2014) and product reviews (Chevalier and Mayzlin 2006).
However, the existence and extent of spillover of editor-curated recommendation do not directly
follow from the prior studies because, as discussed in the introduction, editor-curated recommendation
differs from advertising and collaborative filtering-based recommender systems in several important
aspects, including its source, method, the way it is delivered to consumers, and perception by consumers.
Moreover, we further the investigation on the spillover effect by examining the mechanisms that drive the
phenomenon. To our knowledge, this paper is the first study to provide a comprehensive understanding
on the mechanisms of spillover at both the awareness and choice stages, and empirically document how
they differ for different types of related products. Table A1 of Appendix A provides a summary on the
differences between this research and the related previous literature in terms of the type of product
promotion studied, the mechanism of spillover examined, and the group of related products considered.
As discussed in the introduction, editor-curated recommendations are also related to professional
reviews (although reviews are generally from independent experts as opposed to the case of curated
recommendations provided by platforms). The previous literature has found that expert reviews tend to
have a positive impact on the demand for experience goods (Reinstein and Snyder 2005) and may serve
as contingent moderators in the impact of general user reviews (Chakravarty et al. 2010). In the IS
literature, Zhou and Duan (2016) found that expert reviews could both directly and indirectly influence
software download wherein user-generated WOM serves as a mediator. This stream of research has
shown the important role of expert reviews in reducing consumer uncertainty about experience goods,
which is consistent with our discussion in the introduction that expert reviews primarily aid consumer
decision-making at the product choice stage. In the context of this paper, we examine the effects of
editorial recommendation on both product awareness and choice. Moreover, whereas the studies of expert
reviews have emphasized measuring the size and examining the mediators of the direct effect of
expert reviews, this paper investigates the externality of editorial recommendation: the existence of
spillover and its underlying mechanisms.
A key novelty of our research is in separating out the spillover effects at the awareness and choice
stages, and the two-stage model we use resembles those in several prior papers that study the role of
information in consumer demand (among others, Roberts and Lattin 1991; Wu et al. 2005; Goeree 2008;
Fleder and Hosanagar 2009; Hendricks and Sorensen 2009; Masatlioglu et al. 2012; Sahni 2016). Given
consumers’ limited cognitive resource for product searching and evaluation, consumer behavior research
has long postulated the importance of awareness (or closely related, the role of consideration set) in
determining product choice (Hauser and Wernerfelt 1990; Kardes et al. 1993; Bettman et al. 1998). In
fact, Roberts and Lattin (1991) find that a two-stage model of consideration and choice performs better
than the one-stage choice model. In simulating the market effect of recommender systems, Fleder and
Hosanagar (2009) used a two-stage consumer decision model where the first stage captures product
awareness and the second stage is choice based on preferences. By including recommender systems into
the simulation model, they found that most popular recommenders tend to reduce product diversity,
though it is possible to overcome the problem by avoiding recommendations based mainly on popularity
or reviews. Sahni (2016) used randomized field experiments to study the spillover effect of online
advertisements on non-advertised restaurants of the same category. His theoretical model similarly
distinguished between awareness and choice, and showed the two competing effects of advertisements at
the awareness stage. On the one hand, advertising has a positive direct effect by increasing the probability
of consumers considering advertised products. On the other hand, it also leads to a negative effect on the
advertised products by reminding consumers of similar products that are not advertised. Our paper also
employs the two-stage awareness-choice model, but whereas in the prior studies the effect of the variable
of interest, for example, advertising, is assumed to be only on awareness, in our context editor
recommendation can influence choice as well as awareness because it may be perceived as a quality
endorsement from the platform. Additionally, we use the two-stage model directly in the empirical
analysis to disentangle the effects of editor recommendation on the two probabilities.
Our paper also joins the literature in showing the importance of search cost in shaping market
outcome on large e-commerce platforms. Consumer search plays a critical role in crowded marketplaces
and search cost has been recognized to be instrumental in determining product variety, market share, price
dispersion, and firm strategy (e.g. Bakos 1997; Brynjolfsson et al. 2011; Overby and Forman 2014). High
search cost is a prevalent issue in many electronic commerce settings (e.g., Smith et al. 2001; Chen et al.
2004; Ghose et al. 2012), and it has been found that recommendation is an effective decision aid tool for
consumers (Häubl and Trifts 2000). Recommendation facilitates consumers’ search effort, reduces search
cost, and may enlarge the consideration set and enhance the quality of consumer decision-
making. To our knowledge, no existing research has examined the impacts of platform recommendation
on consumer search and market outcome in the mobile app market. The previous literature on the mobile
app market has studied the demand effect of version updates (Lee and Raghu 2014), information cues
(Lee and Raghu 2016), mergers and acquisitions (Li and Agarwal 2016), and app characteristics (Ghose
and Han 2014). Our paper contributes to the literature by showing the externality of platform
recommendation. At the product level, we show how the reduction of search cost introduced by the
platform spotlight drives discovery of a broad set of related apps and increases their sales. Moreover, our
study also relates to the strand of literature on free-sampling which mainly focuses on the spillover from
consumption of the free sample or the free version to the demand of the paid version (e.g., Bawa and
Shoemaker 2004; Liu et al. 2014; Lee and Tan 2013; Deng et al. 2018; Bond et al. forthcoming). Our
analysis complements the previous research by showing the spillover effect to related products is stronger
when the recommended products use the paid model rather than the free model. In addition, our paper is
also complementary to the literature on the cross-platform effect of reviews or user activity (Idu et al.
2011; Koh and Fichman 2014; Ghose and Han 2014) by examining the magnitude and mechanism of the
spillover effect of editorial recommendation across platforms.
3. Potential Spillover Mechanisms of Platform-Provided Editor Recommendation
3.1. A Two-Stage Process: Awareness and Choice
In a large differentiated market, consumers typically consider only a small subset of the products when
making their purchase decisions. The existence of a positive search cost prohibits consumers from
searching the whole market (Kim et al. 2010). Even if consumers were aware of all the products, the large
number of alternatives would render a complete evaluation and comparison impractical due to the huge
cognitive burden (Hauser and Wernerfelt 1990). Given the combination of search cost and consumers’
limited cognitive resource for product evaluation, the literature on consumer demand in both the offline
(Hauser and Wernerfelt 1990; Kardes et al. 1993; Bettman et al. 1998) and online settings (Wu and
Rangaswamy 2003; Wu et al. 2005; Fleder and Hosanagar 2009; Sahni 2016) has suggested that
consumers’ purchase decision can be modeled as a two-stage process. In the two-stage decision process, a
consumer becomes aware of a subset of available products (or a subset of products that a consumer is
aware of becomes salient), thereby forming a consideration set in the first stage, and in the second stage
the consumer decides which product(s) in the consideration set to purchase by ordering the options
according to her preference. Following the previous literature (Fleder and Hosanagar 2009; Sahni 2016),
we term the first stage the awareness stage (in the marketing literature, this stage is also often named "consideration") and the second the choice stage, and illustrate the process in
Figure 1. The probability that a consumer buys a particular product is then the probability that she is
aware of the product multiplied by the probability that she purchases the product conditional on
awareness.
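Stated in notation (ours, added for clarity rather than taken verbatim from the paper), the two-stage structure implies

$$ \Pr(\text{purchase}) = \Pr(\text{aware}) \times \Pr(\text{purchase} \mid \text{aware}), $$

so any factor that shifts either the awareness probability or the conditional choice probability shifts the overall purchase probability.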
The two probabilities are functions of many factors including product characteristics, marketing
efforts, information technology, word of mouth, and social learning. Many of these factors have been
studied in the literature and the focus in this research is on platform-provided editor recommendation. In
an experience goods market, platform-provided editor recommendation can influence the purchase
decision through both awareness and choice probabilities. First, editor recommendation reduces the
search cost for the featured products and increases the probability for an average consumer to be aware of
the featured products. Editor recommendation also promotes the featured products to a more prominent
position on the platform, thus making these products more salient to an average consumer. Second, when
consumers are uncertain about product quality in the choice stage, they leverage available information to
form expectations about product quality and make the purchase decision according to the expectations. As
argued in the introduction, platform-provided editor recommendation may be perceived as a strong
endorsement from a credible source. As such, everything else equal, the recommendation might increase a
consumer’s expectation of the recommended product’s quality. Editor recommendation can therefore
directly change the ordering of product options in the consideration set, and contribute to an increase in
the purchase probability.
3.2. Spillover of Exposure and Spillover of Quality Endorsement
Both the awareness effect and choice effect of editor recommendation can potentially spill over to other
products that are not featured but are related to the featured products. However, the underlying spillover
mechanisms may be different depending on the specific relationship between the featured and non-
featured products. Our discussion here focuses on three distinct product groups: products by the same
producer, products with similar functionality, and the same product on a different platform.
The awareness effect has potential externality on the three groups of products, creating an
exposure spillover. First, platform-provided editor recommendation causes a featured product to have a
larger customer base, and possibly a subset of the new customers will become familiar with the producer
and willing to consider the other products in the producer’s catalog. Then the other offerings by the
producer will become salient to a group of potential customers who would not consider these products
otherwise (Garthwaite 2014; Carmi et al. 2012). This is thus spillover of exposure to products in the same
producer’s catalog. Second, platform-provided editor recommendation might also prompt consumers to
explore other products that are functionally similar. For example, a consumer who has tried the featured
product may find that she needs the features provided in the product but does not like the specific
implementation or design of the featured product. The desire for its features and dissatisfaction with its
implementation would probably prompt the customer to spend more effort in searching for functionally
similar products (Carmi et al. 2012; Oestreicher-Singer and Sundararajan 2012; Jabr and Zheng 2013;
Lewis and Nguyen 2015). Third, editor recommendation may also have a spillover effect on the
awareness of the same products sold on a different platform. There are several mechanisms that
potentially drive the cross-platform spillover. For instance, there is a direct effect through the (probably
small portion of) consumers who are multi-homing on different platforms (for our empirical context, that
means users who use mobile devices of different operating systems). Additionally, the new users of the
featured product on the focal platform may initiate word of mouth that spills over to consumers on a
different platform. For example, opinion leaders on social media and reporters on mass media may share
information on those trending products, which may encourage customers from another platform to try out
the same products. For products that exhibit a network effect, the increased user base on the focal
platform can also make users of a different platform more willing to check out the product. Therefore,
mechanisms such as word of mouth, network effect, and consumer multi-homing may drive the spillover
of exposure across platforms.
Figure 2. The Spillover Effects in the Two-stage Decision Process
Editor recommendation on the platform can also directly influence the choice probability for the
three groups of related products. We call the effect the spillover of quality endorsement. First, since editor
recommendation is a positive signal of product quality, consumers might infer a higher capability for its
producer and even extrapolate the quality endorsement to other products by the same producer. Hence we
expect the quality endorsement of editor recommendation to have a positive spillover effect on the other
products by the same producer. Second, editor recommendation should have a negative spillover effect on
the choice probability of other similar products, because platform editors have chosen to recommend the
featured product over them. As such, products with similar functionality can experience a negative effect
on their relative position in a consumer's consideration set. Third, for a product
sold on more than one platform, being recommended by one platform may also increase consumers’
expectation of its quality on the other platform through mechanisms similar to those discussed for
products by the same developer. However, the size of spillover may exhibit great variation if there are
significant differences in the competitive environment.
Taken together, for other products sold by the same producer and the same product on a different
platform, we expect there exists positive spillover on both the first-stage awareness and the second-stage
purchase choice, while the relative importance of these two mechanisms (awareness and choice) may
vary. For similar products sold by other producers, platform-provided editor recommendation may have a
positive spillover effect on the first-stage awareness but a negative effect on the second-stage purchase
choice. Next, we test the existence of spillover and investigate the mechanisms for these related products
empirically in the context of the mobile app market.
4. Data
We collect our data from the two major app distribution platforms - Apple’s iOS App Store and Google’s
Play Store (hereafter, the iOS Store and Play Store respectively). The main dataset is collected from the
iOS Store and it consists of three parts. The first part contains apps that were recommended by the store’s
editors in the “New Apps We Love” column between February 1, 2016, and October 31, 2016. We call
these apps the featured apps. Two important points need to be made about the “New Apps We Love”
column. First, the column is the major channel of new app recommendation at the store level and apps
from all different categories can be selected to be featured in this column. Second, while the exact
procedure and criteria for choosing featured apps are not public information, side evidence suggests it is
not (or at least not purely) based on popularity. Apple’s official guideline for developers states that in
selecting recommended apps, the store editors consider factors including design, accessibility,
innovativeness and uniqueness.
Downloads are not listed among the factors.[8] Moreover, we have surveyed
industry reports discussing app recommendation in the iOS Store, and the consensus is that (1) the impact
of platform featuring is real and significant, and (2) it is virtually impossible to predict what apps will be
featured and when.[9] Thus, being selected for recommendation is not merely a relabeling of popularity.
Nonetheless, this evidence is not conclusive, and there are still challenges to identification. We will
address the related concerns after presenting the first set of results.
[8] In fact, no metric related to performance is listed. See the guideline at https://developer.apple.com/app-store/discoverability/.
[9] For example, see https://blog.apptopia.com/new-app-stores-app-of-the-day-gets-an-average-download-boost-of-1747, http://www.businessinsider.com/apple-app-store-2015-2 and http://www.oneskyapp.com/blog/app-store-feature/.
Since the iOS Store does not provide an API endpoint for retrieving the list of featured apps
programmatically, we first took screenshots of the “New Apps We Love” webpage every day and
manually identified the featured apps from the images. To rule out human error in recording the featured
apps, we checked each identified app on App Annie, a mobile app market intelligence website, to make
sure the manually identified featured app matches the archived information provided there.[10] We found
that all the manually recorded featured apps were validated by the App Annie data. The "New Apps We
Love” column typically features an app for a number of days. For each featured app, the recommendation
window is the time period from the first day to the last day when the app was featured. During the data-
collection period, the recommendation window ranges from 3 days to 17 days with a mean equal to about
10 days.[11] In total, we identified 416 distinct featured apps, and none of them was featured for more than
one episode. For these apps, we provide descriptive statistics of selected app attributes and the
distribution over app categories in Appendix D.
The second part of the iOS dataset comes from the listing information provided by the Store APIs
and a proprietary data archive that contains information of iOS apps as of January 2016. First, for each
featured app, we identified the other apps by the same developer and apps of similar functionality. Apps
by the same developer were identified by the unique developer ID. In total, we found 1,789 related apps
by the same developers. Apps of similar functionality were selected by analyzing the textual data of app
descriptions in our data archive. The app description (see an example in Appendix C) is a piece of text
provided in each app’s listing information that summarizes the app’s functionality---apps of similar
functionality tend to have similar descriptions. Thus, for each featured app, we measured the similarity
between the featured app’s description and all other apps’ descriptions in our data archive. In particular,
we used the term frequency-inverse document frequency (tf-idf) similarity measure[12] to retrieve the ten
most similar apps, and then excluded those that are not in the same category as the focal featured app.
Overall, we found 1,223 related apps with similar functionalities.

[10] For example, the link https://www.appannie.com/apps/ios/app/anchor-lets-talk/features/#device=iphone shows the information on store featuring for the app named "Anchor - Radio by the people."
[11] We have excluded featured apps whose recommendation window started before Feb 1, 2016 or ended after Oct 31, 2016 to ensure our data contains the complete recommendation window.
[12] The underlying assumption of our tf-idf based approach is that an app's functions are reflected in the keywords of the app description. The tf-idf method identifies the important keywords in a document through two steps: the term-frequency (tf) step puts high weights on words that appear frequently in the focal document, and the inverse-document-frequency (idf) step scales down the weights on words that are common across all documents. The tf-idf method translates each document (in our case, the app description) into a vector that represents the importance of keywords (in our case, words that highlight the key functions of the app) in the document. The similarity of two documents can then be measured by the cosine similarity between the two documents' representing vectors. The tf-idf measure is a widely used document similarity measure in the area of information retrieval. For a comprehensive treatment of the method, please refer to Manning et al. (2008).

Next, we combined the featured apps
and these two groups of related apps. For each app in the combined set, we collected both the static listing
attributes, such as title, developer, category and recommended age group, and the dynamic listing events
including version update and price change (for paid apps). The third part of the iOS dataset is the store’s
“Top Free” and “Top Paid” charts, which we tracked daily since January 1, 2016. For each day, we
observe the 100 top-ranked free apps and 100 top-ranked paid apps in the whole store as well as in each
of the 24 categories. Note that the iOS Store has country-specific versions and our programs and manual
data collection tasks were all implemented in the U.S. Also note that in this main dataset we do not
directly observe the apps’ sales in terms of download volume. As in many prior studies (Lee and Raghu
2014; Garg and Telang 2013; Carare 2012; Wen and Zhu 2017), we use entering the top charts to proxy
the sales performance.
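To make the description-matching step described above concrete, the following is a minimal sketch of tf-idf and cosine-similarity retrieval using scikit-learn; the variable names and sample descriptions are illustrative assumptions, not the authors' actual code or data.

```python
# Minimal sketch (not the authors' code): retrieve the ten most similar apps
# for one featured app based on tf-idf cosine similarity of app descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: descriptions[0] is the featured app's description,
# descriptions[1:] are candidate apps' descriptions from the data archive.
descriptions = [
    "Record and share short audio clips with friends",   # featured app
    "A podcast player with offline listening",
    "Voice memo recorder with cloud sync",
    # ... all other apps in the archive
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(descriptions)            # one row per app

# Cosine similarity between the featured app and every other app.
sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

# Indices (into descriptions[1:]) of the ten most similar candidates.
top10 = sims.argsort()[::-1][:10]
print(top10, sims[top10])
```

As in the paper, the retrieved candidates would then be restricted to apps in the same category as the focal featured app.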
The main dataset is complemented by information collected from the Play Store. For each
featured app identified in the iOS “New Apps We Love” column, we searched whether it had an Android
version in the Play Store. We developed a computer program for the search and match task, and then had
a research assistant manually validate the results. Only a subset of the featured apps was multi-homing:
we identified the Android version for 131 of the featured apps. We also collected these Play Store apps'
static attributes, dynamic change events, and sales performance (from Play Store top-100 charts) data.
The collected data is then used to construct four samples for empirical analysis, one for the
featured apps themselves and one for each of the three groups of related apps: apps by the same
developer, apps with similar functionality, and the same app marketed on a different platform. In the first
sample, for each featured app, we compute daily observations for the period from 30 days prior to the
recommendation window to 30 days after the recommendation window. On each day, we observe whether
the app was listed on the top 100 chart of its category, whether it was featured as well as its version age
and price. Figure 3 plots the daily proportion of featured apps that entered top 100 during the observation
window. The plot shows the proportion of top-100 apps increased significantly after recommendation
started.[13]
For the three samples on the related apps, the data structure is the same. Take apps by the same
developer for example. Suppose $j$ is a featured app and $i$ is a different app by the same developer of $j$. For
examining the spillover to $i$ when $j$ is featured, we compute daily observations for the period from 30
days prior to $j$'s recommendation window to 30 days after $j$'s recommendation window. The daily
observations include whether $i$ was on the top 100 chart, whether $j$ was featured, and $i$'s price and version
age. For the descriptive statistics of these four data samples, please refer to Appendix D.
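As a concrete illustration of this data structure, the snippet below builds the kind of daily panel described above for one featured/related app pair; the column names, window dates, and file handling are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal sketch (illustrative, not the authors' code): build the +/- 30-day
# daily panel around featured app j's recommendation window for related app i.
import pandas as pd

# Hypothetical recommendation window for featured app j.
rec_start, rec_end = pd.Timestamp("2016-05-10"), pd.Timestamp("2016-05-19")
days = pd.date_range(rec_start - pd.Timedelta(days=30),
                     rec_end + pd.Timedelta(days=30), freq="D")

panel = pd.DataFrame({"date": days})
panel["featured_jt"] = panel["date"].between(rec_start, rec_end).astype(int)
# top_it, price_it, and version age for related app i would then be merged in
# from the daily top-100 charts and the listing-event data.
```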
In Table 1 we list the definitions of the time-variant variables in our samples. We do not report on
the time-invariant variables since we control for app fixed effects in the empirical analysis so the time-
invariant variables (such as category and the paid dummy) won't be included.[14] The dependent variable
we will model in our main analysis is $top_{it}$, the dummy variable indicating whether app $i$ is in top 100 on
day $t$, rather than app $i$'s actual rank on the top chart. This is because the ranking information is only
available for the top-100 apps, so if an app did not make it into the top 100 chart on a particular day, its rank
on that day would be missing in our dataset. Given that for each app we would need both a sufficient
number of observations to estimate a fixed effect and also consecutive observations to allow for state
dependence (which we explain in the next section), using top-100 ranking as the dependent variable
would lead to severe attrition of observations. Instead, modeling the binary outcome allows us to use as
many daily observations as possible so we can control for unobserved app heterogeneity without making
restrictive assumptions. The drawback is that we will only have estimates of the extent to which the
potential spillover effect will change the likelihood of an app entering the top 100 chart, but not how
much it will change the downloads or ranking. To complement the main analysis, we managed to obtain a
dataset of app downloads for a part of the apps in our sample from a mobile app analytics company. We
report the details on the supplemental data and related analysis in Section 7.

[13] We aligned the observation windows for different featured apps, with day 0 being the first day of featuring (the last day of featuring varies for different featured apps).
[14] Though avg rating and rating count are in theory time-varying, most of the related apps in the sample do not have any ratings.
Table 1. Definitions of the Time-Variant Variables

Variable             Definition
top_it               dummy variable, =1 if app i is in top 100 of its category on day t
featured_it          dummy variable; for the featured apps, =1 if app i is featured on day t;
                     for the related apps, =1 if the featured app that i is related to is featured on day t
price_it             price of app i on day t
versionAge_it [15]   number of days since the last major update to day t

[15] We define a version update to be a major update when there is a change in the second digit of the app's version number. For example, under our definition, an update from 1.8.0 to 1.9.0 is a major update, and an update from version 2.0.0 to 2.0.1 is not a major update.
Figure 3. Proportion of Featured Apps that Entered Top 100 during the Observation Window
5. Existence of Spillover Effects
This section presents our estimation of the spillover effects. We first document that editor
recommendation is positively associated with the likelihood of entering the top 100 chart for the featured
apps. We then turn to examining the spillover to related apps. Lastly we discuss the concerns related to
potential identification issues.
5.1. Editor Recommendation and the Sales of Featured Apps
We first study the impact of recommendation from “New Apps We Love” on the featured apps
themselves. To estimate the effect, we use the following reduced-form specification:
$$ top_{it} = \mathbf{1}\{\alpha_i + \rho\, top_{i,t-1} + \beta\, featured_{it} + X_{it}\gamma + \varepsilon_{it} > 0\} \qquad (1) $$

where $\mathbf{1}\{\cdot\}$ on the right-hand side is an indicator function that takes 1 if the condition in the argument
holds and 0 otherwise. Therefore, it is a binary outcome regression model, and we will estimate the linear
probability, probit, and logit versions of it and present the results. Several points are noteworthy regarding
the specification. First, there is a high degree of serial persistence in an app's sales performance, as
indicated by entering the sales chart of top 100 ($top_{it}$), and our specification allows two distinct sources
of this serial persistence. The first source is the unobserved app characteristics that affect the choice
probability. This is thus app-specific heterogeneity that is constant over time and is captured by $\alpha_i$. The
second source is called "state dependence" in Heckman (1978) and, consistent with previous literature,
we model it by including $top_{i,t-1}$ in the specification (Liu 2006; Zhou and Duan 2016). The rationale for
using the lagged variable in our context is that the top ranking chart is used by some consumers as an app
discovery channel,[16] so being listed on the top chart in the previous period enhances the app's exposure to
consumers, thereby increasing its awareness probability in the population. With $\alpha_i$ and $top_{i,t-1}$ included,
our specification is a binary outcome model with both heterogeneity and state dependence. Second, $X_{it}$
includes observed app characteristics including $price_{it}$ and $versionAge_{it}$. The time-invariant app
characteristics such as category and developer are not included due to the way we control for $\alpha_i$. By the
same logic, an app will be excluded from estimation if $top_{it} = 0$ in all periods or $top_{it} = 1$ in all periods.
Lastly, unlike the typical panel-data setting where the number of observations per individual is small, our
dataset contains a large number of time periods for each app $i$ (30 days before the recommendation
window to 30 days after it), so we can estimate the specification by taking $\alpha_i$ as a fixed effect and using
the joint maximum likelihood method (Wooldridge 2002).
[16] For example, see https://venturebeat.com/2017/09/24/your-chances-of-making-a-successful-mobile-app-are-almost-nil/ and https://marco.org/2013/06/17/app-store-top-lists.
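As a rough illustration of this estimation strategy, the sketch below fits the logit version of specification (1) by joint maximum likelihood, treating the app fixed effects as dummy variables; the file name and column names are assumptions for illustration, and the authors' actual implementation may differ.

```python
# Minimal sketch (not the authors' code): joint-MLE estimation of the logit
# version of specification (1) with app dummies as fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily panel: one row per (app_id, day), already restricted to
# apps whose top indicator varies over the observation window.
df = pd.read_csv("featured_app_panel.csv")          # assumed file
df = df.sort_values(["app_id", "day"])
df["top_lag"] = df.groupby("app_id")["top"].shift(1)
df = df.dropna(subset=["top_lag"])

model = smf.logit("top ~ top_lag + featured + version_age + price + C(app_id)",
                  data=df).fit(disp=False)
print(model.params[["top_lag", "featured", "version_age", "price"]])
```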
We report the estimation results under linear probability, probit, and logit assumptions in the
three columns of Table 2 respectively. As the table shows, the effect of editor recommendation is
consistently estimated to be significantly positive across the three columns. The average estimated partial
effect of editor recommendation of “New Apps We Love” on the likelihood of entering top 100 is 23.2%
using linear probability, 18.0% using probit, and 18.5% using logit. We also find strong evidence for state
dependence, as indicated by the significantly positive coefficient of the lagged dependent variable.
Consistent with intuition, app sales are negatively associated with version age. We do not find a
statistically significant effect for $price_{it}$, probably due to the fact that there is very little variation during
the observation period. Before we move on, we emphasize that we do not interpret the positive
relationship between being featured and the sales performance of featured apps as conclusive evidence
of a causal effect. Rather, we show the observed positive relationship between platform-provided editor
recommendation and sales as the premise for the potential spillover effect, which is our main research
question.
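For reference, the average partial effects reported above correspond to the standard calculation for a binary regressor; in the probit case (our notation, stated here for clarity rather than quoted from the paper) it is

$$ \widehat{APE} = \frac{1}{N_{obs}} \sum_{i,t} \left[ \Phi\big(\hat\alpha_i + \hat\rho\, top_{i,t-1} + \hat\beta + X_{it}\hat\gamma\big) - \Phi\big(\hat\alpha_i + \hat\rho\, top_{i,t-1} + X_{it}\hat\gamma\big) \right], $$

where $\Phi(\cdot)$ is the standard normal CDF, $N_{obs}$ is the number of app-day observations in the estimation sample, and $\hat\beta$ is the estimated coefficient on $featured_{it}$.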
Table 2. Regression Results, Model (1): Editor Recommendation and the Sales of Featured Apps

                             Featured Apps (DV: top_it)
                             Model (1)              Model (1)        Model (1)
                             Linear Probability     Probit           Logit
top_{i,t-1}                  0.602***               1.890***         3.321***
                             (0.009)                (0.036)          (0.069)
featured_it                  0.232***               2.021***         3.982***
                             (0.008)                (0.061)          (0.129)
versionAge_it                -0.018***              -0.183***        -0.335***
                             (0.002)                (0.017)          (0.033)
price_it                     -0.000                 -0.018           -0.0271
                             (0.007)                (0.043)          (0.089)
app-specific fixed effects   yes                    yes              yes
Obs                          20,335                 20,335           20,335

* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. Apps with no variation in the dependent variable top_it are automatically dropped; therefore, the sample size here is smaller than the original sample. c. Standard errors are clustered at the category-date level.
5.2. Spillover Effects of Editor Recommendation on Related Apps
We use a similar reduced-form specification to test the existence of spillover and estimate the overall
spillover effect on each of the three groups of related apps: apps by the same developer, apps with similar
functionality, and the same app on a different platform. We rewrite the regression specification here:
$$ top_{it} = \mathbf{1}\{\alpha_i + \rho\, top_{i,t-1} + \beta\, featured_{jt} + X_{it}\gamma + \varepsilon_{it} > 0\} \qquad (2) $$

where we use $i$ to denote a related app and $j$ the featured app that app $i$ is related to, and $featured_{jt}$
indicates whether $j$ is recommended at time $t$. Therefore, a nonzero $\beta$ will indicate the existence of
spillover from the recommendation of app $j$ to the sales of app $i$. Again, $\alpha_i$ and $\rho$ capture individual
heterogeneity and state dependence, respectively.
We estimate the specification under the probit assumption for each of the three groups of related
apps and report the results in Table 3.[17] As shown in the three columns, we find that the editor
recommendation of "New Apps We Love" has an overall positive spillover effect ($\beta > 0$) on all three
product groups. Moreover, $\beta$ is statistically significant at the 1% level for all three groups of related apps.
When conditioning on the related app entering the top chart at least once during the observation window,
the partial effect of the featured app's recommendation on the probability of the related app entering the top
100 list is estimated to be 5.6% for an app by the same developer, 2.5% for a functionally similar app, and
4.8% for the same app sold in the Play Store. The finding that the positive spillover effect is larger for
apps by the same developer than for apps of similar functionality is consistent with our expectation. Note
that $price_{it}$ is not included in the Play app group because very few price change events happened in the
observation window, so $price_{it}$ is highly correlated with the fixed effects. The above effects are the
average effects across pairs of featured and related apps. We also find that the magnitude of the spillover
effect is weaker among apps in well-established developers' portfolios or when the similar apps had
previously entered the top charts. These heterogeneity analyses are reported in Appendices E1 and E2.
[17] The results of the logit and linear probability specifications are qualitatively similar. We choose to present the probit results for consistency with the later analyses on the spillover mechanism.
Table 3. Probit Regression Results, Model (2): The Spillover Effect of Editor Recommendation on the Sales of Related Apps

                             Apps by the same        Similar apps         Same app on the
                             developers              (DV: top_it,         other platform
                             (DV: top_it, probit)    probit)              (DV: top_it, probit)
                             Model (2)               Model (2)            Model (2)
top_{i,t-1}                  1.179***                1.013***             2.915***
                             (0.031)                 (0.053)              (0.099)
featured_jt                  0.302***                0.146***             0.522***
                             (0.034)                 (0.055)              (0.123)
versionAge_it                -0.055***               -0.013               0.114***
                             (0.013)                 (0.026)              (0.043)
price_it                     -0.169***               -0.190***
                             (0.031)                 (0.048)
app-specific fixed effects   yes                     yes                  yes
Obs                          24,345                  6,761                2,545

* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. Apps with no variation in the dependent variable top_it are automatically dropped; therefore, the sample size here is smaller than the original sample. c. In the third group, because few of the Play apps changed price in the observation window, the coefficient of price_it is not identified. d. Standard errors are clustered at the category-date level.
5.3. Potential Threats to Identification and Robustness Analyses
Before delving into the mechanisms of spillover, we focus on addressing potential identification concerns
regarding our analysis on the spillover effect in the preceding subsection. Given that the App Store only
publishes a general guideline on the featured apps (cited in Section 4) and the exact criteria for selecting
the featured apps remain opaque, there is the important concern that unobserved factors can be correlated
with both the selection of the featured apps and the download performance of the related apps.
To begin with, we note that under our fixed-effects specification, all the time-invariant factors that may be
used for selecting featured apps will not pose a threat to our identification of the spillover effect. For
example, app quality is a factor considered by store editors in selecting featured apps. However,
unobserved app quality of either featured or related apps will not cause an endogeneity problem since app
quality of both featured and related apps can be considered to be constant during the observation window
(given that we also control for version updates), so the effect of app quality is absorbed into the fixed
effects. The critical threat to the validity of our spillover estimates comes from the possible presence of
time-varying factors that could simultaneously cause a featured app to be selected for recommendation
and the sales of its related apps to increase. Below we consider various possibilities and provide an
overview of our robustness analyses for alleviating the concerns.
First, while app developers in the industry generally consider it impossible to predict what apps
will be featured and when (as we discussed in Section 4), there might be the concern that the store editors
could be influenced by the unobserved marketing effort by developers. If marketing effort caused both an
app to be selected for recommendation and the demand of its related apps to increase, then the spillover
effect we reported would be spurious. To address this concern, in Appendix F1, we utilize Google Trend
data to identify a subset of the featured apps that were unlikely to have been influenced by significant
marketing during the recommendation window, and then rerun the regressions on the subsample of our
data corresponding to these featured apps. We find the results are highly consistent with those reported
above.
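As an illustration of the filtering logic (the actual procedure is in Appendix F1), the sketch below flags featured apps whose Google Trends search interest spiked just before the recommendation window; the pytrends client, the look-back window, and the spike threshold are assumptions made for the sketch, not the paper's exact settings.

```python
# Sketch: flag featured apps whose Google Trends interest spiked shortly before the
# recommendation window, as a rough proxy for concurrent developer marketing.
# The pytrends client, look-back length, and 20% spike threshold are assumptions.
import pandas as pd
from pytrends.request import TrendReq

def marketing_spike(app_name: str, feature_start: str,
                    lookback_days: int = 14, spike_ratio: float = 1.2) -> bool:
    """True if search interest in the last 3 pre-feature days exceeds the earlier baseline."""
    pytrends = TrendReq(hl="en-US", tz=0)
    start = pd.Timestamp(feature_start) - pd.Timedelta(days=lookback_days)
    end = pd.Timestamp(feature_start)
    pytrends.build_payload([app_name], timeframe=f"{start:%Y-%m-%d} {end:%Y-%m-%d}")
    trend = pytrends.interest_over_time()
    if trend.empty:
        return False
    series = trend[app_name]
    baseline = series.iloc[:-3].mean()   # earlier part of the pre-feature window
    recent = series.iloc[-3:].mean()     # last three days before featuring
    return bool(baseline > 0 and recent / baseline > spike_ratio)

# Featured apps for which marketing_spike(...) is False would form the robustness subsample.
```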
Second, it is still possible that the Google Trends data cannot fully capture the effect of developer
marketing, or there are other unobserved events that could simultaneously affect recommendation and
demand. This concern is fundamentally about the selection on unobservables issue. In Appendix F2, we
formally assess the sensitivity of our spillover estimate to the selection issue, following the method of
selection on unobservables (Altonji et al. 2005; Oster 2016). We find that both the upper bound and the
lower bound of the spillover effect on all three groups of related apps are greater than zero, still
supporting the existence of a positive spillover effect.
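To illustrate the flavor of this bounding exercise (the formal analysis is in Appendix F2), the sketch below computes a bias-adjusted coefficient using the common linear approximation from the coefficient-stability literature; the inputs, the delta = 1 assumption, and the R_max rule of thumb are illustrative conventions, not the appendix's exact choices.

```python
# Sketch of a bias-adjusted coefficient in the spirit of Altonji et al. (2005) and
# Oster (2016). beta_u, r2_u come from the regression without controls; beta_c, r2_c
# from the controlled specification. delta = 1 (equal selection) and
# r2_max = 1.3 * r2_c are common conventions, used here purely for illustration.
def bias_adjusted_beta(beta_u, r2_u, beta_c, r2_c, r2_max=None, delta=1.0):
    if r2_max is None:
        r2_max = min(1.0, 1.3 * r2_c)
    # Linear approximation of the bound:
    # beta* = beta_c - delta * (beta_u - beta_c) * (r2_max - r2_c) / (r2_c - r2_u)
    return beta_c - delta * (beta_u - beta_c) * (r2_max - r2_c) / (r2_c - r2_u)

# Made-up inputs: if adding controls barely moves the estimate while R-squared rises
# a lot, the adjusted coefficient stays close to the controlled estimate (and above zero).
print(bias_adjusted_beta(beta_u=0.40, r2_u=0.05, beta_c=0.30, r2_c=0.40))
```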
Third, there could be the concern that the recommendation of featured apps and the increase in popularity of the related apps are driven by common trends (e.g., trends at the store level or at the app-pair level). Accordingly, we conduct an additional analysis to address the concern about potential common trends at the store level. Specifically, in Appendix F3, we include weekly dummies in our specification to explicitly control for time fixed effects, and we find the results remain consistent.
Fourth, note that time fixed effects can control for the overall time trend at the store level, but they may not adequately capture app-pair-specific time trends. To alleviate this concern, we further conduct placebo tests by randomly shuffling the recommendation window (Abadie et al. 2015; Bertrand et al. 2004; Ranganathan and Benson 2017) in Appendix F4 and by creating fake recommendation windows outside the observation window in Appendix F5. These tests alleviate the concern that a potential co-moving tendency between the demand of a featured app and that of its related apps may cause a spurious correlation between platform recommendation and the increase in sales of related apps. For example, one might worry that if the sales of the featured and related apps increased together during the observation window for some reason other than recommendation, and if being selected for recommendation were just a result of increased sales, then the positive effects we have documented could simply reflect the common trend rather than the spillover effect. If that were the case, the shuffled and fake recommendation windows in our placebo tests would still pick up positive "spillover" effects. However, no such spurious effects are found in the placebo tests. Together, all the robustness analyses lend support to our main findings. We summarize these analyses in Table 4.
The details of the analyses are reported in Appendix F.
Table 4. Overview of Robustness Analyses

Section           Analysis                               Concern to Address
Appendix F1       Google Trends subsample                Unobserved developer marketing
Appendix F2       Bounding the spillover effects         Potential selection on unobservables
Appendix F3       Controlling for time fixed effects     Trend at the store level
Appendix F4, F5   Placebo tests                          Trend at the app-pair level
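To illustrate the placebo logic of Appendix F4 on a toy panel, the sketch below randomly relocates each pair's recommendation window and re-estimates a spillover regression (a linear probability model stands in for the probit to keep the sketch fast); all variable names and the synthetic data are illustrative assumptions rather than the paper's actual data or code.

```python
# Toy illustration of the shuffled-window placebo: relocate each pair's recommendation
# window at random and re-estimate the spillover coefficient many times.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic panel: 50 featured-related app pairs observed for 60 days,
# each with a 10-day "true" recommendation window.
dates = pd.date_range("2016-08-01", periods=60)
panel = pd.concat(
    pd.DataFrame({"pair_id": i, "date": dates,
                  "feature": ((dates >= dates[20]) & (dates < dates[30])).astype(int),
                  "top": rng.integers(0, 2, size=60)})
    for i in range(50)
)
panel["top_lag"] = panel.groupby("pair_id")["top"].shift().fillna(0)

def shuffle_window(g, rng):
    """Move the pair's recommendation window to a random start date."""
    g = g.sort_values("date").copy()
    length = int(g["feature"].sum())
    start = rng.integers(0, len(g) - length + 1)
    g["feature_placebo"] = 0
    g.iloc[start:start + length, g.columns.get_loc("feature_placebo")] = 1
    return g

placebo_coefs = []
for _ in range(200):
    shuffled = panel.groupby("pair_id", group_keys=False).apply(shuffle_window, rng=rng)
    fit = smf.ols("top ~ top_lag + feature_placebo + C(pair_id)", data=shuffled).fit()
    placebo_coefs.append(fit.params["feature_placebo"])
# A genuine spillover estimate should sit far in the tail of the placebo_coefs distribution.
```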
Lastly, we note that the focus of our study is on the spillover effect of recommending apps which
are likely to be selected by platforms. As platforms tend to recommend apps from the subpopulation of
apps with high quality, our point estimates of the spillover effect may not directly apply to a situation in
which a random set of apps is recommended. Nevertheless, this does not degrade the
credibility or importance of our findings. First, as mentioned above, the selection of featured apps does
not pose a threat to the identification of the spillover effect under the fixed effects specification. More
importantly, our findings have high policy relevance since platforms are most interested in the market
effects of curated apps that they would want to recommend. In contrast, investigating the spillover effect
of recommending low-quality apps or randomly selected apps might be theoretically interesting, but such
practice is highly unlikely to be adopted by platforms. Similar to our study, existing research on the
spillover effect of events that are not under the producer’s control generally focuses on a non-random
subpopulation that is policy-relevant (among others, see Garthwaite 2014 on celebrity endorsement, and
Borah and Tellis 2016 on negative online chatter).
6. Spillover Mechanisms
6.1. Spillover Mechanisms, Reduced-Form Probit Model
Considering again the awareness-choice two-stage model discussed in Section 3, the results from the
regression model in Section 5.2 provide evidence on the existence of spillover effects of the
recommendation of “New Apps We Love,” as illustrated by the left subfigure in Figure 4. Though the
results show the size of the spillover effect varies across the three related-app groups, whether the
difference is due to the spillover of exposure or due to the spillover of quality endorsement cannot be
distinguished based on the regressions.
Figure 4. The Spillover of Editor Recommendation:
Overall Effect (left) and Mediator (right)
Our discussion in Section 3 reveals that the key channel for the exposure spillover of editor recommendation to a related app is the increased popularity of the featured app. The recommendation of featured app i increases the number of users of app i, and some of these new users would become aware of, or be reminded of, the other apps by i's developer, apps that share similar features with i, and i's Android version if any. As such, the key mediating variable in the spillover of exposure is the sales of app i. This motivates us to include the variable Top_it, a proxy of app i's sales, in the regression specification, as we have shown that Feature_it is positively associated with Top_it. Specifically, Top_it, the mediator, satisfies the two conditions for the specification of a mediation model (Baron and Kenny 1986): (1) as discussed in the empirical context and supported by the results in Table 2, our main variable, the editorial recommendation of featured app i (Feature_it), influences the mediator (Top_it) but not vice versa; (2) based on our discussion above, the mediator (Top_it) influences the outcome variable, the spillover of exposure from featured app i to related app j, given that the exposure spillover mainly depends on the number of consumers who download featured app i. If our theoretical discussion is confirmed by the data, we expect the variable Top_it to capture a sizable portion, if not the whole, of the exposure spillover effect, and meanwhile the remaining effect of Feature_it would be smaller, mainly reflecting the spillover of quality endorsement. The idea is illustrated in the right subfigure of Figure 4: if we control for the spillover of exposure by adding the sales of app i as a control variable, we can test for the existence of the spillover of quality endorsement by examining the remaining effect of editor recommendation. Formally, we run the following regression

Top_jt = 1[ α_j + ρ·Top_{j,t-1} + β·Feature_it + δ·Top_it + X_jt·γ + ε_jt > 0 ]        (3)
for each of the three related-app groups and report the respective results in Table 5. The results confirm our expectation: after including Top_it, both the size and the significance level of β, the coefficient of Feature_it, decrease compared with those in Table 3. Specifically, the remaining effect of Feature_it is no longer statistically significant for the groups of similar apps and the same app in the Play Store; the coefficient of Feature_it is still significantly positive at the 1% level for the group of same-developer apps, but its magnitude decreases by 47%, from 0.302 to 0.161. For the three groups, the average partial effect of Feature_it is estimated to be 3.0%, 1.8%, and 0.7%, respectively. The coefficient of the newly included Top_it is not statistically significant for similar apps and the same app on the other platform. However, Feature_it and Top_it are jointly significant.
The results in Table 5 provide suggestive evidence that both exposure spillover and quality endorsement spillover are at work, but their relative importance appears to differ across the three groups of related apps. While their aggregate effects are found to be positive for all three groups, it seems that for the latter two groups the exposure externality drives most of the spillover, and the quality endorsement effect is most pronounced for the apps by the same developer. In the next subsection, we use a two-stage model to further investigate the differences due to the two mechanisms.
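As a concrete illustration of how a specification like Model (3) can be estimated, the minimal sketch below fits a probit of the related app's top-chart dummy on its lag, the recommendation dummy, the mediator Top_it, controls, and app dummies, with standard errors clustered at the category-date level. All column names are hypothetical placeholders, not the authors' actual code.

```python
# Sketch of the Model (3) probit: related app's top-chart dummy on its own lag, the
# recommendation dummy, the featured app's top-chart dummy (the mediator), controls,
# and app fixed effects, with category-date clustered standard errors.
import pandas as pd
import statsmodels.formula.api as smf

def fit_model3(panel: pd.DataFrame):
    model = smf.probit(
        "top_j ~ top_j_lag + feature_i + top_i + version_age_j + price_j + C(app_id)",
        data=panel,
    )
    return model.fit(cov_type="cluster", cov_kwds={"groups": panel["cat_date"]}, disp=0)

# Model (2) is the same call with top_i dropped from the formula; comparing the
# feature_i coefficients across the two fits gauges how much of the spillover is
# mediated by the featured app's own sales.
```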
Table 5. Probit Regression Results, Model (3): The Spillover Effect of Editor Recommendation on the Sales of Related Apps

                               Apps by the same developer   Similar apps             Same app on the other platform
                               (DV: Top_jt, probit)         (DV: Top_jt, probit)     (DV: Top_jt, probit)
                               Model (3)                    Model (3)                Model (3)
Top_{j,t-1}                    1.168*** (0.031)             1.013*** (0.053)         2.852*** (0.103)
Feature_it                     0.161*** (0.039)             0.102 (0.064)            0.085 (0.140)
Top_it                         0.235*** (0.037)             0.070 (0.058)            0.857*** (0.138)
VersionAge_jt                  -0.056*** (0.013)            -0.012 (0.026)           0.139*** (0.047)
Price_jt                       -0.174*** (0.031)            -0.191*** (0.047)        --
App-specific fixed effects     yes                          yes                      yes
Obs                            24,345                       6,761                    2,545
* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. Apps with no variation in the dependent variable Top_jt are automatically dropped; therefore, the sample size here is smaller than the original sample. c. In the third group, because few of the Play apps changed price in the observation window, the coefficient of Price_jt is not identified. d. Standard errors clustered at the category-date level.
6.2. Spillover Mechanisms, Two-Stage Bivariate Probit Model
Though the results of our reduced-form probit regressions have lent support to the existence of an overall
positive spillover effect and suggested that the mechanism of spillover might vary across different groups
of related apps, they did not allow us to disentangle the editor recommendation’s effects on awareness
and choice. In this subsection, we specify a bivariate probit model to separately estimate the spillover of
exposure and spillover of quality endorsement.
The motivation of our bivariate probit model is the two-stage awareness-choice decision process discussed in Section 3, and it is based on the idea that a new user must be aware of an app before she can download or buy it. So the probability that a representative user downloads an app is the product of two probabilities: the probability that the user knows about the app and the probability that the user chooses to download the app conditional on discovering it. Aggregating the individual decision to the population level, the sales of an app is the proportion of potential new users who are aware of it times the proportion of aware users who would like to download it. Formally, we estimate two specifications as follows:

Top_jt = 1[ α_1 + ρ·Top_{j,t-1} + β_1·Feature_it + ε_{1jt} > 0 ] × 1[ α_2 + α_j + β_2·Feature_it + X_jt·γ + ε_{2jt} > 0 ]        (4)

Top_jt = 1[ α_1 + ρ·Top_{j,t-1} + β_1·Feature_it + δ·Top_it + ε_{1jt} > 0 ] × 1[ α_2 + α_j + β_2·Feature_it + X_jt·γ + ε_{2jt} > 0 ]        (5)

where in both (4) and (5), the first term on the right-hand side corresponds to the awareness probability and the second term to the choice probability. In the awareness probability equation, we include Top_{j,t-1} to reflect the reality that the previous day's top chart is often used by users as an app discovery channel, and we include Feature_it in (4), and Feature_it and Top_it together in (5), to capture the exposure spillover. As discussed in the previous subsection, we expect β_1, the coefficient of Feature_it, to be insignificant in (5) if Top_it fully mediates the exposure spillover effect. In the choice probability equation, we include Feature_it to capture the effect of quality endorsement spillover, the apps' version age and price, X_jt, as well as app fixed effects. A nonzero β_2, the coefficient of Feature_it in the choice probability equation, indicates the existence of quality endorsement spillover. Lastly, ε_{1jt} and ε_{2jt} are unobserved factors that affect the two stages, respectively. Under an assumption similar to that of the probit model, they follow a bivariate normal distribution and are allowed to be arbitrarily correlated. Because the awareness and choice probabilities are not independently observed, the specification is a bivariate probit model with partial observability (Poirier 1980). We estimate the specifications by maximum likelihood for each of the three groups of related apps and report the results in Table 6.
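For concreteness, the following minimal sketch writes down the partial-observability likelihood behind Models (4) and (5) and maximizes it numerically; the construction of the design matrices is omitted, and the parameterization (for example, the tanh transform of the correlation) is an illustrative choice, not necessarily the authors' implementation.

```python
# Sketch of the partial-observability bivariate probit (Poirier 1980): only the product
# of the awareness and choice outcomes is observed, so Pr(top = 1) = Phi2(X1 b1, X2 b2; rho)
# and Pr(top = 0) is its complement. X1 and X2 are the awareness- and choice-stage
# design matrices (with constants and, for the choice stage, app dummies).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik(params, y, X1, X2):
    k1 = X1.shape[1]
    b1, b2, rho_raw = params[:k1], params[k1:-1], params[-1]
    rho = np.tanh(rho_raw)                      # keeps the correlation inside (-1, 1)
    cov = [[1.0, rho], [rho, 1.0]]
    xb1, xb2 = X1 @ b1, X2 @ b2
    p1 = np.array([multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
                   for a, b in zip(xb1, xb2)])
    p1 = np.clip(p1, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p1) + (1 - y) * np.log(1 - p1))

def fit_partial_obs_biprobit(y, X1, X2):
    start = np.zeros(X1.shape[1] + X2.shape[1] + 1)
    return minimize(neg_loglik, start, args=(y, X1, X2), method="BFGS")
```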
As shown in Table 6, in the awareness stage, Feature_it is significantly positive for the other apps by the same developer and for the same app on the other platform, which implies that there exists a positive spillover of exposure from the featured app to these two groups of related apps. Feature_it becomes insignificant for the similar apps, which, together with the reduced-form result in Table 5, shows that the spillover of exposure to similar apps is weaker than that for the other two groups of related apps. Feature_it is consistently found to be statistically insignificant after we control for Top_it. The evidence appears to support that the increased sales of the featured apps fully mediate the exposure spillover. As for the second stage, the choice probability, the coefficient of Feature_it is estimated to be significantly positive in the first column. Thus, for the apps by the same developer, we find positive evidence of both the spillover of exposure and that of quality endorsement. The result supports the view that if a developer has an app featured by the platform, the other apps by the developer also experience an increase in exposure, and users also tend to expect the other apps in the portfolio to have high quality. For the functionally similar apps, however, the coefficient of Feature_it is estimated to be insignificant, as shown in Table 5. This result is consistent with the expectation that the platform's recommendation of the featured app over the similar apps should not be perceived as an endorsement of quality for the similar apps. Lastly, for the same app's Android version released in the Play Store, both the spillover of exposure and that of quality endorsement are positive. This suggests that when an app is featured in the iOS App Store, the same app's Android version also benefits from the incremental exposure. However, the spillover of quality endorsement is weaker across platforms than within the same platform (marginally insignificant using Model (4) and significant at the 5% level using Model (5)).
Table 6. Two-Stage Model Results, Models (4) and (5): The Spillover Effect of Editor Recommendation on the Sales of Related Apps

                               Apps by the same developer     Similar apps                   Same app on the other platform
                               (two-stage)                    (two-stage)                    (two-stage)
                               Model (4)      Model (5)       Model (4)      Model (5)       Model (4)      Model (5)
Stage 1 - awareness
Top_{j,t-1}                    2.036***       2.040***        1.802***       1.759***        7.645***       7.747***
                               (0.127)        (0.098)         (0.178)        (0.187)         (0.107)        (0.106)
Feature_it                     0.153*         -0.040          0.224          0.088           0.609***       0.219
                               (0.090)        (0.073)         (0.195)        (0.233)         (0.145)        (0.168)
Top_it                         --             0.312***        --             0.117           --             0.697***
                                              (0.055)                        (0.132)                        (0.154)
Stage 2 - choice
Feature_it                     0.141**        0.150***        -0.037         -0.035          0.474          0.583**
                               (0.058)        (0.055)         (0.116)        (0.123)         (0.300)        (0.293)
VersionAge_jt                  -0.063***      -0.065***       -0.001         0.001           -0.003         0.029
                               (0.016)        (0.015)         (0.024)        (0.021)         (0.132)        (0.124)
Price_jt                       -0.207***      -0.211***       -0.269***      -0.236*         --             --
                               (0.055)        (0.048)         (0.100)        (0.139)
App-specific fixed effects     yes            yes             yes            yes             yes            yes
Obs                            24,345         24,345          6,761          6,761           2,545          2,545
* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. Apps with no variation in the dependent variable Top_jt are automatically dropped; therefore, the sample size here is smaller than the original sample. c. In the third group, because few of the Play apps changed price in the observation window, the coefficient of Price_jt is not identified. d. Standard errors clustered at the category-date level.

For ease of comparison, we summarize the average partial effect of Feature_it estimated from the different models (2)-(5) in Table 7. Using the main results from the two-stage model (Model (4)), reported in the third row for each group in Table 7, the estimation shows that having an app featured in "New Apps We Love" tends to increase the awareness probability of the other apps by the same developer by 3.9% and boost their choice probability by 4.0%. There is an associated increase in the awareness probability of other apps with similar functionality by 5.6% but also a decrease in the choice probability for those apps by 1.0%. For the Android version of the featured app, there is an increase in the awareness probability by 3.9% and an increase in the choice probability by 4.8%.
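The overall average partial effects summarized in Tables 5 and 7 can be recovered from a fitted probit by averaging the difference in predicted probabilities with the recommendation dummy switched on versus off; a minimal sketch with hypothetical names follows.

```python
# Sketch: overall average partial effect (APE) of the recommendation dummy from a
# fitted probit, computed as the mean difference in predicted probabilities with the
# dummy set to 1 versus 0 for every observation. `result` is a formula-fitted
# statsmodels probit result and `panel` the estimation sample (hypothetical names).
import pandas as pd

def ape(result, panel: pd.DataFrame, var: str = "feature_i") -> float:
    p_on = result.predict(panel.assign(**{var: 1}))
    p_off = result.predict(panel.assign(**{var: 0}))
    return float((p_on - p_off).mean())

# For the same-developer group under Model (3), this quantity is about 0.03 (3.0%).
```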
Table 7. Summary of the Average Partial Effect of Editor Recommendation on the Awareness and Choice of Related Apps from Models (2)-(5)

App Group                         Model                                 Overall     Awareness   Choice
Apps by the Same Developer        probit                                5.6%***     -           -
                                  probit, Top_it included               3.0%***     -           -
                                  bivariate probit                      4.6%***     3.9%*       4.0%**
                                  bivariate probit, Top_it included     2.2%***     -1.0%       4.2%***
Apps with Similar Functionality   probit                                2.5%***     -           -
                                  probit, Top_it included               1.8%        -           -
                                  bivariate probit                      2.1%**      5.6%        -1.0%
                                  bivariate probit, Top_it included     0.6%        2.1%        -1.0%
Same App on the Other Platform    probit                                4.8%***     -           -
                                  probit, Top_it included               0.7%        -           -
                                  bivariate probit                      5.1%***     3.9%***     4.8%
                                  bivariate probit, Top_it included     3.3%***     1.4%        5.8%**
7. Heterogeneity of Spillover Mechanisms and Long-Run Spillover Effects
In this section, we first investigate how the spillover mechanism is dependent on the characteristics of the
featured apps including pricing model and user rating. Second, we use the subset of our data where daily
downloads information is available to supplement our main analysis. We also use the estimates to
calculate the long-run spillover effects on the daily downloads of the related apps.
7.1. Heterogeneity of Spillover Mechanisms with Respect to Featured App Characteristics
The results in Section 6 suggest that both the spillover of exposure and the spillover of quality endorsement play an important role in influencing the related apps' sales. Given that there is considerable heterogeneity among the featured apps, we further leverage the variation in our sample to explore whether and how the spillover mechanisms may differ depending on the characteristics of the featured apps. Two salient characteristics of the featured apps are their pricing model (paid versus free) and user rating, both of which can potentially affect the degree to which consumers who are exposed to them would want to explore related apps.^18 Specifically, when a featured app is a paid app, consumers who are very price-sensitive may turn to exploring other related apps instead of downloading the featured app. In addition, the high user rating of a featured app, which is a signal of high quality, may also increase consumers' intention to explore other apps by the same developer while decreasing their willingness to explore similar apps by other developers.

18. We would like to thank an anonymous reviewer for the suggestion.
Thus, we examine the heterogeneous spillover effects with respect to whether the featured app is paid or free (Paid_i) and whether the featured app is high-rated or not (HighRated_i).^19 We change the specification of Model (4) by including the interaction of Feature_it and Paid_i in both stages (Model (6)) and by including the interaction of Feature_it and HighRated_i in both stages (Model (7)). The estimation results are reported in Table 8. As the results show, the spillover effect of exposure on apps by the same developer is higher when the featured app is a paid app or is highly rated. The spillover effect of exposure on similar apps by other developers is lower when the featured app is highly rated. Moreover, the spillover on the same app marketed on the other platform is stronger when the featured app is a paid app. These results on heterogeneity are all consistent with the expectation. Additionally, it appears that the spillover of quality endorsement does not depend on whether the featured app is paid or highly rated.

19. Specifically, we use the median rating as the cutoff point. The possible range of rating is between 1 and 5. According to the report by https://www.statista.com/statistics/879855/customer-ratings-of-ios-applications/, the median rating is close to 4. Therefore, we classify a featured app as high-rated if it has a rating above 4 on the first day of its feature window.

Table 8. Two-Stage Model Results: Heterogeneous Spillover Mechanisms
(Free versus Paid Featured App; High-rated versus Low-rated Featured App)
                               Apps by the same developer     Similar apps                   Same app on the other platform
                               (two-stage)                    (two-stage)                    (two-stage)
                               Model (6)      Model (7)       Model (6)      Model (7)       Model (6)      Model (7)
Stage 1 - awareness
Top_{j,t-1}                    2.049***       2.025***        1.809***       1.788***        7.549***       7.934***
                               (0.115)        (0.117)         (0.181)        (0.204)         (0.120)        (0.110)
Feature_it                     -0.046         -0.100          0.082          0.773*          0.455***       1.046***
                               (0.118)        (0.133)         (0.251)        (0.432)         (0.167)        (0.341)
Paid_i x Feature_it            0.326***       --              0.386          --              0.656**        --
                               (0.117)                        (0.314)                        (0.281)
HighRated_i x Feature_it       --             0.314**         --             -0.744*         --             -0.428
                                              (0.128)                        (0.429)                        (0.360)
Stage 2 - choice
Feature_it                     0.161          0.313**         -0.099         -0.246          0.343          1.154
                               (0.101)        (0.133)         (0.198)        (0.216)         (0.332)        (0.773)
Paid_i x Feature_it            -0.025         --              0.127          --              0.762          --
                               (0.113)                        (0.218)                        (0.674)
HighRated_i x Feature_it       --             -0.214          --             0.319           --             -0.892
                                              (0.138)                        (0.255)                        (0.891)
VersionAge_jt                  -0.066***      -0.063***       0.002          -0.006          -0.013         -0.020
                               (0.016)        (0.015)         (0.027)        (0.024)         (0.137)        (0.186)
Price_jt                       -0.211***      -0.205***       -0.297**       -0.255**        --             --
                               (0.052)        (0.050)         (0.126)        (0.108)
App-specific fixed effects     yes            yes             yes            yes             yes            yes
Obs                            24,345         24,008          6,761          6,687           2,545          2,207
* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. Apps with no variation in the dependent variable Top_jt are automatically dropped; therefore, the sample size here is smaller than the original sample. c. 60 featured apps did not have rating information by the time they were recommended; thus, the sample size for Model (7) is smaller than that for Model (6). d. In the third group, because few of the Play apps changed price in the observation window, the coefficient of Price_jt is not identified. e. Standard errors clustered at the category-date level.

7.2. Supplemental Analysis Using Downloads Data
In all of our analyses above, we estimated the spillover effect of editorial recommendation on the likelihood of a related app entering the top-100 charts. While the Top_jt dummy is an important metric, relying only on it may miss the spillover effect on apps that are constantly ranked on the top charts and on those never ranked on the top charts. Thus, we purchased daily downloads data from a mobile analytics firm. The downloads data help us not only to assess the spillover effects at a more granular level, but also to better examine the long-run effects of editorial recommendation. However, one important limitation of this dataset is that the historical downloads data the mobile analytics firm provides are constrained in time frame, so the period the downloads data cover only overlaps with the last few months of our data collection period. In addition, the historical downloads data for Play apps are limited. Because of these limitations, only the featured apps that were recommended after August 12th, 2016 and their related apps can be included in the downloads regressions. The analysis below uses this relatively small dataset.
The specification of the supplemental analysis is as follows:

log(downloads_jt) = α_j + ρ·log(downloads_{j,t-1}) + β·Feature_it + X_jt·γ + f(day_jt) + ε_jt        (6)
We take a log transformation of the number of downloads to account for the skewness of its distribution. Since the number of downloads is an absolute performance measure (unlike ranking, which is relative), it is important to control for its time trend. In the equation, day_jt denotes the day number since the beginning of the recommendation window, and f(day_jt) controls for the time trend in downloads that potentially correlates with selection for recommendation. We specify f(day_jt) = δ_1·day_jt + δ_2·day_jt × Feature_it, which allows the time trend to differ between inside and outside the recommendation window.^20 For comparison, we also estimate the direct effect of editor recommendation on featured apps with a similar functional form. We report the OLS regression results in Table 9. As the results show, the coefficient of Feature_it is significantly positive in all four columns, which lends further support to the existence of the direct effect and of the spillover effect across the three groups.
The combination of our downloads data and OLS model specification also permits us to calculate the cumulative spillover effects of recommendation on related apps' daily downloads. Note that the model equation determines how the spillover effects unfold over time: on the first day of recommendation, the spillover effect on (log) daily downloads is captured by β; on the second day, the cumulative effect on (log) daily downloads is β + ρβ, where the latter term comes through the increased downloads from the previous day. By iterating this calculation, the cumulative spillover effect on (log) daily downloads on the 10th day of recommendation (10 days is the mean length of the recommendation window in our dataset) is β(1 − ρ^10)/(1 − ρ). Based on our estimates in Table 9, the cumulative effect on daily downloads after 10 days of recommendation is an increase of 961.5% for the focal featured apps. The corresponding numbers for the long-run spillover effects are 22.3% for apps by the same developer, 10.8% for similar apps, and 337.0% for the same app in the Play Store.
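To make the arithmetic concrete, the following short sketch plugs the Table 9 point estimates into the cumulative-effect formula and converts the resulting log-point totals to percentages with exp(·) − 1; the group labels are ours, and the inputs are simply the coefficients reported above.

```python
# Arithmetic check of the cumulative-effect formula beta * (1 - rho**10) / (1 - rho),
# using the Table 9 point estimates (rho on the lagged log downloads, beta on the
# recommendation dummy) and converting log points to percentages via exp(.) - 1.
import math

estimates = {
    "featured apps":              (0.722, 0.683),
    "apps by the same developer": (0.504, 0.100),
    "similar apps":               (0.305, 0.071),
    "same app in the Play Store": (0.653, 0.519),
}
for group, (rho, beta) in estimates.items():
    cum_log = beta * (1 - rho ** 10) / (1 - rho)   # cumulative effect on log downloads, day 10
    print(f"{group}: {100 * (math.exp(cum_log) - 1):.1f}%")
# Reproduces roughly 961.5%, 22.3%, 10.8%, and 337.0%, as reported above.
```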
20. The specification can be considered a regression discontinuity (RD) design using day as the running variable (see similar specifications in Imbens and Lemieux (2008) and Anderson (2014)). The potentially endogenous relationship between ε_jt and the day of featuring is eliminated by the function f(day_jt), assuming that ε_jt does not change discontinuously near the recommendation window.
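For readers who want to see how such a specification might be coded, the sketch below estimates an OLS counterpart of equation (6) with the piecewise trend entered as day and day × feature terms; the column names and the data-loading step are hypothetical placeholders, not the paper's actual code.

```python
# Sketch of the downloads regression in equation (6): log daily downloads on its own
# lag, the recommendation dummy, controls, app fixed effects, and the piecewise trend
# f(day) entered as day and day x feature terms, with category-date clustered errors.
import pandas as pd
import statsmodels.formula.api as smf

def fit_downloads_model(df: pd.DataFrame):
    formula = ("log_dl ~ log_dl_lag + feature_i + day + day:feature_i"
               " + version_age_j + price_j + C(app_id)")
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["cat_date"]}
    )
```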
Table 9. Linear Regression Results Using Downloads Data: The Spillover Effect of Editor Recommendation on the Sales of Featured Apps and Related Apps

DV: log(downloads_jt), OLS     Featured apps        Apps by the same developer   Similar apps         Same app on the other platform
                               Model (6)            Model (6)                    Model (6)            Model (6)
log(downloads_{j,t-1})         0.722*** (0.027)     0.504*** (0.031)             0.305*** (0.072)     0.653*** (0.139)
Feature_it                     0.683*** (0.088)     0.100** (0.040)              0.071* (0.039)       0.519* (0.300)
day_jt                         0.001 (0.001)        -0.001** (0.001)             0.000 (0.001)        -0.002 (0.002)
day_jt x Feature_it            -0.031*** (0.008)    -0.008 (0.005)               -0.003 (0.007)       -0.053 (0.033)
VersionAge_jt                  0.005 (0.028)        -0.025*** (0.009)            0.002 (0.015)        0.122* (0.067)
Price_jt                       -0.153*** (0.026)    -0.070*** (0.017)            -0.127 (0.102)       --
App-specific fixed effects     yes                  yes                          yes                  yes
Obs                            2,002                4,252                        539                  235
* p<0.1, ** p<0.05, *** p<0.01
Notes: a. App-specific fixed effects (for both featured and related apps) are controlled in all columns. b. In the third group, because few of the Play apps changed price in the observation window, the coefficient of Price_jt is not identified. c. Standard errors clustered at the category-date level.
We also plot the decay of editor recommendation’s effect on (log) daily downloads after
recommendation ends in Figure 5. For a featured app, its own downloads increase by roughly 98.0% on the
same day of recommendation, and the effect gradually decreases during the post-recommendation period.
Meanwhile, the spillover effects of editor recommendation on the three groups of related apps are much
smaller than the direct effect but are also significantly positive during the first two days after
recommendation. Even though the magnitude of spillover on the same app on the other platform is the
largest among the three groups, the confidence interval of the spillover effect in this group is relatively
large due to the small sample size.
Figure 5. Estimates of the Over-time Direct Effect and Spillover Effects on Daily Downloads
Note: This figure plots the marginal effect of editor recommendation on the (log) downloads of featured apps and
related apps. Error bars represent the 90% confidence interval calculated using the delta method.
8. Discussion and Conclusion
In this research, we analyzed the potential externality of platform-provided editor recommendation. We
empirically examined the existence and mechanisms of such externality in the mobile app market using
data collected from Apple’s iOS App Store and Google’s Play Store. Our main finding has been that
platform-provided editor recommendation has an overall positive spillover effect on three groups of
related apps: apps released on the same platform by the same developer, apps with similar functionality
on the same platform, and the same app on a different platform. Specifically, for the apps by the same
developer and the same app on a different platform, we find a significantly positive spillover of exposure,
but the spillover of quality endorsement is weaker across platforms than within the same platform. For the
similar apps, however, we found the editor recommendation has a weak positive effect on their
awareness, and an insignificant effect on their choice probability. The evidence documented by our
research suggests that both the spillover of exposure and spillover of platform endorsement can drive the
externality of editor recommendation, but their strength and relative importance vary depending on the
specific relationship between the featured and non-featured products. Further, the spillover effect also
depends on the characteristics of the featured products including pricing model and consumer ratings.
Our research not only furthers our theoretical understanding but also provides salient managerial
implications for the mobile app market. Editor recommendation can help consumers to discover new apps,
and help the platform to diversify consumer traffic and potentially reshape the distribution of sales.
Recommending high-quality apps that have had a difficult time attracting enough consumer attention
could encourage developer innovation and increase platform profitability in the long run. We found that
platform-provided editor recommendation not only increases the sales of recommended apps, but also the
sales of other apps by the same developer and those with similar functionality. The result implies that
platform managers could leverage editor recommendation to promote lesser-known high-quality
developers or even a whole market niche that is underexplored by featuring just a small number of apps in
the relevant catalog. Our finding on cross-platform spillover also sheds light on a potential benefit of
developer multi-homing – a positive demand shock on one platform could generate a feedback loop that
will create a multiplying effect on aggregate sales.
We note some caveats in interpreting our findings on the spillover effect. First, in discussing the
spillover mechanisms, we posited that platform recommendation attracts new users to the featured apps,
and then some of these users would explore other related apps. Here we emphasize that for the spillover
effect to exist, it only needs a portion of users to follow this path. The result of spillover certainly does
not rely on the assumption that all users would explore other related apps or even the average user would
explore other related apps. In the real setting, there are many special situations that may affect the size or
even existence of spillover. For instance, though it did not occur in our data collection period, the same
app could be featured multiple times, and the size of spillover effect during the different recommendation
episodes might be different. Another situation is that two related apps might be featured concurrently or
very close in time. In that case, the size of the spillover effect from one to the other can vary. Another
moderator of spillover is recommendation intensity. If a featured app is recommended in multiple places,
either inside or outside the store, or an app is related to multiple featured apps, the spillover effect should
also be different. Analyzing all these possibilities is beyond the scope of this research, but they are
important for the practitioners to consider in the real-world setting.
The study on platform-provided editorial recommendation can be extended in several directions.
Our study has examined the impact of editor recommendation on the sales of related products. For the
group of similar products, it will be interesting to investigate whether the spillover is driven by
complementarity or substitution, and how their long-term effects differ. To answer this question, more
data needs to be collected on user engagement after purchase. Researchers can also investigate whether and
to what extent the existence of editor recommendation will change consumers’ product search habits. If
editor recommendation alters consumer search behavior in a significant way, then one can try to
incorporate editor recommendation into a canonical consumer search model to systematically study its
effects on the emergent market structure, such as equilibrium price and sales distributions. On the supply
side, it will also be interesting to study the long-term impact of editor recommendation on strategic
decisions of entry, pricing, product design, and innovation.
The questions discussed above are also very important from the standpoint of the platform owner,
because as a two-sided market, it needs to design market institutions including recommendation in a way
to optimize and balance two objectives: encouraging consumer search and promoting producer
innovation. Specifically, on editor recommendation, many practical questions still exist. For example,
what is the best way to feature editor recommendation and how frequently should it be updated? Further,
how should the mix of product offerings in editor recommendation be selected, and what should be the
objective function in the selection decision process? There are other important open research questions
related to the complementarities between sales- or rating-based product lists, expert-curated
recommendations, and AI-powered search and recommendation systems. Addressing these open
questions can greatly improve our understanding of long-term sustainability of product offerings in
platform markets.
References
Abadie, A., Diamond, A., and Hainmueller, J. (2015). Comparative politics and the synthetic control
method. American Journal of Political Science, 59(2), 495-510.
Ackerberg, D. (2001). Empirically distinguishing between informative and prestige effects of advertising.
Rand Journal of Economics, 32(2), 316–333.
Adomavicius, G., Bockstedt, J., Curley, S., and Zhang, J. (2011). Recommender systems, consumer
preferences, and anchoring effects. In RecSys 2011 Workshop on Human Decision Making in
Recommender Systems, 35-42.
Adomavicius, G., Bockstedt, J., Curley, S. and Zhang, J. (2017). Effects of online recommendations on
consumers’ willingness to pay. Working paper.
Ahluwalia, R., Unnava, H. R., and Burnkrant, R. E. (2001). The moderating role of commitment on the
spillover effect of marketing communications. Journal of Marketing Research, 38(4), 458-470.
Altonji, J. G., Elder, T. E., and Taber, C. R. (2005). Selection on observed and unobserved variables:
Assessing the effectiveness of Catholic schools. Journal of Political Economy, 113(1), 151-184.
Anderson, M. L. (2014). Subways, Strikes, and Slowdowns: The Impacts of Public Transit on Traffic
Congestion. American Economic Review, 104(9), 2763-96.
Bakos, J.Y. (1997). Reducing buyer search costs: Implications for electronic marketplaces. Management
science, 43(12), 1676-1692.
Baron, R. M., and Kenny, D. A. (1986). The moderator–mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality
and Social Psychology, 51(6), 1173.
Bawa, K. and Shoemaker, R. (2004). The Effects of Free Sample Promotions on Incremental Brand Sales.
Marketing Science, 23(3), 345-363.
Benlian, A., Titah, R. and Hess, T. (2012). Differential effects of provider recommendations and
consumer reviews in e-commerce transactions: An experimental study. Journal of Management
Information Systems, 29(1), 237-272.
Bertrand, M., Duflo, E., and Mullainathan, S. (2004). How Much Should We Trust Differences-In-
Differences Estimates? The Quarterly Journal of Economics, 119(1), 249-275.
Bettman, J.R., Luce, M.F. and Payne, J.W. (1998). Constructive consumer choice processes. Journal of
Consumer Research, 25(3), 187-217.
Bond, S., He, S., and Wen, W. Speaking for “Free”: Word of Mouth in Free- and Paid- Product
Settings. Journal of Marketing Research, forthcoming.
Borah, A., & Tellis, G. J. (2016). Halo (spillover) Effects in Social Media: Do Product Recalls of One
Brand Hurt or Help Rival Brands?. Journal of Marketing Research, 53(2), 143-160.
Brynjolfsson, E., Hu, Y. and Simester, D. (2011). Goodbye pareto principle, hello long tail: The effect of
search costs on the concentration of product sales. Management Science, 57(8), 1373-1386.
Carare, O. (2012). The impact of bestseller rank on demand: Evidence from the app market. International
Economic Review, 53(3), 717-742.
Carmi, E., Oestreicher-Singer, G., and Sundararajan, A. (2012). Is Oprah contagious? Identifying demand
  spillovers in online networks. NET Institute Working Paper No. 10-18.
Chen, P.-Y., Wu, S.-y., and Yoon, J. (2004). The impact of online recommendations and consumer
feedback on sales, ICIS 2004 Proceedings, 58.
Chakravarty, A., Liu, Y., and Mazumdar, T. (2010). The differential effects of online word-of-mouth and
critics' reviews on pre-release movie evaluation. Journal of Interactive Marketing, 24(3), 185-197.
Chevalier, J.A. and Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews.
Journal of Marketing Research, 43(3), 345-354.
Clark, C. R., Doraszelski, U., and Draganska, M. (2009). The effect of advertising on brand awareness
and perceived quality: An empirical investigation using panel data. Quantitative Marketing and
Economics, 7(2), 207-236.
Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., and Riedl, J. (2003). Is Seeing Believing?: How
Recommender System Interfaces Affect Users' Opinions. In Proceedings of the SIGCHI conference
on Human factors in computing systems, 585-592.
Deng, Y., Lambrecht, A., & Liu, Y. (2018). Spillover Effects and Freemium Strategy in Mobile App
Market.
Fleder, D., and Hosanagar, K. (2009). Blockbuster culture's next rise or fall: The impact of recommender
systems on sales diversity. Management Science, 55(5), 697-712.
Gal-Or, E. and Ghose, A. (2005). The economic incentives for sharing security information. Information
Systems Research, 16(2), 186-208.
Garg, R. and Telang, R. (2013). Inferring App Demand from Publicly Available Data. MIS Quarterly, 37
(4), 1253- 1264.
Garthwaite, C. L. (2014). Demand spillovers, combative advertising, and celebrity endorsements.
American Economic Journal: Applied Economics, 6(2), 76-104.
Ghose, A., Goldfarb, A., and Han, S. P. (2012). How is the mobile Internet different? Search costs and
local activities. Information Systems Research, 24(3), 613-631.
Ghose, A., and Han, S. P. (2014). Estimating demand for mobile applications in the new economy.
Management Science, 60(6), 1470-1488.
Greenwood, B. N., and Wattal, S. (2015). Show me the way to go home: an empirical investigation of
ride sharing and alcohol related motor vehicle homicide. Working paper.
Goeree, M. S. (2008). Limited information and advertising in the US personal computer industry.
Econometrica, 76(5), 1017-1074.
Häubl, G., and Trifts, V. (2000). Consumer decision making in online shopping environments: The
effects of interactive decision aids. Marketing science, 19(1), 4-21.
Hauser, J. R., and Wernerfelt, B. (1990). An evaluation cost model of consideration sets. Journal of
Consumer Research, 16(4), 393-408.
Heckman, J.J. (1978). Simple statistical models for discrete panel data developed and applied to test the
hypothesis of true state dependence against the hypothesis of spurious state dependence. In Annales
de l'INSEE , 227-269. Institut national de la statistique et des études économiques.
Hendricks, K. and Sorensen, A. (2009). Information and the skewness of music sales. Journal of Political
Economy, 117(2), 324-369.
Idu, A., van de Zande, T., & Jansen, S. (2011). Multi-homing in the Apple Ecosystem: Why and How
Developers Target Multiple Apple App Stores. In Proceedings of the International Conference on
Management of Emergent Digital EcoSystems (pp. 122-128). ACM.
Imbens, G. W., & Lemieux, T. (2008). Regression Discontinuity Designs: A Guide to Practice. Journal of
Econometrics, 142(2), 615-635.
Jabr, W. and Zheng, E. (2013). Know yourself and know your enemy: An analysis of firm
recommendations and consumer reviews in a competitive environment. MIS Quarterly, 38(3), 635-
654.
Kardes, F.R., Kalyanaram, G., Chandrashekaran, M. and Dornoff, R.J. (1993). Brand retrieval,
consideration set composition, consumer choice, and the pioneering advantage. Journal of Consumer
Research, 20(1), pp.62-75.
Kim, J.B., Albuquerque, P. and Bronnenberg, B.J. (2010) Online Demand Under Limited Consumer
Search. Marketing Science 29(6):1001-1023
Koh, T. K., & Fichman, M. (2014). Multihoming Users' Preferences for Two-Sided Exchange
Networks. MIS Quarterly, 38(4), 977-996.
Lee, G. and Raghu, T.S. (2014). Determinants of mobile apps' success: evidence from the App Store
market. Journal of Management Information Systems, 31(2), 133-170.
Lee, G. W., and Raghu, T. S. (2016). The Role of Quality in Mobile App Markets. Working paper.
Lee, Young-Jin, Yong Tan. (2013). Effects of Different Types of Free Trials and Ratings in Sampling of
Consumer Software: An Empirical Study. Journal of Management Information Systems, 30(3) 213–
246.
Lewis, R. and Nguyen, D. (2015). Display advertising’s competitive spillovers to consumer
search. Quantitative Marketing and Economics, 13(2), 93-115.
Li, Z., and Agarwal, A. (2016). Platform Integration and Demand Spillovers in Complementary Markets:
Evidence from Facebook’s Integration of Instagram. Management Science, 63(10), 3438-3458.
Liu, Y. (2006). Word of mouth for movies: Its dynamics and impact on box office revenue. Journal of
Marketing, 70(3), 74-89.
Liu, C. Z., Au, Y. A., & Choi, H. S. (2014). Effects of Freemium Strategy in the Mobile App Market: An
Empirical Study of Google Play. Journal of Management Information Systems, 31(3), 326-354.
Lu, S. and Yang, S. (2017). Investigating the Spillover Effect of Keyword Market Entry in Sponsored
Search Advertising. Marketing Science, forthcoming.
Manning, C.D., Raghavan, P. and Schütze, H. (2008). Introduction to Information Retrieval. Cambridge:
Cambridge University Press, 1(1), 496.
Masatlioglu, Y., Nakajima, D., and Ozbay, E. Y. (2012). Revealed Attention. The American Economic
Review, 102(5), 2183.
Reinstein, D. A., and Snyder, C. M. (2005). The influence of expert reviews on consumer demand for
experience goods: A case study of movie critics. The Journal of Industrial Economics, 53(1), 27-51.
Oestreicher-Singer, G., and Sundararajan, A. (2012). The visible hand? Demand effects of
recommendation networks in electronic markets. Management Science, 58(11), 1963-1981.
Oster, E. (2016). Unobservable Selection and Coefficient Stability: Theory and Evidence. Journal of
Business and Economic Statistics, forthcoming.
Overby, E., and Forman, C. (2014). The effect of electronic commerce on geographic purchasing patterns
and price dispersion. Management Science, 61(2), 431-453.
Poirier, D. J. (1980). Partial observability in bivariate probit models. Journal of Econometrics, 12(2), 209-
217.
Ranganathan, A. and Benson, A. (2017). Hemming and Hawing over Hawthorne: Work Complexity and
the Divergent Effects of Monitoring on Productivity. Management Science, forthcoming.
Roberts, J.H. and Lattin, J.M. (1991). Development and testing of a model of consideration set
composition. Journal of Marketing Research, 28(4), 429-440.
Sahni, N. S. (2016). Advertising Spillovers: Evidence from Online Field Experiments and Implications
for Returns on Advertising. Journal of Marketing Research, 53(4), 459-478.
Senecal, S., and Nantel, J. (2004). The influence of online product recommendations on consumers’
online choices. Journal of Retailing, 80(2), 159-169.
Shi, Z., Rui, H., and Whinston, A. B. (2014). Content Sharing in a Social Broadcasting Environment:
Evidence from Twitter. MIS Quarterly, 38(1), 123-142.
Smith, M. D., Bailey, J., and Brynjolfsson, E. (2001). Understanding digital markets: Review and
assessment. Working Paper.
Wen, W., & Zhu, F. (2017). Threat of Platform-Owner Entry and Complementor Responses: Evidence
from the Mobile App Market. SSRN Working Paper.
Wooldridge J. (2002) Econometric Analysis of Cross Section and Panel Data. MIT Press, Cambridge,
MA.
Wu, J. and Rangaswamy, A. (2003). A fuzzy set model of search and consideration with an application to
an online market. Marketing Science, 22(3), 411-434.
Wu, J., Cook Jr, V.J. and Strong, E.C. (2005). A two-stage model of the promotional performance of pure
online firms. Information Systems Research, 16(4), 334-351.
Xu, H., Teo, H. H., Tan, B. C., and Agarwal, R. (2009). The role of push-pull technology in privacy
calculus: the case of location-based services. Journal of Management Information Systems, 26(3),
135-174.
Zhou, W., and Duan, W. (2016). Do Professional Reviews Affect Online User Choices Through User
Reviews? An Empirical Study. Journal of Management Information Systems, 33(1), 202-228.