Gagan Aggarwal

Authored Publications
    Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions
    Divyarthi Mohan
    Alex Psomas
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
    Abstract: Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown. The seller is now faced with a multi-parameter mechanism design problem. Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical. In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a "Myersonian" approach and study allocation rules that are monotone both in the bid and in the set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity does not hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to "knapsack-like" problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than 2 (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a 3-approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called Generalized Second Price (GSP) payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on both the pure and the Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data.
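    To make the knapsack-like setting concrete, here is a toy value-density greedy allocation over rich ad formats. This is a hedged sketch with hypothetical data, not the paper's monotone 3-approximation rule: each bidder submits several formats of different sizes, at most one format per bidder may be shown, and the page has limited capacity.

    ```python
    # Toy greedy allocation for a knapsack-like rich-ad problem.
    # NOT the paper's monotone rule; a generic illustration only.

    def greedy_allocate(bidders, capacity):
        """bidders: {name: [(size, value), ...]}; returns chosen formats."""
        # Consider every (bidder, format) pair ordered by value per unit size.
        options = [(v / s, name, s, v)
                   for name, fmts in bidders.items() for s, v in fmts]
        options.sort(reverse=True)
        chosen, used = {}, 0
        for _dens, name, s, v in options:
            if name in chosen or used + s > capacity:
                continue  # at most one format per bidder; respect capacity
            chosen[name] = (s, v)
            used += s
        return chosen

    demo = {"a": [(1, 3.0), (2, 5.0)], "b": [(2, 4.0)], "c": [(3, 3.0)]}
    print(greedy_allocate(demo, capacity=4))
    ```

    As the paper notes, exactly this kind of bang-per-buck ordering fails to be monotone in the bid, which is why a different greedy rule is needed for truthfulness.
    
    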
    Abstract: Autobidding is becoming increasingly important in online advertising and has become a critical tool used by many advertisers for optimizing their ad campaigns. We formulate fundamental questions around the problem of bidding for performance under very general affine cost constraints. We design optimal single-agent bidding strategies for the general bidding problem in multi-slot truthful auctions. We show that there is an intimate connection between bidding and auction design, in that the bidding formula is optimal if and only if the underlying auction is truthful. We show how an MWU algorithm can be used to learn this optimal bidding formula. Next, we move from the single-agent view to a full-system view: what happens when all advertisers adopt optimal autobidding? We prove that in general settings there exists an equilibrium between the bidding agents for all the advertisers. Further, we prove a Price of Anarchy result: under the general affine constraints, the total value (conversions) obtained by the advertisers in the bidding-agent equilibrium is no less than 1/2 of what could be generated via a centralized ad allocation scheme, one which does not consider any auction incentives or provide any per-advertiser guarantee.
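    Since the abstract mentions learning the bidding formula with MWU, here is the textbook multiplicative-weights update for reference. This is the generic experts algorithm, not the paper's bidding-specific instantiation; the candidate "experts" (e.g. bid multipliers) and their rewards below are hypothetical.

    ```python
    # Generic multiplicative-weights (MWU) sketch over a fixed set of
    # experts, e.g. candidate bid multipliers. Rewards lie in [0, 1].

    def mwu(rewards_per_round, n_experts, eta=0.1):
        """Returns the final normalized weight of each expert."""
        w = [1.0] * n_experts
        for rewards in rewards_per_round:
            # Multiplicatively boost experts that earned high reward.
            w = [wi * (1 + eta * ri) for wi, ri in zip(w, rewards)]
        total = sum(w)
        return [wi / total for wi in w]

    # Expert 1 is consistently best, so its weight comes to dominate.
    probs = mwu([[0.1, 0.9, 0.3]] * 50, n_experts=3)
    print(max(range(3), key=probs.__getitem__))  # → 1
    ```
    
    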
    Biobjective Online Bipartite Matching
    Yang Cai
    George Pierrakos
    Workshop in Internet and Network Economics, Springer (2014), pp. 218-231
    Abstract: Motivated by online ad allocation with multiple conflicting objectives, we introduce and study the problem of Biobjective Online Bipartite Matching, a strict generalization of the standard setting of Karp, Vazirani and Vazirani, in which edges have one of two colors and the goal is to find a matching that is both large and balanced at the same time. We study both deterministic and randomized algorithms for this problem. After showing that the single-color upper bounds of 1/2 and 1 − 1/e carry over to our biobjective setting as well, we show that a very natural, albeit hard to analyze, deterministic algorithm achieves a competitive ratio of 0.343. We then show how a natural randomized algorithm matches this ratio through a simpler analysis, and how a clever, and perhaps not immediately obvious, generalization of Ranking can beat the 1/2 bound and achieve a competitive ratio of 0.573, coming close to the upper bound of 0.63.
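    For context, the classic Ranking algorithm of Karp, Vazirani and Vazirani, which this line of work generalizes, can be sketched as follows. This is a minimal single-color illustration on a hypothetical instance, not the paper's biobjective variant.

    ```python
    import random

    def ranking(offline, edges, arrivals, rng=random.Random(0)):
        """Classic KVV Ranking: fix one random permutation of the offline
        vertices up front; match each arriving online vertex to its
        highest-ranked free neighbor. (1 - 1/e)-competitive in expectation."""
        order = rng.sample(offline, len(offline))
        rank = {u: i for i, u in enumerate(order)}
        matched = {}  # offline vertex -> online vertex
        for v in arrivals:
            free = [u for u in edges.get(v, []) if u not in matched]
            if free:
                matched[min(free, key=rank.__getitem__)] = v
        return matched

    # Hypothetical instance: v1 only likes u1, v2 likes both.
    print(ranking(["u1", "u2"], {"v1": ["u1"], "v2": ["u1", "u2"]},
                  ["v1", "v2"]))  # → {'u1': 'v1', 'u2': 'v2'}
    ```
    
    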
    Online Selection of Diverse Results
    Debmalya Panigrahi
    Atish Das Sarma
    Proceedings of the 5th ACM international Conference on Web Search and Data Mining (2012), pp. 263-272
    Abstract: The phenomenal growth in the volume of information easily accessible via web-based services has made it essential for service providers to give users personalized, representative summaries of that information. Further, online commercial services, including social networking and micro-blogging websites, e-commerce portals, and leisure and entertainment websites, recommend content to users that is interesting and simultaneously diverse along many different axes, such as topic and geographic specificity. The key algorithmic question in all these applications is the generation of a succinct, representative, and relevant summary from a large stream of data coming from a variety of sources. In this paper, we formally model this optimization problem, identify its key structural characteristics, and use these observations to design an extremely scalable and efficient algorithm. We analyze the algorithm theoretically and show that it always produces a nearly optimal solution. In addition, we perform large-scale experiments on both real-world and synthetically generated datasets, which confirm that our algorithm performs even better in practice than its analytical guarantees suggest, and that it outperforms other candidate algorithms for the problem by a wide margin.
    Abstract: We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph; the vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem, for which Karp, Vazirani and Vazirani gave an optimal (1 − 1/e)-competitive algorithm in their seminal work [KVV90]. Our main result is an optimal (1 − 1/e)-competitive randomized algorithm for general vertex weights. We use random perturbations of the weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [KVV90] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [MSVV05] in the case where an agent makes the same bid for any desired item, even if the bid is comparable to the agent's budget, complementing the results of [MSVV05, BJN07], which apply when the bids are much smaller than the budgets.
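    The multiplicative perturbation described above can be sketched roughly as follows. To my understanding the paper scales each weight by 1 − e^(x − 1) for a uniform random seed x, but treat that exact form, and the instance data, as assumptions of this illustration rather than a definitive implementation.

    ```python
    import math
    import random

    def perturbed_greedy(weights, edges, arrivals, rng=random.Random(1)):
        """Vertex-weighted online matching sketch: each offline vertex u
        draws x_u ~ U[0,1] once and its weight is scaled by 1 - e^(x_u - 1);
        each arriving online vertex is matched to the free neighbor with
        the largest perturbed weight."""
        perturbed = {u: w * (1 - math.exp(rng.random() - 1))
                     for u, w in weights.items()}
        matched = {}  # offline vertex -> online vertex
        for v in arrivals:
            free = [u for u in edges.get(v, []) if u not in matched]
            if free:
                matched[max(free, key=perturbed.get)] = v
        return matched

    # Hypothetical instance: weights 5.0 and 1.0 on the offline side.
    print(perturbed_greedy({"u1": 5.0, "u2": 1.0},
                           {"v1": ["u1"], "v2": ["u1", "u2"]}, ["v1", "v2"]))
    ```

    When all weights are equal, the perturbation reduces to a random tie-breaking order, recovering the flavor of the original [KVV90] algorithm.
    
    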
    Achieving anonymity via clustering
    Tomás Feder
    Krishnaram Kenthapadi
    Samir Khuller
    Rina Panigrahy
    Dilys Thomas
    ACM Transactions on Algorithms, vol. 6 (2010), 49:1-49:19
    Abstract: Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach to de-identifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code. Sweeney proposed the k-anonymity model for privacy, in which non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k − 1 other records having exactly the same values for the quasi-identifiers. We propose a new method for anonymizing data records: the quasi-identifiers of the data records are first clustered, and then the cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general than k-anonymity, since we have a much larger choice of cluster centers; in many cases, it lets us release considerably more information without compromising privacy. We also provide constant-factor approximation algorithms for computing such a clustering. These are the first algorithms for the anonymization problem whose performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an epsilon fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful.
Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.
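    The publishing scheme described above, clustering quasi-identifiers with a minimum cluster size and releasing only cluster centers, can be illustrated with a toy sketch. This naive nearest-neighbor greedy is NOT the paper's constant-factor algorithm, and the one-dimensional records below are hypothetical.

    ```python
    # Toy anonymization via clustering: every published center summarizes
    # at least r records, so no individual record is released.

    def anonymize(points, r):
        """points: list of quasi-identifier values; returns (center, size)."""
        rest = list(range(len(points)))
        clusters = []
        while len(rest) >= r:
            seed = rest.pop(0)
            # Gather the r - 1 unclustered points closest to the seed.
            rest.sort(key=lambda i: abs(points[i] - points[seed]))
            clusters.append([seed] + rest[:r - 1])
            rest = rest[r - 1:]
        if rest and clusters:  # fold leftovers into the last cluster
            clusters[-1].extend(rest)
        return [(round(sum(points[i] for i in c) / len(c), 2), len(c))
                for c in clusters]

    # One-dimensional quasi-identifier (e.g. age), minimum cluster size 3.
    print(anonymize([21, 22, 23, 40, 41, 43, 44], r=3))  # → [(22.0, 3), (42.0, 4)]
    ```
    
    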
    Efficiency of (Revenue-)Optimal Mechanisms
    Proceedings of the 10th ACM Conference on Electronic Commerce (2009)
    Abstract: We compare the expected efficiency of revenue-maximizing (or optimal) mechanisms with that of efficiency-maximizing ones. We show that the efficiency of the revenue-maximizing mechanism for selling a single item with k + log_{e/(e−1)} k + 1 bidders is at least as much as the efficiency of the efficiency-maximizing mechanism with k bidders, when bidder valuations are drawn i.i.d. from a monotone hazard rate (MHR) distribution. Surprisingly, we also show that this bound is tight within a small additive constant of 5.7. In other words, Θ(log k) extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism, while o(log k) do not. This is in contrast to the result of Bulow and Klemperer comparing the revenue of the two mechanisms, where only one extra bidder suffices: more precisely, they show that the revenue of the efficiency-maximizing mechanism with k + 1 bidders is no less than the revenue of the revenue-maximizing mechanism with k bidders. We extend our result to the case of selling t identical items and show that 2.2 log k + tΘ(log log k) extra bidders suffice for the revenue-maximizing mechanism to match the efficiency of the efficiency-maximizing mechanism. To prove our results, we give a classification of MHR distributions and identify a family of MHR distributions such that, for each class in our classification, there is a member of this family that is pointwise lower than every distribution in that class. This lets us prove interesting structural theorems about distributions with monotone hazard rates.
    Abstract: In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice. There is a rich body of work on bipartite matching markets that builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik; this line of research offers deep insights into the structure of stable outcomes in such markets and their incentive properties. In this paper, we model advertising auctions in terms of an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) auction can be interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences, but our model admits much richer bidders and preferences. We prove that in our model the existence of a stable matching is guaranteed and that, under a non-degeneracy assumption, a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, correctly implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences. Our main technical contributions are the existence of bidder-optimal matchings and the strategyproofness of the resulting mechanism, both proved by induction on the progress of the matching algorithm.
    Sponsored Search Auctions for Markovian Users
    S. Muthukrishnan
    Fourth Workshop on Ad Auctions; Workshop on Internet and Network Economics (WINE). (2008)
    Abstract: Sponsored search involves running an auction among advertisers who bid in order to have their ads shown next to search results for specific keywords. The most popular auction for sponsored search is the "Generalized Second Price" (GSP) auction, in which advertisers are assigned to slots in decreasing order of their score, defined as the product of their bid and click-through rate. One of the main advantages of this simple ranking is that the bidding strategy is intuitive: to move up to a more prominent slot on the results page, bid more. This makes it simple for advertisers to strategize. However, this ranking only maximizes efficiency under the assumption that the probability of a user clicking on an ad is independent of the other ads shown on the page. We study a Markovian user model that does not make this assumption. Under this model, the most efficient assignment is no longer a simple ranking function as in GSP. We show that the optimal assignment can be found efficiently (even in near-linear time). As a result of the more sophisticated structure of the optimal assignment, bidding dynamics become more complex: indeed, it is no longer clear that bidding more moves one higher on the page. Our main technical result is that, despite the added complexity of the bidding dynamics, the optimal assignment has the property that ad position is still monotone in the bid. Thus, even in this richer user model, our mechanism retains the core bidding dynamics of the GSP auction that make it useful for advertisers.
    Theory research at Google
    Nir Ailon
    Florin Constantin
    Eyal Even-Dar
    Gereon Frahling
    Monika R. Henzinger
    S. Muthukrishnan
    Noam Nisan
    Anastasios Sidiropoulos
    SIGACT News, vol. 39 (2008), pp. 10-28
    Bidding to the Top: VCG and Equilibria of Position-Based Auctions
    S. Muthukrishnan
    Proceedings of the Fourth Workshop on Approximation and Online Algorithms (WAOA) (2006)
    Abstract: Many popular search engines run an auction to determine the placement of advertisements next to search results. Current auctions at Google and Yahoo! let advertisers specify a single amount as their bid in the auction. This bid is interpreted as the maximum amount the advertiser is willing to pay per click on its ad. When search queries arrive, the bids are used to rank the ads linearly on the search result page. Advertisers seek to be high on the list, as this attracts more attention and more clicks. The advertisers pay for each user who clicks on their ad, and the amount charged depends on the bids of all the advertisers participating in the auction. We study the problem of ranking ads and the associated pricing mechanisms when the advertisers not only specify a bid, but additionally express their preferences over positions in the list of ads. In particular, we study prefix position auctions, where advertiser i can specify that she is interested only in the top κ_i positions. We present a simple allocation and pricing mechanism that generalizes the desirable properties of current auctions that do not have position constraints. In addition, we show that our auction has an envy-free (or symmetric) Nash equilibrium with the same allocation and pricing as the well-known truthful Vickrey-Clarke-Groves (VCG) auction. Furthermore, we show that this equilibrium is the best such equilibrium for the advertisers in terms of the profit made by each advertiser. We also discuss other position-based auctions.
    Truthful auctions for pricing search keywords
    Ashish Goel
    Rajeev Motwani
    ACM Conference on Electronic Commerce (2006), pp. 1-7
    Abstract: We present a truthful auction for pricing advertising slots on a web page, assuming that advertisements for different merchants must be ranked in decreasing order of their (weighted) bids. This captures both the Overture model, where bidders are ranked in order of their submitted bids, and the Google model, where bidders are ranked in order of the expected revenue (or utility) that their advertisement generates. Assuming separable click-through rates, we prove revenue equivalence between our auction and the non-truthful next-price auctions currently in use.
    Knapsack auctions
    Jason D. Hartline
    SODA (2006), pp. 1083-1092
    Abstract: We consider a game-theoretic knapsack problem with applications to auctions for selling advertisements on Internet search engines. Consider n agents, each wishing to place an object in the knapsack. Each agent has a private valuation for having its object in the knapsack, and each object has a publicly known size. For this setting, we consider the design of auctions in which agents have an incentive to truthfully reveal their private valuations. Following the framework of Goldberg et al., we look to design an auction that obtains a constant fraction of the profit obtainable by a natural optimal pricing algorithm that knows the agents' valuations and object sizes. We give an auction that obtains a constant-factor approximation in the non-trivial special case where the knapsack has unlimited capacity. We then reduce the limited-capacity version of the problem to the unlimited-capacity version via an approximately efficient auction (i.e., one that approximately maximizes the social welfare). This reduction follows from generalizable principles.
    Achieving Anonymity via Clustering
    Tomás Feder
    Krishnaram Kenthapadi
    Samir Khuller
    Rina Panigrahy
    Dilys Thomas
    Proceedings of the 25th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS) (2006), pp. 153-162
    On the Streaming Model Augmented with a Sorting Primitive
    Mayur Datar
    Sridhar Rajagopalan
    Matthias Ruhl
    FOCS (2004), pp. 540-549
    The load rebalancing problem
    Rajeev Motwani
    J. Algorithms, vol. 60 (2006), pp. 42-59
    Algorithms for the Database Layout Problem
    Tomás Feder
    Rajeev Motwani
    Rina Panigrahy
    ICDT (2005), pp. 189-203
    Derandomization of auctions
    Amos Fiat
    Andrew V. Goldberg
    Jason D. Hartline
    Nicole Immorlica
    Madhu Sudan
    STOC (2005)
    Anonymizing Tables
    Tomás Feder
    Krishnaram Kenthapadi
    Rajeev Motwani
    Rina Panigrahy
    Dilys Thomas
    ICDT (2005), pp. 246-258
    Two Can Keep A Secret: A Distributed Architecture for Secure Database Services
    Mayank Bawa
    Prasanna Ganesan
    Hector Garcia-Molina
    Krishnaram Kenthapadi
    Rajeev Motwani
    Utkarsh Srivastava
    Dilys Thomas
    Ying Xu
    CIDR (2005)
    On Identifying Stable Ways to Configure Systems
    Mayur Datar
    Nina Mishra
    Rajeev Motwani
    ICAC (2004)
    Complexities for generalized models of self-assembly
    Michael H. Goldwasser
    Ming-Yang Kao
    Robert T. Schweller
    SODA (2004)
    Vision Paper: Enabling Privacy for the Paranoids
    Mayank Bawa
    Prasanna Ganesan
    Hector Garcia-Molina
    Krishnaram Kenthapadi
    Nina Mishra
    Rajeev Motwani
    Utkarsh Srivastava
    Dilys Thomas
    Jennifer Widom
    Ying Xu
    VLDB (2004)
    Secure Computation of the kth-Ranked Element
    Nina Mishra
    Benny Pinkas
    EUROCRYPT (2004)
    Algorithms for Multi-product Pricing
    Tomás Feder
    Rajeev Motwani
    ICALP (2004), pp. 72-83
    Switch Scheduling via Randomized Edge Coloring
    Rajeev Motwani
    Devavrat Shah
    FOCS (2003)
    The load rebalancing problem
    Rajeev Motwani
    SPAA (2003)