Sandeep Tata

Sandeep Tata is a Staff Software Engineer in the Strategic Technologies group in Google Research. He currently leads a team focused on information extraction using machine learning. Prior to Google, Sandeep was a researcher at IBM's Almaden Research Center in the Data Management group. His interests lie broadly at the intersection of large-scale data management and applied machine learning. He earned his PhD in Computer Science from the University of Michigan (2007).
Authored Publications
    VRDU: A Benchmark for Visually-rich Document Understanding
    Zilong Wang
    Wei Wei
    2023 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
    Abstract: Understanding visually-rich business documents to extract structured data and automate business workflows has been receiving attention in both academia and industry. Although recent multi-modal language models have achieved impressive results, we argue that existing benchmarks do not reflect the complexity of real documents seen in industry and are therefore not suitable for measuring progress in practical settings. In this work, we identify the desiderata for a more comprehensive benchmark and propose one we call VRDU, for Visually Rich Document Understanding. VRDU contains two datasets that represent several challenges: rich schemas including diverse data types as well as nested entities; complex templates including tables and multi-column layouts; and diversity of layouts within a single document type. We design few-shot and conventional experiment settings along with a carefully designed matching algorithm to evaluate extraction results. We report the performance of strong baselines and observe three conclusions: (1) generalizing to new templates from a document type is still very challenging, (2) few-shot performance continues to have a lot of headroom, and (3) models struggle with nested repeated fields such as line items in an invoice. We plan to open-source the benchmark and the evaluation toolkit, and we hope they help inspire and guide future research in this challenging area.
    Abstract: Comparative decisions, such as picking between two cars or deciding between two hiking trails, require users to visit multiple webpages and contrast the choices along relevant aspects. Given the impressive capabilities of pre-trained large language models, we ask whether they can help automate such analysis. We refer to this task as extractive aspect-based contrastive summarization, which involves constructing a structured summary that compares the choices along relevant aspects. In this paper, we propose a novel method called STRUM for this task that can generalize across domains without requiring any human-written summaries or a fixed aspect list as supervision. Given a set of relevant input webpages, STRUM solves this problem using two pre-trained T5-based large language models: the first fine-tuned for aspect and value extraction, and the second fine-tuned for natural language inference. We showcase the abilities of our method across different domains, identify shortcomings, and discuss questions that we believe will be critical in this new line of research.
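To make the two-model setup concrete, here is a minimal sketch (not the authors' code) of how an aspect/value extractor and an NLI filter could be composed into a contrastive summary; both model calls are stubbed out, standing in for the fine-tuned T5 models the abstract describes.

```python
# Minimal sketch (not the authors' code) of composing two models in a
# STRUM-like pipeline: one extracts (aspect, value) pairs from source
# sentences, the other filters them with natural language inference.
# Both model calls are placeholders for fine-tuned T5-style models.
from collections import defaultdict

def extract_aspect_values(sentence):
    """Placeholder for the aspect/value extraction model."""
    # e.g. "The trail climbs 2,000 feet." -> [("elevation gain", "2,000 feet")]
    return []

def is_entailed(premise, hypothesis):
    """Placeholder for the NLI model: does the premise support the claim?"""
    return True

def contrastive_summary(pages_by_choice):
    """pages_by_choice: {choice: [sentence, ...]} -> {aspect: {choice: value}}"""
    table = defaultdict(dict)
    for choice, sentences in pages_by_choice.items():
        for sentence in sentences:
            for aspect, value in extract_aspect_values(sentence):
                claim = f"The {choice} has {aspect}: {value}."
                if is_entailed(sentence, claim):
                    table[aspect][choice] = value
    # Keep only aspects covered for more than one choice, so the summary is contrastive.
    return {aspect: values for aspect, values in table.items() if len(values) > 1}
```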
    Abstract: Building automatic extraction models for visually rich documents like invoices, receipts, bills, tax forms, etc. has received significant attention lately. A key bottleneck in developing extraction models for new document types is the cost of acquiring the several thousand high-quality labeled documents that are needed to train a model with acceptable accuracy. In this paper, we propose selective labeling as a solution to this problem. The key insight is to simplify the labeling task to provide “yes/no” labels for candidate extractions predicted by a model trained on partially labeled documents. We combine this with a custom active learning strategy to find the predictions that the model is most uncertain about. We show through experiments on document types drawn from 3 different domains that selective labeling can reduce the cost of acquiring labeled data by 10× with a negligible loss in accuracy.
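The selective-labeling idea lends itself to a short illustration. The sketch below is not the Google system; it uses generic uncertainty sampling as a stand-in for the custom active learning strategy in the paper, picking the candidate extractions the model is least sure about and turning them into yes/no questions for annotators.

```python
# Minimal sketch of selective labeling: ask annotators for yes/no verdicts on
# the candidate extractions the current model is least certain about, instead
# of labeling documents from scratch. Candidates carry a model confidence score.

def select_for_yes_no_labeling(candidates, budget):
    """candidates: list of (doc_id, field, value, score in [0, 1]).

    Returns the `budget` candidates whose scores are closest to 0.5,
    i.e. the ones the model is most uncertain about.
    """
    by_uncertainty = sorted(candidates, key=lambda c: abs(c[3] - 0.5))
    return by_uncertainty[:budget]

def to_labeling_tasks(selected):
    """Turn each uncertain candidate into a simple yes/no question."""
    return [
        {"question": f"Is '{value}' the correct {field} for document {doc_id}?",
         "doc_id": doc_id, "field": field, "value": value}
        for doc_id, field, value, _ in selected
    ]
```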
    Abstract: Given a web page, extracting an object along with various attributes of interest (e.g. price, publisher, author, and genre for a book) can facilitate a variety of downstream applications such as large-scale knowledge base construction, e-commerce product search, and personalized recommendation. Prior approaches have either relied on computationally expensive visual feature engineering or required large amounts of training data to reach acceptable precision. In this paper, we propose a novel method, LeArNing TransfErable node RepresentatioNs for Attribute Extraction (LANTERN), to tackle the problem. We model the problem as a tree node tagging task. The key insight is to learn a contextual representation for each node in the DOM tree, where the context explicitly takes into account the tree structure of the neighborhood around the node. Experiments on the public SWDE dataset show that LANTERN outperforms the previous state-of-the-art (SOTA) by 1.44% (F1 score) with a dramatically simpler model architecture. Furthermore, we report that utilizing data from a different domain (for instance, using training data about web pages with cars to extract book objects) is surprisingly useful and helps beat the SOTA by a further 1.37%.
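As a rough illustration of node-level context in a DOM tree, the sketch below (not LANTERN itself) walks an HTML page with the standard library and records, for each text node, its tag, parent, and root-to-node path; a real system would learn transferable representations over much richer neighborhood features.

```python
# Minimal sketch of attaching tree context to each DOM text node before
# feeding nodes to a tagging model. Assumes reasonably well-formed HTML.
from html.parser import HTMLParser

class NodeContextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []   # open-tag path from the root
        self.nodes = []   # one record per text node

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        # Pop if the closing tag matches the most recent open tag.
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.nodes.append({
                "text": text,
                "tag": self.stack[-1] if self.stack else None,
                "parent": self.stack[-2] if len(self.stack) > 1 else None,
                "path": "/".join(self.stack),
            })

parser = NodeContextExtractor()
parser.feed("<html><body><div><h1>The Hobbit</h1><span>J. R. R. Tolkien</span></div></body></html>")
print(parser.nodes)  # each text node with its tag, parent, and root-to-node path
```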
    Abstract: Extracting structured information from templatic documents is an important problem with the potential to automate many real-world business workflows such as payment, procurement, and payroll. The core challenge is that such documents can be laid out in virtually infinitely different ways. A good solution to this problem is one that generalizes well not only to known templates such as invoices from a known vendor, but also to unseen ones. We developed a system called Glean to tackle this problem. Given a target schema for a document type and some labeled documents of that type, Glean uses machine learning to automatically extract structured information from other documents of that type. In this paper, we describe the overall architecture of Glean, and discuss three key data management challenges: 1) managing the quality of ground truth data, 2) generating training data for the machine learning model using labeled documents, and 3) building tools that help a developer rapidly build and improve a model for a given document type. Through empirical studies on a real-world dataset, we show that these data management techniques allow us to train a model that is over 5 F1 points better than the exact same model architecture without the techniques we describe. We argue that for such information-extraction problems, designing abstractions that carefully manage the training data is at least as important as choosing a good model architecture.
    Abstract: Automating information extraction from form-like documents at scale is a pressing need due to its potential impact on automating business workflows across many industries like financial services, insurance, and healthcare. The key challenge is that form-like documents in these business workflows can be laid out in virtually infinitely many ways; hence, a good solution to this problem should generalize to documents with unseen layouts and languages. A solution to this problem requires a holistic understanding of both the textual segments and the visual cues within a document, which is non-trivial. While the natural language processing and computer vision communities are starting to tackle this problem, there has not been much focus on (1) data efficiency, and (2) the ability to generalize across different document types and languages. In this paper, we show that when we have only a small number of labeled documents for training (~50), a straightforward transfer learning approach from a considerably structurally different, larger labeled corpus yields up to a 27 F1 point improvement over simply training on the small corpus in the target domain. We improve on this with a simple multi-domain transfer learning approach, which is currently in production use, and show that it yields up to a further 8 F1 point improvement. We make the case that data efficiency is critical to enable information extraction systems to scale to handle hundreds of different document types, and that learning good representations is critical to accomplishing this.
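A minimal sketch of the transfer-learning recipe described above, assuming a toy candidate-scoring model and synthetic tensors rather than the production architecture: pre-train on a large corpus of other document types, then fine-tune on the small target-domain corpus.

```python
# Minimal sketch (not the production system): train an extraction model on a
# large labeled corpus from other document types, then fine-tune it on ~50
# labeled documents of the target type instead of training from scratch.
# The tiny MLP, feature dimensions, and random data are illustrative placeholders.
import torch
from torch import nn

def make_model(feature_dim=64):
    return nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))

def train(model, features, labels, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features).squeeze(-1), labels)
        loss.backward()
        opt.step()
    return model

# 1) Pre-train on a large corpus of other (structurally different) document types.
source_x, source_y = torch.randn(5000, 64), torch.randint(0, 2, (5000,)).float()
model = train(make_model(), source_x, source_y, epochs=20, lr=1e-3)

# 2) Fine-tune on the small target-domain corpus (candidates from ~50 documents),
#    typically with a lower learning rate so the pre-trained weights are reused.
target_x, target_y = torch.randn(200, 64), torch.randint(0, 2, (200,)).float()
model = train(model, target_x, target_y, epochs=10, lr=1e-4)
```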
    Migrating a Privacy-Safe Information Extraction System to a Software 2.0 Design
    Nguyen Ha Vo
    Proceedings of the 10th Annual Conference on Innovative Data Systems Research (2020)
    Abstract: This paper presents a case study of migrating a privacy-safe information extraction system for Gmail from a traditional rule-based architecture to a machine-learned Software 2.0 architecture. The key idea is to use the extractions from the existing rule-based system as training data to learn ML models that in turn replace all the machinery of the rule-based system. The resulting system a) delivers better precision and recall, b) is significantly smaller in terms of lines of code, c) has been easier to maintain and improve, and d) has opened up the possibility of leveraging ML advances to build a cross-language extraction system even though our original training data was only in English. We describe the challenges encountered during this migration around the generation and management of training data and the evaluation of models, and report on the many traditional "Software 1.0" components we built to address them.
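The core migration idea, using rule-based extractions as training labels for an ML model, can be sketched in a few lines. The example below is an illustrative stand-in (a toy rule, corpus, and scikit-learn classifier), not the Gmail system.

```python
# Minimal sketch of reusing an existing rule-based extractor as a labeling
# function, and training an ML model on the labels it produces so the model
# can eventually replace the rules.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def rule_based_label(text):
    """Legacy rule: does this snippet contain an order-confirmation pattern?"""
    return 1 if re.search(r"order\s+#\d+", text, re.IGNORECASE) else 0

corpus = [
    "Your order #12345 has shipped.",
    "Lunch on Friday?",
    "Thanks for your purchase, order #98765 confirmed.",
    "Team offsite agenda attached.",
]
labels = [rule_based_label(t) for t in corpus]   # rules act as the teacher

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(corpus, labels)                        # the learned model replaces the rules
print(model.predict(["We received your order #555"]))
```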
    Abstract: Extracting structured data from HTML documents is a long-studied problem with a broad range of applications, such as augmenting knowledge bases, supporting faceted search, and providing domain-specific experiences for key verticals like shopping and movies. Previous approaches have either required a small number of examples for each target site or relied on carefully handcrafted heuristics built over visual renderings of websites. In this paper, we present a novel two-stage neural approach, named FreeDOM, which overcomes both these limitations. The first stage learns a representation for each DOM node on the page by combining both the text and markup information. The second stage captures longer-range distance and semantic relatedness using a relational neural network. By combining these stages, FreeDOM is able to generalize to unseen sites after training on a small number of seed sites from a vertical, without requiring expensive hand-crafted features over visual renderings of the page. Through experiments on a public dataset with 8 different verticals, we show that FreeDOM beats the previous state of the art by nearly 3.7 F1 points on average without requiring features over rendered pages or expensive hand-crafted features.
    Improving Recommendation Quality at Google Drive
    Suming Jeremiah Chen
    Zachary Teal Wilson
    Brian Lee Calaci
    Ryan Lee Evans
    Sean Robert Abraham
    26TH ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) (2020)
    Abstract: Quick Access is a machine-learned system in Google Drive that predicts which files a user wants to open. Adding Quick Access recommendations to the Drive homepage cut the amount of time users spend locating their files in half. Aggregated over the ~1 billion users of Drive, the time saved adds up to ~1,000 work weeks every day. In this paper, we discuss both the challenges of iteratively improving the quality of a personal recommendation system and the variety of approaches that we took in order to improve this feature. We explored different deep network architectures, novel modeling techniques, additional data sources, and the effects of latency and biases in the UX. We share both pitfalls and successes in our attempts to improve this product, and also discuss how we scaled and managed the complexity of the system. We believe that these insights will be especially useful to those who are working with private corpora as well as those who are building large-scale production recommendation systems.
    Representation Learning for Information Extraction from Form-like Documents
    Bodhisattwa Majumder
    Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pp. 6495-6504
    Abstract: We propose a novel approach using representation learning for tackling the problem of extracting structured information from form-like document images. We present an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document. These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases.
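As an illustration of type-driven candidate generation, the sketch below (not the paper's system) uses a simple date regex to propose candidates for a date field and attaches the neighboring words that a scoring model would encode; the regex and window size are arbitrary choices.

```python
# Minimal sketch of the candidate-generation step: use the known type of a
# target field (e.g. a date for "invoice_date") to propose candidate spans,
# and record the neighboring words a downstream scoring model would encode.
import re

DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def date_candidates(words, window=3):
    """words: document tokens in reading order -> candidate dicts with neighbors."""
    candidates = []
    for i, word in enumerate(words):
        if DATE_PATTERN.fullmatch(word):
            candidates.append({
                "value": word,
                "left_neighbors": words[max(0, i - window):i],
                "right_neighbors": words[i + 1:i + 1 + window],
            })
    return candidates

words = "Invoice Date : 03/14/2023 Due Date : 04/13/2023 Total $ 1,200.00".split()
for cand in date_candidates(words):
    print(cand)  # a scoring model ranks these candidates for each date field
```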
    Abstract: Consider a sequential active learning problem where, at each round, an agent selects a batch of unlabeled data points, queries their labels, and updates a binary classifier. While there exists a rich body of work on active learning in this general form, in this paper we focus on problems with two distinguishing characteristics: severe class imbalance (skew) and small amounts of training data. Both of these problems occur with surprising frequency in many web applications. For instance, detecting offensive or sensitive content in online communities (pornography, violence, and hate speech) is receiving enormous attention from industry as well as research communities. Such problems have both the characteristics we describe: the vast majority of content is not offensive, so the number of positive examples is orders of magnitude smaller than the number of negative examples. Further, there is usually only a small amount of initial training data available when building machine-learned models to solve such problems. To address both these issues, we propose a hybrid active learning algorithm (HAL) that balances exploiting the knowledge available through the currently labeled training examples with exploring the large amount of unlabeled data available. Through simulation results, we show that HAL makes significantly better choices about which points to label than strong baselines like margin sampling. Classifiers trained on the examples selected for labeling by HAL easily outperform the baselines on target metrics (like recall at a high precision threshold and area under the precision-recall curve) given the same budget for labeling examples. We believe HAL offers a simple, intuitive, and computationally tractable way to structure active learning that can significantly amplify the impact (or alternatively, reduce the cost) of human labeling for a wide range of web applications.
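The exploit/explore balance in HAL can be sketched as a batch selector that splits the labeling budget between margin sampling and random exploration. The code below is a simplified illustration of that idea, not the paper's exact algorithm; the 50/50 split is an arbitrary parameter.

```python
# Minimal sketch of a hybrid active-learning batch selector: spend part of the
# labeling budget exploiting model uncertainty (margin sampling) and the rest
# exploring the unlabeled pool at random, which helps when positives are rare.
import random

def hal_select_batch(scores, batch_size, explore_fraction=0.5, seed=0):
    """scores: {example_id: predicted probability of the positive class}."""
    rng = random.Random(seed)
    n_explore = int(batch_size * explore_fraction)
    n_exploit = batch_size - n_explore

    # Exploit: examples closest to the decision boundary.
    by_margin = sorted(scores, key=lambda x: abs(scores[x] - 0.5))
    exploit = by_margin[:n_exploit]
    exploit_set = set(exploit)

    # Explore: random sample from the rest, to surface rare positives the
    # current model confidently (and perhaps wrongly) scores as negative.
    remaining = [x for x in scores if x not in exploit_set]
    explore = rng.sample(remaining, min(n_explore, len(remaining)))
    return exploit + explore
```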
    Abstract: Machine learning (ML) is a critical component of several novel applications and intelligent features in existing applications. Recent advances in deep learning have fundamentally advanced the state of the art in several areas of research and made it easier to apply ML to a wide variety of problems. However, applied ML projects in industry, where the objective is to build and improve a production feature that uses ML, continue to be complicated and are often bottlenecked by data management challenges. In this paper, we describe the design and implementation of a machine learning platform for building learned ranking services that leverages key ideas from data management. The platform allows engineers to focus on application-specific modeling and simplifies the key tasks of 1) gathering training data, 2) cleaning, validating, and monitoring data quality, 3) training and evaluating models, 4) managing the feature lifecycle, and 5) providing infrastructure for A/B tests. We describe key design choices anchored around the core idea of optimizing for experiment velocity. We describe lessons learned from applications built on this platform that have been in production serving hundreds of millions of users for over a year. Finally, we identify two key components of the platform where data management research can have a major impact. We believe such platforms have the potential to accelerate and simplify ML applications the same way data warehouses radically simplified complex reporting applications.
    RiSER: Learning Better Representations for Richly Structured Emails
    Furkan Kocayusufoğlu
    Nguyen Ha Vo
    Proceedings of the 2019 World Wide Web Conference, pp. 886-895
    Abstract: Recent studies show that an overwhelming majority of emails are machine-generated and sent by businesses to consumers. Many large email services are interested in extracting structured data from such emails to enable intelligent assistants. This allows experiences like being able to answer questions such as "What is the address of my hotel in New York?" or "When does my flight leave?". A high-quality email classifier is a critical piece of such a system. In this paper, we argue that the rich formatting used in business-to-consumer emails contains valuable information that can be used to learn better representations. Most existing methods focus only on textual content and ignore the rich HTML structure of emails. We introduce RiSER (Richly Structured Email Representation), an approach for incorporating both the structure and content of emails. RiSER projects the email into a vector representation by jointly encoding the HTML structure and the words in the email. We then use this representation to train a classifier. To our knowledge, this is the first description of a neural technique for combining formatting information with content to learn improved representations for richly formatted emails. Experimenting with a large corpus of emails received by users of Gmail, we show that RiSER outperforms strong attention-based LSTM baselines. We expect that these benefits will extend to other corpora with richly formatted documents. We also demonstrate with examples how leveraging HTML structure leads to better predictions.
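One way to picture jointly encoding structure and content is to embed each word together with the HTML tag it appears under. The sketch below is not RiSER itself (dimensions, vocabulary sizes, and the single-tag-per-word simplification are illustrative); it concatenates word and tag embeddings, runs them through an LSTM, and classifies the email from the final state.

```python
# Minimal sketch of jointly encoding words and the HTML tags they appear under:
# embed both, concatenate, run an LSTM, and classify the email from the final state.
import torch
from torch import nn

class StructuredEmailEncoder(nn.Module):
    def __init__(self, n_words=10000, n_tags=50, dim=64, n_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.tag_emb = nn.Embedding(n_tags, dim)    # HTML tag enclosing each word
        self.lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, word_ids, tag_ids):
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.classifier(h[-1])               # one logit vector per email

# One toy "email": 6 words, each paired with the id of its enclosing HTML tag.
word_ids = torch.randint(0, 10000, (1, 6))
tag_ids = torch.randint(0, 50, (1, 6))
logits = StructuredEmailEncoder()(word_ids, tag_ids)
print(logits.shape)  # torch.Size([1, 2])
```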
    Abstract: Most consumer email in the world is machine-generated communication from a business to a human. Understanding the underlying templates that are used to instantiate these emails is a key step to enabling a variety of intelligent experiences. In this paper, we present the first description of the template-induction problem in an online setting for a planet-scale email system. While previous work has addressed the problem of discovering these templates using an offline batch job (perhaps architected as a MapReduce), discovering these templates online has several advantages. We present the design of an online template induction system and describe the design choices we had to make. The resulting system handles online template induction over a stream of several billion emails a day. With the new system, new incoming email can be identified as belonging to a known template within minutes of that template being discovered, compared to a delay of several days with the previous batch approach. Further, the online system has a resource consumption footprint that is 10x smaller than the batch approach. We also report the surprising lesson that conventional stream processing systems did not provide a good framework on which to build this system. We hope that the lessons from this system help designers of future stream processing systems accommodate a broader range of applications like online template induction.
    Abstract: A vast majority of the emails received by people today are machine-generated by businesses communicating with consumers. While some emails originate as a result of a transaction (e.g., hotel or restaurant reservation confirmations, online purchase receipts, shipping notifications, etc.), a large fraction are commercial emails promoting an offer (a special sale, free shipping, available for a limited time, etc.). The sheer number of these promotional emails makes it difficult for users to read them all and decide which ones are actually interesting and actionable. In this paper, we tackle the problem of extracting information from commercial emails promoting an offer to the user. This information enables an email platform to build several new experiences that can unlock the value in these emails without the user having to navigate and read all of them. For instance, we can highlight offers that are expiring soon, or display a notification when there is an unexpired offer from a merchant whose store the user's phone recognizes they are visiting. A key challenge in extracting information from such commercial emails is that they are often image-rich and contain very little text. Training a machine learning (ML) model on a rendered image-rich email and applying it to each incoming email can be prohibitively expensive. In this paper, we describe a cost-effective approach for extracting signals from both the text and image content of commercial emails in the context of a free email platform that serves over a billion users around the world. The key insight is to leverage the template structure of emails and use off-the-shelf OCR techniques to obtain the text from images to augment the existing text features offline. Compared to a text-only approach, we show that we are able to identify 9.12% more email templates, corresponding to ~5% more emails being identified as offers. Interestingly, our analysis shows that this 5% improvement in coverage is across the board, irrespective of whether the emails were sent by large merchants or small local merchants, allowing us to deliver an improved experience for everyone.
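The offline-OCR insight can be sketched as caching OCR output per template rather than per email. The snippet below is an illustrative sketch, not the production pipeline; it assumes the pytesseract binding to the Tesseract OCR engine and Pillow.

```python
# Minimal sketch of the key idea: run OCR once per email *template* offline,
# cache the recovered image text, and use it to augment the text features of
# every email instantiated from that template (no per-email OCR or rendering).
import pytesseract
from PIL import Image

_template_image_text = {}   # template_id -> OCR'd text, computed offline

def ocr_template_images(template_id, image_paths):
    """Offline job: OCR the images shared by all emails of a template."""
    texts = [pytesseract.image_to_string(Image.open(p)) for p in image_paths]
    _template_image_text[template_id] = " ".join(texts)

def email_features(template_id, body_text):
    """Online path: cheap string concatenation of body text and cached image text."""
    return body_text + " " + _template_image_text.get(template_id, "")
```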
    Abstract: Extracting structured data from emails can enable several assistive experiences, such as reminding the user when a bill payment is due, answering queries about the departure time of a booked flight, or proactively surfacing an emailed discount coupon while the user is at that store. This paper presents Juicer, a system for extracting information from email that serves over a billion Gmail users daily. We describe how the design of the system was informed by three key principles: scaling to a planet-wide email service, isolating the complexity to provide a simple experience for the developer, and safeguarding the privacy of users (our team and the developers we support are not allowed to view any single email). We describe the design tradeoffs made in building this system, the challenges faced, and the approaches used to tackle them. We present case studies of three extraction tasks implemented on this platform (bill reminders, commercial offers, and hotel reservations) to illustrate the effectiveness of the platform despite challenges unique to each task. Finally, we outline several areas of ongoing research in large-scale machine-learned information extraction from email.
    Recommendations for all: solving thousands of recommendation problems a day
    Proceedings of the 34th IEEE International Conference on Data Engineering (ICDE) (2018)
    Abstract: Recommendations are known to be an important part of several online experiences. Outside of media recommendation (music, movies, etc.), online retailers have made use of product recommendations to help users make purchases. Product recommendation tends to be particularly hard because of the twin problems of sparsity and cold start. Building a recommendation system that performs well in this setting is difficult and generally considered to require expert tuning. However, all online retailers need to solve this problem well to provide good recommendations. In this paper, we tackle this problem and describe an industrial-scale system called Sigmund, in which we solve tens of thousands of instances of the recommendation problem as a service for various online retailers. Sigmund was deployed to production in early 2014 and has been serving thousands of retailers. We describe several design decisions that we made in building Sigmund. We also share some of the lessons we learned from this experience, both from a machine learning perspective and a systems perspective. We hope that these lessons are useful for building future machine learning services.
    Quick Access: Building a Smart Experience for Google Drive
    Alexandrin Popescul
    Julian Gibbons
    Alan Green
    Michael James Smith
    Cayden Meyer
    Reuben Kan
    Proc. of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2017), pp. 1643-1651
    Abstract: Google Drive is a cloud storage and collaboration service used by hundreds of millions of users around the world. Quick Access is a new feature in Google Drive that surfaces relevant documents to the user on the home page. We describe the development of the machine-learned service behind this feature. Our metrics show that this feature cuts the time it takes for users to locate their documents in half. The development of this product feature illustrates a number of more general challenges and constraints associated with deploying machine learning in products, such as dealing with private corpora and protecting user privacy, working with data services that are not designed with machine learning in mind and may be owned and operated by different teams with different constraints, and evolving product definitions that inform the metric being optimized. We believe that the lessons learned from this experience will be useful to practitioners tackling a wide range of applied machine-learning problems.
    Clydesdale: structured data processing on MapReduce
    Tim Kaldewey
    Eugene J. Shekita
    Proceedings of the 15th International Conference on Extending Database Technology, ACM, New York, NY, USA (2012), pp. 15-25