Ciera Jaspan

Ciera is the tech lead manager of the Engineering Productivity Research team within Developer Infrastructure. The Engineering Productivity Research team brings a data-driven approach to business decisions around engineering productivity, using a combination of qualitative and quantitative methods to triangulate on measurements of productivity. Ciera previously worked on Tricorder, Google's static analysis platform. She received her B.S. in Software Engineering from Cal Poly and her Ph.D. from Carnegie Mellon, where she worked with Jonathan Aldrich on cost-effective static analysis and software framework design.
Authored Publications
    Measuring the productivity of software developers is inherently difficult; it requires measuring humans doing a complex, creative task. They are affected by both technological and sociological aspects of their job, and these need to be evaluated in concert to deeply understand developer productivity.
    Beyond self-report data, we lack reliable and non-intrusive methods for identifying flow. However, taking a step back and acknowledging that flow occurs during periods of focus gives us the opportunity to make progress towards measuring flow by isolating focused work. Here, we take a mixed-methods approach to design a logs-based metric that leverages machine learning and a comprehensive collection of logs data to identify periods of related actions (indicating focus), and validate this metric against self-reported time in focus or flow using diary data and quarterly survey data. Our results indicate that we can determine when software engineers at a large technology company experience focused work, including instances of flow. This metric speaks to engineering work but can be leveraged in other domains to non-disruptively measure when people experience focus. Future research can build upon this work to identify signals associated with other facets of flow.
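
    The paper's metric is built with machine learning over a comprehensive logs corpus; as a rough illustration of the underlying idea of isolating focused work, the sketch below groups an engineer's log events into candidate focus periods using a simple gap heuristic. The event schema and both thresholds are hypothetical, not the paper's:

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        # Hypothetical event record; the real logs schema is far richer.
        @dataclass
        class Event:
            timestamp: datetime
            action: str  # e.g. "edit", "build", "review"

        MAX_GAP = timedelta(minutes=5)       # assumed gap that still counts as one period
        MIN_SESSION = timedelta(minutes=20)  # assumed shortest span we call "focused"

        def focused_periods(events: list[Event]) -> list[tuple[datetime, datetime]]:
            """Group consecutive related actions into candidate focus periods."""
            periods = []
            start = prev = None
            for e in sorted(events, key=lambda e: e.timestamp):
                if prev is not None and e.timestamp - prev > MAX_GAP:
                    if prev - start >= MIN_SESSION:
                        periods.append((start, prev))
                    start = None
                if start is None:
                    start = e.timestamp
                prev = e.timestamp
            if start is not None and prev - start >= MIN_SESSION:
                periods.append((start, prev))
            return periods

    Validating the output of such a heuristic against diary and survey data, as the paper does, is what separates a plausible metric from a trustworthy one.
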
    Systemic Gender Inequities in Who Reviews Code
    Jill Dicker
    Amber Horvath
    Laurie R. Weingart
    Nina Chen
    Computer Supported Cooperative Work (2023) (to appear)
    Code review is an essential task for modern software engineers, where the author of a code change assigns other engineers the task of providing feedback on the author’s code. In this paper, we investigate the task of code review through the lens of equity, the proposition that engineers should share reviewing responsibilities fairly. Through this lens, we quantitatively examine gender inequities in code review load at Google. We found that, on average, women perform about 25% fewer reviews than men, an inequity with multiple systemic antecedents, including authors’ tendency to choose men as reviewers, a recommender system’s amplification of human biases, and gender differences in how reviewer credentials are assigned and earned. Although substantial work remains to close the review load gap, we show how one small change has begun to do so.
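
    The headline gap is a straightforward comparison of mean review counts by gender; a minimal sketch of that computation, assuming a hypothetical reviewer-activity extract (file and column names invented):

        import pandas as pd

        # Hypothetical extract: one row per reviewer per quarter.
        df = pd.read_csv("reviews.csv")  # columns: reviewer, gender, reviews

        means = df.groupby("gender")["reviews"].mean()
        gap = 1 - means["women"] / means["men"]
        print(f"review-load gap: {gap:.0%}")  # the paper reports roughly 25%

    The harder, and more interesting, work in the paper is tracing that gap back to its systemic antecedents rather than stopping at the aggregate number.
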
    What Improves Developer Productivity at Google? Code Quality.
    Lan Cheng
    Andrea Marie Knight Dolan
    Nan Zhang
    Elizabeth Kammer
    Foundations of Software Engineering: Industry Paper (2022)
    Understanding what affects software developer productivity can help organizations choose wise investments in their technical and social environment. But the research literature either focuses on what correlates with developer productivity in realistic settings or focuses on what causes developer productivity in highly constrained settings. In this paper, we bridge the gap by studying software developers at Google through two analyses. In the first analysis, we use panel data to understand which of 39 productivity factors affect perceived developer productivity, finding that code quality, tech debt, infrastructure tools and support, team communication, goals and priorities, and organizational change and process are all causally linked to developer productivity. In the second analysis, we use a lagged panel analysis to strengthen our causal claims. We find that increases in perceived code quality tend to be followed by increased developer productivity, but not vice versa, providing the strongest evidence to date that code quality affects individual developer productivity.
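
    The second analysis hinges on lagging one quarter's survey responses against the next. Here is a minimal sketch of that cross-lagged setup, assuming a hypothetical panel file and column names; this illustrates the technique, not the paper's exact specification:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("panel.csv")  # columns: engineer, quarter, productivity, code_quality
        df = df.sort_values(["engineer", "quarter"])

        # Lag each engineer's prior-quarter ratings.
        df["code_quality_lag"] = df.groupby("engineer")["code_quality"].shift(1)
        df["productivity_lag"] = df.groupby("engineer")["productivity"].shift(1)

        # Does earlier code quality predict later productivity,
        # controlling for earlier productivity?
        model = smf.ols("productivity ~ code_quality_lag + productivity_lag",
                        data=df.dropna()).fit()
        print(model.summary())

    Running the same regression with the roles reversed (lagged productivity predicting code quality) and finding no effect is what supports the "but not vice versa" claim.
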
    Detecting Interpersonal Conflict in Issues and Code Review: Cross Pollinating Open- and Closed-Source Approaches
    Huilian Sophie Qiu
    Bogdan Vasilescu
    Christian Kästner
    International Conference on Software Engineering: Software Engineering in Society (2022)
    In software engineering, interpersonal conflict in code review, such as toxic language or unnecessary pushback on a change request, is a well-known and extensively studied problem because it is associated with negative outcomes such as stress and turnover. One effective approach to preventing and mitigating toxic language is automatic detection. The two most recent attempts at automatic detection were developed in different settings: a toxicity detector using text analytics for open-source issue discussions, and a pushback detector using logs-based metrics for corporate code reviews. While these settings are arguably distinct, the behaviors they capture share similarities. Our work studies how the toxicity detector and the pushback detector generalize beyond the respective contexts in which they were developed, and how combining the two can improve interpersonal conflict detection. This research has implications for designing interventions and offers a rare opportunity, in our experience, to apply one technique to both open- and closed-source software, possibly benefiting from synergies between the two.
    Code review is a common practice in software organizations, where software engineers give each other feedback about a code change. As in other human decision-making processes, code review is susceptible to human biases, where reviewers’ feedback to the author may depend on how reviewers perceive the author’s demographic identity, whether consciously or unconsciously. Through the lens of role congruity theory, we show that the amount of pushback that code authors receive varies based on their gender, race/ethnicity, and age. Furthermore, we estimate that such pushback costs Google more than 1000 extra engineer hours every day, or about 4% of the estimated time engineers spend responding to reviewer comments, a cost borne by non-White and non-male engineers.
    Code review is a powerful technique to ensure high quality software and spread knowledge of best coding practices between engineers. Unfortunately, code reviewers may have biases about authors of the code they are reviewing, which can lead to inequitable experiences and outcomes. In this paper, we describe a field experiment with anonymous author code review, where we withheld author identity information during 5217 code reviews from 300 professional software engineers at one company. Our results suggest that during anonymous author code review, reviewers can frequently guess authors’ identities; that focus on reviewer-author power dynamics is reduced; and that the practice poses a barrier to offline, high-bandwidth conversations. Based on our findings, we recommend that those who choose to implement anonymous author code review should reveal the time zone of the author by default, have a break-the-glass option for revealing author identity, and reveal author identity directly after the review.
    Enabling the Study of Software Development Behavior with Cross-Tool Logs
    Ben Holtz
    Edward K. Smith
    Andrea Marie Knight Dolan
    Elizabeth Kammer
    Jillian Dicker
    Lan Cheng
    IEEE Software, Special Issue on the Behavioral Science of Software Engineering (2020)
    Understanding developers’ day-to-day behavior can help answer important research questions, but capturing that behavior at scale can be challenging, particularly when developers use many tools in concert to accomplish their tasks. In this paper, we describe our experience creating a system that integrates log data from dozens of development tools at Google, including tools that developers use to email, schedule meetings, ask and answer technical questions, find code, build and test, and review code. The contribution of this article is a technical description of the system, a validation of it, and a demonstration of its usefulness.
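
    A system like this rests on mapping each tool's native log format into one shared event schema. Below is a minimal sketch of that adapter pattern; the schema and field names are hypothetical, not the system's actual design:

        from dataclasses import dataclass
        from datetime import datetime, timezone

        # Hypothetical unified schema for cross-tool analysis.
        @dataclass
        class ToolEvent:
            user: str
            tool: str    # e.g. "build", "code_search", "review"
            action: str  # tool-specific action name
            timestamp: datetime

        def normalize_build_log(raw: dict) -> ToolEvent:
            """Adapter mapping one tool's native record into the shared schema."""
            return ToolEvent(
                user=raw["invoked_by"],
                tool="build",
                action=raw["command"],  # e.g. "build" or "test"
                timestamp=datetime.fromtimestamp(raw["epoch_secs"], tz=timezone.utc),
            )

    With one adapter per tool, downstream analyses can query a single chronological event stream per engineer rather than dozens of incompatible logs.
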
    During code review, developers critically examine each others’ code to improve its quality, share knowledge, and ensure conformance to coding standards. In the process, developers may have negative interpersonal interactions with their peers, which can lead to frustration and stress; these negative interactions may ultimately result in developers abandoning projects. In this mixed-methods study at one company, we surveyed 1,317 developers to characterize the negative experiences and cross-referenced the results with objective data from code review logs to predict these experiences. Our results suggest that such negative experiences, which we call “pushback”, are relatively rare in practice, but have negative repercussions when they occur. Our metrics can predict feelings of pushback with high recall but low precision, making them potentially appropriate for highlighting interactions that may benefit from a self-intervention.
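
    Because the metrics trade precision for recall, they suit nudges better than enforcement. A toy sketch of a recall-oriented flag follows; the signals and cutoffs are invented for illustration and are not the paper's metrics:

        def flag_possible_pushback(rounds: int, active_hours: float,
                                   comment_words: int) -> bool:
            """Favor recall: trip on any one of several cheap logs-derived signals."""
            return rounds > 9 or active_hours > 20 or comment_words > 2000

    A flag like this surfaces many false positives, which is acceptable when the consumer is a participant deciding whether to check in on the conversation rather than a system taking action.
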
    Do Developers Learn New Tools On The Toilet?
    Edward K. Smith
    Andrea Knight Dolan
    Andrew Trenk
    Steve Gross
    Proceedings of the 2019 International Conference on Software Engineering
    Maintaining awareness of useful tools is a substantial challenge for developers. Physical newsletters are a simple technique to inform developers about tools. In this paper, we evaluate such a technique, called Testing on the Toilet, by performing a mixed-methods case study. We first quantitatively evaluate how effective this technique is by applying statistical causal inference over six years of data about tools used by thousands of developers. We then qualitatively contextualize these results by interviewing and surveying 382 developers, from authors to editors to readers. We found that the technique was generally effective at increasing software development tool use, although the increase varied depending on factors such as the breadth of applicability of the tool, the extent to which the tool has reached saturation, and the memorability of the tool name.
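
    The causal-inference step asks whether a tool's adoption curve jumps after the relevant episode is published. A toy interrupted-time-series sketch of that idea follows; the file and column names are hypothetical, and the paper's actual analysis is considerably more careful:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical weekly adoption counts around one episode's publication.
        df = pd.read_csv("tool_usage.csv")  # columns: week, users, published (0 before, 1 after)
        df["time"] = range(len(df))

        # Level shift in weekly users after the episode appeared.
        model = smf.ols("users ~ time + published", data=df).fit()
        print(model.params["published"])
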
    What Predicts Software Developers’ Productivity?
    David C. Shepherd
    Michael Phillips
    Andrea Knight Dolan
    Edward K. Smith
    IEEE Transactions on Software Engineering (2019)
    Organizations have a variety of options to help their software developers become their most productive selves, from modifying office layouts, to investing in better tools, to cleaning up the source code. But which options will have the biggest impact? Drawing from the literature in software engineering and industrial/organizational psychology to identify factors that correlate with productivity, we designed a survey that asked 622 developers across 3 companies about these productivity factors and about self-rated productivity. Our results suggest that the factors that most strongly correlate with self-rated productivity were non-technical factors, such as job enthusiasm, peer support for new ideas, and receiving useful feedback about job performance. Compared to other knowledge workers, our results also suggest that software developers’ self-rated productivity is more strongly related to task variety and ability to work remotely.
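
    The core analysis is correlational: rank each factor against self-rated productivity. A minimal sketch with a hypothetical survey extract (all column names invented):

        import pandas as pd

        df = pd.read_csv("survey.csv")
        factors = ["job_enthusiasm", "peer_support", "useful_feedback"]

        # Spearman rank correlation of each factor with self-rated productivity.
        print(df[factors + ["self_rated_productivity"]]
              .corr(method="spearman")["self_rated_productivity"])

    Rank correlation fits Likert-style survey data better than Pearson correlation, since it assumes only ordinal, not interval, scales.
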
    Software developers’ productivity can be negatively impacted by using APIs incorrectly. In this paper, we describe an analysis technique we designed to find API usability problems by comparing successive file-level changes made by individual software developers. We applied our tool, StopMotion, to the file histories of real developers doing real tasks at Google. The results reveal several API usability challenges including simple typos, conceptual API misalignments, and conflation of similar APIs. View details
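
    The heart of the technique is diffing successive saves of the same file by the same developer and noticing which call sites keep changing. A simplified sketch follows; StopMotion is an internal tool, and this regex-level approximation is an assumption, not its implementation:

        import difflib
        import re

        CALL = re.compile(r"(\w+)\s*\(")

        def churned_calls(prev_src: str, next_src: str) -> set[str]:
            """Names of calls a developer rewrote between two successive saves."""
            diff = difflib.ndiff(prev_src.splitlines(), next_src.splitlines())
            changed = (line[2:] for line in diff if line.startswith(("- ", "+ ")))
            return {m.group(1) for line in changed for m in CALL.finditer(line)}

    Repeated churn on the same API, aggregated across many developers, is the signal that points at a usability problem rather than a one-off typo.
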
    Bridging the Gap: From Research to Practical Advice
    Claire Le Goues
    Ipek Ozkaya
    Kathryn T. Stolee
    Mary Shaw
    IEEE Software (2018)
    Software engineers must solve practical problems under deadline pressure. They rely on the best-codified knowledge available, turning to less-rigorous results and their expert judgement when sound science is unavailable. Meanwhile, software engineering researchers seek fully validated results. Yet, translation of those research results to practical guidance lags. To bridge this gap, research results should be systematically distilled into actionable guidance in a way that respects differences in rigor and scope among the results. Starting with the practitioners’ need for actionable guidance, we review the evolution of software engineering research expectations, identify types of results and their strengths, and draw on evidence-based medicine for a concrete example of deriving pragmatic guidance from mixed-strength research results. We advance a framework to allow researchers to clearly identify the strengths of the claims and the supporting evidence of their results and to work with practitioners to synthesize various types of evidence into actionable recommendations.
    Lessons from Building Static Analysis Tools at Google
    Edward Aftandilian
    Alex Eagle
    Liam Miller-Cushon
    Communications of the ACM (CACM), vol. 61 Issue 4 (2018), pp. 58-66
    In this article, we describe how we have applied the lessons from Google’s previous experience with FindBugs Java analysis, as well as lessons from the academic literature, to build a successful static analysis infrastructure that is used daily by the majority of engineers at Google. Our tooling detects thousands of issues per day that are fixed by engineers, by their own choice, before the problematic code is checked into the codebase.
    Advantages and Disadvantages of a Monolithic Codebase
    Andrea Knight
    Edward K. Smith
    International Conference on Software Engineering, Software Engineering in Practice track (ICSE SEIP) (2018)
    Monolithic source code repositories (repos) are used by several large tech companies, but little is known about their advantages or disadvantages compared to multiple per-project repos. This paper investigates the relative tradeoffs by utilizing a mixed-methods approach. Our primary contribution is a survey of engineers who have experience with both monolithic repos and multiple, per-project repos. This paper also backs up the claims made by these engineers with a large-scale analysis of developer tool logs. Our study finds that the visibility of the codebase is a significant advantage of a monolithic repo: it enables engineers to discover APIs to reuse, find examples for using an API, and automatically have dependent code updated as an API migrates to a new version. Engineers also appreciate the centralization of dependency management in the repo. In contrast, multiple-repository (multi-repo) systems afford engineers more flexibility to select their own toolchains and provide significant access control and stability benefits. In both cases, the related tooling is also a significant factor; engineers favor particular tools and are drawn to repo management systems that support their desired toolchain.
    Detecting argument selection defects
    Andrew Rice
    Eddie Aftandilian
    Michael Pradel
    Yulissa Arroyo-Paredes
    OOPSLA 2017 (SPLASH)
    Identifier names are often used by developers to convey additional information about the meaning of a program over and above the semantics of the programming language itself. We present an algorithm that uses this information to detect argument selection defects, in which the programmer has chosen the wrong argument to a method call in Java programs. We evaluate our algorithm at Google on 200 million lines of internal code and 10 million lines of predominantly open-source external code and find defects even in large, mature projects such as OpenJDK, ASM, and the MySQL JDBC driver. The precision and recall of the algorithm vary depending on a sensitivity threshold. Higher thresholds increase precision, giving a true positive rate of 85%, reporting 459 true positives and 78 false positives. Lower thresholds increase recall but lower the true positive rate, reporting 2,060 true positives and 1,207 false positives. We show that this is an order of magnitude improvement on previous approaches. By analyzing the defects found, we are able to quantify best practice advice for API design and show that the probability of an argument selection defect increases markedly when methods have more than five arguments.
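
    The published algorithm compares identifier names at call sites against the names of formal parameters and asks whether swapping two arguments would match better. A simplified Python sketch of that comparison follows; the similarity measure and threshold are placeholders, not the paper's tuned versions:

        import difflib

        def similarity(a: str, b: str) -> float:
            """Lexical similarity between an argument and a parameter name."""
            return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def swap_suspected(args: list[str], params: list[str],
                           threshold: float = 0.66) -> bool:
            """Flag a call if swapping two arguments better matches parameter names.

            Assumes len(args) == len(params); the threshold is invented.
            """
            for i in range(len(args)):
                for j in range(i + 1, len(args)):
                    current = (similarity(args[i], params[i])
                               + similarity(args[j], params[j]))
                    swapped = (similarity(args[i], params[j])
                               + similarity(args[j], params[i]))
                    if swapped - current > threshold:
                        return True
            return False

        # e.g. swap_suspected(["height", "width"], ["width", "height"]) -> True

    The paper's finding that defect probability rises sharply past five arguments follows naturally: the more parameters a method has, the more near-miss name pairings a call site offers.
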
    An Empirical Study of Practitioners’ Perspectives on Green Software Engineering
    Irene Manotas
    Christian Bird
    Rui Zhang
    David Shepherd
    Lori Pollock
    James Clause
    International Conference on Software Engineering (ICSE) (2016) (to appear)
    Preview abstract The energy consumption of software is an increasing concern as the use of mobile applications, embedded systems, and data center-based services expands. While research in green software engineering is correspondingly increasing, little is known about the current practices and perspectives of software engineers in the field. This paper describes the first empirical study of how practitioners think about energy when they write requirements, design, construct, test, and maintain their software. We report findings from a quantitative, targeted survey of 464 practitioners from ABB, Google, IBM, and Microsoft, which was motivated by and supported with qualitative data from 18 in-depth interviews with Microsoft employees. The major findings and implications from the collected data contextualize existing green software engineering research and suggest directions for researchers aiming to develop strategies and tools to help practitioners improve the energy usage of their applications. View details
    Tricorder: Building a Program Analysis Ecosystem
    International Conference on Software Engineering (ICSE) (2015)
    Static analysis tools help developers find bugs, improve code readability, and ensure consistent style across a project. However, these tools can be difficult to integrate smoothly with each other and into the developer workflow, particularly when scaling to large codebases. We present Tricorder, a program analysis platform aimed at building a data-driven ecosystem around program analysis. We present a set of guiding principles for our program analysis tools and a scalable architecture for an analysis platform implementing these principles. We include an empirical, in-situ evaluation of the tool as it is used by developers across Google that shows the usefulness and impact of the platform.
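
    As a sketch of what a pluggable analyzer in such an ecosystem looks like, here is a hypothetical interface in Python; Tricorder's real architecture is a service-based platform, and none of these names come from it:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Finding:
            path: str
            line: int
            message: str
            fix: Optional[str] = None  # suggested replacement, surfaced at review time

        class Analyzer:
            """One analysis pass over a changed file in a pending change."""
            def analyze(self, path: str, contents: str) -> list[Finding]:
                raise NotImplementedError

        class TrailingWhitespace(Analyzer):
            def analyze(self, path, contents):
                return [
                    Finding(path, i + 1, "trailing whitespace", fix=line.rstrip())
                    for i, line in enumerate(contents.splitlines())
                    if line != line.rstrip()
                ]

    The platform's guiding principles (workflow integration, fix suggestions, feedback-driven curation) live outside any single analyzer; the interface just has to make findings cheap to produce and easy to act on.
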