Javier A. Bargas-Avila

Javier Bargas-Avila holds a PhD in Cognitive Psychology. Before joining Google, he managed the HCI lab at the University of Basel (Switzerland) from 2004 to 2011. He has published over 20 peer-reviewed papers in HCI journals and conferences on topics such as user satisfaction, mental models in website perception, first visual impressions of websites, and web form usability. Since 2011 he has been part of the YouTube UX research team, where he currently focuses on internationalization, monetization, and analytics.
Authored Publications
    Written text plays a special role in user interfaces. Key information in interaction elements and content is mostly conveyed through text. The global context, where software has to run in multiple geographical and cultural regions, requires software developers to translate their interfaces into many different languages. This translation process is prone to errors; the question of how language quality can be measured is therefore important. This article presents the development of a questionnaire to measure user interface language quality (LQS). After a first validation of the instrument with 843 participants, a final set of 10 items remained, which was tested again (N = 690). The survey showed high internal consistency (Cronbach's α = .82), acceptable discriminatory power coefficients (.34–.47), and a moderate average homogeneity of .36. The LQS also showed moderate correlation with UMUX, an established usability metric (convergent validity), and it successfully distinguished high from low language quality (discriminative validity). Application to three different products (YouTube, Google Analytics, Google AdWords) revealed similar key statistics, providing evidence that the survey is product-independent. The survey has since been translated and applied to more than 60 languages.
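    The internal consistency statistic reported above, Cronbach's α, is computed from the variance of each item and the variance of the summed scores. A minimal illustrative sketch (with toy data, not the LQS study's data):

    ```python
    from statistics import pvariance

    def cronbach_alpha(responses):
        """Cronbach's alpha for item-response data.

        responses: list of rows, each row = one respondent's item ratings.
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
        """
        k = len(responses[0])                       # number of items
        items = list(zip(*responses))               # one column per item
        item_vars = sum(pvariance(col) for col in items)
        total_scores = [sum(row) for row in responses]
        total_var = pvariance(total_scores)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical data: 4 respondents x 3 items on a 5-point scale
    data = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3]]
    print(round(cronbach_alpha(data), 2))  # → 0.95
    ```

    Values near .82, as reported for the LQS, are conventionally read as good internal consistency for a 10-item scale.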
    Is Once Enough? On the Extent and Content of Replications in Human-Computer Interaction
    Kasper Hornbæk
    Søren S. Sander
    Jakob Grue Simonsen
    CHI '14 Proceedings of the 2014 Annual Conference on Human Factors in Computing Systems, ACM, pp. 3523-3532
    A replication is an attempt to confirm an earlier study's findings. It is often claimed that research in Human-Computer Interaction (HCI) contains too few replications. To investigate this claim we examined four publication outlets (891 papers) and found 3% attempting replication of an earlier result. The replications typically confirmed earlier findings, but treated replication as a confirm/not-confirm decision, rarely analyzing effect sizes or comparing in depth to the replicated paper. When asked, most authors agreed that their studies were replications, but rarely planned them as such. Many non-replication studies could have corroborated earlier work if they had analyzed data differently or used minimal effort to collect extra data. We discuss what these results mean for HCI, including how the reporting of studies could be improved and how conferences and journals may change author instructions to encourage more replications.
    Designing Usable Web Forms: Empirical Evaluation of Web Form Improvement Guidelines
    Mirjam Seckler
    Silvia Heinz
    Klaus Opwis
    Alexandre Tuch
    Proceedings of the 2014 Annual Conference on Human Factors in Computing Systems (2014), pp. 1275-1284
    This study reports a controlled eye tracking experiment (N = 65) that shows the combined effectiveness of 20 guidelines to improve interactive online forms when applied to forms found on real company websites. Results indicate that improved web forms lead to faster completion times, fewer form submission trials, and fewer eye movements. Data from subjective questionnaires and interviews further show increased user satisfaction. Overall, our findings highlight the importance for web designers of improving their web forms using UX guidelines.
    Empirical evaluation of 20 web form optimization guidelines
    Mirjam Seckler
    Silvia Heinz
    Klaus Opwis
    Alexandre N. Tuch
    CHI '13 Proceedings of the 2013 Annual Conference on Human Factors in Computing Systems, ACM, pp. 1893-1898
    Most websites use interactive online forms as a main contact point with users. Recently, many publications have aimed at optimizing web forms. In contrast to former research, which focused on the evaluation of single guidelines, the present study shows in a controlled lab experiment with N = 23 participants the combined effectiveness of 20 guidelines on real company web forms. Results indicate that optimized web forms lead to faster completion times, fewer form submission trials, fewer eye fixations, and higher user satisfaction in comparison to the original forms.
    This article contains the response to the reviews regarding the development and validation of the Intranet Satisfaction Questionnaire (ISQ), which measures user satisfaction with the Intranet. Where appropriate, additional data analysis and interpretation are provided; the data show further evidence of the good validity, reliability, and sensitivity of this tool. In addition, we provide a short preview of a follow-up publication and show that the ISQ can differentiate effectively between bad and good Intranets.
    The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments
    Alexandre N. Tuch
    Eva Presslaber
    Markus Stoecklin
    Klaus Opwis
    International Journal of Human-Computer Studies, vol. 70(11) (2012), pp. 794-811
    This paper experimentally investigates the role of visual complexity (VC) and prototypicality (PT) as design factors of websites, shaping users' first impressions, by means of two studies. In the first study, 119 screenshots of real websites varying in VC (low vs. medium vs. high) and PT (low vs. high) were rated on perceived aesthetics. Screenshot presentation time was varied as a between-subjects factor (50 ms vs. 500 ms vs. 1000 ms). Results reveal that VC and PT affect participants' aesthetics ratings within the first 50 ms of exposure. In the second study, presentation times were shortened to 17, 33, and 50 ms. Results suggest that VC and PT affect aesthetic perception even within 17 ms, though the effect of PT is less pronounced than that of VC. With increasing presentation time, the effect of PT becomes as influential as the effect of VC. This supports the reasoning of the information-processing stage model of aesthetic processing (Leder et al., 2004), in which VC is processed at an earlier stage than PT. Overall, websites with low VC and high PT were perceived as highly appealing.
    Location matters, especially for non-salient features – An eye-tracking study on the effects of web object placement on different types of websites
    Sandra P. Roth
    Alexandre N. Tuch
    Elisa E.D. Mekler
    Klaus Opwis
    International Journal of Human-Computer Studies, vol. 71 (2013), pp. 228-235
    Is beautiful really usable? Toward understanding the relation between usability, aesthetics, and affect in HCI
    Alexandre N. Tuch
    Sandra P. Roth
    Kasper Hornbæk
    Klaus Opwis
    Computers in Human Behavior (2012)