Design Competition

Competition to be held in 2021!


The Evaluation Competition seeks to motivate the participation of undergraduate and graduate students, as well as of the HCI teachers and practitioners who contribute to their training. The competition is essentially practical in nature: participants evaluate a computer system and thereby apply their theoretical knowledge of HCI evaluation methods.


Competition Theme

The theme of IHC 2020 will be 'History and Interaction'. The objective of the evaluation competition, in particular, is to explore the evaluation of interactive computational artefacts and their relationship with History and its global effects, from past, present and future perspectives. The competition therefore welcomes evaluations of technologies related to the historical legacy of the past, as well as of technologies involved in "making and telling history" or "retelling history" in today's most striking episodes. In addition, the competition aims to encourage a more critical approach to the construction of history, opening the evaluation to applications and technologies that help in understanding History and its repercussions.


Evaluation teams may focus their assessment on a variety of artefacts. From the point of view of technologies that help to "make, tell and retell history" nowadays, teams could approach social networks carrying situations and discussions involving 'fake news' (political, economic, social, cultural, legal etc.), or collaborative technologies that support everyday life in situations of isolation (e.g. COVID-19). From the point of view of technologies that help to understand the historical legacy of the past, teams could approach interactive technologies that gained great prominence during the social isolation caused by the COVID-19 crisis and the closure of museums and cultural spaces to the public. Candidates for evaluation include applications, websites, and virtual or augmented reality environments of museums and cultural spaces in Brazil and around the world (e.g. Museu Afro Brasil and Museu do Amanhã in Brazil, the AR RATP Museum in France, the VR British Museum in the UK, and the Museu Nacional do Azulejo in Portugal, among others). Educational games and serious games related to the theme can also be evaluated in the competition.


From a methodological point of view, this evaluation competition challenges students to go beyond the evaluation of classical aspects of quality in use (e.g. usability, accessibility, and communicability) and to include aspects related to user experience (UX). According to ISO 9241-210, user experience comprises "a person's perceptions and responses that result from the use and/or anticipated use of a product, system or service" and "includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours and accomplishments that occur before, during and after use". In many interactive technologies related to the theme of this competition, the user experience plays an important role, encompassing users' emotions, sensations and other UX-related aspects, and not only how the system is operated and how tasks are performed with it. Students are free to choose the methods they consider appropriate to analyze UX aspects.
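
As a deliberately simple illustration of looking beyond task metrics, the sketch below compares hypothetical self-reported ratings collected before and after use, in line with the ISO view that UX spans all phases of use. The participant labels, the "delight" scale and all values are invented placeholders, not data or items from any published instrument:

```python
from statistics import mean

# Hypothetical 1-7 self-reports of "delight" collected from three
# participants before and after a session with a virtual-museum app;
# participants, scale and values are illustrative placeholders only.
before = {"p1": 3, "p2": 4, "p3": 2}
after = {"p1": 6, "p2": 5, "p3": 5}

# ISO 9241-210 frames UX as responses before, during and after use,
# so report the shift across phases rather than task success alone.
delta = {p: after[p] - before[p] for p in before}
print("per-participant shift:", delta)                # {'p1': 3, 'p2': 1, 'p3': 3}
print("mean shift:", round(mean(delta.values()), 2))  # 2.33
```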


We consider it important to disseminate UX evaluation methods that actually cover aspects of UX going beyond the classic aspects of usability, accessibility and communicability. Despite the wide dissemination of the term "UX" and its adoption in many companies, several evaluations carried out in commercial environments and labelled "UX evaluations" do not incorporate broader aspects such as perceptions, reactions, emotions and beliefs, among others. The competition therefore puts forward this challenge to demonstrate the effective use of such methods, as examples that can be replicated in commercial environments.


Within the scope of the competition, evaluation teams can choose to evaluate different UX aspects of potential relevance. For example, teams could examine reactions related to beliefs and perceptions about interactive systems involving news; emotions that emerge from interaction with content of real or 'fake' historical basis; feelings arising from the use of technology in situations related to historical facts; or the perceptions and reactions that come from interaction with artefacts linked to the cultural legacy of the past.


For graduate students, we would like to pose an "extra" challenge: exploring UX evaluation methods from the recent research literature in their evaluation venture. In academia, since the publication of "User experience - a research agenda" by Hassenzahl and Tractinsky in 2006 (with more than 2,500 citations as of March 2020), several research endeavours have been dedicated to investigating methods for evaluating aspects of user experience. Despite the advances, this field still presents more open research challenges than the techniques consolidated decades ago for evaluating other aspects of quality in use, such as usability. Graduate students are invited to review the recent HCI literature in search of methods applicable to the chosen type of application. The literature contains specific proposals for evaluating user experience with general-purpose applications, with games, or with cultural heritage interactive applications. Teams have complete freedom to survey the literature and choose the best method(s) to use. As examples, we suggest the articles below (followed by a brief scoring sketch), but teams need not adopt them in the evaluations to be performed:


- IJSSELSTEIJN, W. A.; DE KORT, Y. A. W.; POELS, K. The Game Experience Questionnaire. Eindhoven: Technische Universiteit Eindhoven, p. 3-9, 2013.

- PETRIE, H.; OTHMAN, M. K.; POWER, C. Smartphone Guide Technology in Cultural Spaces: Measuring Visitor Experience with an iPhone Multimedia Guide in Shakespeare’s Church. International Journal of Human–Computer Interaction, v. 33, n. 12, p. 973-983, 2017.

- BRÜHLMANN, F.; VOLLENWYDER, B.; OPWIS, K.; MEKLER, E. D. Measuring the “Why” of Interaction: Development and Validation of the User Motivation Inventory (UMI). In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. p. 1-13.
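
For instrument-based methods such as the GEQ or the UMI above, scoring usually amounts to averaging the Likert items that load on each scale. The sketch below shows the general shape of that computation; the item-to-scale mapping and the responses are made-up placeholders, not the published key of either questionnaire:

```python
from statistics import mean

# Placeholder item-to-scale mapping; replace with the published key of
# the questionnaire actually adopted (e.g. GEQ components, UMI factors).
SCALES = {
    "immersion": ["q1", "q4", "q7"],
    "positive_affect": ["q2", "q5"],
    "tension": ["q3", "q6"],
}

def score_participant(answers: dict[str, int]) -> dict[str, float]:
    """Average the Likert items belonging to each scale."""
    return {scale: mean(answers[item] for item in items)
            for scale, items in SCALES.items()}

# One participant's fabricated responses to the seven items (1-5 Likert).
responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 3, "q5": 4, "q6": 1, "q7": 5}
print(score_participant(responses))
# {'immersion': 4.0, 'positive_affect': 4.5, 'tension': 1.5}
```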


Evaluation

Object and Focus

Assess the quality (usability, communicability, accessibility etc.) and the user experience of an interactive computational artefact that approaches History and its global effects from past, present and future perspectives. The computational artefacts evaluated may be for individual (single-user) or collaborative use, and may be implemented in different interactive forms, such as software, hardware, an environment, an application, a game, or another computer system. Proposals evaluating computational artefacts that enhance social collaboration in the field of History are also welcome.


Methods

Teams will be free to choose the method(s) they will use to evaluate the computational artefact.

Teams of graduate students are encouraged to use recent UX assessment methods proposed in research papers from the HCI literature.


Product

After the evaluation, each team must generate a report containing:

  • Name(s) of employed evaluation method(s);

  • Justification of the choice of method(s);

  • Description of the chosen platform(s): brand, model, operating system;

  • Description of the evaluation process (part of the interface under consideration, evaluated features, procedures involved);

  • Evaluation results:

    • Detected problems: description, location and context of occurrence, including justification;

  • Closing remarks: conclusion and other observations;

  • An analysis, by the team, of how the employed method(s) were (or were not) able to identify aspects related to the experience of using the application(s).


Submissions

Submissions of reports must be anonymous and have up to 10 pages, following the ACM Master Article Template (SIGCHI) (https://www.acm.org/publications/proceedings-template). Authors should submit their report in PDF format through the JEMS system.
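
For teams starting from the LaTeX version of the template, a minimal anonymized skeleton looks roughly like the sketch below; the title is a placeholder, and the "anonymous" and "review" options of the acmart class handle the blinding:

```latex
% Minimal sketch of an anonymized report skeleton using the acmart class;
% the title below is a placeholder, not a required topic.
\documentclass[sigconf,anonymous,review]{acmart}

\begin{document}

\title{Evaluation Report: UX of a Virtual Museum Application}

% Author information is suppressed in the output by the "anonymous" option.
\author{Anonymized for Review}
\affiliation{\institution{Anonymized}\country{}}

\begin{abstract}
  One-paragraph summary of the evaluation scope, methods and main findings.
\end{abstract}

\maketitle

\section{Introduction}
% ... report body, up to 10 pages ...

\end{document}
```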


Reports will be kept confidential during the review process and reports of the finalists will be kept confidential until the start of the event, when the teams will be identified during the presentations.


Team requirements

Two kinds of teams are allowed to participate in the competition:

  1. Undergraduate students: between 3 and 5 students;

  2. Graduate students: between 2 and 4 students.


All undergraduate students' teams must have one or two advisors associated with a higher education institution, a research institution or a private company related to the HCI field. A graduate student may be one of the advisors, provided that they help supervise the undergraduate team together with a professor-advisor.


Graduate students' teams must also have one or two advisors associated with a higher education institution or a research institution. In this case, the advisors must be the students' supervisor and co-supervisor.


Mixed teams with undergraduate and graduate students are not allowed. Teams that do not meet these requirements will be automatically excluded from the selection process.


Instructions for Participation

To participate in the competition, each team should follow three steps:

  1. Run the evaluation;

  2. Submit the evaluation report;

  3. If selected by the scientific committee, present the evaluation report during the Evaluation Competition session at IHC 2020, in Diamantina - MG.


Important Dates

  • Deadline for submission of reports at JEMS: to be defined in 2021

  • Notification of finalists for presentation at IHC 2020: to be defined in 2021

  • Deadline for submission of final reports at JEMS: to be defined in 2021

  • Presentation at IHC 2020: to be defined in 2021


Evaluation of Submitted Reports

Each report will be evaluated by reviewers with experience in HCI or evaluation of interactive systems. The following criteria will be considered:

  • Adequacy of the system/application to the Evaluation Competition theme;

  • Readability, organization and presentation of the evaluation report;

  • Clear definition of the evaluation scope and purpose;

  • Adequacy of the chosen method(s) and evaluation procedure(s);

  • Quality of results considering the established scope and purpose;

  • Consideration of ethical issues involved in the execution of the evaluation (in the case of using methods involving users or people outside the team);

  • Quality of the critical analysis of the method's ability (or inability) to disclose potential problems in the use of the system.


Selection and Presentation at IHC 2020

Three (3) finalists in each category (undergraduate and graduate students) will be selected for a short oral presentation, followed by questions from a board of evaluators, during the symposium in Diamantina - MG, Brazil, October 19-23, 2020. The registration and presence of at least one member of each finalist team are required.


Awards

Each finalist team that presents its work at the symposium will receive a Certificate of Recognition. The winning team will also receive a prize (to be defined by the organization).


Support for Finalists

The organization will try to provide free registration for one member of each team selected for oral presentation in this edition. However, authors are initially responsible for securing their registration and attendance at the symposium with their own resources.


Publication

The reports of the finalist teams will be published as extended abstracts in the IHC 2020 proceedings.


Summary

  • System to be evaluated: computational artefacts that approach History and its global effects from past, present and future perspectives;

  • Criteria to be evaluated: Quality of use and/or User experience;

  • Methods to be used: to be selected by the teams;

  • Team size:

    • 3 to 5 undergraduate students + 1 to 2 supervisors (professors, researchers, graduate students or practitioners) OR

    • 2 to 4 graduate students + 1 to 2 supervisors (professors or researchers).


Organizers