Deliverable 4.4 - INSEAD CALT

5. Experiment 2

This experiment aimed to reproduce Experiment 1 with a number of methodological modifications and improvements. Based on the difference observed in Experiment 1 between native and non-native English speakers, we decided to concentrate exclusively on people who, despite being fluent in English, have learned it as a second language. This sample is important because it reflects a large proportion of educated European students and professionals, who are daily confronted with Internet resources in English.

The same hypotheses stated in Section 4 were tested (persona effect and positive valence advantage). An additional test was introduced to address memory retention (Stenberg et al., 1998). At the end of the RT task, participants were invited to recognise the words presented in the experiment from a set of distracters. Assuming that words encountered in inconsistent conditions were processed more deeply, in order to counteract the effect of the disturbing stimulus, we expected that they would be more easily recognised (inconsistency advantage).

5.1 Method

5.1.1 Participants

Twenty-two people participated in the experiment. They were postgraduate students at the University of Manchester. All of them were proficient in English, but none were native speakers.

5.1.2 Materials

New videos were recorded to show a progression of emotion. They all started with the facial expression showing the lowest level of a specific emotion (e.g., angry 1), then progressed to a more marked facial expression (e.g., angry 2), immediately followed by the corresponding body animation (e.g., angry). All videos lasted 3 seconds. The final clip, which remained visible on the computer screen until the user pressed a key, showed the agent at the apex of the emotion. The words were displayed 100 msec before the video was completed. Only four videos (two positive and two negative emotions) were recorded because, in Experiment 2, no video was displayed in the control condition.

The word lists were also substantially revised. Six lists were created, completely balanced on average word valence, length and frequency of use. These lists were paired two by two, according to the procedure proposed by Larsen et al. (2006). These authors developed two lists of positive and negative words. In each list, individual main words (either positive or negative) were matched to their opposites, balanced by length, orthographic variation, and frequency of use. Main words and their opposites were counterbalanced in our study, so that each list contained the same number of direct stimuli (main words) and opposite ones. Once again, length and frequency of use were kept constant. The six lists were then assigned to animation conditions, with the criteria that matched lists could not be proposed in the same animation condition, and that each video could not include the same list twice.

A new list of 72 words was prepared for the memory test. It included 36 words tested in the first part of the experiment and 36 distracter items. Distracters were selected from the lists developed by Larsen et al. (2006), with the constraint that
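To make the memory-retention measure concrete, the sketch below assembles the 72-item recognition list (36 studied words plus 36 distracters) and computes per-condition recognition rates, the quantity on which the inconsistency-advantage prediction bears. This is only an illustrative Python sketch under assumptions: the function names, the `studied`/`responses` data structures and the fixed seed are ours, and the actual word material from Larsen et al. (2006) is not reproduced.

```python
import random
from collections import defaultdict


def build_recognition_test(studied, distracters, seed=0):
    """Assemble the 72-item recognition list described in Section 5.1.2:
    the 36 words shown in the RT task plus 36 distracter items, shuffled.
    `studied` maps each tested word to the animation condition it appeared in;
    `distracters` is a list of unseen words (e.g., drawn from Larsen et al., 2006)."""
    rng = random.Random(seed)
    items = [(word, "old") for word in studied] + [(word, "new") for word in distracters]
    rng.shuffle(items)
    return items


def recognition_rate_by_condition(studied, responses):
    """Proportion of studied words judged 'old', split by animation condition.
    Under the inconsistency-advantage hypothesis, words shown with an
    emotionally inconsistent animation should yield higher rates.
    `responses` maps each word to the participant's 'old'/'new' answer."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for word, condition in studied.items():
        totals[condition] += 1
        if responses.get(word) == "old":
            hits[condition] += 1
    return {condition: hits[condition] / totals[condition] for condition in totals}
```

For example (with made-up words), recognition_rate_by_condition({"rage": "inconsistent", "joy": "consistent"}, {"rage": "old", "joy": "new"}) returns {"inconsistent": 1.0, "consistent": 0.0}.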
