
The integration of semantic attribute and stimulus colour in a colour-defined search space was examined using the RSVP task. Participants were required to identify a target word defined by colour from 9 distractor words, one semantically associated with the target colour and the remaining unassociated. Frequent errors were observed, with the word immediately succeeding the target misidentified as the target. However, when the colour-associated distractor appeared within two words of the target, it was misidentified as the target instead. The result suggests an influence of semantic context on feature integration.

1063.20 Does empathy give insight into person perception? Billy Lee 1, Tsuneo Kito 2, 1 University of Edinburgh, UK; 2 Kurume University, Japan

Is empathy blind, or does it confer insight into another person? We assessed 138 observers using a video test of veracity detection and felt empathy. There was no correlation between empathy level and accuracy of detection. However, low-empathy participants performed better than high-empathy participants on accuracy of veracity detection. Women received more empathy than men, but men and women empathised to the same degree. There was an interaction between observer and actor gender. Number of siblings had no effect. The results suggest that in some cases self-reported empathy may hinder accurate person perception.

1063.21 Embedded words are activated during processing: Evidence from the Stroop effect, Remo Job 1, Roberto Nicoletti 2, Rino Rumiati 2, Giuseppe Sartori 3, 1 University of Trento, Italy; 2 University of Bologna, Italy; 3 University of Padova, Italy

We investigated whether words embedded in longer words (e.g. car in careful) are activated when processing the carrier word. Participants performed a Stroop task on primes and a lexical decision task on targets. Primes were carrier words (e.g. redemption) with the letters comprising the color word written in an incongruent color. Targets were either related (e.g. confession) or unrelated (e.g. production) to the meaning of the carrier word. Both a Stroop and a priming effect emerged. The results show that focusing on the task-relevant part of the stimuli allows semantic processing of both the carrier and the embedded word.

1063.22 A study on the pattern of feature search under dual-task condition, Mowei Shen, Tao Gao, Rende Shui, Haijie Ding, Zhejiang University, China

This research investigated the effects of set size, top-down activation and the first task on feature search under dual-task conditions. In Exp1 and Exp2, task one (T1) was to identify the unique letter among Arabic numerals, and task two (T2) was to detect a red dot. Distractors were grey dots in Exp1 and dots of multiple colors in Exp2. In Exp3 and Exp4, T1 was to report the shape of a hexagon. In Exp3, T2 was identical to that in Exp2; in Exp4, T2 was to detect “+” among “L”s. The results revealed that the pattern of feature search was not changed by the task switch.

1063.23 Effect of feature-changing on multiple-object-tracking, Rende Shui, Mowei Shen, Zhejiang University, China

Many multiple-object-tracking (MOT) studies have shown that people can track about four objects simultaneously. Pylyshyn developed the Visual Index hypothesis to explain this. Scholl and Pylyshyn's (1999) studies showed that subjects could not detect changes in the color or shape of items in MOT tasks; Dennis and Pylyshyn's (2002) study showed that objects with different features

