

LEGEND

Monday August 29th: Poster presentations

Monday August 29th: Symposia presentations

Monday August 29th: Oral presentations

Tuesday August 30th: Poster presentations

Tuesday August 30th: Symposia presentations

Tuesday August 30th: Oral presentations

Wednesday August 31st: Poster presentations

Wednesday August 31st: Symposia presentations

Wednesday August 31st: Oral presentations

Thursday September 1st: Poster presentations

Thursday September 1st: Symposia presentations

Thursday September 1st: Oral presentations

Author Index

Legend

For posters:

Example: 1P002

The first number is the day (1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday); P = poster; 002 = poster number.

For example, 2P145 = day 2 (Tuesday), poster number 145.

For talks and symposia:

Examples: 12S203; 11T101

The first number is the day (1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday).

The second number is the period (1 = morning, 9–11; 2 = midday, 14–15:30; 3 = afternoon, 17–18:30).

The letter indicates the session type: S = symposium; T = talk.

The third number is the hall (1 = Pau Casals Hall; 2 = Oriol Martorell Hall; 3 = Tete Montoliu Hall).

The fourth and fifth numbers give the position of the talk within the session, in the format 0#.

Example: 13T101 = day 1 (Monday); period 3, afternoon (17–18:30); T = talk; hall 1 (Pau Casals); talk 01 in that session.
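Because the scheme is entirely mechanical, a short script can serve as a worked example. The sketch below is purely illustrative (it is not part of the conference materials) and decodes IDs exactly as defined above:

```python
# Hypothetical helper illustrating the ID scheme above: it splits codes
# such as "1P002" or "13T101" into their documented fields.
DAYS = {"1": "Monday", "2": "Tuesday", "3": "Wednesday", "4": "Thursday"}
PERIODS = {"1": "morning (9-11)", "2": "midday (14-15:30)", "3": "afternoon (17-18:30)"}
HALLS = {"1": "Pau Casals", "2": "Oriol Martorell", "3": "Tete Montoliu"}

def parse_presentation_id(code: str) -> dict:
    """Decode a poster/talk/symposium ID as defined in the legend."""
    if code[1] == "P":                       # posters: day + "P" + number
        return {"day": DAYS[code[0]], "type": "poster", "number": code[2:]}
    # talks/symposia: day + period + letter + hall + two-digit sequence
    day, period, kind, hall, seq = code[0], code[1], code[2], code[3], code[4:]
    return {
        "day": DAYS[day],
        "period": PERIODS[period],
        "type": "symposium" if kind == "S" else "talk",
        "hall": HALLS[hall],
        "talk_in_session": int(seq),
    }

assert parse_presentation_id("2P145")["number"] == "145"
assert parse_presentation_id("13T101")["hall"] == "Pau Casals"
```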

[1P001] Contingent affective capture: manipulating top-down search goals induces involuntary capture by threat

Christopher Brown1, Nick Berggren2 and Sophie Forster1

1Psychology, University of Sussex, UK

2Birkbeck University of London, UK

Prominent mainstream models of attention have characterised attention as reflecting competition between top-down goals and bottom-up salience. It remains unclear, however, how involuntary attentional capture by motivationally salient (e.g. threatening or rewarding) stimuli fits into such models. While such effects have traditionally been held to reflect bottom-up processes, the well-established phenomenon of ‘contingent capture’ highlights that top-down goals can not only guide voluntary attention, but also lead to involuntary attentional capture by goal-congruent yet task-irrelevant stimuli. Hence, attentional capture by motivationally salient stimuli, such as threat, could alternatively reflect top-down rather than bottom-up prioritisation processes. Here we test this possibility by combining the classic ‘contingent capture’ and ‘emotional blink’ paradigms in an RSVP task with either positive or threatening target search goals. Across experiments, task-irrelevant threat distractors were presented in peripheral and central locations. At all distractor locations, we found that attentional capture by irrelevant threatening distractors was contingent upon the adoption of a search-goal for threat. This ‘contingent affective capture’ appeared to extend only to the specific category of threat being searched for. These findings have implications for accommodating motivationally salient stimuli within mainstream models of attention, as well as applied relevance to attentional biases found in affective disorders.

[1P002] Can attentional templates operate in a spatially-localised format during visual search? An electrophysiological investigation

Nick Berggren, Michael Jenkins, Cody McCants and Martin Eimer

Psychological Sciences, Birkbeck University of London, UK

Target-defining features act as attentional templates, guiding attentional allocation during search. Although multiple feature templates (e.g., two colours) can be maintained, they are assumed to operate in a spatially-global fashion at task-relevant and irrelevant locations. Here, we assessed attentional guidance during a task that encouraged the operation of spatially-localised feature templates. Participants searched for two laterally presented target rectangles defined by a colour/location combination (e.g., red upper and blue lower visual field). On some trials, targets were accompanied by two objects in the reverse arrangement (e.g., blue upper and red lower visual field) in the opposite hemifield. Search displays were preceded by spatially-uninformative cues that contained colours at task-relevant locations (e.g., red above blue; matching cues) or in the reverse arrangement (e.g., blue above red; reverse cues). Behavioural cueing effects were only elicited by matching cues, but electrophysiological evidence demonstrated that both cue types initially attracted attention. For search displays containing target and reverse nontarget pairs on opposite sides, spatial biases towards the target only emerged after approximately 300 ms. These results show that rapid feature-based attentional selection processes do not operate in a spatially-localised fashion, but that spatially-selective templates can control subsequent object identification processes within visual working memory.

Funding: Economic and Social Research Council (ESRC) grant

[1P003] Fixation-related potentials in overt visual search for multiple targets

Hannah Hiebel, Joe Miller, Margit Höfler, Anja Ischebeck and Christof Körner

Department of Psychology, University of Graz, Austria

To date, little is known about the neural mechanisms underlying overt visual search. In two experiments, we concurrently recorded EEG and eye movements while participants searched for two identical targets amongst a set of distractors. In such a multiple-target paradigm, participants must continue the search after finding the first target and memorize its location. We investigated neural correlates of item processing in different stages of the search using fixation-related potentials (FRPs). Results of Experiment 1 showed that the detection of the first target elicited a P3-like component absent in distractor fixations. Furthermore, finding the first target influenced FRPs in continued search: a sustained negativity was observed for distractor fixations following the first target fixation, likely indicating the involvement of memory. In Experiment 2, display size was manipulated systematically (10, 22, 30 items). Eye movement analysis showed that the average number of fixations preceding the first target detection varied as a function of display size. We present FRPs for target fixations and discuss the influence of the number of preceding distractor fixations on their morphology. In addition, we analyzed the post-target negativity over longer time intervals and report on its temporal properties and functional role.

Funding: This work was supported by the Austrian Science Fund (FWF), grant P27824.
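As background for readers unfamiliar with FRPs: they are obtained like conventional ERPs, except that epochs are time-locked to fixation onsets from the eye tracker rather than to stimulus onsets. The following is a minimal sketch of that idea, not the authors' pipeline; the sampling rate, time window, and array shapes are assumptions for illustration.

```python
import numpy as np

def fixation_related_potentials(eeg, fix_onsets, sfreq=500, tmin=-0.2, tmax=0.6):
    """Average EEG epochs time-locked to fixation onsets.

    eeg        : (n_channels, n_samples) continuous recording
    fix_onsets : fixation onset times in seconds (from the eye tracker)
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for t in fix_onsets:
        i = int(t * sfreq)
        if 0 <= i + start and i + stop <= eeg.shape[1]:
            epoch = eeg[:, i + start : i + stop]
            # baseline-correct on the pre-fixation interval (tmin to 0)
            epochs.append(epoch - epoch[:, : -start].mean(axis=1, keepdims=True))
    return np.mean(epochs, axis=0)   # the FRP: (n_channels, n_times)
```

Comparing averages built from different fixation subsets (e.g. first-target fixations vs distractor fixations) yields condition contrasts like those reported here.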

[1P004] The Role of Working Memory in Serial Overt Visual Search: A Combined Eye Tracking and fMRI Study

Joe Miller, Hannah Hiebel, Margit Höfler, Anja Ischebeck and Christof Körner

Department of Psychology, University of Graz, Austria

When searching for multiple targets, we must process the current object while maintaining representations of previously located targets (Körner & Gilchrist, 2008). It is unclear which brain regions are involved in these processes, and how recruitment of these regions changes during the task. In this experiment, subjects were presented with a letter display containing between 0 and 3 targets (‘T’s) amongst up to 20 distractors (‘L’s), and were asked to respond with the number of targets present. We used a combination of eye tracking and fMRI to measure dynamic shifts in brain activation as working memory load changes during a visual search. Information from eye tracking was used to estimate working memory load based on target fixations. Imaging data were divided into epochs based on presumed working memory load. Preliminary data suggest that recruitment of working memory regions during multiple target visual search changes dynamically according to the demands of the task. This suggests that only relevant visual information is stored in short term memory. With further analysis, we expect to show increased activation in prefrontal and parieto-occipital regions when at least 1 target has been fixated, and that these activations will be greater when 2 targets have been fixated.

Funding: FWF grant

[1P005] The attentional salience of a reward cue outlasts reward devaluation

Matteo De Tommaso, Tommaso Mastropasqua and Massimo Turatto

CIMeC, University of Trento, Italy

Reward cues have been shown to attract attention. These stimuli possess an increased attentional salience because they anticipate positive outcomes. However, the motivational value of a reward changes once the organism has the opportunity to consume it. Hence, an interesting issue is whether the attentional salience of a reward cue persists after reward devaluation. In Experiment 1, thirsty human participants learned cue-liquid reward associations by means of an instrumental task. Then, while still thirsty, participants performed a visual search task under extinction, in which target and distractor letters were presented within the previous reward cues. Experiment 2 was identical to the first, except that participants drank ad libitum before the visual search task. The results of Experiment 1 showed that in the visual search task attention was preferentially deployed toward the stimulus that had been the best reward predictor in the preceding conditioning phase. Crucially, Experiment 2 revealed that an attentional bias of the same magnitude was still present after reward devaluation. Our study provides compelling evidence that the attentional salience of a reward cue outlasts reward devaluation. This might explain why drug cues remain salient stimuli that trigger compulsive drug-seeking behavior even after a prolonged period of abstinence.

[1P006] Inhibition of irrelevant objects in repeated visual search?

Sebastian A Bauch, Christof Körner, Iain D. Gilchrist and Margit Höfler

Department of Psychology, University of Graz, Austria

When we search the same display repeatedly, not all items from a previous search may be relevant for the subsequent search. Here, we tested whether inhibition of saccadic return (ISR) works for all items similarly or whether ISR operates on irrelevant items only. Participants searched the same display with letters of two colours twice. In the first search, the target colour could be pink or blue, while in the second search, the target was always of one colour. Hence, half of the items in the first search were irrelevant for the second search. We measured ISR during the first and at the beginning of the second search by presenting a probe at an item that was either relevant or not for the second search. The probed item was either previously inspected (old probe) or not (new probe). Participants were instructed to saccade to the probe and then to continue the search. Preliminary results showed that ISR operated in the first search regardless of item relevance: Saccadic latencies were longer to old probes than to new probes. No ISR was observed at the beginning of the second search. These findings suggest that item relevance does not affect the occurrence of ISR.

Funding: FWF Grant: P 28546

[1P007] Gaze fixations and memory in short and massive repeated visual search

M Pilar Aivar1 and Meagan Y. Driver2

1Basic Psychology, Universidad Autónoma de Madrid, Spain

2New York University, USA

Our day-to-day experience suggests that, with repeated exposure, we can easily acquire information about our environment. However, it is not clear how much exposure is needed for information to be useful for task performance. To analyze this issue, we employed a repeated visual search paradigm, and tested whether previous searches on a set of items facilitated search for other items within the same set. A total of 72 colored letters (12 letters x 6 colors) were used as targets and distracters. In each trial a target letter was presented at fixation for 1 second, followed by the search display. Participants pressed the space bar when the target letter was found. In Experiment 1, twelve different displays were generated by placing all the letters at random locations on the screen. Participants searched for six different target letters on each display (short repeated visual search task). In Experiment 2, only one of the previously created displays was used, and participants searched for all the letters on that display (massive repeated visual search task). RT and eye movements were registered in each trial. Analysis of RT and number of fixations showed that display repetition had no effect in either experiment.

Funding: Research supported by grant PSI2013-43742.

[1P008] Distraction in visual search is driven by neutral information as well as association with reward

Luke Tudge and Torsten Schubert

Institute of Psychology, Humboldt-Universität zu Berlin

In visual search, an irrelevant distractor draws participants' gaze if the distractor is physically salient (i.e. is of a different color, luminance or orientation from other items), or if it signals the availability of reward (i.e. predicts a monetary payoff). We propose that these two phenomena can be unified under the more general concept of information. Physical contrasts contain local spatial information that a homogeneous background does not, and reward cues offer information about the probable satisfaction of desires. We test this claim in a visual search experiment in which the distractor carries information but is neither physically salient (it is one of many, differently-colored, shapes), nor associated with reward (no money was offered). Instead, the distractor was predictive of an irrelevant event (the appearance of an image). Despite neither physical salience nor association with reward, this distractor draws participants' gaze, though the effect is considerably smaller. We propose that visual search is at least sometimes driven by a ‘dispassionate' motivation to gather information about the future, even where that information is neither physically salient nor relevant to reward. We propose future research to measure the relative weight accorded to physical salience, reward and information in visual search.

Funding: Luke Tudge is supported by the Berlin School of Mind and Brain PhD scholarship

[1P009] Visual search during navigation in complex virtual environments: an eyetracking study

Chris Ramsey, Christos Gatzidis, Sebastien Miellet and Jan Wiener

Faculty of Science and Technology, Bournemouth University

Visual search in real situations often involves head and body movement through space. We present results from a visual search task where participants were required to actively navigate through virtual complex scenes whilst searching for a target object. Using a large, complex virtual environment, eye-movements were recorded for three distinct phases; static visual search, visual search during navigation and locomotion without search. Movement through the virtual environment was characterised. By these means we are able to disentangle the specific contributions search and locomotion have on gaze behaviour and analyse how gaze behaviour differs, depending on the form of trajectory. In addition to the benefits of allowing participants to navigate freely through the virtual environment, we show further benefits of integrating eye-tracking with virtual environments by demonstrating how we go beyond the screen and translate 2D gaze coordinates to actual 3D world coordinates, allowing for novel analysis. We investigate how gaze control behaves within the virtual environments during visual search. For example, the distance into the environment of a fixation or gaze point and the distance between a fixation and target object in the virtual environment are two measures that go beyond standard 2D analysis of eye-recordings.
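To make the "2D to 3D" step concrete: with a pinhole camera model and the renderer's camera pose, a screen-gaze sample can be unprojected into a world-space ray and then intersected with scene geometry. The sketch below is one plausible way to do this, not the authors' implementation; all names and conventions (OpenGL-style camera looking down its local -z axis) are assumptions.

```python
import numpy as np

def gaze_ray_to_world(gaze_px, screen_wh, fov_y_deg, cam_pos, cam_rot):
    """Turn a 2D gaze point (pixels) into a 3D ray in world coordinates.

    cam_rot is a 3x3 world-from-camera rotation matrix; the camera looks
    down its local -z axis (OpenGL convention).
    """
    w, h = screen_wh
    f = (h / 2) / np.tan(np.radians(fov_y_deg) / 2)    # focal length in pixels
    x = gaze_px[0] - w / 2
    y = (h / 2) - gaze_px[1]                           # flip: pixel y grows downwards
    d_cam = np.array([x, y, -f])
    d_world = cam_rot @ d_cam
    return cam_pos, d_world / np.linalg.norm(d_world)  # ray origin and direction

# Intersecting this ray with the scene (e.g. via the engine's ray-cast)
# gives the 3D gaze point, from which fixation depth and distance to the
# target object follow directly.
```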

[1P010] Saccades performance in a center-periphery visual task

Edgardo F Mamani and Mirta Jaén

Departamento de Luminotecnia, Luz y Visión, Universidad Nacional de Tucuman

Computer screens present visual stimuli with a temporal frequency that depends on the monitor refresh rate. This produces a temporal modulation that may or may not be perceived by the observer, and it could affect the eye movements (EM) that the subject performs: a single point in the visual field may be painted as a sequence of points on the retina, so that multiple images are perceived during a fixation. In preliminary research we showed the influence of the refresh rate on visual search efficiency and EM at 60, 75 and 100 Hz. As target eccentricity is another factor in visual task performance, here we evaluate the influence of the stimulus temporal modulation when the target appears in the center or in the periphery. The task is to identify a random number accompanied by distractors of the same category, located at different positions on a computer screen. Targets appear, in randomized sequence and location, at 0°, 6° and 12° of eccentricity. We analyzed task time, number and length of saccades, and fixation times for 60, 75 and 100 Hz monitor refresh rates. Results show that at the lower frequency, the number of short saccades increases when the stimulus is far from the center of the screen.
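The retinal consequence of the refresh rate is easy to quantify: during an eye movement, successive frames of a static screen point land at retinal positions separated by the eye velocity times the frame period. A back-of-the-envelope sketch, with an illustrative saccadic velocity:

```python
def retinal_step_deg(refresh_hz, eye_velocity_deg_s):
    """Retinal separation between successive refreshes of a static point."""
    return eye_velocity_deg_s / refresh_hz

# During a 200 deg/s saccade, a 60 Hz display paints a static point at
# retinal positions ~3.3 deg apart; at 100 Hz the step shrinks to 2.0 deg.
for hz in (60, 75, 100):
    print(hz, "Hz ->", round(retinal_step_deg(hz, 200), 2), "deg per frame")
```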

[1P011] Intertrial Priming and Target Certainty Both Affect Singleton Distractor Interference During Visual Search

Jacquelyn Berry

Psychology, State University of New York at New Paltz

Visual perception research has shown that attentional capture can be driven by internal factors, such as strategic goals, or by external factors, such as stimulus salience. Proponents of the role of strategic factors point to differences between responses to constant and variable targets during visual search. When the target is constant, and its identity certain, subjects may use a “feature search mode” that is less susceptible to interference from irrelevant stimuli such as a singleton distractor. When the target is variable, subjects must employ a wider “singleton search mode”, which is more susceptible to singleton distractor interference because subjects must be prepared to respond to any item that is visually distinct. In several experiments, we manipulated variable targets with respect to (1) target certainty: the target identity was certain or uncertain; and (2) intertrial priming: the target and singleton distractor shapes were sometimes interchangeable across trials. Intertrial priming was the more important factor in determining singleton distractor interference, and the two factors interacted, producing (1) greater singleton distractor interference when target identity was uncertain and target and singleton distractor shapes were interchangeable, and (2) no singleton distractor interference when target identity was certain and the shapes were not interchangeable.

[1P012] Search for symbolic images of real-life objects: An eye movement analysis

Irina Blinnikova, Anna Izmalkova and Maria Semenova

Faculty of Psychology, Moscow State Lomonosov University, Russia

In the current study we modeled a web site search task. Subjects (n = 39) were to find «icons», symbolic images of real-life objects (such as a butterfly, a cactus, a book), among a variety of other objects. Instructions to find target stimuli were given either in written or in graphic form. The instruction was presented for 1 second, followed by a rectangular full-screen stimulus matrix (9 × 9). The target was situated in one of 8 sectors (the central one was not used). We varied the color and the shape of the stimulus frames. Search time and eye movement data were recorded. When the target was introduced as a word, the search took more time than a picture-based search. Moreover, in this case the search time did not depend on the color and shape of the stimuli in the matrix, and relatively long fixations and short, slow saccades were observed. When the target stimulus was introduced as a picture, the search process was determined by the physical characteristics of the stimuli in the matrix: the search was faster when the stimuli were square and colored, and shorter fixations and longer, faster saccades were observed.

Funding: This study was sponsored by the Russian Foundation of Basic Research (№ 14-06-00371)

[1P013] Selective attentional bias to explicitly and implicitly predictable outcomes

Noelia Do Carmo Blanco1, Jeremie Jozefowiez1 and John J.B. Allen2

1UMR CNRS 9193, Université de Lille

2University of Arizona

Expectations of an event can facilitate its neural processing. One of the ways we build these expectations is through associative learning. Moreover, this learning of contingencies between events can occur implicitly, without intention or awareness. Here we asked how a learned association between a cue and an outcome affects the attention allocated to that outcome, particularly when the association is irrelevant to the task at hand and thus implicit. We used an associative learning paradigm in which we manipulated the predictability and relevance of the association across streams of cue-outcome visual stimuli, while stimulus characteristics and probability were held constant. To measure the N2pc component, every outcome was embedded among distractors; importantly, the location of the outcome could not be anticipated. We found that predictable outcomes attracted increased spatial attention, as indexed by a greater N2pc component, and, surprisingly, even when the learned association was irrelevant to the main task. A later component, the P300, was sensitive to the relevance of the outcome (intention to learn). The current study confirms the remarkable ability of the brain to extract and update predictive information, including implicitly. Associative learning can guide visual search and shape covert attentional selection in our rich environments.

Funding: Conseil Régional Nord - Pas de Calais. DAI-ED SHS

[1P014] Attentional capture by subliminal onsets - Stimulus-driven capture or display wide contingent orienting?

Tobias Schoeberl and Ulrich Ansorge

Faculty of Psychology, University of Vienna

Recent studies attributed cueing effects of subliminal onsets to stimulus-driven attention. They presented a cue as one placeholder appearing with a very short lead time prior to a target and two additional placeholders. Due to the short lead time, participants saw all items appearing at the same time and remained unaware of the cue. Although the cue differed from targets in color, singleton status and luminance, attention capture by its onset was found: response times were shorter when cue and target were at the same position than at opposite positions. Here, we investigated whether this cueing effect could reflect display-wide top-down contingent orienting rather than stimulus-driven capture: participants might have searched for cue and placeholders as signals for target onsets. We therefore manipulated the contingency between the onsets of cue/placeholders and targets. Presenting cue and placeholders after targets, or leaving out the cue in a majority of trials, did not change the cueing effects, but auditory warning signals in advance of the cues mitigated the cueing effect. In conclusion, our results still favor the stimulus-driven capture hypothesis, although the auditory warning signals may have been strong enough to wash out small spatial cueing effects on top of it.

[1P015] Entire valid hemifield shows IOR during reference frame task

Liubov Ardasheva, Tatiana Malevich and W. Joseph MacInnes

Social sciences (Psychology), National Research University, Higher School of Economics, Russia

We perceive the outside world as stable; however, our eyes make about three movements each second, causing constant recalibration of the retinal image. How this stability is achieved remains an active research question. Visual input is coded mostly in retinotopic coordinates but must be understood in real-world, spatiotopic coordinates. Inhibition of return (IOR) is the involuntary delay in attending to an already inspected location, and it therefore encourages attention to seek novel locations in visual search. IOR would only be helpful as a facilitator if it were coded in spatiotopic coordinates, but recent research suggests that it is coded in both frames of reference. In this experiment we manipulated the locations of the cue and the target with an intervening saccade and used continuous cue-target onset asynchronies (CTOAs) of 50–1000 ms. We found IOR at all locations of the valid, vertical hemifield, including the spatiotopic, retinotopic and neutral frames of reference. This could mean two things: either participants attended to the entire valid hemifield, or the gradient of IOR is large enough to encompass multiple locations. Finally, we found no differences in IOR between manual and saccadic responses.
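The frames of reference amount to simple coordinate bookkeeping across the saccade: a probe is spatiotopic if it occupies the cue's screen location, retinotopic if it occupies the cue's eye-relative location after the eyes have moved, and neutral otherwise. A minimal, illustrative classification sketch (not the authors' code; coordinates and tolerance are assumptions):

```python
import numpy as np

def classify_probe(cue_xy, gaze_before, gaze_after, probe_xy, tol=1.0):
    """Label a probe relative to a pre-saccadic cue (screen coords, deg)."""
    cue_retinal = np.asarray(cue_xy) - gaze_before           # cue relative to the eye
    retinotopic_xy = np.asarray(gaze_after) + cue_retinal    # same retinal spot after the saccade
    if np.linalg.norm(np.asarray(probe_xy) - cue_xy) < tol:
        return "spatiotopic"   # same location in the world
    if np.linalg.norm(np.asarray(probe_xy) - retinotopic_xy) < tol:
        return "retinotopic"   # same location on the retina
    return "neutral"
```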

[1P016] Task dependency of audio-visual semantic congruency effect on spatial orienting

Daria Kvasova and Salvador Soto-Faraco

DTIC, Universitat Pompeu Fabra, Catalonia

Combined information from different senses can affect the deployment of attention in space. For example, attention can be captured automatically by synchronized audio-visual onsets of simple stimuli. Furthermore, in real-world environments, semantic relations between sounds and visual objects might influence attention capture. Recent studies have addressed the role of crossmodal semantic congruency in spatial orienting. However, the results of these studies are mixed: some suggest that semantic congruency attracts attention (Iordanescu et al., 2010; Mastroberardino et al., 2015) and some suggest that it does not (Nardo et al., 2014). Variations in the task-relevance of the crossmodal stimuli (from explicitly needed to completely irrelevant) and in the visual perceptual load of the task may account for these differences. Here we aim to investigate how task constraints modulate the effect of crossmodal semantic congruency on attracting spatial attention. We designed three experiments in which the stimuli and the perceptual load are constant, and the only variations concern task relevance. In this way, we aim to reproduce, within the same paradigm, conditions of explicit, implicit or no task relevance of the multisensory pair, and hence be able to compare them directly.

[1P017] Continuous CTOAs in a cuing task experiment bring about Inhibition of Return, but not early facilitation

Tatiana Malevich, Liubov Ardasheva and W. Joseph MacInnes

Social Sciences (Psychology), Higher School of Economics, Russia

Cueing effects, i.e. an early facilitation of reaction time and inhibition of return (IOR), are well-established and robust phenomena characterizing exogenous orienting, and they are widely observed in experiments with the traditional Posner cueing paradigm. However, their specific nature and origin are still a subject of debate. One recent explanation proposed for the facilitatory and inhibitory effects of peripheral cues by Krüger et al. (2014) treats them as the result of cue-target perceptual merging due to re-entrant visual processing. To specify the role of these feedback mechanisms in peripheral cueing effects, we conducted the present experiment using a modified cueing task with pre- and post-cue trials at the valid and invalid locations and random cue-target onset asynchronies ranging from −300 to +1000 ms. Analysis of the manual reaction time distribution showed a well-pronounced IOR effect in the valid pre-cue condition, but no early facilitation of reaction time was observed in either the pre-cue or the post-cue condition. These results run counter to the outcomes of traditional experiments with cue-target spatial overlap.

[1P018] Location based processing in object substitution masking

Iiris Tuvi and Talis Bachmann

Institute of Law and Institute of Psychology, University of Tartu, Estonia

According to Põder’s (2013) object substitution masking model, there are two stages of processing in object substitution masking (OSM). The first is an unselective stage in which attention is distributed and the target signal must be detected in the noise originating from other signals. The second is a selective stage in which attention is directed to the target location and noise from a more restricted space around the target location may contribute to the masking effect. Here, we explored how close to the target location the mask must be in order to influence the OSM effect, and found that a single-dot mask at the target location was as influential as the distracters next to the target location, even though these objects differed in size and complexity. When the attention-directing cue appeared after target offset, so that identification relied on visual short-term memory, performance dropped by about 20% compared to the simultaneous attentional selection condition. The results support Põder's OSM model and favor a location-based processing explanation of OSM, since distracters next to the target location appear to be the source of masking noise in the trailing-mask stage.

[1P019] Attentional capture by unexpected onsets depends on the top-down task-set

Josef G. Schönhammer and Dirk Kerzel

FAPSE, University of Geneva

Many studies have shown that attentional capture by peripheral, spatially unpredictive precues is contingent on the task-set that observers establish in response to the target display. In contrast, a recent study reported that infrequent onset cues (a single new object in the display) captured attention even when the targets were red color singletons (a red item among white nontargets) and also when the targets were red non-singletons (a red target among one green and several white nontargets). This suggested that unexpected onsets capture attention independently of task-sets (Folk & Remington, 2015). In our Experiment 1, we replicated these findings. In Experiment 2, however, rare onset cues did not capture attention when the target was a red item among a single white nontarget. This finding suggests that the target display did not only induce a positive task-set for the target property (red color), but also suppression of the nontarget property (local onset or its achromatic color). Hence, the nontarget properties in Experiment 2 probably required suppression of the very properties that defined the onset cue, which eliminated capture. Thus, unexpected onsets do not capture attention independently of the task-set. Previous studies favoring this hypothesis used tasks that did not require suppression of the relevant properties.

Funding: Swiss National Foundation PDFMP1-129459 and 100014_162750/1.

[1P020] Testing three systems of attention with saccadic and manual responses

Alena Kulikova and W. Joseph MacInnes

Department of Psychology, National Research University - Higher School of Economics, Russia

The idea of three attentional networks (Posner & Petersen, 1990) describes how systems of anatomical areas in the cortex execute specific functions of attention: alerting, orienting, and executive control. The Attention Network Test (ANT) was designed to test for interactions between these systems (Fan et al., 2002). In Experiment 1 we used a version of the ANT with an auditory alerting signal (Callejas, Lupiáñez, & Tudela, 2004), adding an eye-tracker to control for eye movements. Only two main effects were found: participants were quicker to respond in congruent than in incongruent trials, and in cued than in uncued trials. An interaction between congruency and validity was also observed. In Experiment 2, we modified the ANT for a saccadic response instead of a manual response by replacing the congruency task with an anti-saccade task, which is believed to be performed under executive attentional control (Vandierendonck et al., 2007). Main effects of alerting, orienting, and executive control were observed. Additionally, alerting interacted with orienting, with alerting effects observed only for valid and neutral cues. The congruency/validity interaction in Experiment 2 differed from that in Experiment 1, which suggests that the networks might not be the same for each executive control task.
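For reference, the classic ANT quantifies the three networks with simple reaction-time subtractions (Fan et al., 2002); the auditory-alerting variant follows the same logic. A minimal sketch with illustrative RT values (the keys and numbers are assumptions, not the study's data):

```python
# Standard ANT network scores, computed from mean RTs per condition.
def ant_scores(rt):
    return {
        "alerting":  rt["no_cue"] - rt["double_cue"],
        "orienting": rt["center_cue"] - rt["spatial_cue"],
        "executive": rt["incongruent"] - rt["congruent"],
    }

scores = ant_scores({
    "no_cue": 560, "double_cue": 520,        # alerting benefit: 40 ms
    "center_cue": 540, "spatial_cue": 500,   # orienting benefit: 40 ms
    "incongruent": 610, "congruent": 530,    # conflict cost: 80 ms
})
print(scores)
```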

[1P021] Visual strategies of viewing flow visualisations under different workload conditions and representation types

Vladimir Laptev1, Pavel A. Orlov2, Ulyana M. Zhmailova1 and Vladimir Ivanov1

1Department of Engineering Graphics and Design, Peter the Great St. Petersburg Polytechnic University, Russia

2University of Eastern Finland School of Computing, Finland

Flow visualization is used to provide a dynamic representation of data structure. It is especially valuable in big data analysis and serves as a useful component of control interfaces and visual analysis tools. This paper focuses on visual strategies of data flow perception by human subjects under workloads of different intensity. The study employed three representations of Sankey diagrams: a solid flow; separated flows; and discrete flows composed of equally sized, countable modules. The workload was determined by the number of flows: three, four, or five. This allowed us to estimate the impact of data structure complexity. Correct answers were less frequent under lighter workload, their number being independent of the type of representation. The subjects counted the modules in discrete flows only when the workload increased; this led to faster task solving with a smaller number of gaze fixations. The attention map for discrete flows differed from those for the other types: the subjects did not look at the single starting flow when comparing the split flows. Meanwhile, when analysing the other flow types, the subjects switched attention to the beginning of the chart in order to compare the flows' sizes with the starting unsplit flow.

[1P022]

Frans W Cornelissen and Funda Yildirim

Laboratory of Experimental Ophthalmology, University Medical Center Groningen, The Netherlands

Orientation-selective neurons abound throughout visual cortex. Nevertheless, in human fMRI, the assessment of orientation selectivity is still relatively uncommon. Here, we use fields of oriented Gabor patches as a means to map and characterize properties of early visual cortex. In “orientation-contrast-based retinotopy” (OCR), participants view arrays of Gabors composing a foreground (a bar) and a background that can be distinguished only by Gabor orientation. We compared the population receptive field (pRF) properties obtained using OCR and classic luminance-contrast-based retinotopy (LCR). Visual field maps for LCR and OCR were highly comparable. The explained variance (EV) of the pRF models was lower for OCR than for LCR. Yet, for OCR, EV remained constant over eccentricity, while for LCR it tended to drop with eccentricity. This was most marked in visual areas LO1 and LO2. For V1-V4, pRF eccentricity for LCR and OCR was comparable, yet OCR resulted in smaller pRFs. For LO1 and LO2, both pRF eccentricity and size differed substantially between LCR and OCR, with lower eccentricities and smaller sizes estimated for OCR. We discuss why OCR may result in more accurate pRF estimation, and may therefore be the method of choice, in particular when characterizing higher-order visual areas.

Funding: Graduate School Medical Sciences Groningen
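For readers unfamiliar with pRF mapping: the standard model (Dumoulin & Wandell, 2008) treats each voxel's response as the overlap between the stimulus aperture and a 2D Gaussian in the visual field; the fitted Gaussian gives the pRF position and size compared above. A minimal sketch of the model prediction, with illustrative array shapes (not the authors' code):

```python
import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, sigma):
    """Predicted (pre-HRF) pRF time course for one voxel.

    stim   : (n_timepoints, n_y, n_x) binary stimulus apertures
    xs, ys : meshgrids of visual-field coordinates (deg)
    (x0, y0, sigma) : pRF centre and size
    """
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return stim.reshape(stim.shape[0], -1) @ g.ravel()

# Fitting searches over (x0, y0, sigma) for the best match to each voxel's
# BOLD signal after convolution with a haemodynamic response function; the
# variance this model explains is the EV compared between LCR and OCR.
```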

[1P023] Locally-directed visual attention as assessed by the attentional blink: Does video game experience really matter?

Travis C Ting, Nicole H.L. Wong and Dorita H.F. Chang

Department of Psychology, The University of Hong Kong

Studies have demonstrated potential visuo-cognitive benefits of action video gaming, e.g. a reduced attentional blink (Green & Bavelier, 2003), although such effects are controversial (e.g. Murphy & Spencer, 2009). It is unclear whether visual-attentional benefits, if present, hold when attention is restricted and directed locally, in the presence of potentially conflicting global information. Here, we investigated the role of action video game experience in locally-directed visual attention by testing video-gamers (n = 18) and non-video-gamers (n = 18) on an RSVP sequence composed of congruent and incongruent Navon figures. Participants were asked to selectively attend to the local aspects of the stimuli only, to identify the first target, and to report the presence of a second target. The attentional blink (AB) was quantified as the impairment in detection of the second target. Results indicated a pronounced attentional blink in both groups, with no significant difference in the magnitude of AB between groups. This contrasts with a significant advantage for video-gamers over non-gamers when attention is instead globally directed (Wong et al., unpublished data). The findings suggest that the visual-attentional advantages traditionally reported for extensive video-gaming experience do not extend to a restricted local system that may tap into the slower parvocellular (P) pathway.

[1P024] Attending to external feedback in goal-directed pointing: Differences in attention allocation based on feedback

Aoife Mahon, Constanze Hesse and Amelia Hunt

School of Psychology, University of Aberdeen, Scotland

While attention is usually automatically allocated to movement targets during action planning, consciously attending to automated actions can negatively impact accuracy. Allocating attention to external locations of feedback can benefit these actions. We investigated whether attention allocation to movement goals is enhanced because these are the locations from which the most reliable feedback about movement accuracy is obtained. Participants pointed to a cued target while discriminating a perceptual target that could occur at the movement location, at an independent feedback location, or at a task-irrelevant location. In Experiments 1–2, feedback about movement accuracy was provided immediately after the movement. In Experiment 3, feedback was provided during the movement. Experiment 1 was performed visually closed-loop, while vision of the hand was prevented in Experiments 2 and 3. In all experiments, discrimination performance was enhanced at movement locations, confirming attention allocation to the movement target. Perceptual enhancement was larger when visual feedback from seeing one’s hand was removed. When feedback was given after the movement, perceptual performance was not enhanced at feedback locations; there was, however, enhancement when feedback was provided during the movement. This suggests that attention is needed both for movement target selection and for monitoring feedback during movement execution.

Funding: James S. McDonnell (funding my PhD), ECVP fee waiver

[1P025] The role of perceptual factors in the reflexive attentional shift phenomenon

Alessandro Soranzo1, Christopher Wilson2 and Marco Bertamini3

1Faculty of Development & Society, Sheffield Hallam University

2Teesside University

3Liverpool University

The presence of a cue in the visual scene that orients attention can interfere with what we report to see. It has been suggested that this interference effect is affected by socially-relevant characteristics of the cue (social model of interference); for instance when attention is biased by the presence of a cue to whom a mental state is attributed (e.g. another person). This paper examines whether perceptual features of the cue, readily detected by visual processes (perceptual model of interference), are sufficient to elicit the interference effect. To compare the social and perceptual models of interference, an experiment was conducted which systematically manipulated the mental state attribution to the cue. The results show that interference persists even when a mental state is not attributed to the cue, and that perceptual expectations are sufficient to explain the reflexive attentional shift, thus supporting a perceptual model of interference.

[1P026] Perception of backward visual masking in a patient with bilateral frontal leucotomy

Hector Rieiro1, Susana Martinez-Conde2, Jordi Chanovas2, Emma Gallego3, Fernando Valle-Inclán3 and Stephen L Macknik2

1Mind, Brain and Behaviour Research Center, University of Granada

2SUNY Downstate Medical Center USA

3University of A Coruña, Spain

J.R., a patient who had most of the prefrontal cortex disconnected from the rest of the brain after receiving a bilateral frontal leucotomy in his youth, participated in a series of backward masking experiments conducted at the Institute Pere Mata, a psychiatric hospital in Reus, Catalonia, Spain. Visual stimuli were presented on a computer screen and consisted of vertical bars, where the central bar (target) was abutted by two flanking bars (masks). J.R. completed multiple sessions of a two-alternative forced choice (2-AFC) task, in which he indicated which of two targets, presented on the left and right sides of the screen, was longer, by pointing at the corresponding side of the screen. Experimental conditions included left vs right presentation on the screen, 2 target durations (34 ms and 100 ms), 3 target and mask lengths (3, 4, and 5 dva), and 6 stimulus onset asynchronies (SOAs) between target and masks (0 ms, 34 ms, 67 ms, 100 ms, 134 ms, no masks). J.R. also indicated verbally whether he thought that the left and right targets were ‘equal’ or ‘different’ in each trial. The 2-AFC results indicated significant masking at 0 ms, 34 ms and 67 ms SOAs. Yet J.R.’s complementary verbal reports suggested unawareness of the targets.

Funding: This study was supported by a challenge grant from Research to Prevent Blindness Inc. to the Department of Ophthalmology at SUNY Downstate, the Empire Innovation Program, and the National Science Foundation.

[1P027] Reading in visual noise in developmental dyslexia and autism spectrum disorders

Milena S Mihaylova1, Katerina Shtereva2, Yordan Y. Hodzhev2 and Velitchko Manahilov3

1Institute of Neurobiology, Bulgarian Academy of Sciences

2Department of Special Education and Speech/Language Therapy‘St. Kliment Ohridski’, University of Sofia

3Department of Life Sciences, Glasgow Caledonian University, UK

Neurodevelopmental conditions such as autism spectrum disorders (ASD) and developmental dyslexia (DD) are characterized by different specific patterns of behavioural and learning difficulties. An important common feature of both conditions, however, is limited performance efficiency in the presence of external sensory noise. The present work studied the effect of visual noise on reading performance in individuals with ASD or DD. We compared reading durations and error rates for real words and pseudowords in the presence or absence of text degradation produced by random displacement of letters above or below the horizontal reading line. Observers with ASD, with DD, and typically developing controls showed similar reading durations for non-degraded real words. However, reading durations and error rates increased in proportion to the noise variance. The positional noise affected reading performance most strongly in ASD. Likewise, the time for reading pseudowords was similar for the three groups of observers (insignificantly longer in DD), but with increasing vertical displacement of pseudoword letter positions, reading durations and error rates again increased most strongly for ASD observers. Deteriorated reading performance in visual noise in ASD and DD could be a marker of increased sensitivity to sensory noise, or of dysfunction in a mechanism responsible for grouping object elements and constructing global percepts.

Funding: Contract №173/14.07.2014, Bulgarian Ministry of Education and Science

[1P028] Autistic traits indicate characteristic relation between self-body, others-body, and spatial direction

Hanako Ikeda and Makoto Wada

Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Japan

A previous study has shown difficulties of visual perspective taking in persons with autism. To examine the effect of autistic traits on cognitive features involving the self-body, others' bodies, and visual space, we asked participants to complete left-right discriminations from different perspectives. An avatar, accompanied by flowers on both the left and right sides, was displayed on a PC monitor. The avatar took one of three postures (front, back, and front with arms crossed). At the beginning of each trial, participants were given the direction (left or right; the side of the left or right hand) and the perspective (their own or the other person's); they had to touch the corresponding flower as quickly and accurately as possible. All participants were right-handed. We analyzed response time and error rate in each condition. Persons with higher autistic traits showed no response-time advantage in the situation where they could easily project themselves onto the back view of the avatar when they had to adopt the other person's perspective; they tended instead to use body parts of the avatar as cues to discriminate the directions. These results suggest that a characteristic tendency regarding self-body, others' bodies, and space is associated with autistic traits.

Funding: This work was partly supported by MEXT KAKENHI Grant Number 15H01590, JSPS KAKENHI Grant Number 15K12615, Foundation for Fusion Of Science and Technology, HAYAO NAKAYAMA Foundation for Science & Technology and Culture.

[1P029] Neurotechnologies for rehabilitation of the patients in neurological and psychiatric clinics

Elena Yakimova, Evgeny Shelepin, Sergey Pronin and Yuri Shelepin

Laboratory of Physiology of Vision, Pavlov Institute of Physiology Russian Academy of Sciences, Russia

For the recovery of mental and motor functions, we created an experimental setup. Our system is made of modules that can be connected in various combinations, depending on the task. The stimulation module consists of a panoramic 3D display for presenting virtual scenes with adjustable spatial and temporal frequencies. The crucial approach was the selective activation of the magno- and parvo-channels of the visual system. The human state control module includes EEG, ECG, an eye-tracking system, and a system to capture and describe body movements and facial expressions. The setup uses API software and our own software to synchronize the given signals and the whole complex of measured parameters. The system to capture and describe body movements, which is important for the neurological clinic [Adar Pelah, i-Perception, 3(4), 2012], includes a treadmill and software that forms a statistical model of human movements. The system permits the rehabilitation of patients with disorders of motor activity after a stroke, and of patients with psychosis (schizophrenia), psychopathy and neurosis. Rehabilitation works on the principle of optical feedback and the stimulation of the magno- and parvo-channels with natural scenes, which permits us to reactivate neuronal networks in subjects, including schizophrenia patients. Pilot studies have shown the effectiveness of this hardware-software complex.

Funding: Supported by Russian Science Foundation (project № 14-15-00918)

[1P030] Visual rehabilitation in chronic cerebral blindness: a randomized controlled crossover study

J.A. Elshout1, F. van Asten2, C.B. Hoyng2, D.P. Bergsma1 and A.V. van den Berg1

1Cognitive Neuroscience/Donders Institute for Brain Cognition and Behaviour, Radboud UMC

2Department of Ophthalmology Radboud UMC, Nijmegen, The Netherlands

The treatment of patients suffering from cerebral blindness following stroke is a topic of much recent interest. In the current study twenty-seven chronic stroke patients with homonymous visual field defects received a visual discrimination training aimed at vision restitution. Using a randomized controlled crossover design, each patient received two successive training rounds, one directed to their affected hemifield (test) and one round directed to their intact hemifield (control). Goldmann and Humphrey perimetry were performed at the start of the study, and following each training round. In addition, reading performance was measured. Goldmann perimetry revealed a statistically significant reduction of the visual field defect after the test training, but not after the control training or after no intervention. For both training rounds combined, Humphrey perimetry revealed that the effect of a directed training (sensitivity change in trained hemifield) exceeded that of an undirected training (sensitivity change in untrained hemifield). Reading speed revealed a significant improvement after training and was related to the extent of field recovery measured by Goldmann after test training. These findings demonstrate that our visual discrimination training can result in reduction of the visual field defect and can lead to improvements in daily life activities such as reading.

Funding: This work was supported by Netherlands Organization for Scientific Research medical section ZonMW-InZicht grant 94309003 (to A.V.v.d.B.)

[1P031] How the lack of vision impacts on perceived verticality

Luigi F Cuturi and Monica Gori

U-Vip (Unit for Visually Impaired People), Istituto Italiano di Tecnologia, Italy

Estimates of verticality can be biased depending on the encoding sensory modality and on the amount of head and body roll tilt relative to the gravitational vector. Less is known about how these factors influence the haptic perception of verticality in visually impaired people. In the present work, sighted and non-sighted participants, with head and body roll-tilted 90° relative to gravity, performed orientation adjustment and discrimination tasks using a motorized haptic bar positioned at different locations along the body. Consistent with previous findings, sighted participants' perceived verticality was biased in the direction opposite their roll tilt. Visually impaired individuals, on the other hand, showed a different pattern of verticality estimates compared to the sighted group. The results suggest that long-term absence of vision might lead the brain to rely mostly on internal references centered on the head (e.g. vestibular based), thus biasing verticality estimates towards the roll tilt. Factors influencing these differences may include age, echolocation ability, and the onset of the visual impairment (acquired or congenital). These findings shed light on the role of vision in multisensory calibration involving vestibular information and body-based perception.

[1P032] Word and text processing in developmental prosopagnosia

Jeffrey Corrow1, Sherryse Corrow1, Cristina Rubino1, Brad Duchaine2 and Jason JS Barton1

1Ophthalmology and Visual Sciences, University of British Columbia

2Dartmouth College

The ‘many-to-many’ hypothesis [Behrmann & Plaut (2013). Trends in Cognitive Sciences, 17(5), 210–219] proposes that processes involved in visual cognition are supported by distributed circuits, rather than specialized regions. More specifically, this hypothesis predicts that right posterior fusiform regions contribute to face and visual word processing. However, studies testing visual word processing in acquired prosopagnosia have produced mixed results. In this study, we evaluated visual word and text processing in subjects with developmental prosopagnosia, a condition linked to right posterior fusiform abnormalities. Ten developmental prosopagnosic subjects performed two tasks: first, a word-length effect task and, second, a task evaluating the recognition of word content across variations in font and handwriting style, and the recognition of style across variations in word content. All prosopagnosic subjects had normal word-length effects. Only one had prolonged sorting time for word recognition in handwritten stimuli and none were impaired in accuracy. These results contrast with prior findings of impairments in processing style in acquired prosopagnosia and suggest that the deficit in developmental prosopagnosia is more selective than in acquired prosopagnosia, contrary to predictions derived from the many-to-many hypothesis.

Funding: CIHR under Grant MOP-102567 awarded to JB; Canada Research Chair for JB; The Marianne Koerner Chair in Brain Disease for JB; The Economic and Social Research Council (UK) Grant RES-062-23-2426 for BD; The Hitchcock Foundation (BD); NIH F32 EY023479-02 (SC).

[1P033] Dysfunction of the parvo-system and its stimulation in patients with early-stage schizophrenia

Svetlana V Muravyova, Galina Moiseenko, Marina Pronina, Eugene Shelepin and Yuriy Shelepin

Pavlov Institute of Physiology, Russian Academy of Sciences, Russia

These electrophysiological and psychophysical studies, conducted on a group of patients with the paranoid form of schizophrenia (disease duration 1–3 years, mild to medium severity), showed a dominant dysfunction of the parvo-system. The work consisted of three stages. The first stage was the measurement of cognitive visual evoked potentials and of contrast sensitivity to spatial stimuli and images of objects processed with wavelet filters for low and high spatial frequencies. In the second stage, the patients' visual system was stimulated with a virtual environment: a video simulating a first-person bicycle ride through a varying landscape with different terrain. The patients' task included careful inspection of the spatial scenes (stimulation of the magno-system) and of individual objects on the monitor (stimulation of the parvo-system). The third stage was a repeated measurement of cognitive visual evoked potentials and contrast sensitivity. We conclude that patients with schizophrenia of 1–3 years' duration show a dysfunction of the parvo-system, and that the virtual environment improved the efficiency of this system by stimulating object vision.

Funding: Russian Science Foundation (№14-15-00918)

[1P034] Impaired cognitive functioning in first-episode patients

Maya Roinishvili1, Mariam Oqruashvili2, Tinatin Gamkrelidze2, Michael Herzog3 and Eka Chkonia4

1Institute of Cognitive Neurosciences, Agricultural University of Georgia

2Tbilisi Mental Health Center, Tbilisi, Georgia

3Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

4Department of Psychiatry, Tbilisi State Medical University, Tbilisi, Georgia

The first psychotic episode is an important period for the prevention of cognitive and social deterioration in schizophrenia. Cognitive deficits are of particular interest since they are evident even before a proper diagnosis can be made. Interestingly, there is a relation between cognitive deficits and social functioning. Here, we investigated changes in cognitive and social functioning over one year and also determined the association of social functioning with cognitive impairments and psychopathological symptoms in first-episode patients. 32 patients with a first psychotic episode and 32 healthy controls were investigated. Cognitive functions such as visual perception, executive functions, and sustained attention were tested with visual backward masking (VBM), the Wisconsin Card Sorting Test (WCST), and the Continuous Performance Test (CPT). Follow-up tests were carried out after 6 and 12 months. Social functioning of the patients was evaluated with the Health of the Nation Outcome Scales (HoNOS). Cognitive functions of patients were impaired compared to the healthy controls in all 3 tests. Performance in the cognitive tests did not change significantly during the year. Treatment compliance, however, improved social and symptom indicators.

[1P035] Inverted saccade adaptation in Parkinson’s disease

Mark R Harwood1, Alicia Perre-Dowd2 and Annabelle Blangero3

1Psychology, University of East London

2City College of New York

3University of Oxford

Parkinson’s disease (PD) is best known as a limb and body movement disorder, but it can also affect the initiation, dynamics and amplitude of saccadic eye movements. Multiple small ‘staircase’ saccades, each undershooting the desired target location, are commonplace in reflexive PD saccades. We hypothesized that this might result from an abnormal amplitude-increasing adaptation mechanism (‘saccade adaptation’ usually corrects consistent landing errors). To test this hypothesis we recorded 20 PD patients and age-matched controls in a reflexive saccade paradigm with a brief intertrial interval. In separate sessions with the same subjects, we stepped the target forward (or backward) by 20% of the initial target amplitude during the saccade towards it. We found significant amplitude reduction in both sessions in the PD subjects, revealing an inverted saccade adaptation response to forward intrasaccadic target steps. This apparently maladaptive response might explain the persistence of ‘staircase’ saccades in PD.

Funding: National Science Foundation

[1P036] Gaze fixation during slowed-down presentation of handwriting movements in adults with autism spectrum disorders

Anaïs Godde, Raphaele Tsao and Carole Tardif

Laboratoire PsyCLE, Université Aix Marseille, France

Autism spectrum disorders (ASD) are neurodevelopmental disorders characterized by impairments in social communication and by restricted interests and behavior (DSM-5, APA, 2013). Some of these impairments may be attributed to a surrounding world that moves too fast (Gepner & Feron, 2009). Previous studies of children with ASD suggest better information processing when the presentation of dynamic facial expressions is slowed down (Charrier, 2014). We hypothesized that visually slowing down the presentation of handwriting movement (HM) could help adults with ASD improve their handwriting, owing to better information processing. Indeed, HM in adults with ASD is a poorly investigated domain in spite of documented difficulties (Beversdorf et al., 2001). We showed movies of dynamic handwriting of non-letters and non-words to adults with ASD and to two control groups matched on chronological age and non-verbal mental age. We manipulated presentation speed (real-time, slow and very slow) and handwriting complexity. For the ASD group, at slow speed with complex non-words, gaze fixation showed significantly more fixations on the word and fewer fixations on the hand compared to real-time speed. This result could indicate that slowing down facilitates the selection of relevant information for HM.

[1P037] Effect of unconscious fear-conditioned stimuli on eye movements

Apoorva R Madipakkam, Marcus Rothkirch, Kristina Kelly, Gregor Wilbertz and Philipp Sterzer

Visual Perception Laboratory, Department of Psychiatry, Charité – Universitätsmedizin Berlin

The efficient detection and evaluation of threat from the environment is critical for survival. Accordingly, fear-conditioned stimuli receive prioritized processing and capture attention. Although consciously perceived threatening stimuli have been shown to influence eye movements, it is unknown whether eye movements are influenced by fear-conditioned stimuli presented outside of awareness. We performed a classical fear-conditioning procedure with fearful faces, using an aversive noise as the unconditioned stimulus. In a subsequent test phase, participants’ eye movements were recorded while they were exposed to these fear-conditioned stimuli, which were rendered invisible using interocular suppression. Chance-level performance in a manual forced-choice task demonstrated participants’ unawareness. Differential skin conductance responses and a change in participants’ subjective ratings of the fearfulness of the faces indicated that the conditioning procedure was effective. In contrast, eye movements were not specifically biased towards the fear-conditioned stimulus. These results suggest that (I) the initiation of eye movements towards fear-conditioned stimuli requires awareness, or (II) a saccadic bias towards fear-conditioned stimuli in the absence of awareness hinges on the stimulus feature that is conditioned: while fear-conditioning contingent on high-level features, such as face identities, may abolish overt attentional selection without awareness, it may still occur for fear-conditioned low-level features.

Funding: DFG, EXC257 NeuroCure (DFG funded)

[1P038] The dynamics of gaze trajectory when imagining a falling object

Nuno A De Sá Teixeira1 and Heiko Hecht2

1Instituto de Psicologia Cognitiva, Universidade de Coimbra

2Johannes Gutenberg-Universität Mainz, Germany

Despite known neural delays in the perception-action cycle, humans show impressive abilities when interacting with moving objects. Allegedly, the brain capitalizes on physical regularities by developing internal models which aid and guide the processing of dynamic information. In striking contrast, humans show significant deviations from normative physics when asked to reason about dynamic events. To date, few inquiries have been made into how humans process dynamic information at intermediate levels between sensorimotor and cognitive stages. The present work aims to fill this gap: participants were shown animations depicting a basketball being launched at different speeds from the top of a brick wall of varying height. The ball disappeared either just before falling or 350 ms after leaving the wall. Participants were asked to vividly imagine that the ball continued its motion and to indicate when they thought it would reach the floor, while their eye movements were recorded. The visual information in the first 350 ms had remarkable positive effects on the eye trajectory but not on temporal accuracy. The gaze followed paths in close agreement with the physical trajectory, except for a deceleration of eye movements in the final phase, which might account for an overestimation in the time judgements.
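
For reference, the normative arrival time under ideal projectile motion (not given in the abstract, but useful when interpreting agreement with the physical trajectory) depends only on the release height and gravity, not on the ball's horizontal launch speed:

```latex
% Free-fall time from height h (ideal projectile, no air resistance)
t_{\mathrm{fall}} = \sqrt{\frac{2h}{g}}, \qquad g \approx 9.81\ \mathrm{m/s^2}
```

For a hypothetical 5 m wall this gives roughly 1.0 s, of which only the first 350 ms were visible in the longer presentation condition.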

Funding: Fundação para a Ciência e Tecnologia - SFRH/BP/84118/2012

[1P039] Does conceptual quantity of words affect the spatial coding of saccade responses, like a SNARC effect?

Alexandra Pressigout, Agnès Charvillat, Alexandra Fayel, Viktoriya Vitkova and Karine Doré-Mazars

Laboratoire Vision Action Cognition/Institut de Psychologie, Université Paris Descartes

The spatial-numerical association of response codes (SNARC) effect has been abundantly documented with numerical quantity for Arabic numbers. In the present study, we tested whether the representation of conceptual quantity in words (“IDOLE” versus “LEGION”) is automatically activated and spatially coded in the same way as numerical quantity. On the basis of saccade responses, we examined gaze durations both in a baseline condition, where participants had to judge the parity of number words (from zero to nine), and in a condition where they had to judge the gender of words expressing small versus large quantities. Preliminary results show the expected SNARC effect elicited by numerical quantity (i.e. faster gaze durations for leftward/rightward responses with small/large numbers, respectively). Surprisingly, a different pattern was found for conceptual quantity, with a gender-space interaction (i.e. faster gaze durations for leftward/rightward responses with masculine/feminine words, respectively) which tends to interfere with a SNARC-like effect. Results are discussed in terms of multiple stimulus-to-response mapping conflicts involving the semantic and grammatical levels of word processing.

[1P040] Identifying information processing strategies during the picture completion test from eye tracking data

Ayano Kimura, Shinobu Matsunaga and Takanori Matsuno

Department of Psychology, Faculty of Humanities and Social Sciences, Showa Women's University, Japan

The Picture Completion Test (PCT) is a visuospatial cognitive task in which an important missing portion of a picture must be identified. The PCT is included as a sub-test in the Binet and Wechsler intelligence tests and is thought to involve a wide range of abilities including long-term memory, knowledge, and reasoning. However, few studies to date have examined the cognitive processing involved in the PCT. In the present study, we administered our own version of the PCT to 20 adults (mean age: 21.61 years, SD: 1.62) and analyzed eye tracking data during the test. We analyzed patterns of gaze toward areas including the missing portion of a picture (Critical Areas of Interest: CAOI) by examining their relationship with assumed visual-information-processing subtypes of the PCT. In many of the pictures, the relative extent of gazing toward the CAOI was linked to successful performance on the test. Furthermore, using the mean first fixation time on the CAOI and the difficulty of each picture as indicators, our findings suggest that differences in information processing strategies can be identified.

Funding: This work was supported by JSPS KAKENHI Grant Number 26590163 (to S. M.).

[1P041] The role of eye movements during image learning and recognition

Polina Krivykh and Galina Menshikova

Psychology, Lomonosov Moscow State University, Russia

The aim of our study was to reveal eye movement characteristics during two tasks: A) memorizing a set of images and B) recognizing them among other images. In session A the participants were shown a short comic strip (15 pictures, each for 5000 ms) and asked to learn the pictures. In session B they were shown the same 15 images plus 5 unfamiliar but story-related images inserted into the strip, and were asked to recognize the images learned in session A. Eye movements were recorded during both sessions. For learned stimuli, fixation durations, fixation counts and saccade counts during correct recognition were significantly lower than the same eye movement characteristics for incorrect answers. For unfamiliar stimuli, eye parameters during perception were similar to those observed in session A. The data also showed that the features selected in the first and second fixations during the learning session were not recapitulated during the recognition session. Our results indicate that eye movement characteristics may be considered reliable indicators of learning processes.

[1P042] Fixations on human face: cross-cultural comparison

Kristina I Ananyeva, Ivan Basyul and Alexander Demidov

Psychology, Moscow Institute of Psychoanalysis, Moscow, Russia

We studied cross-cultural differences in eye movement characteristics with 49 Russian and 60 Tuvan subjects viewing 14 color images of Russian and Tuvan still faces in two conditions: (1) free viewing of the face, and (2) identification of the race of the face. Recordings were made using an SMI RED-m 120 Hz eye-tracker. The average duration and number of fixations in different facial areas were counted. In the free viewing condition, Russians demonstrated significantly longer fixation durations in the left and middle parts of the face when viewing both Tuvan and Russian faces. Tuvans demonstrated significantly more fixations in these areas, and longer average fixation durations in the midface area of Russian faces. Russians showed significantly more fixations in the midface area of Tuvan faces. In the race attribution task, Russians showed significantly shorter fixation durations and a larger number of fixations in all facial areas in comparison with the free viewing condition. Tuvans showed similar fixation durations in the left, right and midface areas, similar numbers of fixations in the midface and right areas, and, compared to Russians, significantly longer fixation durations and a smaller total number of fixations in all facial areas.

Funding: The study was supported with the Russian Federation Presidential grant for young scientists, project no. MK-7445.2015.6.

[1P043] Is the remote distractor effect on saccade latency greater when the distractor is less eccentric than the target?

Soazig Casteau1, Françoise Vitu1 and Robin Walker2

1Psychology Department, Aix-Marseille Université, France

2Royal Holloway University of London, UK

The remote distractor effect (RDE) shows that saccades are initiated with a longer latency when their target is displayed with a foveal and/or remote distractor stimulus (Walker et al., 1997). It has been attributed to competition between a fixation (gating) system, whose activity is enhanced when a stimulus falls within an extended foveal region, and a move system associated with peripheral stimulation. According to this hypothesis, the critical variable accounting for saccade latency is the distractor-to-target-eccentricity ratio. However, while some studies suggest that inter-stimulus distance might be the relevant variable, the ratio has never been manipulated independently of inter-stimulus distance. Here, we manipulated orthogonally the distractor-to-target-eccentricity ratio and the angular separation between the stimuli. The target, always on the horizontal axis, appeared either alone or with a distractor on the same or a different axis, with the distractor eccentricity, and hence the ratio, also being manipulated. As expected, we observed that the presence of a distractor systematically delayed saccade onset (the RDE) compared to a singleton target condition. Interestingly, this effect tended to decrease as the distractor-to-target-eccentricity ratio increased, irrespective of the angular separation between the stimuli. These findings provide further evidence for the fixation-move hypothesis.

Funding: Fondation Fyssen postdoctoral fellowship to S. Casteau & Experimental Psychology Society small grant to R. Walker and S. Casteau

[1P044] The correlation between visual perception and verbal description of painting

Veronika Prokopenya and Elena Chernavina

Laboratory for Cognitive Studies, Division of Convergent Studies in Natural Science and Humanities, St.Petersburg State University, Russia

This study focuses on the perception and verbalization of visual information. Our aim was to investigate whether there is any correspondence between the visual perception and the verbal description of complex images. Eye movements were recorded while 30 subjects looked at a classic genre-scene painting (free viewing); they were then asked to compose a coherent verbal description of it. Comparative analysis revealed a strong correlation between the eye movement patterns (fixation distribution and duration) during free viewing and the subsequent narration: the more often and the longer the gaze was directed to a certain region of the picture, the more words were dedicated to this region in the verbal description. Furthermore, our results showed a correlation between the ordinal sequence of fixations on depicted objects and the order in which these objects were mentioned in the narrations. Although paintings and verbal texts have different structures (the latter are composed of discrete units and have a linear structure, while the former are not divided into discrete units and are not linear), our data show that there is a relationship between the way humans perceive visual information and the way they express it in natural language.

[1P045]

Priscilla Heard and Hannah Bainbridge

Psychology, University of the West of England

Twenty-two participants viewed Nimstim and Ekman emotional faces. Half the faces had their eyes covered with sunglasses. Participants fixated a cross while the face was presented 15 degrees to the right or left of fixation. They made a seven-alternative forced-choice response of one of the standard emotions. Once this emotion recognition had been made, participants were allowed to freely view the face centrally and name another emotion if they so wished. The observers' eyes were tracked to check that fixation was maintained and to monitor where the eyes moved when free viewing was permitted. Emotion recognition 15 degrees in the periphery was overall 60% correct for the normal faces but only 52% for the faces wearing sunglasses. When centrally viewed, emotion recognition went up to 80% for the normal faces but was only 67% for those wearing sunglasses. The happy face was the most recognized emotion under all conditions. Eye tracking data will be reported.

[1P046] Planning functional grasps of tools. What can eye movements tell us about motor cognition?

Agnieszka Nowik, Magdalena Reuter and Gregory Kroliczak

Institute of Psychology, Adam Mickiewicz University, Poland

The visual structures and/or the perceived functional characteristics (i.e., affordances) of tools are thought to automatically “potentiate” relevant actions. Is this also the case for eye movements in the absence of overt tasks? We tested this idea directly by asking participants skilled in tool-related actions to freely view pictures of tools or to watch them with a view to planning functional grasps that would enable immediate use. An SMI RED eye-tracker was used to study the patterns of eye movements. The stimuli were high-resolution photos of workshop, kitchen, and garden tools shown at three angles (0, 135, and 225 degrees) in foreshortened perspective, emulating 3D viewing. Although, as expected, the number of saccades did not differ between tasks, there was a significant interaction between task and the inspected object part (i.e., the part affording grip vs. the part enabling action). Namely, when participants planned functional grasps, their attention was drawn significantly longer to parts affording appropriate grips. This was not the case during free viewing, wherein fixations were distributed more equally across parts. These outcomes indicate that the visual exploration of tools is quite sensitive to task requirements. Therefore, other cognitive factors must contribute to automatic action potentiation when tools are encountered.

Funding: NCN Grant Maestro 2011/02/A/HS6/00174 to GK

[1P047] Testing the level of knowledge of a foreign language using Eye-Tracking technology

Maria Oshchepkova and Galina Menshikova

Faculty of Psychology, Lomonosov Moscow State University, Russia

The aim of our work was to create a method for testing Russian students' level of knowledge of English using eye-tracking technology. Three types of images were constructed, consisting of: 1) two English words, one correctly and one incorrectly spelled, located right and left of the fixation point; 2) an English word and four translations into Russian (1 correct and 3 incorrect); 3) a Russian word and four translations into English (1 correct and 3 incorrect). The images were presented for 5000 ms. The participant's task was to choose the right variant. Eye movements were recorded during the task. The results showed that fixation durations and fixation counts on the correct word were significantly larger than on the misspelled word. The same eye activity was shown during the choice of the correct translation into Russian or English. Our results are in good agreement with the E-Z Reader model (Reichle et al., 1998). Thus, eye movement characteristics make it possible to develop a method for testing the level of knowledge of a foreign language.

[1P048] Eye movements in second language vocabulary acquisition

Anna Izmalkova, Irina Blinnikova and Sofia Kirsanova

Psychology, Moscow State Linguistic University, Russia

We used an eye tracking technique to investigate the process of reading second-language (SL) texts. To date, studies of eye movements in SL reading have focused on the influence of contextual characteristics - frequency, word familiarity, etc. (Williams & Morris, 2004; Rayner et al., 2011). It remains largely unknown, however, how SL vocabulary acquisition techniques are reflected in eye movements. Eye movement data of 26 Russian-speaking students were recorded as they read an English (SL) text with 10 implanted low-frequency words. We classified the vocabulary acquisition techniques the subjects reported based on the work of H. Nassaji (2003): using contextual, morphological or discourse knowledge. We also analyzed the mistakes subjects made when asked to translate the words into their native language. The coefficient of contingency between the techniques and the mistakes was 0.43 (p < .05). Significant distinctions were found in eye movement patterns when different vocabulary acquisition techniques were used: the most fixations and returns to the words were made when the use of contextual knowledge was reported, whereas using discourse knowledge resulted in fewer fixations on the words and shorter saccadic amplitudes. The findings indicate that subjects use different vocabulary acquisition techniques, which are consistent with recall mistakes and are reflected in eye movement characteristics.
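
For readers unfamiliar with the statistic, Pearson's contingency coefficient reported above is derived from the chi-square of the techniques-by-mistakes table (the underlying chi-square and cell counts are not given in the abstract):

```latex
% Pearson's contingency coefficient for an r x c table; N = total observations
C = \sqrt{\frac{\chi^2}{\chi^2 + N}}
```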

Funding: The study was sponsored by the Russian Fund for the Humanitarian Sciences (№ 16-36-00044)

[1P049] Evolutive Gradient Face Compositing Using the Poisson Equation

Ruben Garcia-Zurdo

Psychology, Colegio Universitario “Cardenal Cisneros”, Spain

Face compositing aims to create images resembling the faces seen by witnesses, for forensic or legal purposes. Composites have allowed the identification of offenders and are accepted as trial evidence. State-of-the-art systems for face compositing, such as the EvoFit system, are based on the evolution of the Principal Component Analysis (PCA) coefficients of a sample of images according to participant selections. Given that the image gradient represents image features more reliably, we present an evolutive method where images are represented by the complex PCA coefficients of their gradient instead of their pixels. To translate the gradient representation into the pixel domain, we solve the corresponding Poisson equation relating the image Laplacian to the image pixels, using Neumann boundary conditions. This method is amenable to real-world applications. We performed a within-subjects design, controlling for the distinctiveness of target identities, to compare the perceived results of the pixel and gradient methods in evolutive face compositing. Participants perceived a higher likeness between the resulting composites and the target identity when the composites were built using the gradient representation.
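
The abstract names the key reconstruction step (solving a Poisson equation with Neumann boundary conditions) without giving an implementation. Purely as an illustration of that step, leaving the PCA evolution aside, the sketch below uses a standard DCT-based Poisson solver; the function and variable names are ours, not the authors':

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_reconstruct(gx, gy):
    """Recover an image (up to an additive constant) from a target
    gradient field (gx, gy) by solving lap(u) = div(g) with Neumann
    boundary conditions, via the discrete cosine transform."""
    H, W = gx.shape
    # Divergence of the target field (backward differences, matching
    # forward-difference gradients).
    div = np.zeros((H, W))
    div[:, 0] += gx[:, 0]
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[0, :] += gy[0, :]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    # Eigenvalues of the Neumann Laplacian in the DCT-II basis.
    wy = 2.0 * np.cos(np.pi * np.arange(H) / H) - 2.0
    wx = 2.0 * np.cos(np.pi * np.arange(W) / W) - 2.0
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                  # avoid 0/0 at the DC term
    u_hat = dctn(div, norm='ortho') / denom
    u_hat[0, 0] = 0.0                  # the additive constant is arbitrary
    return idctn(u_hat, norm='ortho')
```

In an evolutive pipeline of the kind described, each candidate composite would then be rendered by perturbing the gradient-domain PCA coefficients, converting back to a gradient field, and calling a reconstruction of this sort.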

[1P050] Combined TMS and fMRI demonstrates a double dissociation between face and motor functional brain networks

David Pitcher1, Daniel Handwerker2, Geena Ianni2, Peter Bandettini2 and Leslie Ungerleider2

1Psychology, University of York

2NIMH

The brain contains multiple networks specialized for the different cognitive operations that support human thought. The extent to which these functional networks are functionally independent and segregated is unclear. We addressed this issue by causally disrupting two functional brain networks with theta-burst transcranial magnetic stimulation (TBS) and measuring the effects of this disruption across the entire brain with resting-state functional magnetic resonance imaging (rs-fMRI). Over two sessions, sixteen participants were scanned using rs-fMRI before and after TBS was delivered over the face-selective right posterior superior temporal sulcus (rpSTS) or the hand region in the right motor cortex (hMC). Results revealed that TBS delivered over the rpSTS selectively reduced connectivity in the face network more than connectivity in the motor network, while TBS delivered over the hMC selectively reduced connectivity in the motor network more than connectivity in the face network. These results demonstrate that brain networks supporting different types of cognitive operations are spatially and functionally independent and can be selectively dissociated by neural disruption of component cortical regions. We propose that systematically disrupting functional networks with TBS will facilitate our understanding of how the brain supports cognition.

Funding: NIMH Intramural

[1P051] Spatiotemporal dynamics of view-sensitive and view-invariant face identity processing

Charles C Or, Joan Liu-Shuang and Bruno Rossion

Psychological Sciences Research Institute & Institute of Neuroscience, University of Louvain

How humans differentiate faces across substantial variations in head orientation is not well understood. Using fast periodic visual stimulation in electroencephalography (EEG), we investigated face individualization in 20 observers across 7 ranges of viewpoint variation: 0° (no change), ±15°, ±30°, ±45°, ±60°, ±90°. Stimulation sequences (60 s each) consisted of one face identity varying randomly in viewpoint at F = 6 Hz (6 faces/s), interleaved with different face identities every 7th face (F/7 Hz = 0.86 Hz). Periodic EEG responses at 6 Hz captured general sensitivity to faces; those at 0.86 Hz and harmonics captured face individualization. All observers showed general face-sensitive responses, with a view-sensitive pattern emerging over occipito-temporal regions with viewpoint variations. Face-individualization responses, also present in all observers, decreased linearly over occipito-temporal regions with increasing viewpoint variation (responses at ±90° were <50% of those at 0° variation), suggesting reduced face-identity discrimination. Analyzing the face-individualization responses in the time domain revealed a dissociation between an early (∼200–300 ms), view-sensitive response and a later (∼300–600 ms), view-invariant response. These findings suggest that an initially reduced ability to discriminate face identities across viewpoint variations is partly compensated by a later view-invariant process.
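
As a concrete illustration of the design arithmetic (our own sketch; anything beyond the reported rates is an assumption), face-individualization responses in such paradigms are read out at the harmonics of F/7 that do not coincide with harmonics of the 6 Hz base rate:

```python
F = 6.0          # base stimulation rate (faces/s)
oddball = F / 7  # identity-change rate, ~0.857 Hz (0.86 Hz in the text)
# Harmonics of the oddball rate, excluding those that coincide with
# base-rate harmonics (every 7th one: 6 Hz, 12 Hz, ...).
oddball_harmonics = [k * oddball for k in range(1, 15) if k % 7 != 0]
print([round(f, 3) for f in oddball_harmonics])
# [0.857, 1.714, 2.571, 3.429, 4.286, 5.143, 6.857, ..., 11.143]
```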

Funding: This work was supported by the Postdoctoral Researcher Fellowship from the National Fund for Scientific Research to C.O. and J.L., and Grant facessvep 284025 from the European Research Council to B.R.

[1P052] Holistic Processing of Static and Rigidly Moving Faces

Mintao Zhao and Isabelle Bülthoff

Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen

Unlike most everyday objects, faces are processed holistically—they tend to be perceived as indecomposable wholes instead of collections of independent facial parts. While holistic face processing has been demonstrated with a variety of behavioral tasks, it has predominantly been observed with static faces. Here we investigated three questions about the holistic processing of moving faces: (1) are rigidly moving faces processed holistically? (2) does rigid motion reduce the magnitude of holistic processing? and (3) does holistic processing persist when study and test faces differ in terms of facial motion? Participants completed two composite face tasks (using a complete design), one with static faces and the other with rigidly moving faces. We found that rigidly moving faces are processed holistically. Moreover, the magnitude of the holistic processing effect observed for moving faces is similar to that observed for static faces. Finally, holistic processing still holds even when the study face is static and the test face is moving, or vice versa. These results provide convincing evidence that holistic processing is a general face processing mechanism that applies to both static and moving faces, and indicate that rigid facial motion neither promotes part-based face processing nor eliminates holistic face processing.

Funding: The study was supported by the Max Planck Society.

[1P053] Face inversion reveals configural processing of peripheral face stimuli

Petra Kovács1, Petra Hermann2, Balázs Knakker2, Gyula Kovács3 and Zoltán Vidnyánszky2

1Faculty of Natural Sciences, Department of Cognitive Science, Budapest University of Technology and Economics

2Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences

3Institute of Psychology, Friedrich-Schiller-University of Jena, Germany

DFG Research Unit, Person Perception, Friedrich-Schiller-University of Jena

Sensitivity to the configural properties of face stimuli is a characteristic of expert face processing, and it can be unveiled by changing the orientation of foveal face stimuli, resulting in impaired face perception and modulation of the amplitude and latency of the N170 component of ERP responses. However, to what extent configural face processing is preserved in the periphery remains to be explored. Here we addressed this question by measuring 3AFC face identity discrimination performance and ERP responses to upright and inverted face stimuli presented foveally or peripherally (10 deg to the left or right of fixation). The results revealed significant face inversion effects on the behavioural and ERP responses both for foveal and for peripheral face stimuli. Furthermore, the strength of the behavioural as well as the N170 face inversion effects showed a strong correlation when conditions with left and right visual field presentation were compared, whereas we failed to show an association in these measures between the foveally and peripherally presented face conditions. These findings provide evidence for configural face processing in the periphery and suggest that foveal and peripheral configural processing of faces might be subserved by different neural processes.

[1P054] Five Commonly Used Face Processing Tasks Do Not Measure The Same Construct

Elizabeth Nelson, Abhi Vengadeswaran and Charles Collin

Psychology, University of Ottawa

Researchers have used a variety of tasks to examine holistic/configural processing in face recognition, with the implicit assumption that they are all measuring the same construct. However, there is a lack of consensus with respect to how recognition performance correlates across the most common tasks. Additionally, there is a lack of evidence demonstrating what each task is actually measuring: featural, configural, or holistic face processing, or some combination thereof. Our hypothesis is that the most commonly-used tasks measure different constructs, or different components of face processing. We conducted a correlational analysis of efficiency scores across the following tasks: the Complete Composite Effect Task, the Partial Composite Effect Task, the Face Inversion Effect Task, the Configural/Featural Difference Detection Task, and the Part Whole Effect Task. Results demonstrate that performance is most strongly correlated within each task, with little correlation across tasks. This suggests that each task is measuring a unique component of facial recognition. The two versions of the Composite Effect Task were moderately correlated with each other. Implications for the face recognition literature will be discussed.

Funding: NSERC

[1P055]

Nichola Burton1, Linda Jeffery2, Jack Bonner2 and Gillian Rhodes2

1School of Psychology, The University of Western Australia

2ARC Centre of Excellence in Cognition and its Disorders, The University of Western Australia

We investigated the timecourse of the expression aftereffect, an adaptation aftereffect that biases perception of facial expressions towards the opposite of the adapted expression. In Experiment 1 we examined the effect of the duration of adaptation and test stimuli on the size of the aftereffect. We found that the aftereffect builds up logarithmically and decays exponentially, a pattern also found for facial identity and figural face aftereffects, and for lower-level visual aftereffects. This “classic” timecourse is consistent with a perceptual locus for expression aftereffects. We also found that significant aftereffects were still present as long as 3200 ms after adaptation. We extended our examination of the longevity of the aftereffect in Experiment 2 by inserting a stimulus-free gap between adaptation and test. A significant expression aftereffect was still present 32 seconds after adaptation. The persistence of the aftereffect suggests that this effect may have a considerable impact on day-to-day expression perception.
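
The abstract reports only the qualitative shape of the timecourse; purely as an illustration, functional forms of the following kind capture logarithmic build-up and exponential decay (the parameters and time constants are free quantities that would have to be fit to the data):

```latex
% Illustrative forms only; not fitted equations from the study
A_{\mathrm{build}}(t) = a \ln\!\left(1 + \frac{t}{\tau_b}\right), \qquad
A_{\mathrm{decay}}(t) = A_0 \, e^{-t/\tau_d}
```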

Funding: Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders (CE110001021), an ARC Professorial Fellowship to Rhodes (DP0877379), and an ARC Discovery Outstanding Researcher Award to Rhodes (DP130102300).

[1P056] Fast and objective quantification of face perception impairment in acquired prosopagnosia

Joan Liu-Shuang, Katrien Torfs and Bruno Rossion

IPSY - IONS, University of Louvain

The assessment of perceptual deficits, common in many neurological conditions, is challenging. Fast periodic “oddball” stimulation coupled with electroencephalographic (EEG) recordings allows objective and sensitive quantification of visual perception without requiring explicit behavioural output. In this paradigm, base stimuli appear at a fast fixed rate (6 Hz, SOA = 170 ms) with target stimuli inserted at regular intervals (1/5 stimuli = 6/5 Hz). Periodic EEG responses at 1.2 Hz and its harmonics (2.4 Hz, 3.6 Hz…) reflect perceptual discrimination between base and target stimuli at a single glance. We tested this approach with PS, a well-described patient specifically impaired at face recognition following brain damage (prosopagnosia). We first presented sequences containing “object” stimuli interleaved with “face” stimuli (sequence: ObjObjObjObjFaceObjObjObjObjFace…). Consistent with her preserved ability to detect faces, PS showed periodic face-selective responses within the normal range. However, when testing face individualisation with “different” face identities (B, C, D…) inserted into sequences containing a “same” face identity (A; sequence: AAAABAAAACAA…), face individualisation responses were absent for PS. By contrast, significant responses were found in all age-matched controls. These observations result from only 8–12 min of recordings and demonstrate the value of fast periodic visual stimulation in EEG for assessing visual perception in difficult-to-test populations.
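
A common way to quantify such periodic responses (assumed here; the abstract does not spell out its quantification) is to express the amplitude at each target frequency bin relative to the surrounding noise bins of the EEG amplitude spectrum:

```python
import numpy as np

def snr_at_bin(amp_spectrum, bin_idx, n_neighbors=10, skip=1):
    """Signal-to-noise ratio at one frequency bin: amplitude divided by
    the mean amplitude of neighbouring bins (immediate neighbours
    skipped). Assumes the bin is not at the spectrum's edge; parameter
    values are illustrative, not taken from the study."""
    left = amp_spectrum[bin_idx - skip - n_neighbors : bin_idx - skip]
    right = amp_spectrum[bin_idx + skip + 1 : bin_idx + skip + 1 + n_neighbors]
    noise = np.mean(np.concatenate([left, right]))
    return amp_spectrum[bin_idx] / noise
```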

Funding: ERC grant “facessvep”, FNRS PhD grant FC91608

[1P057] Vertically oriented cues to face identification are susceptible to color manipulations

Kirsten Petras, Laurie Geers and Valerie Goffaux

Faculty of Psychology and Educational Sciences, Université Catholique de Louvain

Face-identity processing has been suggested to rely primarily on horizontally-oriented cues. However, most evidence for this horizontal advantage comes from experiments using grayscale photographs, discarding the potential contribution of the color cues which characterize natural viewing conditions. We tested behaviorally whether color cues influence the horizontal dependence of facial identity processing. Participants were familiarized with two computer-generated, full-color human face avatars. In a subsequent test period, these familiar faces appeared with their native color spectrum (color-congruent condition) in half of the trials and with the color spectrum of the other identity (color-incongruent condition) in the remaining half. All images were filtered to preserve either horizontally-oriented information, vertically-oriented information or a combination of both. Filtered faces were presented together with decreasing levels of grayscale noise in order to obtain the psychometric function of face identification in each participant. We found that the recognition of vertically-filtered but not of horizontally-filtered faces suffers from color incongruence, resulting in a disruption of identity discrimination even in the absence of noise. Our results suggest that vertically-oriented information may be instrumental in conveying color cues to face identity. These findings highlight the importance of considering the color properties of face stimuli when investigating identity recognition.
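
The orientation filtering described above is typically implemented as an angular bandpass in the Fourier domain. The sketch below is a simplified, hard-edged version under our own conventions (published studies usually use smooth angular windows, and the bandwidth here is an assumption):

```python
import numpy as np

def orientation_filter(img, center_deg, bandwidth_deg=20.0):
    """Keep Fourier energy within +/- bandwidth_deg of one image
    orientation. Convention (assumed): 0 deg = horizontal structure,
    whose spectral energy lies along the vertical frequency axis,
    hence the +90 deg shift below."""
    H, W = img.shape
    fy = np.fft.fftfreq(H).reshape(-1, 1)
    fx = np.fft.fftfreq(W).reshape(1, -1)
    angle = np.rad2deg(np.arctan2(fy, fx))          # -180..180 deg
    target = center_deg + 90.0
    # Angular distance to the target axis, modulo 180 (orientations).
    d = np.abs((angle - target + 90.0) % 180.0 - 90.0)
    mask = (d <= bandwidth_deg).astype(float)
    mask[0, 0] = 1.0                                # keep mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```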

[1P058] The contribution of spatial transformations in the estimation of the psychological characteristics of people by facial expressions

Vladimir A Barabanschikov and Irina Besprozvannaya

Psychology, Moscow Institute of Psychoanalysis, Russia

E. Brunswik’s (1956) studies of the perception of schematic faces showed that transformations of eye location, nose length and mouth height make it possible to induce the experience of different emotional states and personality traits. V.A. Barabanschikov and E.G. Jose (2015) showed that the tendencies of induced perception described by Brunswik are preserved when photos of real faces are perceived. Our study focused on the estimation of psychological features from facial images with configural changes in structure (sadness and joy). Images of male and female faces from the Pictures of Facial Affect base by P. Ekman, subjected to linear transformations of four configural features, were used as stimuli. The study involved 103 participants. The type of configural transformation was the independent variable; the dependent variable was the estimation of psychological qualities using bipolar scales. Significant differences were obtained on a number of scales for faces with different transformational changes. “Neutral” faces were estimated as more charming, more active, resolute and confident. “Sad” faces were estimated as more unsociable and insincere. “Happy” faces were estimated as talkative, conscientious, open and vigorous.

Funding: The study was supported with the Russian Science Foundation grant #14-18-03350 (“Cognitive mechanisms of non-verbal communication”).

[1P059] Skilled face recognizers have higher contrast sensitivity in the right hemifield

Simon Faghel-Soubeyrand and Frédéric Gosselin

Department of Psychology, Université de Montréal

We tested the hypothesis that individual differences in face recognition ability can be accounted for by systematic, qualitative variations in the use of spatial information. In experiment 1, 75 participants performed a Bubbles face gender discrimination task. The group classification image of observers from the top performance tercile, as indexed by the quantity of samples required to reach target performance, revealed the use of the eye on the right of the face images, while the group classification image from the bottom tercile showed the use of the eye on the left of the face. In experiment 2, we asked whether these better face recognizers have higher contrast sensitivity in their right hemifield. Thirty participants completed the same task as in experiment 1 as well as an orientation discrimination task (horizontal/vertical) with Gabors that spanned 5 deg and ranged from 0.25 to 12 cpd, presented at 2.2 deg left of, right of, and below the fixation cross. Right-eye usage was linked to higher maximum contrast sensitivity in the right (r = .49, p = .007) but not in the left hemifield (r = −.19, p = .34). These results indicate for the first time a link between the lateralization of a low-level visual ability—contrast sensitivity—and face recognition ability.

Funding: Conseil de Recherche en Sciences Nature et Génie (CRSNG), Vision Health Research Network (VHRN)

[1P060] Face-responsive ERP components show time-varying viewing angle preferences

Anna L Gert1, Tim C Kietzmann1 and Peter König2

1Institute of Cognitive Science, Osnabrück University

2Institute for Neurophysiology and Pathophysiology, University Clinics Hamburg-Eppendorf

In our everyday life, we encounter faces from a variety of viewing angles. Despite our ability to generalize across views, selected angles can provide behavioral benefits. For instance, the 3/4 view has been described as advantageous in face identification. To test the physiological substrate of these effects, we presented faces from a large variety of viewing angles and examined changes in the amplitude of the classical EEG face-processing components. Neural responses were recorded using high-density, 128-channel EEG while subjects viewed images of four identities, seen from 37 angles spanning left to right profile and presented in random sequence. The experimental task was kept condition-orthogonal by asking the subjects to respond to occasional color changes in the central fixation dot. Cluster permutation tests and post-hoc pairwise t-tests revealed significant effects of viewing angle, even after controlling for the low-level contrast of the stimulus quadrants. Unexpectedly, while the P100 exhibited a dominant preference for viewpoints close to 45°, the face-selective N170 and C250 did not. The N170 showed the strongest activation for profile and front-facing viewpoints, whereas the C250 exhibited no clear viewpoint preference. Together, these findings suggest shifting preferences for distinct viewing angles and increasing viewpoint invariance over time.

Funding: Research and Innovation programs of the European Union (FP7-ICT-270212, H2020-FETPROACT-2014 grant SEP-210141273), European Research Council (ERC-2010-AdG #269716)

[1P061] Interaction effect between length of nose, viewing angle of face, and gender in estimation of age

Takuma Takehara1 and Toyohisa Tanijiri2

1Department of Psychology, Doshisha University

2Medic Engineering Co. Ltd.

Studies have reported that the length of the nose, the viewing angle of a face, and the gender of a face influence the estimation of age. These previous studies, however, used two-dimensional facial images in frontal or oblique views and presented those photos to participants. No studies have used digitally averaged three-dimensional representations. We digitally generated three-dimensional images of male and female averaged faces as controlled facial stimuli and manipulated independent variables such as the length of the nose and the angle of view of the face. Participants were then asked to estimate the age of the faces. Results showed that the estimated age of the averaged faces was higher than the mean of the actual ages of the original faces, suggesting that averaging faces increases age estimates. Also, the estimated age of averaged male faces was higher than that of female faces in all conditions. When the faces were presented from different angles, the estimated age of averaged male faces was reduced compared to a purely frontal view. Moreover, the estimated age of faces with lengthened noses was higher than that of faces with other nose lengths.

[1P062] Improved Discrimination of Facial Ethnicity Induced by Face Adaptation

Miao Song

School of Information and Engineering, Shanghai Maritime University

Adaptation to a face is reported to bias the perception of various facial dimensions and to improve the sensitivity of the neuronal populations tuned to that face. In the present study, we examined whether face adaptation could influence the discrimination of facial ethnicity. Facial ethnicity was manipulated by morphing between Asian and Caucasian faces. We measured just-noticeable differences (JNDs) for an Asian/Caucasian face in Asian subjects after they adapted to an Asian face, a Caucasian face, or a blank stimulus. The results suggest that adaptation to an Asian or Caucasian face improves ethnicity discrimination for faces at the adapted level, and that this effect transfers across changes in image size and location. Moreover, the improvement is slightly stronger for Asian subjects in the Caucasian-face adapting condition than in the Asian-face adapting condition. Our results indicate that recent visual experience can calibrate the high-level visual system and selectively improve discrimination at the adapted characteristic.

Funding: Supported by the School Foundation of SMU (No. 20130468), NSFC (No. 61403251), and NSFS (No. 14ZR1419300)

[1P063] Painted features transform the shape of 3-D surfaces they are painted on – the case of faces

Thomas V Papathomas and Attila Farkas

Laboratory of Vision Research, Rutgers University, NJ USA

Humans parse the world using objects (e.g., when searching for an item) as well as surfaces (mainly in nature, e.g., when running on uneven terrain). Certain of Patrick Hughes’s paintings, such as “Forced into Reverse Perspective” (2008) [Papathomas et al., iPerception, 2012] and “Day Dreaming” (2008) [Papathomas, “Innovating perspective”, in A New Perspective: Patrick Hughes, Flowers Gallery Publishing, 2014; Papathomas, ECVP 2014], afford compelling illustrations of object superiority over surfaces. Different objects in these paintings are painted in forced, flat or reverse perspective, and they appear to rotate in different directions even though they are painted on the same planar surface, thus “breaking” the surface into parts that exhibit disjoint motions. We report on a similar phenomenon in which realistically painted facial features drastically transform the perceived 3-D shape of the underlying human facial geometry. For example, these painted features can mask imperfections and distortions of facial parts (nose, cheeks, lips, etc.), thus significantly changing the overall appearance of faces. Remarkably, this 3-D transformation is much more powerful when the features are painted on the concave side of the mask and the viewer experiences the “hollow-mask illusion” than when the features are painted on the convex side of the mask.

[1P064] Center-surround unconscious visual contour integration

Hongmei Yan, Huiyun Du and Xiaoqiao Tang

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China

Contour integration is a fundamental function of human vision, generating coherent representations of visual objects. The traditional view holds that contour processing follows the Gestalt rules, implying that contour integration is accomplished in higher cortical areas. Others, however, have provided evidence that contour integration can occur as early as V1 (Bauer et al., 2002; Li et al., 2006; Gilad et al., 2013). A recent study suggested a model of contour integration involving an almost concurrent bidirectional cortico-cortical loop (Li et al., 2014). However, whether the contour integration process requires the participation of consciousness remains an open question. In this study, a binocular rivalry flash suppression paradigm was applied to examine whether peripheral contour cues influence central contour perception with and without awareness. The results showed that, just as visible collinear surrounding contour cues in one eye improved the performance of central contour integration in the other eye and visible disordered Gabor surrounds interfered with central integration, invisible collinear surround information in one eye also facilitated contour integration in the other eye unconsciously. We conclude that contour integration may occur without consciousness.

Funding: This work was supported by the 973 project (2013CB329401), and the Natural Science Foundations of China (61573080, 91420105).

[1P065] The influence of a (physical or illusory) barrier on motion correspondence

Elisabeth Hein and Bettina Rolke

Department of Psychology, University of Tübingen

A major task of the visual system is to organize its input in such a way that elements that belong together are represented together. Here we investigated whether a physical and/or illusory barrier can influence this correspondence process. To this end we used an ambiguous apparent motion display (the Ternus display), in which three elements are presented next to each other and shifted by one position from one frame to the next. Depending on how correspondence between elements is established, this display can be perceived as one element jumping across the other two (element motion) or as all three elements moving coherently together as a group (group motion). A barrier was introduced using rectangles in the background of the Ternus display, positioned such that one side of a rectangle fell between the Ternus elements. The rectangle borders were either physical or illusory (using Kanizsa-type inducers). Participants reported seeing more element motion when a barrier was between the elements, no matter whether the barrier was physical or illusory. The results suggest that barriers can influence the correspondence process, and in particular that this process happens after modal completion is achieved.

[1P066] The Global Precedence Effect is not affected by background colour

Jan L Souman, Sascha Jenderny and Tobias Borra

Experience & Perception Research, Philips Lighting

In the evaluation of hierarchical stimuli, the processing of the global shape has been found to interfere more with that of the local elements than vice versa (Global Precedence Effect, GPE). Michimata et al. (1999) reported that this effect disappeared when stimuli were presented against a red background. They explained their results in terms of suppression of the magnocellular pathway by red light. In an experiment with 18 participants, using the same stimuli as in Michimata’s original study, we failed to replicate this colour dependency of the GPE. It occurred for red as well as green and grey backgrounds. Since the stimuli in the original study were not optimally suited to differentially activate the magnocellular pathway, we performed a second experiment in which we used Gabor patches with either a high (∼4 cpd) or a low (∼0.5 cpd) spatial frequency as local elements, aligned horizontally or vertically to constitute global shapes. These were presented against equiluminant red, green or blue backgrounds. Again, we observed the GPE for all three background colours. Spatial frequency only mattered on a blue background, where the GPE only occurred with the higher frequency. Our results cast doubt on the magnocellular explanation by Michimata et al.

[1P067] Neurophysiological investigation of the role of (reflection) symmetry in figure-ground segregation

Giulia Rampone, Marco Bertamini and Alexis David James Makin

Department of Psychology, University of Liverpool

Reflection symmetry detection and contour integration are mediated by similar extrastriate networks, probably because symmetry acts as a cue in figure-ground segregation. We used unfamiliar shapes (reflection vs. random) defined by collinear Gabor elements positioned along the outline of a closed contour, and measured the magnitude of a symmetry-related EEG component (the “Sustained Posterior Negativity”, SPN) over the lateral-occipital area. In Experiment 1, contour shapes were either embedded in an array of randomly oriented Gabors (Exp. 1a) or presented against a uniform background (Exp. 1b). Reflection elicited a negative deflection compared to random in both cases, confirming the SPN as a symmetry-sensitive component. In Experiment 2, Gabor arrays containing a shape (reflection, random) were interleaved with Gabor arrays without a shape (noShape). We identified greater N1 amplitude for both shape conditions vs. the noShape condition. The reflection vs. random SPN was observed bilaterally. Interestingly, there was also an SPN for reflection vs. noShape over the right hemisphere. Our results suggest that figure-ground segregation preceded symmetry detection. However, it is also possible that symmetry facilitated this process. The later component of the SPN may instead reflect regularity-specific processes that are not related to grouping processes in general and that are mainly right-lateralized.

[1P068] Local and Global Amodal Completion: Revealing Separable Processes Using A Dot Localization Method

Susan B Carrigan and Philip Kellman

Psychology, University of California, Los Angeles, USA

Differing theories of amodal completion emphasize either global influences (e.g. symmetry, familiarity, regularity) or the geometric relations of local contours. These may reflect separate processes: a bottom-up, local contour interpolation process, and a top-down, cognitive process of recognition from partial information. These can be distinguished experimentally if only the local process produces precise boundary representations. Previously, we used dot localization to measure the precision and accuracy of perceived boundaries for partially occluded objects with divergent local and global symmetry completions. Results revealed that local contour interpolation produces precise, accurate, and consistent representations, but responses based on symmetry do not. Here we extend the approach to completion based on familiarity or regularity. In two experiments, participants completed familiar logos (i.e. the Apple, Pepsi, Playboy, and Puma brands) or objects with regularly alternating borders, either locally or globally. On each trial, a dot flashed on the occluder, and participants reported the dot’s location relative to the occluded boundary. Interleaved, 2-up, 1-down adaptive staircases estimated points on the psychometric function where the probability was .707 that the dot would be seen as inside or outside the occluded object. Results support a clear distinction between local contour interpolation processes and global processes based on recognition from partial information.
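
For readers unfamiliar with transformed up-down staircases, the sketch below shows the generic rule that converges on the 70.7% point (Levitt, 1971), written here in 2-down/1-up form for a generic binary response. How 'up' and 'down' map onto dot position in the actual experiment, and the step size and stopping rule, are assumptions:

```python
def staircase_707(respond, start, step, n_reversals=12):
    """Transformed up-down staircase converging where p(positive
    response) = 0.707; respond(level) returns True/False for one
    trial. All parameter values are illustrative."""
    level, n_consecutive, last_direction = start, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if respond(level):
            n_consecutive += 1
            if n_consecutive < 2:
                continue                 # need two in a row to step down
            direction = -1
        else:
            direction = +1               # one miss steps up immediately
        n_consecutive = 0
        if last_direction and direction != last_direction:
            reversal_levels.append(level)
        last_direction = direction
        level += direction * step
    return sum(reversal_levels) / len(reversal_levels)  # threshold estimate
```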

[1P069] Task-dependent effect of similarity grouping and proximity on visual working memory

Jiehui Qian and Shengxi Liu

Department of Psychology, Sun Yat-Sen University

Visual working memory (VWM) is responsible for temporarily holding, processing, and manipulating visual information. Research suggests VWM is facilitated by Gestalt grouping principles, e.g., proximity and similarity, but it remains unclear how these factors interact with the task. This study employed a pre-cued change detection paradigm to investigate the effects of task, proximity, and similarity grouping (SG) by color and shape. The memory array consisted of a 2 x 3 array of colored items, each a circle or a triangle, and followed a cue presented at one item location. After a blank interval, a test item was presented at one of the locations: cued, near-cue, or far-from-cue. The test item in the latter two conditions shared the color, the shape, or neither feature with the cued item. The participants performed different tasks, judging whether the color, the shape, or either had changed at the test location. The results show that: 1) color SG greatly benefits VWM capacity regardless of task and cue-test distance; 2) shape SG does not seem to affect VWM; and 3) proximity benefits VWM for shape judgments but not for color judgments. These results suggest that features may differ in grouping effectiveness and that the effects are task-dependent.

[1P070] No evidence for perceptual grouping in the absence of visual consciousness

Dina Devyatko, Shahar Sabary and Ruth Kimchi

The Institute of Information Processing and Decision Making, University of Haifa, Israel

In this study we examined whether perceptual grouping can unfold in the absence of visual consciousness. In two separate experiments, participants were presented with a prime consisting of dots organized into rows or columns by luminance similarity (Experiment 1) or by element connectedness (Experiment 2), followed by a target composed of lines whose orientation could be congruent or incongruent with the orientation of the prime. The prime was rendered invisible using continuous flash suppression (CFS), and the prime-target SOA varied (200/400/600 or 800 ms). On each trial participants made a speeded discrimination of the orientation of the target lines and then rated the visibility of the prime on a scale ranging from 0 to 3. Unconscious grouping of the prime was measured as the priming effect (of prime-target congruency) on target discrimination performance on trials in which participants reported no visibility of the prime. In both experiments, and across all prime-target SOAs, there was no priming when the prime was reported invisible; significant priming was observed when the prime was reported visible. These findings suggest that perceptual grouping by luminance similarity and by element connectedness does not take place when the visual stimulus is rendered nonconscious using CFS.

Funding: ISF grant 1473/15 to RK

[1P071] The effect of color contrast on Glass pattern perception

Yih-Shiuan Lin1 and Chien-Chung Chen2

1Department of Psychology, National Taiwan University

2Department of Psychology/Neurobiology and Cognitive Science Center, National Taiwan University

We used a variant of Glass patterns composed of randomly distributed tripoles, instead of dipoles, to estimate the influence of color contrast on perceptual grouping. Each tripole contained an anchor dot and two context dots. Grouping the anchor dot with one of the context dots would result in a global percept of a clockwise (CW) spiral, while grouping with the other dot would yield a counterclockwise (CCW) spiral. All dots in each pattern were modulated in the same color direction but at different contrasts. There were four types of patterns, modulated in +/−(L−M) and +/−S respectively. The observer was to determine whether the spiral in each trial was CW or CCW. The probability of the anchor dot grouping with one of the context dots increased with the color contrast of that context dot up to a critical level and remained constant as that dot's contrast increased further. The grouping probability, however, decreased with the contrast of the other dot. This trend was the same for all isoluminant color directions tested, but differed from the inverted U-shaped function previously reported for luminance contrast. Our results cannot be explained by existing models of perceptual grouping, but can be accounted for by a divisive inhibition model.
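
The contrast dependence described here (rising to a plateau with the contrast of one context dot, falling with the contrast of the other) is the signature of divisive inhibition. As an illustration only, assuming a standard divisive-normalization form rather than the authors' fitted model:

    def grouping_tendency(c_same, c_other, n=2.0, s=0.1):
        # Tendency to group the anchor with a context dot of contrast c_same,
        # divisively inhibited by the other context dot (contrast c_other).
        # Exponent n and semi-saturation s are illustrative assumptions.
        return c_same ** n / (s ** n + c_same ** n + c_other ** n)

    # Rises and saturates with c_same, decreases with c_other:
    print(grouping_tendency(0.2, 0.1), grouping_tendency(0.8, 0.1))
    print(grouping_tendency(0.8, 0.4))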

Funding: MOST(Taiwan) 103-2410-H-002-076-MY3

[1P072] Modelling the Effects of Spatial Frequency Jitters in a Contour Integration Paradigm

Axel Grzymisch1, Malte Persike2 and Udo Ernst1

1Department of Physics, University of Bremen

2Johannes Gutenberg-Universität Mainz, Germany

Contour integration (CI), the binding of elements into a coherent percept, has been studied under several regimes, usually pertaining to Gestalt laws. The binding/grouping of elements into a percept has been investigated in terms of the effects of good continuation, similarity, etc. Similarity can be defined in numerous ways; one particularly interesting choice is to define the similarity of Gabor patches (the edge elements typically employed in CI paradigms) in terms of spatial frequency (SF). The effects of SF jitters on CI have been extensively quantified (Persike & Meinhardt, 2015a,b). We have shown that the effects found in human observers are reproducible with a probabilistic model based on association fields. When the model of Ernst et al. (2012) is extended to account for SF similarity, we see patterns of performance improvement similar to those reported by Persike and Meinhardt (2015a,b). This new model can help settle the questions raised by Persike and Meinhardt on whether the process of CI can lead to a non-linear dependency between two independent physical properties of a stimulus, and whether the resulting gains given by the combined presence of these two physical properties are summed as information summation (Machilsen & Wagemans, 2011) would suggest.
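
For illustration, a pairwise linking rule of the association-field type, extended with a spatial-frequency similarity term, might look as follows; the functional forms and parameters are assumptions for the sketch, not the model of Ernst et al. (2012) itself.

    import numpy as np

    def linking_strength(dist, angle_diff, sf_a, sf_b,
                         sigma_d=2.0, sigma_a=0.4, sigma_sf=0.5):
        # Linking strength between two Gabor elements falls off with distance,
        # deviation from good continuation, and spatial-frequency difference.
        proximity = np.exp(-(dist / sigma_d) ** 2)
        continuation = np.exp(-(angle_diff / sigma_a) ** 2)
        sf_similarity = np.exp(-(np.log(sf_a / sf_b) / sigma_sf) ** 2)
        return proximity * continuation * sf_similarity

    # Equal SFs link more strongly than a one-octave SF jitter:
    print(linking_strength(1.0, 0.1, 2.0, 2.0), linking_strength(1.0, 0.1, 2.0, 4.0))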

Funding: This work has been supported by the Bundesministerium für Bildung und Forschung (BMBF, Bernstein Award Udo Ernst, Grant No. 01GQ1106).

[1P073] A model of border-ownership assignment accounting for figure/hole perception

Masayuki Kikuchi

School of Computer Science, Tokyo University of Technology

One of the major problems in perceptual organization is how the visual system assigns figure/ground regions in the retinal image. Zhou et al. (2000) demonstrated that the brain uses border-ownership (BO) coding to represent the figural side of contours. Since this finding, many models have been proposed to explain the emergence of BO coding. Among them, the author previously proposed a neural network model of BO coding based on local geometric information such as contour curvature and the outer angle of corners (Kikuchi and Akashi, 2001). Though that model can assign BO for arbitrary closed contours, it cannot explain the figure/hole perception for certain patterns found by Nelson and Palmer (2001). This study proposes a revised version of the BO model of Kikuchi and Akashi (2001), which can account for the figure/hole perception found by Nelson and Palmer. The key point is to introduce a saturation property into the neurons' output function by using a sigmoid function instead of a rectified linear function. The ability of the model was confirmed by computer simulation.
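
The modelling change at issue is small and easy to state in code. A toy comparison of the two output functions (illustrative values, not the model's units):

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)          # unbounded: output keeps growing

    def sigmoid(x, gain=4.0, offset=0.5):
        return 1.0 / (1.0 + np.exp(-gain * (x - offset)))   # saturates near 1

    x = np.linspace(0.0, 2.0, 5)
    print(relu(x))      # [0.  0.5 1.  1.5 2. ]
    print(sigmoid(x))   # levels off as inputs grow; this saturation is the
                        # property the revised BO model exploits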

Funding: This study was supported in part by Grant-in-Aid #26330178 for Scientific Research from Japan Society for the Promotion of Science.

[1P074] The simplest visual illusion of all time? The folded paper-size illusion

Claus-Christian Carbon

Department of General Psychology and Methodology, University of Bamberg

Visual illusions are fun, but they are also insightful: their great pedagogic value is that most readers, while being amused, also experience perceptual insights that assist the understanding of rather complex perceptual processing. Here I present a very simple illusion: just take two sheets of paper (e.g. A4), one original sized, one halved by folding, and compare them in terms of area by centering the halved sheet on the center of the original one. We perceive the larger sheet as by far less than double (i.e. 100% more) the size of the small one, typically only about 66.5% larger (Cohen's d = 1.05); even rotating the halved sheet and aligning it at one side does not dissolve this very large perceptual effect (d's > 0.80), here documented by data from 88 participants. The only way of escaping this strong visual illusion is to align two sides of both sheets. This points to a potential explanation: we face a general incapability of validly comparing more than one geometrical dimension at once; in everyday life we circumvent this perceptual bottleneck by aligning geometrical forms as closely as we can. If we do so, we validly estimate area sizes; if not, we evidently fail.

[1P075] Shooting at the Ponzo - effects and aftereffects

Valeriia Karpinskaia1 and Vsevolod Lyakhovetskii2

1Department of Psychology, Saint-Petersburg State University

2RAS Institute of Physiology, St. Petersburg

Three groups of 20 participants with equally good shooting skills were required to shoot at the central oval of one of two 'snowmen' targets displayed on a computer screen 1.5 m away. During the training session, the target size was 46 × 39 mm for both the experimental group and control group 1, and 33 × 39 mm for control group 2. However, the target for the experimental group looked smaller due to a superimposed Ponzo figure. During the subsequent test session, the target size for all groups was 33 × 39 mm. Experimental group participants were the least accurate during the training session. During the test session, participants in control group 1 were the least accurate. There were no differences between the experimental group and control group 2. This suggests that the lines of the Ponzo illusion acted as distractors for the experimental group, but that the illusory diminishing of the 'snowman' target had an additional effect on the experimental group's shooting abilities due to training on a smaller (real or illusory) target. These results suggest that the real size of the target plays an important role in the accuracy of shooting (as might be expected) but that the Ponzo illusion, and hence the subjective impression of the target's size, is also important for shooting accuracy.

Funding: Saint-Petersburg State University 8.38.287.2014

[1P076] Interest is evoked by semantic instability and the promise of new insight

Claudia Muth and Claus-Christian Carbon

Department of General Psychology and Methodology, University of Bamberg

Interest is qualified by a yet unfulfilled promise (Berlyne, 1971; Silvia, 2005): we enjoy musical tensions leading to resolutions, or take a closer look at enigmatic artworks. In a previous study, interest in artworks increased with the plurality of meaning and the strength of insights they offer (Muth, Hesslinger, & Carbon, 2015). Furthermore, interest increased shortly before moments of Gestalt-insight when watching indeterminate artistic movies (Muth, Raab, & Carbon, 2015). In the present study, we presented 30 ambivalent photographs (depicting scenes of unclear valence) to be rated twice on interest, valence, and ambivalence. During an intermediate elaboration phase, participants described all possible positive and negative interpretations of a subset of these photographs. Whereas interest in elaborated stimuli increased after elaboration, non-elaborated stimuli induced less interest when rated a second time. Taken together, these findings suggest that interest evolves with the potential for new experience, be it a sudden detection of Gestalt, a complex insight into an artwork, or new facets of a photographed scene. A wide spectrum of artworks and pieces of music offer such ever-pending promises of new interpretations, eventually sparking waves of interest. Our study suggests that this potential is dynamic and can be heightened by elaboration.

[1P077] Tell me about your Ponzo and I will tell you who you are

Lukasz Grzeczkowski, Aaron Clarke, Fred Mast and Michael Herzog

Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL)

Unlike in cognition, audition, and somatosensation, performance across various visual tasks does not correlate. Surprisingly, even tasks that appear similar, like visual acuity and line bisection, do not share much common variance. Similar results were found for visual illusions: for example, the Ebbinghaus and the Müller-Lyer illusions correlate only very weakly. The high intra- and inter-observer variability in visual perception is possibly due to perceptual learning, i.e., individual experience shaping perception throughout one's lifetime. Here, we studied the relationship between illusion strength and high-level factors such as personality traits (O-Life) and the vividness of mental imagery (VVIQ). In line with previous findings, we found only a few correlations between the magnitudes of the visual illusions, despite high test-retest reliability. More interestingly, we found a high, positive correlation between the magnitude of the Ponzo illusion and the vividness of mental imagery. Moreover, the magnitude of the Ponzo illusion was negatively correlated with the cognitive disorganization personality trait. These results were specific to Ponzo-type illusions. Principal component analysis revealed one factor with high weights mainly on the Ponzo-type illusions, cognitive disorganization, and the vividness of mental imagery.

[1P078] Influences on the perception of the morphing face illusion

Sandra Utz and Claus-Christian Carbon

Department of General Psychology & Methodology, University of Bamberg

Van Lier and Koning (2014) reported that the perceived change in a morphing face sequence depends on eye movements: with a moving fixation dot, changes were perceived as significantly smaller than with a stationary fixation dot between the eyes. To further investigate the phenomenon, we used real faces and faces of different species and ethnicities, as well as faces with emotional expressions. In addition to the originally used fixations, we also included a stationary fixation dot on the tip of the nose. Results from 30 participants showed the strongest underestimation of the real morphing range (between two faces) for Caucasian faces, with no significant difference from other-race or other-species faces. The strongest overestimation occurred for emotional expressions. Regarding the type of fixation, the stationary dot led to a clear overestimation and the moving dot to a clear underestimation (similar to Van Lier & Koning, 2014). However, when fixating the nose, the range was estimated correctly. Expertise therefore does not seem to influence estimation, whereas emotional expressions enhance the perception of changes in the sequence. Correct estimations resulting from fixation at the nose might arise because, at that location, more of the information important for configural face processing can be perceived simultaneously.

Kenneth Brecher

Astronomy, Boston University

A two-dimensional image of an ellipse slowly rotating on a surface can seem to distort and appear gelatinous or fluid-like. This effect can be seen on the “Project LITE: Light Inquiry Through Experiments” web site at http://lite.bu.edu/vision-flash10/applets/Form/Ellipse/Ellipse.html. The flat ellipse can also appear to be rotating rigidly, or even appear as a circular disc twisting in three dimensions. A three-dimensional version of the effect was first reported in the literature by the physicist Ernst Mach in 1886. A recently developed ellipsoidal spinning top, the PhiTOP, beautifully elicits the three-dimensional version of the effect. A video of this can be seen on the PhiTOP website at http://www.thephitop.com. By slowly spinning the PhiTOP with a magnetic stirrer, the effect can be controlled and studied quantitatively. The results of these studies will be reported. A number of theories have been offered to explain the so-called “gelatinous ellipse” effect (in our case, the gelatinous ellipsoid effect), beginning with Mach (1886), Musatti (1924), Hildreth (1988), and Nakayama and Silverman (1988), and continuing to recent proposals by Weiss and Adelson (2000, 2002). Whether these authors' computational models, which involve short-range and/or long-range effects, explain the observed phenomena is still unclear.

[1P080] Effects of edge orientation and configuration on sliding motion

Nobuko Takahashi1 and Shinji Yukumatsu2

1Faculty of Health and Medical Sciences, Aichi Shukutoku University

2Chukyo University

An apparent sliding motion arises in a large square pattern made up of small squares with black and white edges arranged in rows and columns. The motion depends on the polarity combination of the two L-shaped adjacent edges of the same polarity of the small squares in the central area of the large square and those in the surround, and on the angles of the L-shaped parts. When the orientations of the implicit diagonals of the L-shaped parts in the central area and in the surround are orthogonal, sliding motion in the central area is perceived, and it is enhanced or reduced by the angles (Pinna and Brelstaff, 2000; Pinna and Spillmann, 2005; Takahashi and Nishigaki, 2015 ECVP). To investigate the roles of the implicit local orientation and the contrast polarity, we systematically manipulated the angles, the distance between black and white L-shaped parts, and their configuration. The direction and magnitude of the apparent sliding motion were measured. The results showed that stronger sliding motion was perceived when the angles were obtuse, which differs from what the orientation of the implicit diagonals of the L-shaped parts of the same polarity predicts. We discuss the roles of edge orientation and configuration in sliding motion.

[1P081] How to turn unconscious signals into visible motion: Modulators of the Motion Bridging Effect

Maximilian Stein1, Robert Fendrich2 and Uwe Mattler1

1Georg-Elias-Müller-Institut für Psychologie, Georg-August-Universität Göttingen, Germany

2Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA

When a ring of dots rotates sufficiently fast, observers perceive a continuous circular outline and are unable to judge the true direction of rotation. Nevertheless, a conscious percept of the true direction can be recovered by presenting a subsequent stationary ring of dots after a short delay. This motion bridging effect (MBE, Mattler & Fendrich, 2010) indicates that the motion direction of the rapidly rotating ring is being encoded although the motion is not consciously visible. To elucidate the processes that generate the MBE, its stimulus dependencies need to be clarified. Here, we assess the MBE as a function of the ring diameter and the angular velocity of the rotation. We replicate earlier findings on the effect of velocity, and find that the MBE, measured by the increased ability of observers to judge the rotating ring's direction when a stationary ring is presented, increases with increasing ring diameter. These findings are considered in the context of current theories of motion perception.

[1P082] The Greenback Illusion: a new geometrical illusion

Kenpei Shiina

School of Education, Waseda University

Take a picture of a 3D-rotated banknote and then superimpose a straight line that is perpendicular to one of the sides of the banknote. Whereas the added line and the side of the banknote should form a right angle, in fact one angle looks obtuse and the other acute. See http://kshiina.web.fc2.com/newfile.html. We call this bias in angle evaluation the greenback illusion. Because the effect still occurs if we remove the pattern on the banknote, the illusion is a type of geometrical illusion arising from the configuration of the trapezium and the line. Further, we can make simpler versions by erasing parts of the trapezium: for example, erasing may produce a 45°-rotated Greek cross with two parallel short lines attached to the endpoints of one line of the cross. Even in this case, the two lines of the cross do not appear to intersect at a right angle. Overall, the more the image is interpreted as a 3D scene, the stronger the angle misperception, so the illusion seems to tell us that the miscomputation of angles in 2D images reflects the tendency of our visual system to restore true angles in the 3D world.

[1P083] Perceptual filling-out induced by a preceding mask on the stimulus boundary

Shuichiro Taya

Hiyoshi Psychology Laboratory, Keio University

Here I report a new illusory filling-out phenomenon. When a stimulus circle, filled with a low-contrast texture pattern and presented at the central visual field, is preceded by a second stimulus which masks the contour of the textured circle, the foveal texture pattern subjectively propagates and fills up the non-textured, homogeneous peripheral visual field. This mask-induced filling-out (MIF) is unlikely to be explained by visual adaptation (e.g. the Troxler effect), because the filling-out can be induced by a very brief (50 ms) presentation of the mask stimulus. I also found that the direction of the texture filling is always from the centre to the periphery, and never vice versa. For example, when a homogeneously coloured circle is presented at the central visual field against a textured background, the following mask induces the propagation of the foveal colour to the peripheral visual field instead of the filling-in of the peripheral texture. I suggest that the MIF might reflect a complementing mechanism which compensates for the achromatic and blurry peripheral visual field with chromatic and high-resolution foveal information, and that this may help us to see our entire visual field sharply and in colour.

[1P084] Fluttering-heart Illusion Occurs in Stimuli Consisting of Only Contours

Kazuhisa Yanaka, Masahiro Suzuki, Toshiaki Yamanouchi and Teluhiko Hilano

Faculty of Information Technology / Human Media Research Center, Kanagawa Institute of Technology, Japan

We examined the mechanism of the fluttering-heart illusion. This illusion is a phenomenon in which objectively synchronized motion between outer and inner figures is seen as unsynchronized. In our previous studies (Suzuki & Yanaka, APCV 2014; Yanaka & Suzuki, ECVP 2014), we hypothesized that the fluttering-heart illusion is caused by different latencies of edge detection for the outer and inner figures. We tested this hypothesis in experiments using stimuli consisting of filled figures, and the results supported the hypothesis. In this study, we tested the hypothesis using stimuli consisting of only contours. The contours of the outer figures had high luminance contrast, whereas the contours of the inner figures had low luminance contrast. Both outer and inner figures moved along a circle. Observers adjusted the phase difference between the movements of the outer and inner figures until the two movements appeared synchronized. The results indicate that the fluttering-heart illusion occurs in stimuli consisting of only contours. These findings support our hypothesis: the fluttering-heart illusion is caused by different latencies of edge detection for outer and inner figures.

[1P085] Vibration condition that strengthens the illusory motion of the Ouchi illusion

Teluhiko Hilano and Kouki Kikuchi

Information Media, Kanagawa Institute of Technology, Japan

The Ouchi illusion consists of a ring and a disc, each filled with mutually perpendicular oblong checkered patterns; in this illusory figure the central disc appears to float and move autonomously. The illusory motion is strongly perceived when the figure is vibrated by hand. When analyzing the vibration conditions that maximize the illusory motion, however, controlling the stroke and frequency of hand-held vibration is difficult. In this work, we developed vibration equipment based on a positive mechanical-constraint cam: the revolution of an electric motor is converted into linear back-and-forth movement via a groove cut into a disk, so the vibration frequency can be changed through the motor's speed. The stroke can be changed by shifting the revolving center away from the center of the disk. This equipment facilitates the observation of the effects of stroke and vibration frequency. We printed several versions of the figure at a size of 20 cm, changed the colors of the oblong checkered patterns, and observed them from a distance of 90 cm. We determined that the optimal vibration frequency is between 2 and 3 Hz when the stroke of vibration is 1 cm.

[1P086] Straight edges are not enough to overcome the tilt blindness

Takashi Ueda1, Takashi Yasuda2 and Kenpei Shiina1

1The Faculty of Education and Integrated Arts and Sciences, Waseda University

2Matsuyama Shinonome College

We sometimes have difficulty in detecting the tilt of objects. This tilt blindness arises from many factors, including the relationships between figures and background frames, between observers' posture and the placement of objects, and the objects themselves. In this study, we focused mainly on the shape of objects, in particular on their contours and edges. We hypothesized that the tilt of an object is more easily detected if the figure is simple in shape. In the experimental trials, observers, who participated voluntarily, were required to choose the target figure that had the same tilt as the standard stimulus in the center from among eight comparison stimuli located around it, seven of which were distractors. The results showed that observers most accurately detected the tilt of the rectangular figure, which had both vertical and horizontal edges, followed by other geometrically shaped figures and then non-geometrical shapes. Tilt detection was easier when the shape of a figure was oval than when it was diamond-shaped. These results suggest that straightness, which is resistant to tilt blindness, constitutes only one part of simplicity. We also discuss what the simplicity of a shape is.

[1P087] Effect of eccentricity on the direction of gradation-induced illusory motion

Soyogu Matsushita

School of Human Sciences, Osaka University

Luminance gradation patches induce smooth and slow illusory motion. Previous studies have asserted that such illusory motion runs from a high-contrast area to a low-contrast area within a patch. However, Kitaoka and Ashida (2004) also reported some illusory figures in which the patches appeared to move from low to high contrast. The present study examined the effect of eccentricity on the perceived direction of illusory motion. The stimuli were white-to-black gradation patches on a white background. The results demonstrate that although the patches in peripheral vision appeared to move from black to white, those in foveal vision moved in the opposite direction. Thus, the pictorial properties of illusory figures and the manner of observation are both significant in determining the perceived direction of such illusory motion.

Funding: This work was supported by JSPS KAKENHI Grant Number 26780416.

[1P088] Curvy is the new straight: Kanizsa triangles

Tímea Gintner1, Prashant Aparajeya2, Frederic Fol Leymarie2 and Ilona Kovács1

1Institute of Psychology, Faculty of Humanities and Social Sciences, Péter Pázmány Catholic University, Hungary

2Department of Computing, Goldsmiths, University of London, UK

The cortical representation of figure and ground still seems to be only a partly solved puzzle. Based on psychophysically mapped contrast sensitivity fields within closed boundaries, it has been suggested that symmetry-related surface representations might be relevant in addition to co-linear activation along the path of a contour (Kovács & Julesz, 1994). Here we test the classic illusory figures (Kanizsa triangles) with a test probe that appears near the illusory edge. The task is to decide whether the probe appears inside or outside the illusory triangle. Assuming an equilateral triangle whose illusory edges span straight paths between the inducers, our results are surprisingly asymmetrical for “inside” and “outside” test probes. While there is no difficulty in judging “outside” targets, probes appearing on the alleged contour, or inside the assumed triangle, are very clearly judged to be outside up to a distance of about 3% of the illusory contour length. The bent illusory contours seem to be more curved for diagonal than for horizontal or vertical edges. We interpret these results as a new indication of the need to couple medialness structure (Aparajeya & Fol Leymarie, 2016) with lateral propagation of contour completion.

Funding: Supported by OTKA NN 110466 to I.K.

[1P089] Boundary extension and image similarity via convolutional network: expanded views are more similar to the original

Jiri Lukavsky

Institute of Psychology, Czech Academy of Sciences

Boundary extension is a visual memory error in which people tend to report remembering more of an image than was previously shown. According to the Multisource model, people confuse memories of what was seen with memories of their expectations about the scene's surroundings. Here we explored an alternative account based on image similarity: are cropped or expanded views more similar to the original view? In a simulation experiment, we inspected 20,050 scenes (401 categories). For each image, we used the 80%-view as an anchor and compared it with the corresponding expanded (81–100%) or cropped (79–60%) views. We compared the images using the L2 distance between fc7 feature vectors of the AlexNet convolutional network. We found that the expanded views are more similar to the original: the distances of corresponding cropped views are longer by 10.7% (1.9–18.6%, depending on the extent). In other words, features extracted by an image classifier change faster when the photo is cropped than when it is expanded. Our findings do not contradict the Multisource model, but they may constitute an additional factor contributing to the boundary extension effect.
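
A minimal sketch of the fc7-distance comparison, assuming a pretrained torchvision AlexNet stands in for the network used and a single hypothetical image file; the 80% central crop serves as the anchor, and only cropped views are simulated here (true expanded views require the wider original photograph).

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
    # fc7 = the second fully connected layer; keep classifier modules up to its ReLU
    fc7 = torch.nn.Sequential(model.features, model.avgpool, torch.nn.Flatten(),
                              *list(model.classifier.children())[:6])

    preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                            T.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])])

    def fc7_vector(img):
        with torch.no_grad():
            return fc7(preprocess(img).unsqueeze(0)).squeeze(0)

    def central_view(img, f):
        # Central crop keeping fraction f of each side.
        w, h = img.size
        cw, ch = int(w * f), int(h * f)
        left, top = (w - cw) // 2, (h - ch) // 2
        return img.crop((left, top, left + cw, top + ch))

    img = Image.open("scene.jpg").convert("RGB")      # hypothetical example image
    anchor = fc7_vector(central_view(img, 0.80))      # the 80% anchor view
    for f in (0.60, 0.70, 0.90, 1.00):
        d = torch.dist(anchor, fc7_vector(central_view(img, f)))   # L2 distance
        print(f"view {f:.0%}: distance to anchor = {d:.2f}")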

Funding: The work was supported by Czech Science Foundation 16-07983S

[1P090] Contrast effect on visual spatial summation of different cell categories in cat V1

Ke Chen

Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China

Multiple cell classes have been found in the primary visual cortex, but the relationship between cell types and spatial summation has seldom been studied. Parvalbumin-expressing inhibitory interneurons can be distinguished from pyramidal neurons based on their briefer action potential durations. In this study, we classified V1 cells into fast-spiking units (FSUs) and regular-spiking units (RSUs) and then examined spatial summation at high and low contrast. Our results revealed that the excitatory classical receptive field and the suppressive non-classical receptive field expanded at low contrast for both FSUs and RSUs, but the expansion was more marked for the RSUs than for the FSUs. For most V1 neurons, surround suppression varied as the contrast changed from high to low. However, FSUs exhibited no significant difference in the strength of suppression between high and low contrast, although the overall suppression decreased significantly at low contrast for the RSUs. Our results suggest that the modulation of spatial summation by stimulus contrast differs across populations of neurons in the cat primary visual cortex.

Funding: Fundamental Research Funds for the Central Universities (ZYGX2014J080)

[1P091] A Retinal Adaptation Model for HDR Image Compression

Yongjie Li, Xuan Pu, Hui Li and Chaoyi Li

Key Laboratory for Neuroinformation of Ministry of Education, Center for Information in BioMedicine, University of Electronic Science and Technology of China

The intensities of real scenes have a high dynamic range (HDR). The human visual system can respond over a huge luminance range (about 14 log10 units); in particular, photoreceptor cells (i.e., cones and rods) can vary their response ranges dynamically to adapt to the available luminance. In contrast, most display devices have a low dynamic range, so compressing the range of HDR images is necessary in many situations. In this work, we propose a new visual adaptation model inspired by the physiological process of retinal adaptation. Starting from the model proposed by Naka and Rushton in 1966 for simulating the S-potentials in fish, we realize dark and light adaptation by adaptively varying the semi-saturation (SS) parameter, based on an empirical relation between the SS parameter and the local luminance suggested by Xie and Stockham in 1989. The outputs of rods and cones are then further processed by difference-of-Gaussians-shaped bipolar cells to enhance the details. Finally, we designed a sigmoid function as a spatial weighting to combine the responses of cone- and rod-activated bipolar cells. Extensive results on both indoor and outdoor HDR images show that our model can compress HDR images effectively and efficiently.
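
As a rough sketch of the adaptation stage (assumed forms, not the authors' implementation): the Naka-Rushton response with a semi-saturation constant tied to local mean luminance, standing in for the Xie and Stockham (1989) relation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def naka_rushton(I, sigma, n=1.0):
        # Photoreceptor response R/Rmax = I^n / (I^n + sigma^n).
        return I ** n / (I ** n + sigma ** n)

    def tone_map(hdr_luminance, kernel_size=33, k=1.0):
        # Semi-saturation follows the local mean luminance, so dark and bright
        # regions are each mapped through a locally adapted response curve.
        local_mean = uniform_filter(hdr_luminance, size=kernel_size)
        sigma = k * local_mean + 1e-6
        return naka_rushton(hdr_luminance, sigma)

    hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(64, 64))  # synthetic HDR
    ldr = tone_map(hdr)       # responses compressed into [0, 1)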

Funding: 973 Project (#2013CB329401) and NSFC projects (#91420105 and #61375115)

[1P092] Revealing alpha oscillatory activity using Voltage-Sensitive Dye Imaging (VSDI) in Monkey V1

Sandrine Chemla1, Frédéric Chavane2 and Rufin VanRullen1

1Centre de Recherche Cerveau & Cognition (CERCO), CNRS and Université Paul Sabatier Toulouse III

2Institut de Neurosciences de la Timone (INT) CNRS and Aix-Marseille Université, France

Alpha oscillations play an important role in sensory processing. In humans, EEG alpha is enhanced in response to random, non-periodic dynamic stimulation (“perceptual echoes”; VanRullen and Macdonald, 2012) or to a static wheel (“flickering wheel illusion”; Sokoliuk and VanRullen, 2013). We used voltage-sensitive dye imaging (Chemla and Chavane, 2010) to investigate at a finer spatial scale whether the same visual patterns could induce an oscillatory response in V1 of two anesthetized monkeys. We observed a 10 Hz spectral peak in the cross-correlation between a random, non-periodic dynamic luminance sequence and the corresponding VSD response on each trial, similar to human “perceptual echoes”. However, this reverberation was present in only one of the two monkeys. The same monkey (but not the other) also showed a 10 Hz oscillatory response when visually stimulated with a stationary wheel, as in the “flickering wheel illusion”. In conclusion, similarly to well-characterized individual differences between humans, not all monkeys produce sizeable alpha oscillations. But when they occur, these oscillations react in a comparable manner: Alpha can be spatially dissociated from evoked activity, and depends on the spatial frequency of the stimulus. Importantly, these preliminary results provide new insights into the neural basis of alpha in V1.

Funding: ERC Consolidator grant P-CYCLES number 614244

[1P093] Image Reconstruction from Neural Responses: what can we learn from the analytic inverse?

Marina Martinez-Garcia12, Borja Galan1 and Jesús Malo1

1Image Processing Lab, Universitat de València

2Instit. Neurociencia CSIC, Universitat de València

Low-level vision can be understood as a signal transform that relates input stimuli to output neural responses. Advances in neural recording allow gathering thousands of such input-output pairs. In this way, the regression approach makes it possible to model both the encoding and the decoding process using machine learning techniques. The black-box approach has become popular over recent years in visual brain reading to decode visual stimuli from neural recordings [KamitaniNatNeurosci05, KayNature06, MaloECVPSymp16]. The first attempts used plain linear regression, which is inappropriate given the nonlinear nature of the encoding; current practice is to use nonlinear techniques such as Support Vector Regression or Kernel Ridge Regression. However, understanding visual information processing goes beyond blind regression: a more explicit description of the transforms is needed. In this work we explore the use of the analytic inverse of classical encoding models (i.e. the conventional filters + nonlinearities [GalánMODVIS-VSS16]) in modeling the decoding process. We show that the analytic inverse is important for improving decoding performance and for proposing novel ways to estimate the parameters of the forward model in the presence of noise and unknown elements of the model. See results and code here: http://isp.uv.es/ECVPinversion.html
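
The black-box baseline referred to above can be sketched in a few lines; the toy nonlinear encoder below is an assumption used only to generate stimulus-response pairs.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    stimuli = rng.standard_normal((1000, 64))                       # hypothetical stimuli
    responses = np.tanh(stimuli @ rng.standard_normal((64, 128)))   # toy nonlinear encoding

    # Decode: regress stimuli from responses with kernel ridge regression.
    X_train, X_test, y_train, y_test = train_test_split(responses, stimuli, random_state=0)
    decoder = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(X_train, y_train)
    print("decoding R^2:", decoder.score(X_test, y_test))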

Funding: CICYT BFU2014-59776-R, CICYT TEC2013-50520-EXP

[1P094] Mapping the visual brain areas susceptible to phosphene induction through brain stimulation

Lukas F Schaeffner and Andrew Welchman

Department of Psychology, University of Cambridge

Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation technique whose effects on neural activity can be uncertain. Within the visual cortex, however, phosphenes can be used as a marker of TMS-induced neural activation. Here we sought to identify which portions of the visual cortex are susceptible to TMS-induced phosphenes. We tested 30 participants, finding that seven reported phosphenes reliably. We then systematically mapped out the locations where single-pulse TMS induced phosphenes. We applied stimulation at equidistant targets in a 6 × 8 cm grid that was fitted to individual brain surfaces, using MRI data and neuro-navigation. The grid extended laterally and dorsally from the occipital pole. Stimulator output was adjusted for the underlying scalp-cortex distance to create comparable stimulation effects. We measured the probability of inducing phosphenes and related this to the underlying visual organization as determined from functional MRI measurements. We show that TMS can reliably induce phosphenes in early (V1, V2d and V2v) and dorsal (V3d and V3a) visual areas close to the interhemispheric cleft. However, phosphenes are less likely at more lateral locations. This suggests that early and dorsal visual areas are particularly amenable to TMS for understanding the functional roles of these areas in visual perception.

Funding: European Community's Seventh Framework Programme (FP7/2007–2013) under agreement PITN-GA-2011-290011; Wellcome Trust Senior Research Fellowship to AEW (095183/Z/10/Z)

[1P095] Feedback signals from the local surround are combined with feedforward information in human V1

Yulia Revina1, Lucy Petro1, Sebastian Blum2, Nikolaus Kriegeskorte3 and Lars Muckli1

1Centre for Cognitive Neuroimaging, University of Glasgow, UK

2University of Osnabrück, Germany

3MRC Cognition & Brain Sciences Unit, Cambridge, UK

Most input to V1 is non-feedforward, originating from lateral and feedback connections. Using functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA), Smith & Muckli (2010) showed, using natural scene stimuli, that non-feedforward-stimulated regions of V1 (i.e. those responding to an occluded image quadrant) contain contextual information about the surrounding image, fed back from higher visual areas. We investigated whether feedback signals carry information about the full configuration of the scene (global surround) or about the image region close to the occluded quadrant (local surround). Participants viewed stimuli composed of four Gabors oriented at either 45° or 135°, one in each quadrant. There were four possible global structures: Right (all Gabors at 45°), Left (all at 135°), Diamond, and X-shape. Each stimulus was presented in feedback (occluded quadrant) and feedforward (corresponding quadrant visible) conditions. We decoded the stimuli using V1 voxels corresponding to the quadrant. We could not decode the stimuli in the occluded quadrant. However, decoding was above chance in one of the identical feedforward conditions (the same orientation), but only if there was a difference in the local surround between the two stimuli. This suggests that feedback about the surround combines with feedforward information in the quadrant.

Funding: ERC StG 2012_311751-Brain reading of contextual feedback and predictions (Lars Muckli) & BBSRC DTP Scholarship (Yulia Revina)

[1P096] Perceptual Grouping and Feature Based Attention by Firing Coherence Based on Recurrent Connections

August Romeo and Hans Supèr

Departament de Cognició i Desenvolupament, Universitat de Barcelona, Catalonia

Perceptual grouping is achievable with spiking neuron models (Izhikevich's model or others) and distance-dependent connection weights. Simple three-valued spike-mediated synapses suffice for the emergence of partial synchrony and make this cognitive task possible. Moreover, feature selectivity can be obtained with a related model which also includes synchronization through discrete lateral couplings and, in addition, incorporates the feature-similarity hypothesis. For simultaneous presentations, the attended feature elicits a higher response, while for sequential single-feature stimuli, repetition of the attended feature also produces an enhancement of the response, reflected in greater coherence and higher spiking rates.
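
For reference, the Izhikevich model named above can be simulated in a few lines; parameters a, b, c, d below are the standard regular-spiking values from Izhikevich (2003), and the constant input current is an arbitrary choice.

    def izhikevich(I, T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
        # Membrane potential v and recovery variable u (Izhikevich, 2003).
        v = -65.0
        u = b * v
        spikes = []
        for step in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                 # spike: record time and reset
                spikes.append(step * dt)
                v, u = c, u + d
        return spikes

    print(len(izhikevich(I=10.0)), "spikes in 1 s")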

[1P097] Spiking–neuron model for the interaction between visual and motor representations of action in premotor cortex

Mohammad Hovaidi Ardestani and Martin A. Giese

Section for Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Centre for Integrated Neuroscience, University Clinic, Tübingen

Action perception and action execution are intrinsically linked in the human brain. Experiments show that concurrent motor execution influences the visual perception of actions. This interaction is mediated by action-selective neurons in premotor and parietal cortex. METHODS: Our model is based on two coupled dynamic neural fields, one modelling a representation of perceived action patterns (vision field) and one representing associated motor programs (motor field). The fields consist of coupled ensembles of Exponential Integrate-and-Fire neurons. The fields stabilize travelling localized activity peaks that follow the stimulus or propagate autonomously after a go-signal. Both fields are coupled by interaction kernels, resulting in the stabilization of traveling pulses that propagate synchronously in both fields. We used the model to reproduce the result of a psychophysical experiment that tested the detection of point-light stimuli in noise during concurrent motor execution. RESULTS: Consistent with the experimental data, we find a facilitation of the detection of visual action patterns by concurrent motor execution if the executed motor pattern is spatio-temporally compatible with the observed pattern, and interference if it is incoherent. CONCLUSION: Dynamic neural networks with biophysically realistic neurons can reproduce basic signatures of perception-action coupling in behavioral experiments.
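
A drastically simplified, rate-based analogue of one such field (Amari-style dynamics rather than the Exponential Integrate-and-Fire ensembles of the actual model) illustrates how a localized activity peak is stabilized at a stimulated location:

    import numpy as np

    N, dt, tau = 128, 0.1, 10.0
    x = np.linspace(-np.pi, np.pi, N)
    dx = x[:, None] - x[None, :]
    # Interaction kernel: local excitation, broader inhibition ("Mexican hat").
    W = 2.0 * np.exp(-dx ** 2 / 0.2) - 1.0 * np.exp(-dx ** 2 / 2.0)

    def f(u):                              # sigmoidal rate function
        return 1.0 / (1.0 + np.exp(-5.0 * u))

    u = np.zeros(N)
    stim = 1.5 * np.exp(-x ** 2 / 0.1)     # localized input at x = 0
    for _ in range(2000):
        u += dt / tau * (-u + W @ f(u) / N + stim - 0.5)

    print("activity peak near the stimulus:", x[np.argmax(u)])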

Funding: Supported by EC FP7: HBP FP7-ICT-2013-FET-F/604102PEOPLE-2011-ITN(Marie Curie): ABC PITN-GA-011-290011, Koroibot FP7-ICT-2013-10/611909, German Federal Ministry of Education and Research: BMBF, FKZ: 01GQ1002A, Deutsche Forschungsgemeinschaft: DFG GI 30

[1P098] Decoding eye-of-origin signals in and beyond primary visual cortex

Milena Kaestner1, Ryan T. Maloney1, Marina Bloj2, Julie M. Harris3 and Alex R. Wade1

1Department of Psychology, University of York, UK

2University of Bradford, UK

3University of St. Andrews, UK

Beyond primary visual cortex (V1), eye-specific information encoded in ocular dominance columns is thought to merge into a single binocular stream. However, recent evidence suggests that eye-of-origin signals must remain available after V1, supporting the computation of motion in depth from inter-ocular velocity differences (e.g. Czuba, Huk, Cormack & Kohn, 2014). Here, we use 3 T fMRI pattern classification to decode how these signals are maintained in and beyond V1. Eye-of-origin stimuli were temporally broadband random fields of Laplacian-of-Gaussian elements (50% contrast). Elements moved (speeds from 0.2–8°/s) either up/down or left/right. We presented stimuli to the left eye, the right eye, and binocularly. Seven event-related runs consisted of fourteen repeats of each condition (N = 7 participants). Retinotopically defined regions of interest (ROIs) were subdivided using a functional localiser to identify voxels inside and outside the retinotopic extent of our eye-of-origin stimulus. We trained a linear classification algorithm to decode the responses to each event within each ROI. In the foveal regions of V1-V3, decoder accuracy was significantly above chance across multiple cross-validation folds, indicating the availability of eye-of-origin information in all these areas. Decoding accuracy in negatively-responding voxels outside the stimulus-driven region was similar, suggesting that extraclassical receptive fields also respond selectively to eye of origin.
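
Schematically, the ROI-wise decoding analysis reduces to cross-validated linear classification of voxel patterns; the sketch below uses simulated data (random patterns and labels, so accuracy should hover around chance) purely to show the shape of the pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_events, n_voxels = 98, 200                   # e.g. 14 repeats x 7 runs (illustrative)
    X = rng.standard_normal((n_events, n_voxels))  # voxel patterns for one ROI
    y = rng.integers(0, 2, n_events)               # left-eye vs right-eye labels

    acc = cross_val_score(LinearSVC(dual=False), X, y, cv=7).mean()
    print(f"cross-validated decoding accuracy: {acc:.2f}")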

[1P099] Reducing Visually Objectionable Noise in Hyperspectral Renderings

Thomas S Maier1, Roland Fleming2 and Fran González García1

1Maxwell, Next Limit S.L.

2Department of Psychology Justus-Liebig-University Giessen, Germany

Most modern render engines use stochastic sampling algorithms, such as Monte Carlo methods, which yield highly realistic images but suffer from visible noise. A naive solution is to collect more samples for all pixels, but this is computationally extremely costly. We developed a novel image quality metric based on the Jensen-Shannon divergence (JSD) for comparing the normed spectra of different pixels. This metric enables a stopping criterion: the sampling for each pixel stops once a certain quality level is reached. The JSD technique has several parameters, which we are determining through psychophysical experiments. The threshold function depends on various factors of the rendering (material, lighting) and of the human visual system (contrast sensitivity, masking, etc.). We generated diverse scenes with a wide selection of textured objects and varied lighting. In two different 2AFC tasks, subjects were asked to identify which of two images matched a noise-free reference. In the first task we split the scenes into regions based on their luminances, and in the second task by object identity. This allows us to evaluate how well the parameter values generalize across contexts. Our results identify the parameter values required to obtain visually acceptable renderings, enabling substantial gains in rendering speed.
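
The metric itself is compact; a sketch under the assumption that each pixel carries a normalized spectrum (scipy's jensenshannon returns the square root of the divergence, hence the squaring):

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def spectrum_jsd(spectrum_a, spectrum_b):
        # Jensen-Shannon divergence between two per-pixel spectra.
        p = spectrum_a / spectrum_a.sum()
        q = spectrum_b / spectrum_b.sum()
        return jensenshannon(p, q) ** 2

    def converged(prev_spectrum, curr_spectrum, threshold=1e-4):
        # Stopping criterion: stop sampling a pixel once successive
        # spectral estimates agree to within the (assumed) threshold.
        return spectrum_jsd(prev_spectrum, curr_spectrum) < threshold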

Funding: EU Marie Curie Initial Training Network ‘‘PRISM’’ (FP7-PEOPLE-2012-ITN, Grant Agreement: 316746).

[1P100] Spatial phase coherence analysis reveals discrete cortical modules within early visual cortex

Nicolás Gravel1, Ben Harvey2, Serge O. Dumoulin3, Remco Renken1 and Frans W. Cornelissen1

1Experimental Ophthalmology, University of Groningen

2Faculty of Psychology and Education Sciences, University of Coimbra, Coimbra, Portugal

3Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands

Resting-state fMRI is widely used to study brain connectivity. However, interpreting patterns of resting state (RS) fMRI activity remains challenging, as they may arise through different neural mechanisms than those triggered by exogenous events. Currently, this limits the use of RS-fMRI for understanding cortical function in health and disease. Here, we establish structural determinants of RS functional connectivity by examining the spatial phase coherence (SPC) of blood-oxygen level dependent (BOLD) signals obtained during 7 T fMRI in visual field mapping (VFM) and RS.

2University “G. D'Annunzio” Chieti-Pescara

3University of Oslo

Luminance gradients can produce strong brightness illusions which, depending on how the gradients are organized around a target area (T), may result in brightness enhancement (e.g. the glare effect) or brightness depression (i.e. 'darkness enhancement'). The effects of such illusions on the pupil response were studied in an experiment with static and dynamic patterns (similar to the standard 'glare effect' pattern), using a remote eye-tracking device. Control stimuli were patterns in which the luminance gradients were rotated 180° with respect to T, producing peripheral brightness effects external to T. Factors were thus Luminosity (bright, dark), Effect (central, peripheral) and Pattern (static, dynamic). Results show a main effect of Luminosity and Pattern, and a significant interaction Luminosity x Effect for both static and dynamic patterns. In summary, central bright patterns produced smaller pupils whilst central dark patterns produced larger pupils; the effects for dynamic stimuli were twice those for static stimuli. The results confirm findings from previous studies with static patterns, while showing that the effect of illusory brightness patterns on pupil diameter extends to illusory darkness patterns and is enhanced by dynamic stimuli.

[2P071] Anchoring theory outmatches ODOG in tests of four lightness illusions

Elias Economou1, Alexandros Dimitriadis1, Suncica Zdravkovic2 and Alan Gilchrist3

1Psychology Department, University of Crete

2University of Novi Sad and University of Belgrade

3Rutgers University

Anchoring Theory considers perceptual grouping a critical factor in lightness perception. According to the theory, manipulating grouping factors can alter target surface lightness. Blakeslee and McCourt's ODOG model, on the other hand, emphasizes retinotopic relations among surfaces, with no explicit role for grouping effects. We tested ODOG against Anchoring Theory for several illusion variations (Reverse Contrast, Dungeon Illusion, Benary Cross, and Benussi-Koffka illusion) in which targets can group with competing frames of reference. We manipulated proximity, good continuation, articulation, and other grouping factors to affect the grouping of the targets. Where published data were unavailable, we asked observers (separate groups of 10) to match the lightness of targets by adjusting a variable gray patch on a computer screen. The psychophysical data were compared with Anchoring Theory predictions and with ODOG outputs derived by running each variation through the ODOG program. We found that grouping exerts a strong effect on surface lightness (main effects for articulation, good continuation, and common orientation were all significant, p-values < 0.05) and that the empirical data are generally consistent with Anchoring Theory predictions but not with ODOG predictions, in both magnitude and direction.

[2P072] The threat bias for fearful expressions is evident in apparent contrast

Abigail L Webb and Paul B Hibbard

Psychology, University of Essex, UK

Fearful face stimuli elicit attentional biases during visual processing (Bannerman et al., 2012) and gain preferential access to awareness under conditions of visual suppression (Yang et al., 2007). This threat bias depends in part on the low-level visual properties of faces, and can occur even under conditions where observers are unable to correctly identify the emotion portrayed (Gray et al., 2013). Typically, these studies use stimuli that are matched for their physical, RMS contrast. Using images that were spatially filtered to contain high, low or broad spatial frequency information, we assessed whether different facial expressions that are matched for physical contrast differ in their apparent contrast. Observers were presented with stimuli depicting a neutral, angry, disgusted, happy or fearful expression, and adjusted the contrast until it matched that of a neutral standard. For broadband stimuli, fearful faces had a higher apparent contrast than neutral faces. This effect was also present in faces filtered to contain only high spatial frequency information, but not for faces containing low frequency information. These findings demonstrate that fearful faces are perceived with higher apparent contrast, when matched for physical contrast, and that this effect is confined to high spatial frequency information.

Funding: ESRC Studentship

[2P073] Proposal for a glare risk scale along a specific route in daylight hours

Vincent Boucher

OUEST, CEREMA

Glare can appear in daylight under certain specific conditions related to the sun's position with respect to the driver's line of sight. Day-time driving conditions have rarely been studied, even though a large number of works have been dedicated to night-time glare and interior environments. Here we simulate day-time visual adaptation and establish a methodology for assessing disability glare in driving conditions when sunlight reflects off the roadway. Using High Dynamic Range (HDR) imaging acquired onboard a vehicle, we apply a known visual adaptation model to compute a retina-like response signal and to propose a scale of glare risk along a route.

[2P074] Visual impression of the fabric while rotating

Megumi Yoshikawa1, Aki Kondo1, Chiaki Umebayashi2, Toshinori Harada2 and Sachiko Sukigara1

1Department of Advanced Fibro-Science, Kyoto Institute of Technology

2KANKO Company Ltd, Japan

Visual impressions of clothing are not constant; they may change with the reflection of light from the fabric surface and with viewing angle. In the present study, we examined the relationship between light reflectance from the fabric surface and visual evaluations of fabric lightness and high-grade feel. The light reflectance distribution (CIELAB L*) was measured using a gonio-spectrophotometric measurement system while the fabrics were rotating, under a constant condition of 45°/−60° illumination/viewing angles. The visual feel of 6 cotton/polyester blended fabrics was judged using Scheffe-Nakaya's paired comparison method. Participants observed fabric pairs at the 45°/−60° angle while rotating them freely to change the light reflection, and rated the difference in “Light-dark change” and “High-grade feel” of each fabric pair on a ±3-point scale. The results showed that the change in L* is positively correlated with the evaluations of “Light-dark change” and “High-grade feel”. Additionally, comparing fabrics having the same structure and color, a clearer lightness change was observed for the fabric made from finer yarns. These findings suggest that yarn count is a factor influencing the lightness of fabrics, and that the change in lightness while rotating affects the high-grade feel of fabrics.

[2P075] “Glowing gray” does exist: the influence of luminance ramps on whiteness perception

Yuki Kobayashi, Soyogu Matsushita and Kazunori Morikawa

Graduate School of Human Sciences, Osaka University, Japan

A white patch surrounded by luminance ramps is perceived as if it is glowing. Zavagno and Caputo (2005) employed the method of adjustment and demonstrated that this phenomenon is observed even when the luminance of the central patch is lower than that of subjective white, indicating the perception of “glowing gray.” However, in their experiments the luminance threshold for white was measured with uniformly colored surroundings. It therefore remains an open question whether subjective white with luminance ramps lies at a higher luminance than glowing objects. In this study, we used stimuli with ramps for measuring subjective white as well, and examined the luminance thresholds for white and for luminosity. The results indicated that the threshold for luminosity was lower than that for white; that is, luminance ramps had little influence on the threshold for white, whereas the threshold for luminosity was markedly lowered. We confirmed the existence of “glowing gray”, and we speculate that the perception of luminosity and that of white are independent of each other, contrary to the intuitive assumption that luminosity can only occur at luminances higher than white.

[2P076] Attention as a new parameter in modeling brightness induction

Kuntal Ghosh1, Ashish Bakshi1, Sourya Roy2 and Arijit Mallick3

1Center for Soft Computing Research, Indian Statistical Institute Kolkata

2Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata

3IRCCyN, Ecole Centrale de Nantes, France

Attention has been demonstrated to enhance both the contrast sensitivity and the spatial resolution of an observer's vision (Carrasco et al., 2004), and to differentially modulate the Magnocellular (M) and Parvocellular (P) channels of visual information (McAlonan et al., 2008). However, the role of attention has generally been ignored in models of brightness perception. We propose a new filtering model of brightness perception, which we term the Attentive Vision Filter (AVF). The AVF mimics the visual pathway by linearly combining the outputs of the M-channel and the P-channel through a weight parameter termed the Factor of Attention (FOA). The M and P channels are in turn modelled using Gaussian-based spatial filters. We find that for various brightness illusions there are two specific values of the FOA that can explain either brightness-contrast or brightness-assimilation types of illusions. We then compare our model with the classical filtering-based ODOG model (Blakeslee & McCourt 1999, 2004), an established model of brightness perception. In the case of the White and Shifted-White stimuli, ODOG fails significantly when the grey illusion patches are extended in length beyond a threshold. We show that the proposed model does not suffer from this limitation.
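
The core combination step can be sketched as follows; the Gaussian scales are illustrative assumptions, not the fitted parameters of the AVF.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def avf(image, foa, sigma_m=4.0, sigma_p=1.0):
        # Coarse (magnocellular-like) and fine (parvocellular-like) channels,
        # linearly combined via the Factor of Attention (FOA).
        m_channel = gaussian_filter(image, sigma_m)
        p_channel = gaussian_filter(image, sigma_p)
        return foa * m_channel + (1.0 - foa) * p_channel

    # Per the abstract, two specific FOA values reproduce contrast-type
    # vs assimilation-type illusions; here 0.2 and 0.8 are arbitrary picks.
    stimulus = np.random.rand(64, 64)
    out_low_foa, out_high_foa = avf(stimulus, 0.2), avf(stimulus, 0.8)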

Funding: This work has been done under the Indian Statistical Institute Project, “Computational Model of Brightness Perception in Images” as part of the Research Programs and Plan Budget Proposals, Government of India

Leslie Guadron1, Jeroen Goossens1, Leonie Geerdinck2 and Maurice Donners2

1Donders Institute for Brain, Cognition and Behavior, Radboud University

2Philips Lighting Eindhoven, The Netherlands

Discomfort glare is the perception that a light is visually uncomfortable even though vision is unimpeded. This is an important factor in various lighting applications. Many models are available for predicting glare, but they are not very accurate when predicting glare from non-uniform sources. We have developed a computational model that predicts the discomfort elicited by luminaires with an inhomogeneous luminance distribution at the exit window. The model uses a Ratio of Gaussians method to calculate the center-surround activation of retinal ganglion cells. It computes this activation across the entire retina and uses the amount of activation as a predictor of the level of discomfort that will be experienced. We collected data from subjects to validate the model's results: we presented LED luminaires with different pitches (distance between LEDs) at different distances and eccentricities, and subjects rated each luminaire on a subjective scale. We found a relationship between the visual angle of LED pitch and size and the perceived level of discomfort. These results can be attributed to the fact that ganglion cell receptive field sizes increase with eccentricity. Our model thus seems able to link the perception of discomfort to the physiology of the retina.
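
A sketch of the Ratio-of-Gaussians center-surround stage, with illustrative (not fitted) Gaussian scales; in the full model the scales would grow with retinal eccentricity, since receptive field sizes do.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ratio_of_gaussians(luminance, sigma_center=1.0, sigma_surround=4.0, eps=1e-6):
        # Ganglion-cell-like activation: a narrow center response divided
        # by a broader surround response.
        center = gaussian_filter(luminance, sigma_center)
        surround = gaussian_filter(luminance, sigma_surround)
        return center / (surround + eps)

    # High activation concentrates at sharp luminance structure, e.g. LED spots:
    image = np.zeros((64, 64)); image[::8, ::8] = 100.0    # toy LED array
    activation = ratio_of_gaussians(image)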

Funding: EU Marie Curie Initial Training Networks (ITN) Grant: HealthPAC, no. 604063

[2P078] Perceived emotional valence of faces is affected by the spectral slope but not the brightness of the image

Claudia Menzel, Christoph Redies and Gregor Hayn-Leichsenring

Institute of Anatomy I, University Hospital Jena

It has been speculated that image properties, such as brightness and the spectral content of an image, play a role in the processing of the emotional valence of human faces. Here, we studied whether these image properties affect the perceived emotional valence of neutral faces by manipulating them in face photographs. Additionally, we created neutral cartoon faces ("smileys") and manipulated the properties of their background. We asked participants to rate the emotion of the photographed and cartoon faces on a continuous scale from positive to negative. Brightness did not affect the perceived emotional valence of either face photographs or cartoon faces; our data are thus not compatible with a brightness bias for faces. The manipulation of the spectral slope, however, affected the ratings: faces in images with a slope steeper than the original (i.e., enhanced low spatial frequencies) were perceived as more negative than those with a shallower slope (i.e., enhanced high spatial frequencies). Thus, enhancing low spatial frequency power in face photographs leads to more negative perceived emotional valence. This effect was restricted to face photographs and was not observed in cartoon faces.
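
The abstract does not describe the manipulation procedure; a common way to steepen or flatten an image's spectral slope is to rescale its Fourier amplitudes by a power of spatial frequency, as in this sketch (the sign convention for the exponent is an assumption):

```python
import numpy as np

def change_spectral_slope(img, delta_alpha):
    """Multiply each Fourier amplitude by f**(-delta_alpha):
    delta_alpha > 0 steepens the slope (boosts low spatial frequencies),
    delta_alpha < 0 flattens it (boosts high spatial frequencies)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                             # leave the DC term untouched
    filtered = np.fft.fft2(img) * f ** (-delta_alpha)
    return np.real(np.fft.ifft2(filtered))
```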

Funding: Grant RE616/7-1 from the Deutsche Forschungsgemeinschaft

[2P079] Turning a horse into a unicorn: How a double dissociation can be produced by custom-made mask functions in a response priming experiment

Melanie Schröder and Thomas Schmidt

Sozialwissenschaften/Allgemeine Psychologie, Technische Universität Kaiserslautern, Germany

In our response priming experiment, participants respond to the color of a target preceded by color primes while a metacontrast mask is presented simultaneously with the target. We use four custom-made mask functions in which the mask's luminance contrast is systematically coupled with prime-target SOA so that it can be varied without changing the target. Mask intensity was either steady (weak vs. strong) or varying (ascending vs. descending) with increasing SOA. This decoupling of the properties of the mask (luminance contrast) and target (color contrast) made it possible to produce a double dissociation between masking and response priming. Priming effects increase with SOA under all four mask functions (weak vs. strong, ascending vs. descending), even though prime discrimination performance either strongly increases or strongly decreases with SOA. Compared to the steady mask conditions, the ascending and descending mask functions produce a much steeper dissociation pattern. We conclude that mask and target properties should be varied independently, and that custom-made mask functions can strongly amplify, or even enable, double dissociations between visibility and priming.

Jose F Barraza and Andrés Martín

ILAV, CONICET, Argentina

It is well known that the brightness of a surface depends on a variety of factors. Luminance is one of the main determinants of brightness, and its articulation in the image may change the percept. We investigated how a patch textured with luminance noise is perceived in comparison with a homogeneous one. Our hypothesis was that the two patches should be perceived as equally bright when their mean luminance is the same. To test this hypothesis, we performed two experiments. First, we estimated, by means of flicker photometry, the brightness equality between textured and homogeneous patches. Second, we used a two-alternative unforced-choice paradigm to measure the brightness PSE between textured and homogeneous patches. The first experiment showed that subjects found the minimum-flicker setting when the textured and homogeneous patches had the same mean luminance, which indicates that the visual system is effectively sensing mean luminance. However, the results of the second experiment show a systematic perceptual bias: the textured patch was perceived as darker than the homogeneous patch. This result suggests that texture modifies the brightness of a surface, perhaps because its interaction with the background differs from that of a homogeneous surface.

[2P081] Influence of diffusibility of illumination on the impression of surface appearance

Yoko Mizokami, Yuki Nabae and Hirohisa Yaguchi

Graduate School of Advanced Integration Science, Chiba University

The appearance of an object's surface can be strongly influenced by lighting conditions. It is known that the specular and diffuse reflection components change depending on the diffuseness of the illumination. However, how surface appearance is influenced by the diffuseness of illumination has not been systematically analyzed. We investigated how the impression of the surface appearance of test samples with different roughness and shape changes under diffused versus direct light, using real samples in real miniature rooms. We prepared plane test samples with three different levels of surface roughness, and spheres with matte and glossy surfaces. A sample was placed in the center of a miniature room with either diffused or direct light, and an observer evaluated its appearance. We used a semantic differential method to examine which factors were influenced by the diffuseness of the illumination. An analysis based on 20 adjective pairs showed that glossiness and smoothness were the main factors. Samples tended to appear less glossy and smoother under diffused light than under direct light, and the difference was larger for samples with rough surfaces. This implies that the surface properties of objects should be considered when examining the influence of the diffuseness of illumination on surface appearance.

Funding: JSPS KAKENHI Grant Number 16K00368

[2P082] The direction of lightness induction is affected by grouping stability and intentionality

Tiziano Agostini1, Mauro Murgia1, Valter Prpic1, Ilaria Santoro1, Fabrizio Sors1 and Alessandra Galmonte2

1Department of Life Sciences, University of Trieste (Italy)

2Department of Neurological, Biomedical and Movement Sciences, University of Verona (Italy)

The relationships among perceptual elements in a visual field determine both contrast and assimilation phenomena: perceptual differences are enhanced in contrast and decreased in assimilation. Gestalt psychologists raised an intriguing paradox by explaining both phenomena as the result of perceptual belongingness: Benary proposed that belongingness determines contrast, whereas Fuchs suggested that it determines assimilation. We propose that both grouping stability and grouping intentionality are related to this paradox. In four experiments we manipulated both stability and intentionality to verify whether contrast or assimilation would occur. We found that intentionality and multi-stability elicit assimilation, whereas non-intentionality and stability elicit contrast. The results are discussed in relation to the previous literature on the relationship between lightness induction and perceptual belongingness.

[2P083] Mechanisms underlying simultaneous brightness induction: Early and innate

Dylan Rose1, Sarah Crucilla2, Amy Kalia3, Peter Bex4 and Pawan Sinha3

1Psychology, Northeastern University

2Byram Hills High School

3MIT

4Northeastern University

In the simultaneous brightness induction illusion, two equiluminant patches, one placed on a darker background than the other, appear to differ in brightness. An understanding of the underlying mechanisms is likely to illuminate the larger issue of how the brain makes photometric judgments. A specific question in this regard concerns the role visual experience plays in inducing this illusion. Our work with newly sighted children through Project Prakash has demonstrated immediate susceptibility to the simultaneous brightness illusion after sight onset, suggesting that the computations underlying this percept are driven by innately specified circuit mechanisms. To investigate the nature of these mechanisms, we conducted studies with normally sighted individuals using binocular displays with subtly different monocular components. Specifically, the two eyes were presented with opposite shallow luminance gradients, which fuse into a homogeneous cyclopean view. Probe dots were placed transiently on one or the other of the monocular inputs. We found that eye of origin, though not consciously evident, had a profound influence on the eventual brightness percept of the probe dots. We infer that the mechanisms underlying these brightness percepts are at a stage of visual processing that precedes binocular fusion.

Alejandro Lerer, Matthias Keil and Hans Supèr

Cognició i Desenvolupament, Universitat de Barcelona & Institut de Neurociències, UB, Catalonia

Little is known about how, or whether, dedicated neurons of the visual cortex encode gradual changes of luminance (GcL). We approach this question computationally: we describe possible advantages of explicitly encoding GcL and explain how corresponding putative neurons could be used to estimate the direction of illumination. To this end, we compiled three sets of intrinsic images (IIs) by extracting low and high spatial frequencies from natural images; the third set contains the full frequency range. Each set of IIs was subsequently whitened with the ZCA transformation, and dictionaries of receptive fields (RFs) were learnt from each set via unsupervised learning. We then used the dictionaries to compare encoding efficiency on natural images, and found that GcL could be encoded by dedicated neurons about three times more efficiently, in terms of energy expenditure, than by neurons that respond to the full or high spatial frequency range. Furthermore, the RFs of the three dictionaries can classify image features (ROC curves with close to 0.95 accuracy) into reflectance-related or sharp luminance changes versus gradual luminance changes. We also propose a "utility" of GcL neurons: estimating the local or global direction of illumination within a visual scene.
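
For readers unfamiliar with the whitening step, this is the standard ZCA transformation applied to patch data; the abstract does not give patch sizes or implementation details, so everything below is generic:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Standard ZCA whitening. X is (n_samples, n_features), e.g. image
    patches flattened to rows. Unlike PCA whitening, ZCA returns data in
    the original pixel space, which keeps learned RFs interpretable."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W
```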

[2P085] Common and different mechanisms behind White's illusion, simultaneous contrast illusion and the Mach band illusion

Mariann Hudák1 and János Geier2

1Department of General Psychology, Pázmány Péter Catholic University

2Stereo Vision Ltd

We assume that lightness illusions show individual differences in intensity, since the underlying biological parameters of each individual may also differ slightly. We measured the intensity (N = 130) of the White illusion and the simultaneous lightness contrast illusion using a cancellation technique, and that of the Mach band illusion using a matching paradigm. For the latter, subjects adjusted the width and lightness of a white and a black Gaussian bar on a grey background until they matched the appearance of Mach bands displayed directly above these reference bars. Our results show that the intensities of the White illusion and the simultaneous contrast illusion correlate significantly, whereas neither of these two illusions correlates with the Mach band illusion. We conclude that the neural mechanism underlying the Mach band illusion differs from the one behind the White and simultaneous contrast illusions, and that a common neural mechanism underlies the simultaneous contrast and White illusions. This common mechanism, however, cannot be lateral inhibition, since the White illusion cannot be explained by lateral inhibition.

[2P086] Centre-Surround Antagonism in the Perception of Motion in Depth

Benjamin James Portelli1, Alex Wade2, Marina Bloj3 and Julie Harris1

1School of Psychology & Neuroscience, University of St Andrews

2University of York, UK

3University of Bradford, UK

Duration thresholds for motion direction increase with stimulus size for a high-contrast stimulus, but not when contrast is low. This has been attributed to centre-surround antagonism in motion-processing neurons (Tadin et al., 2003, Nature, 424, 312–315). Here we measured duration thresholds for lateral motion and for binocular motion in depth (defined by both binocular disparity and inter-ocular velocity differences). Our aim was to test whether the pattern of threshold elevation was similar for the two types of motion. We measured duration thresholds with high-contrast (92%) and low-contrast (3%) Gabor stimuli with sizes ranging from 1.5 to 5 degrees, a spatial frequency of 1 cpd, and a retinal speed of 2°/s. Across 12 observers, we found the characteristic pattern of threshold increase with size for both lateral motion and motion in depth; thresholds for the largest size were 33% higher than for the smallest, for both motion cues. This suggests that the initial processing of motion in depth may use mechanisms similar to those used for two-dimensional motion.

Funding: EPSRC

[2P087] A new analytical method for characterizing nonlinear visual processes

Ryusuke Hayashi1, Hiroki Yokoyama2, Osamu Watanabe3 and Shin'ya Nishida4

1Systems Neuroscience Group, National Institute of Advanced Industrial Science and Technology

2Osaka University

3Muroran Institute of Technology

4NTT Communication Science Laboratory

One of the fundamental goals of systems neuroscience and psychophysics is to characterize the functional relationship between sensory inputs and neuronal or perceptual responses. Conventional methods, such as reverse correlation and spike-triggered data analysis, are however limited in identifying complex, inherently nonlinear neuronal or perceptual processes, since they rely on the assumption that the distribution of input stimuli is spherically symmetric or Gaussian. Here, we propose a new analytical method, named Watanabe's method, for identifying a nonlinear system without any assumption on the stimulus distribution. We demonstrate, through numerical simulations, that our method outperforms conventional spike-triggered analysis in estimating the parameters of a V1 neuron model from natural images, whose distribution is non-Gaussian. As an application to real psychophysical data, we investigated how multiple sinusoidal gratings with different spatio-temporal frequencies are integrated in judging motion direction. Our analysis revealed the fine structure of the second-order kernel (the interactive effect of two gratings on direction judgements), which is consistent with findings in previous studies of visual motion and supports the validity of our method for nonlinear system identification.
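
For context, the conventional baseline the abstract argues against can be stated in a few lines. The spike-triggered average below recovers a linear receptive field only under the spherical-symmetry assumption that Watanabe's method (whose details are not given here) is designed to drop:

```python
import numpy as np

def spike_triggered_average(stimuli, responses):
    """Conventional spike-triggered average: the response-weighted mean
    stimulus. stimuli: (n_trials, n_pixels); responses: (n_trials,)
    spike counts. Valid as an RF estimate only for spherically
    symmetric (e.g. Gaussian) stimulus ensembles, hence its failure
    on natural images."""
    return (responses[:, None] * stimuli).sum(axis=0) / responses.sum()
```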

Funding: MEXT KAKENHI Grant Number 26119533, 80444470

[2P088] Kinetic cue for perceptual discrimination between mirror and glass materials

Hideki Tamura, Hiroshi Higashi and Shigeki Nakauchi

Department of Computer Science and Engineering, Toyohashi University of Technology

Human observers can discriminate between mirror (a perfectly specular surface) and glass (a transparent, refractive medium) even under unnatural illumination, when the object is rotating (Tamura et al., VSS 2016). In this study, we investigated what kind of kinetic information contributes to this perceptual discrimination of materials. Stimuli were horizontally rotating 3D objects with mirror or glass material, rendered under real-world (natural), color-inverted, and binary-noise (unnatural) light fields. Subjects observed each stimulus for 1,000 ms and judged its material (mirror or glass) in a 2AFC paradigm. We found that observers performed the mirror/glass discrimination well even under unnatural light fields and without contour information, provided the object was rotating. Correspondingly, we found that the spatial variation in the horizontal component of the optic flow differed significantly between the materials: the horizontal motion component was more spatially uniform for mirror objects than for glass objects. This suggests that observers discriminate the materials directly and/or indirectly from the relative motion between the object and its surface reflection pattern.
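
A sketch of the kind of kinetic cue described, using a generic dense optic-flow estimator (OpenCV's Farneback method; the authors' actual flow computation is not specified). The spatial spread of the horizontal flow component is the candidate discriminative statistic:

```python
import cv2
import numpy as np

def horizontal_flow_spread(frame1, frame2):
    """Estimate dense optic flow between two grayscale frames and return
    the spatial standard deviation of its horizontal component. On the
    abstract's account this should be smaller for rotating mirror
    objects than for glass ones. Parameter values are generic choices,
    not the authors'."""
    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    horizontal = flow[..., 0]      # x-component of the flow field
    return float(np.std(horizontal))
```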

Funding: This work was supported by JSPS KAKENHI Grant Number 15H05922.

[2P089] The interaction between image motion and surface optics in material perception

Alexandra C Schmid and Katja Doerschner

Psychology, University of Giessen, Germany

Image motion and surface optics have been independently shown to contribute to material perception. We conducted an experiment to explore how these properties might interact. We created novel animations of materials ranging from soft to hard bodies that break apart differently when dropped. Animations were rendered as point-light movies varying in dot density, and “full-cue” optical versions ranging from translucent glossy to opaque matte under a natural illumination field. Observers used a scale to rate each substance on 30 different attributes, categorised into “optical”, “motion”, and “inferred” attributes. The results showed several interactions within and between ratings of optical and dot stimuli. In addition, correlations between motion and inferred attributes produced several discrepancies between optical and dot stimuli. For example, ratings of “shattering” correlated with “crumbling” for dot stimuli, but were independent for optical stimuli, suggesting that subtle distinctions between these motions are only accessible through (and possibly determined by) surface optics. Furthermore, ratings of optical stimuli on these motion attributes contradicted dot stimuli ratings. Perceived differences between hard and soft bodies were also notably more pronounced for optical versus dot stimuli. These novel findings demonstrate a critical interaction between motion and surface optics in the perception of materials.

[2P090] Sensitivity and precision to speed differences across kinetic boundaries

Bilyana Genova, Nadejda Bocheva, Miroslava Stefanova and Simeon Stefanov

Sensory neurobiology, Institute of Neurobiology, Bulgaria

Kinetic boundaries play an essential role in the processing of motion information, in segregating a moving object from the background, and in evaluating objects' depth. In natural conditions, the independent motions of objects of different shapes generate various combinations of velocity vectors at the motion boundary. Here, we examined how sensitivity and precision for speed differences across motion discontinuities vary depending on their speed, direction, and size. The stimuli consisted of band-pass dots presented in a circular aperture. The motion of the standard and the test, with different combinations of speed and direction in the two semi-circles, generated a vertical boundary. The standard moved horizontally to the left or the right at a constant speed. The direction of the test motion varied among 8 possible directions in the range 0° to 315°. Four observers were asked to judge which semi-circle contained the faster motion. Our data show a significant bias that depends on the angular difference between the motion vectors at the boundary and on whether the standard motion is towards or away from it. The results are discussed with respect to the role of area MT and of motion processing in determining spatial layout.

Funding: Supported by Grant 173/14.07.2014 of the Ministry of Education and Science, Bulgaria

[2P091] Computing the IOC from Gabor filter outputs: Component Level Feature Model version 2

Linda Bowns

Cambridge Computational Biology Institute, DAMTP, Centre for Mathematical Sciences, University of Cambridge, UK

When an object translates in a scene, it is imperative that the computed motion be independent of the object's contrast, and yet the dominant models of human motion processing (i.e., spatio-temporal energy models) use contrast to compute a correlate of motion, namely "motion energy". The Component Level Feature Model (CLFM) of motion processing (Bowns, 2011) uses established elements of spatio-temporal energy models, namely spatial Gabor filters and the Intersection of Constraints (IOC) rule, to compute direction (up to a reflection), but does not use "motion energy" and, importantly, is invariant to contrast. This paper describes modifications to the CLFM that are sufficient to enable accurate computation of 'absolute' direction and speed. Results from a MATLAB simulation of CLFM v2 are reported here for a range of stimuli, including two-component plaid stimuli as well as flat-spectrum stimuli, i.e., translating random-dot patterns with different dot densities. Together with those described in Bowns (2011, 2013), these results provide strong proof of concept for the Component Level Feature Model.
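
The IOC step has a standard closed form, which may help readers unfamiliar with it (this is the textbook formulation, not code from the paper): each component grating constrains the 2D velocity to a line, and two non-parallel constraints intersect at a unique velocity.

```latex
\hat{n}_i \cdot \mathbf{v} = s_i \quad (i = 1, 2), \qquad
\begin{pmatrix} n_{1x} & n_{1y} \\ n_{2x} & n_{2y} \end{pmatrix}
\begin{pmatrix} v_x \\ v_y \end{pmatrix}
=
\begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
\;\Longrightarrow\;
\mathbf{v} = N^{-1} \mathbf{s}
```

Here each \hat{n}_i is the unit normal of component i and s_i its normal speed; the solution gives both the absolute direction and the speed of the pattern.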

[2P092] Comparing perception of motion-in-depth for anti- and de-correlated random dot stimuli

Martin Giesel1, Alex Wade2, Marina Bloj3 and Julie M. Harris1

1School of Psychology and Neuroscience, University of St Andrews

2University of York

3University of Bradford, UK

Movement in depth (MID) can be detected using two binocular cues: change of disparity over time (CD) and inter-ocular velocity differences (IOVD). To investigate the underlying detection mechanisms, stimuli can be constructed with only CD, only IOVD, or both cues (FULL). Two methods of isolating IOVD have been employed frequently: anti-correlated (aIOVD) and de-correlated (dIOVD) motion signals. Czuba et al. (2010, J Neurophysiol, 104, 2886–2899) found similar direction-discrimination sensitivities for aIOVD and FULL stimuli. We set out to compare aIOVD, dIOVD, and FULL stimuli by measuring motion coherence thresholds using random-dot stereograms. In all conditions, stimuli represented a cloud of 3D dots moving either towards or away from the observer. In the FULL condition, signal-dot motion spanned a cylinder in depth with the signal dots randomly scattered through the volume; when reaching the end of the cylinder, signal dots flipped to the opposite end and continued their motion. Noise dots had the same correlational properties as signal dots but were randomly repositioned, with variable lifetimes. Motion coherence thresholds were similar for aIOVD and FULL for most observers but consistently differed from thresholds for dIOVD. Our findings suggest that aIOVD and dIOVD stimuli do not isolate identical MID mechanisms.

Funding: Supported by BBSRC grant BB/M001660/1

[2P093] Event-based model of vision: from ATIS to hierarchical motion processing

Mina A Khoei and Ryad Benosman

Vision and Natural Computation team, Vision Institute, Pierre and Marie Curie University (Paris 6), France

Formulating the hierarchical function of the visual system in models is a major part of vision research. However, every model includes an unavoidable degree of simplification and assumption about the scene, the visual system, and their interaction. Here, we address a significant limitation of conventional models in neuroscience and computer vision that arises from unrealistic stimulation: the scene is sampled by frames at regular time intervals, producing hugely redundant input, including during spatiotemporal periods in which the light structure of the scene does not change. Empirical evidence suggests that biological vision is stimulated asynchronously rather than by frame-like sampling, an operational principle consistent with the response characteristics of retinal cells (temporal resolution 1–10 ms). We introduce a motion processing model stimulated by ATIS (Asynchronous Time-based Image Sensor), which provides highly asynchronous, local luminance data from a moving scene: so-called visual events. In this event-based framework we implemented a hierarchical probabilistic model of motion processing compatible with the independent activation of photoreceptors. The model stresses the efficiency of asynchronous, time-based computation as a principal neural strategy, demonstrated in delay compensation, precise tracking, and timely action.
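
To make the contrast with frame-based input concrete, here is a minimal event representation of the kind an ATIS-style sensor emits; the field names are illustrative, as the actual sensor API is not given in the abstract:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One asynchronous visual event: a luminance change at one pixel."""
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp, e.g. in microseconds
    polarity: int  # +1 = luminance increase, -1 = decrease

def events_in_window(events: List[Event], t0: float, t1: float) -> List[Event]:
    """Only pixels that changed emit data, so downstream motion stages
    process this sparse stream instead of full redundant frames."""
    return [e for e in events if t0 <= e.t < t1]
```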

Funding: This work received the support from the LABEX LIFESENSES [ANR-10-LABX-65], managed by the French state funds (ANR) within the Investissements d'Avenir program [ANR-11-IDEX-0004-02]. It also received financial support from the EU Project [644096- ECOMODE]

[2P094] Spatial context alters the contribution of motion-coding mechanisms to contrast detection

Alison Chambers and Neil Roach

Visual Neuroscience, University of Nottingham, UK

Contrast sensitivity can be substantially modulated by the presence of nearby stimuli. For instance, the ability to detect a target stimulus presented at the leading edge of a drifting grating depends on the relative phase of the two stimuli (Roach et al., 2011). Previously we have shown that this phase-specific modulation of sensitivity is characterised by several unusual properties, including dependence on the absolute, but not the relative, spatial frequency of the target and inducing gratings. Here we develop a multiscale, image-based model of motion coding to provide insight into the mechanisms underlying these effects. When drifting target gratings are presented in isolation, simulated contrast sensitivity is determined by the responses of space-time oriented filters whose tuning preferences match the stimulus. Critically, however, this is seldom the case when targets are presented along with inducing gratings. We demonstrate that phase-dependent modulations of sensitivity arise through the recruitment of filters that are not well matched to the target but are co-activated by both target and inducing stimuli. The model accounts for the spatial frequency tuning of these effects, as well as a range of additional properties (e.g., the dependency on inducer contrast).

[2P095] Global motion influences the detection of motion-in-depth

Kait Clark and Simon Rushton

School of Psychology, Cardiff University, UK

Detecting motion-in-depth is more difficult than detecting equivalent lateral motion (e.g., Tyler, 1971). Because there is an early averaging of left- and right-eye motion signals, some work suggests the two monocular signals could effectively cancel out when an object moves only in depth (e.g., Harris, McKee, & Watamaniuk, 1998). In the literature on "flow-parsing" (Rushton & Warren, 2006), it has also been shown that an early subtraction of the global components of motion from the retinal image isolates scene-relative object movement (Warren & Rushton, 2009). Here we examine the relationship between motion-in-depth and flow-parsing processes. Using a display with a probe object within an array of background objects, we first measured reaction times to detect the motion-in-depth of the probe in the presence of static background objects. As expected, reaction time was maximal when the movement of the probe was directly towards the observer (pure motion-in-depth). When the objects moved in a radial pattern on the opposing side of the screen, the trajectory that produced the maximal reaction time changed. This change was in line with a subtraction of global motion prior to the detection of motion-in-depth, suggesting an early contribution of global motion information to the perception of motion-in-depth.

Funding: ESRC ES/M00001X

[2P096] Gravity-specific representation in human EEG

Zhaoqi Hu, Ying Wang and Yi Jiang

Institute of Psychology, Chinese Academy of Sciences

State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences

Processing gravitational motion is essential for the survival of terrestrial species such as human beings. Having evolved within the Earth's gravitational field, the human brain has developed mechanisms sensitive to visual gravitational motion, as if it has internalized the law of gravity. Here we investigated whether this internal model of gravity is selectively tuned to gravitational acceleration. We recorded the electroencephalogram (EEG) of observers who viewed basketballs moving downwards or upwards at various accelerations (9.8, 4.8, or 14.8 m/s²) or in duration-matched uniform motion; their task was to estimate the duration from the moment the ball was occluded by a grey bar until it crossed the bar. EEG amplitude at parietal and occipital sites showed significantly enhanced differentiation between gravitational motion (9.8 m/s²) and its matched uniform motion, compared with the other motion pairs (4.8 or 14.8 m/s²). Crucially, this effect was observed only for downward, not upward, motion. These results provide EEG evidence for the internalization of the law of gravity in the human brain, suggesting that visual motion processing involves neural mechanisms specifically tuned to gravitational acceleration.
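
The duration judgement has a simple ideal-observer form, assuming the ball keeps a constant acceleration a behind an occluder of extent d, entered at speed v_0 (the study's exact geometry is not given, so d and v_0 here are placeholders):

```latex
d = v_0 t + \tfrac{1}{2} a t^2
\quad\Longrightarrow\quad
t = \frac{-v_0 + \sqrt{v_0^2 + 2 a d}}{a}
```

An observer with an internalized gravity model should produce estimates closest to this t when a = 9.8 m/s² and the motion is downward, which is where the EEG differentiation was found.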

Funding: Supported by grants from the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB02010003) and the National Natural Science Foundation of China (31100733, 31525011)

[2P097] Extra-retinal information for disambiguating depth from motion parallax

Kenzo Sakurai1, Shihori Furukawa1, William Beaudot2 and Hiroshi Ono3

1Psychology, Tohoku Gakuin University

2KyberVision Japan

3York University

In contrast to the ambiguous depth percept of the kinetic depth effect (KDE), depth perceived in a conventional motion parallax display is unambiguous when the stimuli are yoked to a monocular observer's head movement (Rogers & Graham, 1979). One possible account of this difference is that the visual system uses extra-retinal information to disambiguate the depth perceived in the motion parallax display. This information could be the vestibular signal (Rogers & Rogers, 1992) or the pursuit eye movement signal during translational head motion (Nawrot & Joyce, 2006). To test the pursuit eye movement account, we investigated whether depth reversal occurs when observers view conventional motion parallax stimuli through a head-mounted display (HMD) yoked to their head movement. Sinusoidally corrugated surfaces of three spatial frequencies (0.067, 0.2, 0.467 cpd) were presented on an external display or the HMD, and observers reported whether the surface corrugation below the fixation cross was convex or concave. The rate of depth reversal was significantly above chance when the corrugation spatial frequency was 0.2 cpd, but not at the other spatial frequencies. These results suggest that the visual system uses some extra-retinal signal other than the pursuit eye movement signal.

Funding: Supported by JSPS Grant-in-Aid for Scientific Research (B) Grant Number 25285202.

[2P098] Investigating the sound-induced flash illusion in people with ASD: An MEG study

Jason S Chan1, Marcus Naumer2, Christine Freitag2, Michael Siniatchkin3 and Jochen Kaiser2

1School of Applied Psychology, University College Cork

2Goethe-University

3Kiel University

The sound-induced flash illusion (SiFi) is an audio-visual illusion whereby two beeps are presented along with a single flash: participants typically perceive two flashes if the beeps are presented in rapid succession. The illusion has been used to demonstrate multisensory deficits in specific populations (e.g., people with autism spectrum disorder (ASD), older adults, older adults prone to falling, and people with mild cognitive impairment). In these populations the behavioural outcome is the same, but the underlying neurological reasons can be completely different. Using magnetoencephalography (MEG), we previously demonstrated that older adults perceive the illusion more often than younger adults because of increased pre-stimulus beta-band activity. In the current study, a group of young people with ASD was presented with the SiFi. Once again, they perceived significantly more illusions, across a wider range of stimulus onset asynchronies, than healthy controls. Using MEG, we find that this is due to differences in pre-stimulus alpha activity between the two populations. The specific source locations will be discussed. These results suggest that although different populations exhibit the same audio-visual behavioural outcome, the underlying networks may be very different.

[2P099] Comparing Finger Movement Directions and Haptically Perceived Texture Orientation

Alexandra Lezkan and Knut Drewing

Department of General Psychology, Justus-Liebig University Giessen, Germany

Exploratory movements and haptic perception are highly interlinked. For grating textures with sine-wave ridges, we previously observed that, over the course of exploration, exploration direction is adjusted to be orthogonal to ridge orientation (Lezkan & Drewing, 2016). In the present experiment we measured perceptual and movement responses to texture orientations between −60° (counter-clockwise) and +60° (clockwise) from movement-orthogonal. Participants explored textures along a predefined path. In the perceptual part of the experiment they reported whether the texture was rotated clockwise; in the movement part, an additional movement in a freely chosen direction followed. Besides psychometric curves, we fitted "movometric" curves to the proportion of trials with a clockwise shift of movement direction. The pattern of "motor judgments" reflected the adjustment towards moving orthogonally across the gratings and showed that, as with perceptual judgments, the required change in movement direction was recognized more frequently when the deviation from orthogonal was large. The precision of motor judgments was somewhat lower than perceptual precision, but for both perception and movement, precision was higher for textures with longer spatial periods than for those with shorter periods. Taken together, our results suggest that the same signals are used for perception and motor control in the haptic perception of gratings.

Funding: This research was supported by the German Research Foundation (DFG; grant SFB/TRR135/1, A05).

[2P100] Musical training modulates brain recalibration of audiovisual simultaneity

Crescent Jicol1, Frank Pollick2 and Karin Petrini1

1Psychology, University of Bath, UK

2University of Glasgow, Scotland

In order to overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity from its daily experience of different audiovisual events. Whether extended experience with specific audiovisual events modulates this recalibration process is still unclear. Musical expertise provides a good model for comparing people with different levels of experience of audiovisual events. We tested a group of 11 drummers, 11 musicians (e.g., guitarists and pianists) and 11 non-musicians on a simultaneity judgment task before and after adaptation to two audiovisual events. For flash-beep displays, participants' points of subjective simultaneity shifted in the direction of the adapted lag, while for drumming displays a shift in the direction of an audio-leading lag was found irrespective of the direction of the adapted lag. The adaptation effect for musicians and drummers was larger than, and in the opposite direction to, that of non-musicians, while sensitivity to audiovisual asynchrony was greater in drummers than in either musicians or non-musicians. These findings demonstrate that musical training modulates recalibration to audiovisual events in the temporal domain, and that playing the drums enhances sensitivity to audiovisual simultaneity more than playing other instruments.

[2P101] Comparing ambiguous apparent motion in tactile and visual stimuli

Harry H Haladjian1, Stuart Anstis2, Tatjana Seizova-Cajic3, Mark Wexler1 and Patrick Cavanagh1,4

1Laboratoire Psychologie de la Perception, Université Paris Descartes

2University of California San Diego

3University of Sydney

4Dartmouth College

We examined the haptic version of the apparent motion quartet. In the visual version, alternating flashes of diagonal pairs of dots produce apparent motion in the horizontal or vertical direction. Proximity is important: if the dots form a rectangle, apparent motion favours the shorter over the longer sides. Here, we attached vibrating tactors to the thumbs and index fingers of both hands. Apparent motion was felt either within hands (index finger to thumb) or across the empty space between hands. Subjects slowly moved their hands toward and away from each other and indicated when the felt motion changed from within to between the hands. Subjects reported that the motion organisation was not always clear, but they were able to complete the task. The point at which motion within and between the hands was reported equally often occurred when the distance between the vibrators on the two hands was 44% greater than that between the vibrators on the thumb and index finger of each hand. Thus, surprisingly, sensations across the empty space between the hands act as if they are closer together than those across the space within a hand. The switch-over ratio was compared between touch and vision over various configurations.

Funding: European Research Council under the European Union's 7th Framework Programme (FP7/2007-2013)/ERC grant agreement n°AG324070 to PC; UCSD Dept. of Psychology grant to SA; Australian Research Council (Discovery Project DP110104691) grant to TSC.

[2P102] Multisensory adaptation: How visual are haptics?

Stefan J Breitschaft and Claus-Christian Carbon

Department of General Psychology and Methodology, University of Bamberg

Aftereffects and adaptation are widespread phenomena in perception. Kahrimanovic, Bergmann Tiest and Kappers (2009) found divergent haptic aftereffects after adaptation to rough and smooth stimuli. As haptic perception depends on several exteroceptive and interoceptive inputs (see the haptics framework of Carbon & Jakesch, 2013), we tested whether haptic adaptation can also be induced by cross-modal adaptors, such as extreme visual adaptors, or by mental imagery of extreme adaptors. Thirty-six participants rated the roughness of ten abrasive papers, ranging from 60 to 600 grit, on a 101-point scale (0 = smooth to 100 = rough). After a baseline rating, participants completed haptic, visual, and imagery modality conditions, each containing extreme-smooth and extreme-rough adaptation blocks in random order; every trial began with a 20-second adaptation phase, followed by a rating phase. Data from the first evaluation block were analyzed with a between-participants ANOVA. The results mirrored previous adaptation effects in the haptic domain (an adaptation effect for the extreme rough adaptor and a contrast effect for the extreme smooth adaptor), demonstrating top-down effects. Decreased perceived roughness was found in the visual-rough condition, meaning that an adaptation effect was induced by visual adaptation. In contrast, mental imagery yielded an assimilation of perceived roughness towards the adaptor.

[2P103] The apparent elongation of a disk by its rotation as a haptic phenomenon

Akira Imai1, Yves Rossetti2 and Patrice Revol2

1Institute of Arts, Department of Psychology, Shinshu University

2ImpAct INSERM U1028

A coin turned end over end between the thumb and forefinger of the preferred hand, while held by the non-preferred hand, feels longer to the turning hand. This apparent elongation of the disk has been called the 'rotating-disk' illusion (Cormack, 1973) and is assumed to involve illusory mechanisms in both hands. We tested the robustness of this illusion in Experiment 1 and then separated the contributions of the two hands. Eight participants rotated five disks one by one and estimated the perceived size of each disk in the same way as Cormack. The apparent size of the disk grew rapidly over 30 seconds and did not become asymptotic within 60 seconds, consistent with Cormack's results. In Experiment 2, we constructed a device that allowed participants to rotate the disk with only one hand. The illusion did not increase with rotation by the preferred hand, but appeared to grow gradually with rotation by the non-preferred hand. Thus the apparent elongation arose not from rotation by the preferred hand but from rotation by the non-preferred hand, suggesting that the fingers that normally hold the disk during rotation may strongly influence the illusion.

[2P105] Hearing one’s eye movements: effects of online eye velocity-based auditory feedback on smooth pursuit eye movements after transient target disappearance

Arthur Portron1, Eric O. Boyer2, Frederic Bevilacqua2 and Jean Lorenceau1

1Département d'études cognitives, Laboratoire des Systèmes Perceptifs, Ecole Normale Supérieure

2Institut de Recherche et Coordination Acoustique/Musique Paris

Because proprioceptive and kinesthetic signals from the eyes are poor, eye movements are a "cognitive black hole": individuals can tell very little about the sequence of eye movements they perform to reach a visual goal. Here, we investigate whether providing auditory feedback coupled to eye movements helps improve oculomotor control (Boyer, 2015). To that aim, we asked untrained participants (N = 20) to track a horizontally moving target that disappeared after 900 ms behind occluders of different kinds (visible or invisible uniform masks; static or flickering textures; 4 blocks of 80 trials). Observers were to maintain smooth pursuit for 700 ms after target disappearance, a task known to be very difficult (Madelain & Krauzlis, 2003). In half the trials, participants received auditory feedback based on eye velocity: eye-tracking data (EyeLink 1000) were used to control the cutoff frequency of filtered pink noise. The resulting sound mimics fluctuations in eye speed, with saccades muting the feedback. Results indicate that pursuit is best maintained on a flickering background, compared to the other occluders. No clear effect of sound on pursuit maintenance was found, but large inter-individual differences were observed, with sound improving or sometimes degrading pursuit gain, suggesting different cognitive and oculomotor profiles, a point that will be discussed.
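
The mapping from eye movements to sound is only outlined in the abstract; a minimal sketch of such a velocity-to-cutoff mapping, with a frequency range and saccade threshold of our own choosing:

```python
def cutoff_from_eye_speed(eye_speed, f_min=200.0, f_max=2000.0, v_sacc=40.0):
    """Map smooth-pursuit eye speed (deg/s) to the cutoff frequency (Hz)
    of a low-pass filter applied to pink noise. Returns None to signal
    muting during saccades. All constants are illustrative assumptions,
    not the authors' transfer function."""
    if eye_speed > v_sacc:
        return None                          # saccade: mute the feedback
    frac = min(max(eye_speed / v_sacc, 0.0), 1.0)
    return f_min + frac * (f_max - f_min)
```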

Funding: PhD grant from Île-de-France, ANR-Blanc program 2011 (Legos project ANR-11-BS02-012), Labex SMART (supported by French state funds managed by the ANR within the `Investissements d'Avenir' program under reference ANR-11-IDEX-0004-02)

[2P106] Perceived audio-visual simultaneity as a function of stimulus intensity

Ryan Horsfall, Sophie Wuerger and Georg Meyer

Institute of Psychology, Health and Society, University of Liverpool, UK

Recent behavioural findings suggest that auditory-visual integration mechanisms feed into separate ‘action’ and ‘perception’ streams (Leone & McCourt, 2015). Our experiment aimed to replicate this research with different stimulus parameters to evaluate its robustness. Two experimental tasks were used: a temporal order judgement (TOJ) and a simple reaction time task (RT). The stimuli for both tasks were identical bimodal flash/bleep stimuli with varying stimulus onset asynchronies (SOAs) (−200, −150, −100, −50, 0, 50, 100, 150, 200 msec). Three stimulus intensity conditions were run: dim light/quiet sound; dim light/loud sound; bright light/quiet sound. In the TOJ task participants had to indicate whether the visual stimulus preceded the auditory stimulus or vice versa. In the RT task observers had to respond as quickly as possible to the onset of the bimodal stimulus. Our preliminary results suggest that stimulus intensity affects perceived simultaneity (TOJ task). Observers’ reaction times tend to be shortest at physical simultaneity but our preliminary data do not allow us to demonstrate significant differences between the perceived point of simultaneity and the SOA yielding minimum reaction times. Further results, including the effect of varying stimulus intensities across both tasks, alongside the introduction of a simultaneity judgement task, will be discussed.

[2P107] Integrating vision and haptics for determining object location

Mark A Adams, Peter Scarfe and Andrew Glennerster

Psychology, University of Reading, UK

Both visual and haptic information are useful for determining the location of an object, but currently little is known about how these cues are combined by freely moving observers. Here we examined whether people combine visual and haptic cues optimally, according to a maximum likelihood estimator (MLE). To do this we used a novel methodology integrating immersive virtual reality and haptic robotics. In the haptics-alone task, participants reached out to touch three reference spheres placed on a circle (radius 22 cm) and then a target sphere; in a 2AFC paradigm, they judged whether the target was above or below the plane defined by the reference spheres. Participants wore a head-mounted display (HMD) showing a blank screen. In the vision-alone task, the spatial and temporal aspects of the stimuli were identical, but the spheres were presented in virtual reality through the HMD. In the vision-and-haptics task, both cues were available, allowing people to explore the visible targets by touch. Precision in the vision-alone task was similar to that in the haptics-alone task. The observed PSE for the combined-cue stimulus was compatible with the MLE prediction based on the sensitivity and bias of the individual cues.
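
The MLE prediction being tested has the standard form used in cue-combination studies (e.g., Ernst & Banks, 2002), with single-cue variances estimated from the vision-alone and haptics-alone tasks:

```latex
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \quad
w_H = 1 - w_V, \qquad
\sigma_{VH}^2 = \frac{\sigma_V^2 \sigma_H^2}{\sigma_V^2 + \sigma_H^2}
```

The combined-cue variance is never larger than that of the better single cue, so both the predicted PSE and the predicted threshold can be checked against the bimodal data.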

Funding: EPSRC

[2P108] The clash of spatial representations: Modality switching knocks out the Simon effect

Manuela Ruzzoli1, Leonor Castro1 and Salvador Soto-Faraco1,2

1Center for Brain and Cognition, University Pompeu Fabra

2ICREA

Different sensory modalities represent spatial information in radically different formats, which must be integrated in a unified reference frame for action. On the perceptual side, converging evidence suggests a dominance of vision over the other senses in coordinating spatial representations, in an eye-centred reference frame. However, current research focuses mainly on stimulus processing, neglecting the relationship between stimulus and response. In this study, we contrasted stimulus-response spatial compatibility effects, via the Simon task, across modalities (vision and touch). When tested in isolation, vision operates in an external spatial frame of reference (left-right hemifield), whereas the spatial reference frame in touch is defined anatomically (left-right body parts). Interestingly, when visual and tactile trials were intermingled unpredictably, so that the relevant spatial reference frames were mixed, the Simon effect disappeared for the visual modality but persisted (in its native anatomical reference frame) for touch. Our results highlight the importance of action-oriented reference frames in spatial representations, and we suggest that stimulus-response contingency governs how spatial information is managed.

Funding: Juan de la Cierva fellowship (JCI-2012-12335); European Research Council (ERC-2010-StG-263145 MIA), Ministerio de Economia y Competitividad (PSI2013-42626-P), AGAUR Generalitat de Catalunya (2014SGR856)

[2P109] Variation in signaling frequency in a multisensory experimental study causes different modality effects on the quality and quantity of the equilibrium function

Denis Kozhevnikov

Psychology and Social Work, Moscow University of Humanities

Multisensory EEG and behavioral studies of audiovisual stimulation with signal-frequency manipulation show a significant interaction effect of signaling frequency, sensory modality, and individual differences on psychophysical measures. In the present study the audiovisual complex was examined by means of stabilometry, an objective measure of body oscillations during upright standing that indicates changes in psychophysiological state and executive control functions. Participants were tested in two modality conditions, visual and auditory, with the respective modality information presented at 3, 5, and 10 Hz in each condition. The quantity and quality of the equilibrium function were evaluated prior to exposure (pre-stimulation stage), during exposure (stimulation stage), and after exposure (post-stimulation stage). The results showed a significantly negative effect of 5-Hz stimulation on the equilibrium function regardless of the sensory modality involved, whereas 3-Hz stimulation had a moderately positive impact in the auditory condition and 10-Hz stimulation had a strongly positive effect in the visual condition.

Ljubica Jovanovic and Pascal Mamassian

Département d'études cognitives, Laboratoire des Systèmes Perceptifs, Ecole Normale Supérieure, Paris

Temporal coincidence of events is an important cue for multisensory integration. Even though the brain accommodates timing differences between the senses (Fujisaki et al., 2004; Vroomen et al., 2004), the underlying mechanisms are still not completely understood. We investigated temporal integration of visual and auditory events in two experiments. Stimuli had varying magnitudes of asynchrony between the senses (e.g., the visual event presented 50 ms before the auditory one). In the first experiment, participants estimated the onset of the stimuli following a self-paced key press; the task was to report whether an event (visual, auditory, or multimodal) appeared sooner or later than the average temporal onset of the stimuli (method of single stimuli). In the second experiment, participants detected the onset of randomly timed multisensory events (speeded response) and were explicitly asked to attend to only one modality, ignoring the other sensory event. In the first experiment, the point of subjective equality was mainly driven by the attended modality but was also affected by the non-attended one. In the second experiment, reaction times were mostly driven by the attended modality but were also slightly influenced by the non-attended one. Overall, our results suggest that both modalities contribute to the perceived timing of a multisensory event.

Funding: The PACE Project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642961

[2P111] Haptic shape adaptation is not object dependent

Catharina Glowania1, Loes van Dam2, Sarah Hanke1 and Marc Ernst3

1Cognitive Neuroscience, Bielefeld University

2University of Essex

3Ulm University

When touching a slanted surface, the impression arises after a period of time that the surface is less slanted or even level (adaptation). If a physically level surface is subsequently presented, it is perceived as slanted in the opposite direction (adaptation aftereffect). Haptic shape perception, however, relies on both posture information and cutaneous (touch) information, so the question arises whether haptic shape adaptation is object-based or posture-based. If haptic adaptation is object-related, it should occur only when an object is actively touched; posture adaptation should affect haptic shape perception regardless of whether an object is touched during adaptation. To address this question, participants adapted to a virtual slant using the index fingers of both hands. In one condition, adaptation was induced by actively touching the surface (object present); in a second condition, participants adapted by keeping their fingers in mid-air at indicated locations (object absent). Results showed adaptation aftereffects of equal extent in both conditions, regardless of whether an object was present. This indicates that haptic shape adaptation can be fully explained by posture adaptation (proprioception), and implies that object constancy depends heavily on previous postures being similar.

Funding: This work was funded by the DFG Cluster of Excellence: Cognitive Interaction Technology ‘CITEC' (EXC 277)

[2P112] Comparing physiological arousal for visually and haptically explored stimuli

Roberta Etzi and Alberto Gallace

Department of Psychology, Università degli Studi di Milano-Bicocca

Although it is frequently reported that vision dominates over the other sensory modalities, it is still unclear whether this effect is related to a greater state of arousal. Here we report the results of a study on psycho-physiological reactions to materials explored either by vision or by touch. While one group of participants (Group 1) was slowly stroked on the forearm with different materials, a second group (Group 2) visually explored the same materials. The participants' task was to rate the pleasantness of the stimulation (Group 1) or the imagined pleasantness of being touched by those stimuli (Group 2). Skin conductance responses were also recorded in both groups. The results revealed that tactile exploration of the materials induced higher skin conductance responses than visual exploration, and this difference was larger for women than for men. The materials were rated as less pleasant when presented visually than when presented haptically. An additional preliminary study (Group 3) showed that when participants watched videos of a person being stroked, their arousal increased more than in Groups 1 and 2. These findings are relevant to investigating the mechanisms of sensory dominance and visuo-tactile hedonic perception.

[2P113] Visual mechanisms in the face-sensitive posterior superior temporal sulcus facilitate auditory-only speaker recognition in high levels of auditory noise

Corrina Maguinness and Katharina von Kriegstein

Neural Mechanisms of Human Communication Research Group, Max Planck Institute for Human Cognitive and Brain Sciences

When listening to someone's voice we often also view their corresponding moving face. Even in the absence of facial input, the brain recruits a face-sensitive region, the fusiform face area (FFA), to enhance auditory-only recognition of speakers known by face (the "face-benefit"). These visual mechanisms could be particularly important under noisy listening conditions. Here, we used fMRI to examine responses in face-sensitive regions while participants recognised auditory-only speakers (previously learned by face or by a visual control) in low-SNR (−4 dB) or high-SNR (4 dB) listening conditions. We observed that in high-SNR conditions the behavioural face-benefit score was associated with increased FFA responses. Conversely, in low-SNR conditions the recognition of face-learned speakers engaged the bilateral face-sensitive posterior superior temporal sulcus (pSTS), a region sensitive to dynamic facial cues. The face-benefit score correlated significantly with functional connectivity between the right pSTS and a voice-identity region in the right anterior STS. We interpret these results within the framework of an audio-visual model in which stored facial cues are used in an adaptable manner to support speaker recognition. In high levels of auditory noise, listeners may rely more on dynamic aspects of the voice and on complementary dynamic face-identity cues for recognition.

Funding: This work was supported by a Max Planck Research Group grant awarded to K.v.K

[2P114] Response times in audio-visual cue-conflict stimuli

Baptiste Caziot and Pascal Mamassian

Laboratoire des Systemes Perceptifs, Ecole Normale Superieure, Paris, France

How are response times affected by conflicting sensory modalities? We recorded perceptual reports and RTs for discrepant audio and visual cues. A shape subtending approximately 10 deg was displayed twice on a monitor, for 83 ms each time, with the two presentations separated by 333 ms. The size of the shape changed between the two occurrences so as to simulate a displacement in depth. Coincident with the visual displays, white noise was played through headphones with loudness varying so as to simulate the same distance change (inverse-square law). Participants reported whether the target was approaching or receding. Across trials, we varied the mean displacement of the audio-visual targets and introduced a variable conflict between the two cues. Perceptual reports were modulated by the average displacement regardless of the conflict between the cues, indicating that the cues were almost equally weighted, with a very small advantage (5% on average) for the visual cue. RTs appeared to be modulated entirely by the perceived displacement of the target. There was no evidence that cue conflicts had any impact on RT distributions (AUC = 0.51 on average), suggesting that responses were mediated by a single decision process accumulating a fused estimate of the cues.
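
The loudness manipulation follows directly from the inverse-square law cited in the abstract: intensity falls as 1/r², so pressure amplitude falls as 1/r, and the level change for a simulated distance change from r1 to r2 is

```latex
I(r) \propto \frac{1}{r^{2}}
\;\Rightarrow\;
\Delta L = 20 \log_{10}\!\left(\frac{r_1}{r_2}\right)\ \mathrm{dB}
```

so, for example, a target simulated to halve its distance gains about 6 dB.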

Funding: NSF/ANR CRCNS #1430262

[2P115] “When sounds speak faster than words”: Audiovisual semantic congruency enhances early visual object processing

Yi-Chuan Chen and Charles Spence

Department of Experimental Psychology, University of Oxford, UK

The present study examined crossmodal semantic priming effects elicited by naturalistic sounds or spoken words at an early stage of visual picture processing, where the picture was detectable but its category had yet to be fully determined. In each trial, an auditory prime was followed by a picture that was presented briefly and then masked immediately. The participants had to detect the presentation of any picture (detection task) or of any picture belonging to the category of living (vs. non-living) things (categorization task). In the detection task, naturalistic sounds elicited a crossmodal semantic priming effect on picture sensitivity (i.e., a higher d’ in the congruent than in the incongruent condition) at a shorter stimulus onset asynchrony (350 ms) than spoken words (1000 ms). In the categorization task, picture sensitivity was lower than in the detection task, but it was not modulated by either type of auditory prime. The results therefore demonstrate that semantic information from the auditory modality primed the early processing of a visual object even before its semantic category was known. The faster crossmodal semantic priming effect for naturalistic sounds than for spoken words is attributable to the former accessing meaning directly, whereas a word’s meaning is accessed via lexical representations.

Funding: This study is supported by the Arts and Humanities Research Council (AHRC), Rethinking the Senses grant (AH/L007053/1).
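
The sensitivity measure (d’) referred to above comes from standard signal detection theory. The following sketch shows one common way to compute it from response counts; the counts and the log-linear correction are illustrative assumptions, not the authors’ analysis:

```python
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """Sensitivity d' from hit and false-alarm counts, with a log-linear
    correction so that perfect rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts illustrating a higher d' after congruent primes.
print(dprime(hits=40, misses=10, fas=12, crs=38))   # congruent
print(dprime(hits=32, misses=18, fas=15, crs=35))   # incongruent
```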

[2P116] Effect of audio-visual source misalignment on timing performance

John Cass1, Erik van der Burg2 and Tarryn Balsdon3

1School of Social Sciences & Psychology, Western Sydney University

2Vrije Universiteit Amsterdam

3University of New South Wales

This study investigates the psychophysical effect of audio-visual source displacement on auditory timing performance. Two speakers hidden behind a screen were placed along the horizontal meridian at various separations. Each speaker produced a white noise burst, with a range of onset lags, and subjects reported their temporal order. Predictably, performance improved with increasing speaker separation. This allowed us to equate performance by choosing the speaker separation corresponding to 50% improvement. This provided a baseline for Experiment 2: each trial was accompanied by two synchronous disks projected horizontally onto the screen at various angles of displacement relative to the speakers. The luminance of both disks changed abruptly coincident with the first noise burst, then again with the second burst. Performance improved maximally when the audiovisual signals were aligned, then deteriorated gradually with increasing disk eccentricity. Intriguingly, even small audio-visual misalignments in the direction of fixation yielded no improvement in auditory TOJ performance. These results suggest that the perceived (ventriloquized) location of auditory events, rather than their physical location, limits the resolution with which humans make auditory timing judgments.
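
The 50%-improvement criterion used to equate performance across listeners can be obtained by interpolating accuracy against speaker separation. A minimal sketch with hypothetical data (not the study’s values):

```python
import numpy as np

# Hypothetical TOJ accuracy (proportion correct) vs. speaker separation (deg).
separation = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
accuracy = np.array([0.55, 0.62, 0.71, 0.82, 0.88])

# Separation at 50% of the observed improvement range, by linear interpolation
# (np.interp requires the accuracy values to increase monotonically).
target = accuracy.min() + 0.5 * (accuracy.max() - accuracy.min())
sep_50 = np.interp(target, accuracy, separation)
print(target, sep_50)
```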

[2P117] Adaptation to softness in haptic perception - temporal and spatial aspects

Anna Metzger and Knut Drewing

General Psychology, Justus-Liebig University Giessen, Germany

Recent sensory experience (temporal adaptation) and the nearby surround (spatial adaptation) can influence perception. We studied the impact of temporal and spatial adaptation on haptic softness perception. Participants compared two silicon rubber stimuli (standard and comparison) by indenting them simultaneously with their index fingers and reported which one felt softer. To induce temporal adaptation, an adaptation stimulus was indented repeatedly with the index finger before the standard was explored with it. To induce spatial adaptation, the adaptation stimulus was indented with the middle finger at the same time as the standard was explored with the index finger. We used adaptation stimuli with higher, lower, or the same compliance as the standard stimulus. We measured Points of Subjective Equality (PSEs) of two standard stimuli to a set of comparison stimuli, and compared them to PSEs measured without adaptation. We found temporal adaptation effects: after adaptation to harder stimuli, the standard stimuli were perceived to be softer, and after adaptation to softer stimuli, the standard stimuli were perceived to be harder. Adaptation to softness suggests that there might be neural channels tuned to different softness values and that softness is an independent primary perceptual quality.

Funding: This work was supported by a grant from the Deutsche Forschungsgemeinschaft (SFB/TRR 135, A5).
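
PSEs of the kind reported here are commonly estimated as the 50% point of a psychometric function fitted to the comparison judgments. A sketch under that assumption, with a cumulative-Gaussian form and hypothetical data (not the authors’ procedure):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P("comparison felt softer") vs. comparison
    compliance; the PSE is the 50% point."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical comparison compliances (mm/N) and response proportions.
compliance = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
p_softer = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])
(pse, sigma), _ = curve_fit(psychometric, compliance, p_softer, p0=[0.7, 0.2])

# Comparing PSEs measured with and without adaptation quantifies the
# perceived-softness shift induced by the adaptation stimulus.
print(pse, sigma)
```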

[2P118] Spatiotemporal interactions in the ventriloquist effect

Min S Li and Massimiliano Di Luca

School of Psychology, University of Birmingham, UK

The location of an auditory stimulus is perceptually shifted towards a synchronous visual stimulus presented in close proximity, a phenomenon called the ventriloquist effect. We explored how the illusion is modified by the combined influence of spatial discrepancy and asynchrony. Participants reported whether a sound came from the left or right of fixation while we presented a visual stimulus at 8 spatial discrepancies ranging up to 25° and 3 temporal discrepancies (synchronous, and two equal but opposite asynchronies). Participants also performed temporal order judgments to draw attention to asynchrony. We calculated the location that was perceived straight ahead to determine the weight assigned to visual information when assessing sound location. Our results confirm that synchronous visual stimuli influence perceived auditory location, and that the influence decreases with increasing spatial discrepancy. With small spatial discrepancies, judgment precision improves beyond the level of audio-only judgments for all three asynchronies. We also found strong visual capture with audio-first asynchronies of about 120 ms and small spatial discrepancy. Finally, in synchronous trials participants adjusted the weight given to vision based on the magnitude of the asynchronies tested across conditions. These findings highlight a complex pattern of interactions in the ventriloquist effect that depends on the combined spatial and temporal discrepancy.
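
One standard way to recover the weight assigned to vision, consistent with the straight-ahead procedure described above, is to assume the perceived location is a weighted average of the auditory and visual locations; the weight then equals the shift of the auditory percept divided by the audio-visual discrepancy. A minimal sketch with hypothetical numbers:

```python
def visual_weight(pse_shift_deg, av_discrepancy_deg):
    """Weight of vision in the fused estimate.

    If perceived location = (1 - w) * auditory + w * visual, the auditory
    percept shifts by w times the audio-visual discrepancy, so
    w = shift / discrepancy.
    """
    return pse_shift_deg / av_discrepancy_deg

# Hypothetical: the sound is perceived straight ahead when it is placed
# 3 deg away from a visual stimulus that sits 10 deg off; hence w = 0.3.
print(visual_weight(3.0, 10.0))
```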

[2P119] Investigations of inter-ocular grouping for luminance- and contrast-modulated stimuli

Jan Skerswetat, Monika A. Formankiewicz and Sarah J. Waugh

Anglia Vision Research, Department of Vision and Hearing Sciences, Anglia Ruskin University

Rivalrous luminance stimuli (L) presented dichoptically, each containing parts of two images, can generate periods during which one image is perceived due to inter-ocular grouping (IOG), or processing beyond the monocular level. We investigated the effects of different stimulus visibility levels on IOG using L and luminance-modulated noise (LM) stimuli and compared the results with those of contrast-modulated noise (CM) stimuli. Rivalrous grating stimuli, 2 deg in diameter, were constructed such that half of each contained a horizontal, and the other half a vertical, 2 c/deg sinusoid. Contrasts for L- and LM-stimuli were 0.98, 0.08, 0.03 and 0.78, 0.10, 0.06, respectively. The contrast-modulation depth for CM-stimuli was 1.00. Participants had to indicate whether exclusive horizontal or vertical IOG, superimposition, or any other percept was seen. IOG for L- and LM-stimuli was perceived proportionally more often (p < 0.05) for all contrast conditions than for CM-stimuli. Decreasing L and LM contrast led to an increase of IOG. CM-stimuli produced mainly superimposed percepts, suggesting binocular combination rather than IOG. The results suggest different initial processing sites for IOG and superimposition, as well as a predominately binocular processing site for CM compared to L- and LM-stimuli.

Funding: Jan Skerswetat was funded by an Anglia Ruskin University, Faculty of Science and Technology, Research Studentship. Equipment used during experimentation was funded by a grant from the Evelyn Trust.

[2P120] Differential modulation of foreground and background in early visual cortex by feedback during bistable Gestalt perception

Pablo R Grassi, Natalia Zaretskaya and Andreas Bartels

Centre for Integrative Neuroscience, University of Tübingen

A growing body of literature suggests that feedback modulation of early processing is ubiquitous and central to cortical computation. In particular, stimuli with high-level content have been shown to suppress early visual regions, typically interpreted in the framework of predictive coding. However, physical stimulus differences can preclude clear interpretations in terms of feedback. Here we examined activity modulation in V1-V2 during distinct perceptual states associated with the same physical input. This ensures that observed modulations cannot be accounted for by changes in physical stimulus properties, and can therefore only be due to percept-related feedback from higher-level regions. We used a bistable dynamic stimulus that could be perceived either as a large illusory square or as locally moving dots. We found that perceptual binding of local elements into an illusory Gestalt led to spatially segregated modulations: retinotopic representations of the illusory contours and foreground were enhanced, while those of the inducers and background were suppressed. The results extend prior findings to the illusory-perceptual state of physically unchanged stimuli, and also show percept-driven background suppression in the human brain. Based on our prior work, we hypothesize that parietal cortex is responsible for the modulations through recurrent connections, in a predictive coding account of visual processing.

[2P121] Figure-ground organization interferes with the propagation of perceptual reversal in binocular rivalry

Naoki Kogo, Charlotte Spaas, Johan Wagemans, Sjoerd Stuit and Raymond van Ee

Brain and Cognition, University of Leuven, Belgium

Understanding how perceptual organization emerges through the dynamics of the hierarchically organized visual system is essential for understanding human vision. Figure-ground organization is a typical Gestalt phenomenon emerging through dynamic interactions between the local properties and global configurations of images. To investigate these dynamics, we analyzed the effect of figure-ground organization on a “traveling wave” in binocular rivalry, where the reversal of perceptual dominance is triggered at a particular location and spreads in a wave-like fashion. The traveling wave was induced in a semi-circular pattern with either a small, a large, or no occluder present in the middle of the semi-circle. The semi-circle was presented either vertically or horizontally. Ten participants took part in the experiments. Their task was to report, by key press, whether the traveling wave reached the first edge of the occluder and whether it reached the end of the semi-circle (target). In the vertical configuration, the probability of the traveling wave reaching the target was reduced by the presence of the occluder. In the horizontal condition, this effect was not evident. This suggests different dynamics of neural interactions when global signals are processed intra- and inter-hemispherically.

Funding: NK: Fund for Scientific Research Flanders (FWO) post-doc grant 12L5112L, JW: Methusalem program by Flemish Government METH/08/02 and METH/14/02

[2P122] Hysteresis in Processing of Perceptual Ambiguity on Three Different Timescales

Jürgen Kornmeier1, Harald Atmanspacher2 and Marieke van Rooij3

1Perception and Cognition, Institute for Frontier Areas of Psychology and Mental Health & University Eye-Hospital, Freiburg, Germany

2Collegium Helveticum, Zürich, Switzerland

3Behavioural Science Institute, Radboud University Nijmegen, the Netherlands

Background: Sensory information is a priori incomplete and ambiguous. Our perceptual system has to rely on concepts from perceptual memory in order to disambiguate the sensory information and to create stable and reliable percepts. In this study we presented Necker lattices and disambiguated variants with different degrees of ambiguity in ordered sequences and studied the influence of memory on the perceptual outcome. Methods: Fifteen healthy participants observed two periods of ordered lattice sequences with stepwise increasing and decreasing ambiguity and indicated their percepts. Two experimental conditions differed in the identity of the starting stimulus. We compared differences in the effects of presentation order on perception between conditions and periods. Results: Perception of stimuli with stepwise increasing and decreasing ambiguity followed psychometric functions, with maximal ambiguity at the inflection points. We found significant hysteresis-like lateral shifts of the psychometric functions between conditions and periods. Discussion: Our results indicate memory contributions to perceptual outcomes on three different time scales, from milliseconds over seconds up to lifetime memory. The present hysteresis paradigm allows differentiation and quantification of memory contributions to the perceptual construction process.

[2P123] Under what conditions is optokinetic nystagmus a reliable measure of perceptual dominance in binocular rivalry?

Péter Soltész1, Alexander Pastukhov2, Jochen Braun3 and Ilona Kovács1

1Institute of Psychology, Pázmány Péter Catholic University, Budapest, Hungary

2Otto-Friedrich-Universität Bamberg, Germany

3Otto von Guericke Universität Magdeburg, Germany

Current computational models of multistable perception (Pastukhov et al., 2013) focus on the dynamic balance of competition, adaptation, and noise under conditions of binocular rivalry (BR). Optokinetic nystagmus (OKN) has recently been exploited as an objective measure of perceptual dominance in BR (Frassle et al., 2014). BR-OKN might also reveal meaningful differences in the dynamic balance of perception in patients with known perceptual alterations; it is therefore a promising paradigm for translational studies. In spite of its objectivity, a significant drawback of the paradigm is that BR-induced OKN heavily depends on instructions as well as on a number of stimulus parameters (spatial frequency, speed, frame size, fixation marks, etc.). We investigated the impact of these factors with the purpose of establishing a standard BR-OKN paradigm. Eye movements of adult observers, induced by sinusoidal gratings drifting in opposite directions, were recorded, and the impact of instructions and stimulus parameters was systematically tested. We concluded that under a number of conditions OKN is not readily induced in naïve subjects; however, an optimal instruction/stimulus configuration exists where BR-OKN is a reliable measure of perceptual dominance, and it seems stable and general enough to be used both in modeling and in translational studies.

Funding: Supported by OTKA NN 110466 to I.K., and DFG funding to J.B.

Arash Sahraie, Marius Golubickis, Aleksandar Visoikomogilski and Neil Macrae

Psychology, University of Aberdeen, Scotland

Rival stimuli compete for access to visual awareness under conditions of binocular rivalry. Typically, the dominant percept alternates between two dichoptically viewed images every few seconds. For example, in face-house rivalry, images of faces dominate for longer periods than those of houses. The emotional expression of facial stimuli can also alter dominance durations, such that faces depicting fearful or happy expressions dominate longer than neutral expressions. Extending research of this kind, here we report two studies in which face-valence was manipulated via social-learning experiences. In Experiment 1, valence was varied by pairing positive or negative personality-related information with faces. In Experiment 2, it was manipulated through the status of players (i.e., excluder or includer) in a ball tossing game (i.e., Cyberball) that is commonly used to trigger ostracism. In both experiments, we show that face dominance is significantly longer for stimuli associated with negativity, thereby demonstrating the effects of social learning on binocular rivalry.

[2P125] Sensitivity and response criteria in reporting binocular rivalry

J. Antonio Aznar-Casanova1, Manuel Moreno-Sánchez1 and Robert O’Shea2

1Cognition, Development and Education Psychology, Universitat de Barcelona

2Murdoch University, Australia

Observers typically report binocular rivalry by pressing one key whenever and for as long as one rival image is dominant and another key whenever and for as long as the other image is dominant. Deciding when to press a key involves sensitivity to the dominant image and a response criterion. We studied sensitivity and response criterion with unambiguous combinations of the two images to determine whether they are correlated with reports of binocular rivalry between the same images. Sixty-six participants pressed keys to report binocular rivalry between 4-minute displays of dichoptically orthogonal oblique gratings. They then performed a 2AFC task with dioptic displays of optical superimpositions of the two rival images, ranging over seven values, from one image at 90% contrast and the other at 10%, to both images at 50% contrast. We presented the combined images for either 250 ms or 1000 ms. We found that the median binocular-rivalry dominance duration was high when participants had a liberal response criterion in the 250-ms 2AFC optical-superimposition task. No other possible correlations were significant. These results suggest that sensitivity does not affect reporting of binocular rivalry but response criterion does.

Funding: This work was funded through a grant awarded by the Spanish Ministry of Economy and Competitiveness (MINECO)

[2P126] Differentiating aversive conditioning in bistable perception: avoidance of a percept vs. salience of a stimulus

Gregor Wilbertz and Philipp Sterzer

Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin

Bistable perception mostly depends on basic image characteristics, but there is also growing interest in the influence of psychological factors (top-down effects) on perceptual inference. An interesting case is the putative effect of negative valence on bistable perception, which could lead either to a decrease of perceptual dominance (cf. avoidance in instrumental conditioning) or to an increase (cf. psychological salience). Here, we tested the hypothesis that two different types of conditioning can indeed be separated in this case. In a randomized two-group design, participants received either a standard Pavlovian conditioning procedure in which a visual stimulus A (but not B) was repeatedly paired with an aversive sound (group 1), or an aversive instrumental conditioning in which the outcome of the perceptual inference process during binocular rivalry (i.e., perceptual dominance of stimulus A but not B) was paired with the aversive sound (group 2). In a subsequent binocular rivalry test, relative dominance of the conditioned percept/stimulus increased for a short time in both groups, but dropped thereafter only in the instrumental conditioning group (group 2), yielding a significant time × group effect. This result supports the claim of differential top-down effects on perceptual inference that go beyond attention.

Funding: This work was supported by the German Research Foundation (DFG, grant numbers STE 1430/2-1 and STE 1430/7-1).

[2P127] The interaction between temporal properties and spatial density of the mask on continuous flash suppression effectiveness

Weina Zhu, Jan Drewes and David Melcher

School of Information Science/Center for Mind/Brain Sciences (CIMeC), Yunnan University/University of Trento

Continuous Flash Suppression (CFS; Tsuchiya & Koch, 2005) is a paradigm in which a series of different Mondrian patterns is flashed to one eye at a steady rate, suppressing awareness of the image presented to the other eye. CFS has been widely used to investigate visual processing outside of conscious awareness. CFS may depend on the flashing mask continually interrupting visual processing before the stimulus reaches awareness. In this study, we investigated the relationship between masking effectiveness and two mask parameters: temporal frequency and spatial density. We investigated the suppression effectiveness of a wide range of masking frequencies (0–32 Hz), using a breakthrough CFS paradigm with photographic face and house stimuli while systematically varying the spatial density of the masks. We found that breakthrough contrast differed dramatically with temporal masking frequency as well as with spatial density. We fitted the data with a skewed Gaussian function. The peak frequency changed with the spatial density of the masks: the peak frequency increased with reduced spatial density. There was no significant difference in peak frequency between face and house stimuli. These results are consistent with the idea that temporal factors in processing the mask influence its effectiveness in dominating access to awareness.

Funding: This work was supported by the National Natural Science Foundation of China (61005087, 61263042, 61563056), the Key Science Project of the Department of Education, Yunnan Province, China (2015Z010), and a European Research Council (ERC) grant.
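
The skewed Gaussian fit mentioned above could be implemented, for instance, with a skew-normal curve. The sketch below uses hypothetical breakthrough-contrast values and illustrative parameters; it is not the authors’ analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def skewed_gaussian(f, amp, loc, scale, alpha, base):
    """Skew-normal-shaped tuning of suppression strength over frequency."""
    return base + amp * skewnorm.pdf(f, alpha, loc=loc, scale=scale)

# Hypothetical breakthrough contrast across mask frequencies (Hz).
freq = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
bt_contrast = np.array([0.10, 0.18, 0.30, 0.42, 0.35, 0.22, 0.12])
params, _ = curve_fit(skewed_gaussian, freq, bt_contrast,
                      p0=[1.0, 4.0, 4.0, 2.0, 0.1], maxfev=10000)

# The maximum of the fitted curve gives the most effective mask frequency.
fgrid = np.linspace(0.0, 32.0, 1000)
peak_freq = fgrid[np.argmax(skewed_gaussian(fgrid, *params))]
print(peak_freq)
```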

[2P128] Attentional modulation of binocular rivalry

Manuel Moreno Sánchez and J. Antonio Aznar-Casanova

Cognition, Development and Education Psychology, Universitat de Barcelona

Binocular rivalry (BR) involves a different kind of visual selection from that observed during selective attention, where attending to one of two stimuli does not render the unattended stimulus invisible. Here we studied the effect of directing attention implicitly onto one rival image on subsequent BR tasks. We designed a numerosity task (NT) embedded in BR (NT + BR). We inserted two types of elements into one of the rival images, so that participants had to attend to that image in order to perform the NT, which was to report which element type was more numerous. The BR stimuli were two ±45° gratings. The participants performed the NT + BR task and finally repeated the BR task. The main manipulation was to focus the observers’ attention implicitly on one of the competing images. Our results show that the manipulation of the attentional focus significantly increased the durations of dominance periods, but not the perceptual alternation frequency. This suggests that the frequency of BR alternations is an identifying feature of an observer’s perception and is not sensitive to attentional manipulation.

Funding: Spanish Ministry of Economy and Competitiveness (MINECO) (reference PSI2012-35194).

[2P129] EEG correlates of memory contribution to perceptual disambiguation

Ellen Joos and Jürgen Kornmeier

Scientific and Experimental Area of Research & Section of Functional Vision Research and Electrophysiology, Institute for Frontier Areas of Psychology and Mental Health & University Eye Center, Freiburg, Germany

Perception of ambiguous figures (e.g. Necker cube) is unstable and alternates spontaneously between different interpretations. Tiny figural changes can disambiguate an ambiguous stimulus, stabilize its percept and increase the amplitudes of two event-related potentials (anterior P200 and posterior P400). In the present study we investigated the influence of sensory evidence and memory on the two ERP amplitudes. Methods: We presented pairs of Necker lattice variants and varied ambiguity of the first (S1) and second (S2) stimulus in four separate conditions. Participants indicated their percept of S1, and identical or changed percepts of S2 compared to S1. EEG to S2 was selectively averaged with respect to the ambiguity of S1 and S2. Results: The amplitude of the S2-related P200 was inversely correlated with the ambiguity of S1. P400 amplitude, in contrast, was inversely correlated with the ambiguities of both S1 and S2, with largest amplitudes when both stimuli were unambiguous. Discussion: The latencies of the two ERP components indicate that both occur during higher processing steps, after lower-level visual analysis. They can be functionally separated by their different dependence on memory content. Remarkably, the influence of memory content on both components indicates a re-evaluation of perceptual constructs at each processing step.

Funding: Financial support from the Deutsche Forschungsgemeinschaft (KO 4764/1-1, TE 280/8-1) is gratefully acknowledged.

[2P130] Long- and short-term memory in repeated visual search

Margit Höfler1, Iain D. Gilchrist2, Anja Ischebeck1 and Christof Körner1

1Department of Psychology, University of Graz

2University of Bristol, UK

When the same display is searched twice or thrice, short-term memory supports visual search: in a subsequent search, participants find faster only those items that they had recently inspected in a previous search. In contrast, when a display is searched many times, long-term memory is involved, as search performance increases continuously across trials. Here, we investigated whether both short-term and long-term memory support a repeated search with many repetitions. We had participants search a display 60 times while we recorded their eye movements. The display either remained the same throughout (static condition) or, as a control, the items switched positions after each search while the layout remained stable (switch condition). The results showed that long-term memory supported search in the static condition: search became faster with repetition. However, participants seemed not to benefit from short-term memory: recently inspected items from a previous search were not found faster in a subsequent search. This suggests that searching a display many times involves memory processes different from those required for searching a display only twice or thrice.

[2P131] Influencing working memory using social and non-social attention cues

Samantha Gregory and Margaret Jackson

Psychology, University of Aberdeen, Scotland

Information processing requires working memory (WM) for goal-directed behaviour. We therefore investigated how, and under what conditions, three central attention cues, which vary in sociability and meaning, could influence WM accuracy. We measured WM for four, six and eight coloured squares as a function of non-predictive central cues (gaze, arrow, line-motion). Squares at encoding were cued validly, invalidly, or not cued. Across experiments, we manipulated memoranda location at encoding to be unpredictable (squares appeared on one side of the cue only; Unilateral) or predictably balanced (squares on both sides; Bilateral), as well as cue-target onset time (150 ms or 500 ms SOA). Valid gaze cues significantly enhanced WM at 500 ms but not 150 ms SOA, indicating volitional rather than reflexive processes. This gaze effect was strongest when square location was unpredictable (Unilateral). When Bilateral, gaze affected WM at higher loads only. The arrow cue (500 ms SOA) mirrored the gaze effects in the Bilateral condition, but did not influence WM for unilaterally presented items. A valid line-motion cue (150 ms SOA; known to orient attention) enhanced WM, but only in the Bilateral condition at low memory loads. Thus, different cues influence WM as a function of cue meaning, memoranda laterality, cue-target timing, and memory load.

[2P132] Neural correlates of color working memory: An fMRI study

Naoyuki Osaka1, Takashi Ikeda2 and Mariko Osaka2

1Psychology, Kyoto University

2Osaka University

Using fMRI, we investigated how colored visual patches are memorized in visual or verbal working memory depending on color category borders. Successive color matches across hue categories defined by distinct basic colors strongly activated the brain’s left inferior frontal gyrus and left inferior parietal lobule, possibly due to the phonological loop (PL), which is thought to be localized in the inferior parietal region of the left hemisphere (BA 40; working as a short-term phonological-verbal store) in connection with Broca’s area (BA 44 and the ventral part of BA 6, known as the vocalization area). These basic colors are likely verbally encoded in the left prefrontal cortex under the verbal working memory system. However, color matching within the same hue category, with only slight hue differences, activated the right inferior frontal gyrus, possibly due to the visuospatial sketchpad (VSSP) connected with the right inferior frontal area (visual short-term store) in the prefrontal brain under the visual working memory system.

[2P133] Working memory precision for emotional expressions of faces

Kaisu Ölander, Ilkka Muukkonen and Viljami Salmela

Institute of Behavioural Sciences, University of Helsinki, Finland

We investigated whether memory precision for images of human faces depends on memory load in the same way as precision for primary visual features. Images of 60 identities from the Radboud and FACES databases were continuously morphed between neutral and emotional (angry, disgusted, fearful, happy or sad) expressions. We measured 1) psychometric functions for emotion intensity discrimination of two simultaneously presented faces, 2) the distribution of adjustment errors of intensity for 1–5 faces after a 2-second retention period, and 3) error distributions for a single face while remembering the orientation of 1–3 Gabor gratings. A mixture model defined by uniform and Gaussian distributions was fitted to the data. Discrimination thresholds did not depend on emotion intensity. As a function of memory load, the precision for all facial expressions and grating orientations decreased with a similar slope. However, both discrimination and memory precision varied across emotions, and were best for happy and worst for sad faces. Importantly, precision for a single face was not affected by the gratings. The weight of the uniform (guessing) distribution was low across conditions. Consistent with previous studies, the results suggest that memory precision depends both on the memory load and on the complexity of the stimuli.
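
Mixture models of this kind are typically fitted by maximum likelihood, with a Gaussian component capturing noisy memory of the target and a uniform component capturing random guesses. A minimal sketch under those assumptions (response range, starting values and data are all hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

RANGE = 180.0  # hypothetical span of the adjustment scale

def neg_log_lik(params, errors):
    """Mixture of a zero-mean Gaussian (memory) and a uniform (guessing)."""
    sigma, guess = params
    like = (1 - guess) * norm.pdf(errors, 0.0, sigma) + guess / RANGE
    return -np.sum(np.log(like))

def fit_mixture(errors):
    res = minimize(neg_log_lik, x0=[15.0, 0.1], args=(errors,),
                   bounds=[(1e-3, RANGE), (0.0, 1.0)])
    return res.x  # sigma (inverse precision) and guess rate

# Synthetic adjustment errors: mostly precise responses plus a few guesses.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.0, 12.0, 180),
                         rng.uniform(-RANGE / 2, RANGE / 2, 20)])
print(fit_mixture(errors))
```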

[2P134] Verification of the reliability of MEG source localization using VBMEG in visual short-term memory

Mitsunobu Kunimi, Nobuo Hiroe, Maro G. Machizawa and Okito Yamashita

Dept. of Computational Brain Imaging, Advanced Telecommunications Research Institute International (ATR), Japan

Previous studies using current-source estimation techniques in magnetoencephalography (MEG), for example beamformer and minimum-norm approaches, have reported involvement of the intra-parietal sulcus (IPS) and the intra-occipital sulcus (IOS) during the maintenance of visual information in visual short-term memory (VSTM) (e.g. Robitaille et al., 2010). However, such current-source estimates may be unreliable because of the underlying ill-posed nature of the inverse problem, provoking a need for test-retest verification. Here, the test-retest reliability of MEG source estimation by VBMEG (Sato et al., 2004), a source estimation algorithm with high spatial resolution, was examined on neural activities associated with VSTM. Five healthy young adults repeatedly performed a color-change detection task (two set sizes × two hemifields) while brain activity was recorded with MEG on two different days. Although there were some individual differences in the current sources reflecting the set-size effect, consistent activations were identified in the regions around the IPS and the IOS (BA 7, 19, 39, 40) for all participants across the different days. We provide further evidence for the IPS and the IOS as the neural basis of VSTM.

Funding: This study was supported by the National Institute of Information and Communications Technology and Grant-in-Aid for Young Scientists B # 15K17335 from the Japan Society for the Promotion of Science.

[2P135] Color affects memory not totally but shortly

Haruyuki Kojima and Ayasa Imura

Psychology, Kanazawa University

Red background color has been shown to enhance memory performance (e.g. Mehta & Zhu, 2009, Science), as has red illumination (Kojima, 2012, ECVP). The present study further investigated the influence of color on learning and memory. METHODS: In Experiment 1, participants were shown two-character non-words one by one in serial order, colored red, blue or black on a white background on a PC monitor. The characters were clear enough to read. Participants were asked to memorize and reproduce them (fifteen words). In Experiment 2, participants performed the task with black characters on colored backgrounds. In Experiment 3, the stimulus characters were all black on white, and participants were instructed to write down the words on paper with either a red, blue, or black pen. Twenty students with normal color vision participated in each experiment (within-subject design). RESULTS: Total performance did not differ among the three color conditions in any of the experiments. However, performance was lower for red than for the other colors at the middle serial positions in Experiment 2 (p < .05), whereas it was better with blue pens than with red or black pens for the last two words in Experiment 3 (p = .05). Color may thus affect short-term/working memory.

[2P136] Rapid Access to Visual and Semantic Representations in Iconic Memory

Jasmina Vrankovic, Veronika Coltheart and Nicholas Badcock

Department of Psychology, Macquarie University

We can easily understand the visual environment despite our eyes moving to take in new information three to four times per second. This rapid information flow may initially be registered in iconic memory, a brief high-capacity store containing literal visual representations. Semantic representations in iconic memory, however, have not previously been demonstrated. This study investigated whether visual and semantic representations can be accessed in the very early stages of visual memory. Arrays of six objects were presented for 50 ms, 150 ms, or 250 ms. Following array offset, a cue specified full-report (recall of all six objects) or partial-report (recall of one object). Experiments 1 and 2 investigated whether location information (a pointer to a spatial location) and semantic information (an instruction to report the object from a particular category) could cue recall. In both experiments, partial-report performance was significantly greater than full-report performance, and recall improved with longer exposure duration. Experiments 3 and 4 investigated the duration of visual and semantic representations by delaying cue presentation. Visual representations decayed significantly when the cue was delayed by 100 ms. Semantic representations did not decay for cue delays of up to 500 ms. These findings challenge the initial conceptualisation of iconic memory and its role in subsequent stages of memory.

[2P137] Integration of context and object semantic representations during rapid categorisation within and between the cerebral hemispheres

Anaïs Leroy1, Sylvane Faure2 and Sara Spotorno3

1Psychology, University of Nice Sophia-Antipolis/LAPCOS

2Laboratoire d’Anthropologie et de Psychologie Cognitives et Sociales (LAPCOS) University of Nice Sophia Antipolis France

3School of Psychology University of Aberdeen Scotland UK

Previous research has demonstrated the importance of context-object associations in rapid scene categorisation, showing facilitation arising from semantic consistency. We aimed to disentangle the perceptual and representational bases of this effect by briefly presenting the context and the object within the same image (Experiment 1) or in two separate, simultaneous images, with the object embedded in 1/f coloured noise (Experiments 2–4). Using a divided-visual-field paradigm, we also examined the role of the functional asymmetries of the cerebral hemispheres (unilateral presentations, Experiments 1–2) and of hemispheric co-engagement (bilateral presentations, Experiments 3–4) in image categorisation. Participants had to report both the context and the object. We found a consistency effect, although slightly reduced, even for separate presentations, suggesting that the semantic memory for context-object associations is activated partially regardless of whether the two levels are integrated in the same percept. While we did not find any hemispheric difference in the consistency effect, we found some evidence for a processing superiority of the right hemisphere for context information and of the left hemisphere for object information. Moreover, better performance for bilateral than unilateral presentations suggested a benefit due to interhemispheric interaction. Finally, better context than object categorisation supported a coarse-to-fine model of visual processing.

[2P138] The neural basis of serial behavioral biases in visual working memory

Joao M Barbosa1, Christos Constantinidis2 and Albert Compte1

1Systems Neuroscience, IDIBAPS

2Wake Forest School of Medicine

Bump-attractor models offer an elegant explanation for the physiology and the behavioral precision of working memory via diffusing bumps of activity. So far, this model has largely ignored the influence of previous trials, assuming a resetting of the circuit after the animal’s report. Nevertheless, previous memoranda have been shown to interfere attractively with newly stored locations, consistent with a bump-attractor perspective: instead of being reset, the circuit keeps old memory representations as activity bumps that interfere with future trials. To address the neural basis of this interference, we analyzed behavioral and prefrontal neural data from monkeys performing an oculomotor delayed response task. We found that monkeys showed a bias towards previously reported locations, which was attractive for previous reports very similar to the currently memorized location, and repulsive for more distant previous reports. Although this could be explained by interacting bump attractors, we found that neuronal activity during the fixation period was only partially consistent with this view: pairwise correlations, but not single-neuron activity, showed the expected pattern for diffusing bump dynamics. This shows that during fixation the prefrontal network is still imprinted with previous memories, possibly underlying serial behavioral biases in working memory.

Funding: Ministry of Economy and Competitiveness (Ref: BFU2012-34838 & FPI program to J.B.), AGAUR (Ref. SGR14-1265)
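
Two ingredients of the bump-attractor account discussed above can be sketched compactly: diffusion of the bump’s position during the delay (limiting precision) and a distance-dependent pull toward the previous trial’s report. The toy Python sketch below uses hypothetical parameters, is not the authors’ model, and implements only the attractive part of the reported attraction/repulsion profile:

```python
import numpy as np

rng = np.random.default_rng(1)

def delay_diffusion(theta0, delay_s, dt=0.01, diff=2.0):
    """Random walk of the bump's center during the delay: memory error
    grows with delay duration, as in diffusing bump-attractor models."""
    theta = theta0
    for _ in range(int(delay_s / dt)):
        theta += np.sqrt(2.0 * diff * dt) * rng.standard_normal()
    return theta

def serial_bias(target, prev_report, gain=0.1, width=40.0):
    """Attraction toward the previous report that peaks for nearby targets
    and fades with distance (a derivative-of-Gaussian bias curve)."""
    d = prev_report - target
    return target + gain * d * np.exp(-(d / width) ** 2)

# A trial: the new target is first biased toward the previous report,
# then the stored value diffuses over a 3-second delay.
start = serial_bias(target=90.0, prev_report=100.0)
print(delay_diffusion(start, delay_s=3.0))
```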

[2P139] Dissociable brain networks revealed by single-repetition learning in Braille reading and Braille writing-from-memory

Lora Likova, Christopher Tyler, Kristyo Mineff, Laura Cacciamani and Spero Nicholas

Brain Imaging Center, Smith-Kettlewell Eye Research Institute, USA

Introduction: Fundamental forms of high-order cognition, such as reading and writing, are usually studied in the context of vision. People without sight, however, use (non-visual) Braille reading (BR) and Braille writing (BW). Are there rapid learning changes reorganizing the recruitment of neural resources in these complex tasks? There have been no previous functional magnetic resonance imaging (fMRI) studies of BW; consequently, no comparative analysis of the learning dynamics of BW vs. BR exists. Here, we report the first study of BW, and of rapid learning reorganization in both BR and BW-from-memory. Methods: fMRI was conducted in a Siemens 3 T Trio scanner. Each of five paragraphs of novel Braille text describing objects, faces and navigation sequences was read in Braille, then reproduced twice by BW-from-memory, then read a second time (20 s/task). Results and Conclusions: Remarkably, in both tasks, a single repetition led to highly dissociable changes in the global patterns of activation across the cortex. Dramatic posterior-to-frontal shifts manifested as repetition suppression posteriorly (implying increased efficiency of lower-level processing) simultaneously with repetition enhancement frontally (implying the engagement of additional, higher-order cognitive processing) after only the single repeat. In many regions, robust activation either completely evaporated or appeared de novo between the original and the repeat epochs.

Funding: NIH/NEI (R01EY024056) to Lora Likova

[2P140] Temporal Processing of Visual Information and Its Influence on Visual Working Memory Representation

Turgut Coşkun and Aysecan Boduroglu

Social Sciences/Cognitive Psychology, Boğaziçi University

The prevailing account of visual information acquisition favors a coarse-to-fine order, such that lower spatial frequency (LSF) information is extracted earlier than higher spatial frequency (HSF) information. An alternative approach allows some flexibility in information acquisition: for example, top-down processes may modulate the initial usage of LSF (coarse) or HSF (fine) information. The aim of this study was to compare these two approaches, focusing on the construction of visual working memory (VWM) representations. For this purpose, we utilized a change detection paradigm. The results revealed a flexible order in VWM construction. In the first experiment, when upright faces were presented, an initial Configural-LSF and a later Featural-HSF association were observed. Further, both LSF and HSF information were available at a very early stage, after encoding the stimuli for 100 ms. In the second experiment, when face images were presented in inverted orientation, observers' performance was reduced to chance level at the 100 ms exposure duration for all conditions. Further, in the 500 ms condition, they tended to represent inverted faces in HSF and featurally rather than in LSF and configurally. Thus, there was no fixed coarse-to-fine order in VWM construction in face processing.

[2P141] Remembering who was where: Visuospatial working memory for emotional faces and the role of oculomotor behavior

Sara Spotorno and Margaret Jackson

School of Psychology, University of Aberdeen, Scotland

Here we investigated for the first time face identity-location binding in visuospatial working memory (VSWM), and examined the influence of WM load and emotional expression. We measured eye movements during encoding of faces presented in random locations on a touchscreen, with WM loads 1 to 4 and angry vs. happy expressions. At retrieval, participants had to touch and drag a single neutral test face, centred on the screen, back to its original position. Performance was measured as (1) accuracy, whether the test face was relocated within a 7-deg radius of its original face centre, and (2) precision of relocation within that region. We found accuracy and precision impairments as load increased, and an accuracy advantage for happy faces. Oculomotor behaviour mainly affected accuracy at higher loads (3 or 4), independently of face emotion. There was an overall benefit of longer mean fixation duration on the tested face, and in particular of a longer first ocular inspection, suggesting a crucial role of information gathering especially during early face encoding. Accuracy was also improved when the tested face was one of the last to be fixated, indicative of a recency effect that could protect against interference and decay.

Funding: Economic and Social Research Council grant ES/L008921/1

[2P142] Exploring the shape-specificity of memory biases in color perception

Toni P Saarela1 and Maria Olkkonen2

1Institute of Behavioural Sciences, University of Helsinki

2Durham University, UK

Background: Perceived hue exhibits a memory-dependent central tendency bias: The perceived hue of a stimulus held in memory shifts towards the average hue of recent stimulus history. We tested whether different shapes elicit unique biases when their hue distributions differ, or whether all biases are towards the average hue across shapes. Methods: Observers compared the hue of two stimuli in a 2IFC task. A 2-second delay separated the reference (first) and test (second) intervals. Two shapes, a circle and a square, were used on different trials. Both had three reference values ranging from blueish to greenish in CIELAB color space; circles were on average greener and squares bluer. Test hue was varied, and on each trial the observer indicated whether it appeared bluer or greener than the reference. Psychometric functions were fit to the proportion-greener data to estimate the perceived hue of the memorized reference. Results: All observers showed a memory bias: Blue hues were remembered greener than veridical, and vice versa. Shape had no systematic effect: Perceived hue was biased towards the average hue of all stimuli, not towards shape-specific averages. Conclusion: The memory bias for hue with simple 2D shapes depends on the overall, not shape-specific, hue distribution.

Funding: Supported by the Academy of Finland grant 287506.
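
The central-tendency bias described above can be summarized as a weighted pull of the remembered hue toward the mean of the recent stimulus history. A minimal sketch with hypothetical hue angles and weight:

```python
def remembered_hue(stimulus_hue, history_mean, w=0.2):
    """Central-tendency model: the remembered hue is pulled toward the
    mean hue of recent stimulus history with weight w."""
    return (1 - w) * stimulus_hue + w * history_mean

# Hypothetical hue angles (deg): references bluer than the history mean
# are remembered greener, and vice versa, as reported above.
history_mean = 180.0
for reference in (160.0, 180.0, 200.0):
    print(reference, remembered_hue(reference, history_mean))
```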

[2P143] Topography of memory interference in visuo-spatial working-memory

David S Bestue, João Barbosa and Albert Compte

Theoretical Neurobiology, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Catalonia

Visuo-spatial working memory (vsWM) experiments suggest that prefrontal persistent activity underlies vsWM storage. Bump-attractor models elegantly link prefrontal physiology and behavioral vsWM precision via diffusing bumps of neural activity. We previously validated behaviorally a model-predicted memory bias whereby memory traces attract or repulse each other during the delay period. Here, we begin to extend our bump-attractor model from one to two dimensions by testing attraction/repulsion biases in the radial and angular dimensions. We conducted a vsWM task in which fixating subjects remembered two nearby colored dots through a delay and reported the location of the color-cued dot. In angle-testing trials the stimuli had the same radial location but different azimuthal angles, while in radial-testing trials the stimuli were separated only radially from fixation. In angle-testing trials we found changes in the delay-dependent bias between the same azimuthal angles at different radial positions, suggesting an important effect of distance from fixation in vsWM rather than a radially maintained angle effect. In radial-testing trials, similar attraction/repulsion biases point to a bi-dimensional mapping of vsWM biases. Quantitatively, we found that similar visual distances entailed different delay-dependent biases in each set of trials, suggesting a non-Cartesian mapping of vsWM.

Funding: Ministry of Economy and Competitiveness (Ref: BFU 2012-34838), AGAUR (Ref: SGR14-1265)

[2P144] Left extrastriate body area shows sensitivity to the meaning of symbolic gestures: evidence from fMRI adaptation

Agnieszka Kubiak and Gregory Kroliczak

Institute of Psychology, Adam Mickiewicz University in Poznan

A functional magnetic resonance imaging (fMRI) adaptation paradigm was used to test whether the semantic information contained in object-related transitive gestures and intransitive symbolic gestures is represented differently in the temporal and parietal cortex. Participants watched back-to-back videos (2.75 s duration) in which the meaning of the gesture was either repeated or changed, with movement kinematics controlled for. The just-observed (typically second) gesture was then imitated. Attention was controlled by showing trials with a single video. fMRI adaptation – signal decreases, or repetition suppression – for watching both gesture categories was revealed in the lateral occipital cortex. Yet intransitive- vs. transitive-gesture-specific adaptation was observed only in the left caudal middle temporal gyrus (cMTG) and the rostral extrastriate body area (rEBA). Repetition enhancement, i.e. a signal increase, associated with watching transitive gestures was shown in the precuneus. Our outcomes support traditional views that the cMTG represents the concepts of actions, or the “conceptual how”, and that the precuneus supports visuospatial processing. Notably, rEBA repetition suppression is consistent with sensitivity to action meaning, or the “semantic what”, of actions. Thus, fMRI adaptation reveals a higher-order function of the rEBA and its seminal role in the semantic network.

Funding: Maestro NCN grant 2011/02/A/HS6/00174 to GK

[2P145] To tell or not to tell: gender-related information modulates visual social cognition in healthy women and breast cancer patients

Alexander N Sokolov, Marina A Pavlova, Sara Y Brucker, Diethelm Wallwiener and Elisabeth Simoes

Women's Health Research Institute, Department of Women's Health, Eberhard Karls University of Tübingen Medical School and University Hospital

Implicit negative information perceived as a threat impedes visual social cognition (e.g. Pavlova et al., 2014): telling participants that men are usually better than women on the event arrangement (EA) task (on which there are no initial gender differences) drastically reduces women’s performance. When diagnosed with breast cancer, women face a great deal of threatening information that may hinder their cognition, decision making and, eventually, coping with the disease. We examined whether gender-related information affected performance on a visual social cognition task in patients with mastocarcinoma. Two separate groups of patients (aged 40–55 years) and two control groups of matched healthy women were administered the EA task with the standard instruction. In addition, one patient and one control group were told that men were commonly better on the task. With negative information, patients scored lower than controls, and lower than patients with the standard instruction, indicating effects of both disease and information. Remarkably, the lowest scores occurred in patients given negative information. The outcome shows for the first time the impact of disease and information on visual social cognition, presumably blocking visual cognitive processing. This offers novel insights into improving physician-patient communication for enhanced visual cognitive processing in oncologic and other diseases.

[2P146] Does better encoding lead to slower forgetting?

Haggar Cohen and Yoni Pertzov

Psychology Department, The Hebrew University of Jerusalem, Israel

Visual Working Memory (VWM) is a crucial and limited cognitive ability. Recent studies have shown that information in VWM is rapidly forgotten, but it is still unclear what processes modulate the rate of forgetting. Here we assessed the influence of encoding advantage and top-down predictions on rapid forgetting, using a delayed-estimation task. Four oriented bars were displayed, but one appeared slightly before the rest. Following a variable retention interval (one or six seconds), participants estimated the orientation of one of the bars by rotating a probe bar. In the first experiment, the bar that was given an encoding advantage was probed 25% of the time (the encoding advantage was not predictive), while in the second experiment it was probed 85% of the time (the encoding advantage was now predictive). We found that longer delays and shorter display times led to larger estimation errors, but the two factors did not interact. In the second experiment, however, the interaction was significant; hence, a predictive advantage during encoding led to slower forgetting. We conclude that better encoding increases the overall precision of recall, but does not automatically lead to slower forgetting. On the other hand, top-down priority does modulate the rate of rapid forgetting.

[2P147] Object maintenance beyond their visible parts in working memory: Behavioral and ERP evidence

Siyi Chen, Thomas Töllner, Hermann J. Müller and Markus Conci

Department of Psychology, Ludwig-Maximilians-Universität, Munich

The present study investigated the relationship between working memory (WM) storage capacity and processes of object completion for memorizing partly occluded shapes. To this end, we used a change-detection paradigm in which to-be-memorized composite objects (notched shapes abutting an occluding shape) were primed to induce either a completed object or, alternatively, a mosaic interpretation, that is, an uncompleted representation of the presented shapes (see Chen, Müller, & Conci, 2016, J. Exp. Psychol. Hum. Percept. Perform.). Our results showed an effect of completion despite constant visual input: more accurate responses were obtained for completed as compared to mosaic representations when observers were required to memorize two objects, but this effect vanished with four to-be-memorized items. Moreover, a comparable completion effect was also evident in WM-related EEG measures during the retention interval. Specifically, the amplitude of the contralateral delay activity was larger for completed as compared to mosaic interpretations (again, in particular, for the smaller memory set size). In sum, this study demonstrates that WM capacity is characterized both by the number and by the perceptual fidelity of the represented objects. These findings support the view of WM as reflecting a continuous resource, with capacity limitations depending on the structured representation of to-be-remembered objects.

Martina Poletti1, Michele Rucci1 and Marisa Carrasco2

1Psychological and Brain Science, Boston University

2New York University

Vision is not homogeneous within the foveola, the high-acuity region of the fovea. Microsaccades are finely controlled to compensate for this inhomogeneity by bringing the locus of highest visual acuity in the foveola onto salient objects. But can such high-level control also extend to covert attention? Measuring shifts of attention within the foveola is challenging because fixational eye movements displace the retinal stimulus by an area as large as the foveola itself. We circumvented this problem by using a custom apparatus to stabilize the stimulus on the retina. Our findings show that attention can be selectively allocated toward objects separated by only 20 arcminutes in the foveola, leading to faster detection of targets presented at the attended location. Covert attention also enhanced visual discrimination within the foveola: in a spatial cuing task, observers reported the orientation of a tiny bar that could appear at four different locations, 14 arcminutes from the center of gaze. Performance was higher and reaction times faster when the cue was informative about the target’s location than when it was not informative or provided wrong information. Our findings reveal that the resolution of attention is much finer than assumed thus far.

Funding: NSF- BCS-1534932, NSF-ORAPLUS-1420212, NIH EY18363

[21S102] Enhanced sensitivity to scene symmetry as a consequence of saccadic spatio-temporal sampling

Andrew I. Meso1, Jason Bell2, Guillaume S. Masson1 and Anna Montagnini1

1Institut de Neurosciences de la Timone, CNRS/Aix-Marseille Université

2University of Western Australia

Mirror symmetry is everywhere around us. Perhaps as a consequence, humans and other animals are highly sensitive to it. Our recent work has demonstrated that the presence of symmetry in synthetic scenes consistently distorts the directions of spontaneously occurring saccades, aligning them along the axis of symmetry. This key result replicates across several task conditions, including free exploration and active axis discrimination, as well as under dynamically refreshed presentations, leading us to conclude that an underlying automated mechanism is at play. To explore this, we use the dynamically recorded eye movements for each instance of the stimulus to jitter the image and recreate each resulting spatio-temporal retinal image. We then simulate the temporal integration of this dynamic image and estimate the orientation energy present. The simulated time scale of integration determines whether the dots remain spatially independent or blur into elongated lines predominantly parallel to the axis of symmetry. In the latter case, symmetry becomes easier to detect with standard oriented luminance filter models. We propose and discuss an appropriate temporal component to standard symmetry models, which exploits the additional orientation information afforded by the saccades.

Funding: Grant: SPEED, ANR-13-SHS2-0006 (GSM, AM), Grant: REM, ANR-13-APPR-0008-02 (AM), The CNRS & ARC #DP110101511 and #LP130100181 (JB)
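
The temporal-integration account proposed above can be illustrated by averaging the jittered retinal images over time and reading out orientation energy with oriented luminance filters. The sketch below is a schematic Python implementation with hypothetical filter parameters; `frames` stands for the reconstructed retinal images, and the demo image is synthetic:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, sigma=4.0, freq=0.15, size=25):
    """Even-phase Gabor filter oriented at angle theta (radians)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def orientation_energy(image, n_orient=8):
    """Total filter-response energy at each of n_orient orientations."""
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    return [np.sum(fftconvolve(image, gabor_kernel(t), mode="same") ** 2)
            for t in thetas]

# Temporal integration: average the jittered retinal images so that dots
# smear into streaks parallel to the symmetry axis, e.g.:
#   integrated = frames.mean(axis=0)   # frames: (n_frames, H, W) array

# Synthetic demo: a vertical streak (luminance varying across x) drives
# the theta = 0 filter most strongly, so the argmax below is 0.
img = np.zeros((64, 64))
img[16:48, 32] = 1.0
print(np.argmax(orientation_energy(img)))
```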

[21S103] Perceptual re-calibration through transsaccadic change

Matteo Valsecchi and Karl Gegenfurtner

Department of General Psychology, Justus-Liebig-Universität Giessen

Humans experience the visual world as being relatively uniform and unchanging as we move our eyes. This may appear puzzling considering the large inhomogeneities between the representations of the fovea and periphery in our visual system. In a series of experiments we demonstrate that exposing observers to consistent transsaccadic changes in the size of the saccadic target can generate corresponding changes in perceived size, so that over the course of a few hundred trials the relative size of a peripheral stimulus appears smaller if a transsaccadic size reduction was experienced. This re-calibration of perceived size can last at least until the next day and its effects are evident also in the opposite hemifield. Furthermore, the re-calibration is not induced if the transsaccadic change is applied to stimuli from which gaze is being diverted, but it can be induced using stimulus motion rather than gaze displacement. Overall our results point to the fact that our visual system maintains our impression of uniformity across the visual field through a continuous recalibration process. The prediction of postsaccadic foveal appearance based on peripheral input and the associated prediction error seem to be the main, though not the exclusive, source of the re-calibration signal.

Funding: Deutsche Forschungsgemeinschaft DFG SFB/TRR 135 and EU Marie Curie Initial Training Network “PRISM” (FP7-PEOPLE-2012-ITN; grant agreement 316746).

[21S104] Why do we follow targets with our eyes during interception?

Cristina de la Malla, Jeroen B. J. Smeets and Eli Brenner

Department of Human Movement Sciences, Vrije Universiteit Amsterdam

People usually look at objects with which they intend to interact. An obvious advantage of doing so is that it ensures one obtains the best possible spatial information about the object. A consequence of following moving objects with one’s eyes is that this changes the way in which the object’s motion is judged. Rather than judging its motion from the retinal slip of its image, one must judge it from signals related to the movements of the eyes. These could be the retinal slip of the background, but could also involve extra-retinal signals. A particular advantage of relying on information about one’s eye movements to judge how a target is moving is that retinal slip of the target’s image provides direct feedback about errors in keeping one’s eyes on the target. This can be used to correct any errors in the initial estimate of the target’s speed. Such corrections could lead to more precise interception that is more robust with respect to biases in visual processing. We will provide evidence that indeed it does: following a target that one is trying to intercept with one’s eyes makes one slightly more precise, and much less sensitive to biases.

Funding: This work was supported by grant NWO 464-13-169 from the Dutch Organization for Scientific Research.

[21S105] The role of allocentric information when walking towards a goal

Danlu Cen, Simon Rushton and Seralynne Vann

School of Psychology, Cardiff University

Do allocentric position cues play any role in the visual guidance of walking towards a target? To date, egocentric direction and optic flow have been the primary focus of research. Here we addressed that oversight. Participants wearing prism glasses walked to an LED target on the far side of a pitch-black room. They were separated into two groups: (1) participants in the familiar group underwent task preparation within the test room and so were familiar with the environment prior to walking; (2) participants in the unfamiliar group were prepared outside the test room and so were unfamiliar with the room prior to walking. The two groups took different curving trajectories to the target. The curvature of the trajectory taken by the unfamiliar group was as predicted by the angular displacement of the prism. The curvature of the familiar group was not; it was significantly smaller. The effect of familiarity proved robust in a series of follow-on experiments that sought to isolate the roles of different cues. The findings suggest that observers with prior exposure to the environment may have formed a mental representation of the scene structure and spatial layout, which may contribute to the guidance of walking.

[21S106] From multisensory integration to new rehabilitation technology for visually impaired children and adults

Monica Gori, Giulia Cappagli, Elena Cocchi, Gabriel Baud-Bovy and Sara Finocchietti

U-VIP Unit for Visually Impaired People, Istituto Italiano di Tecnologia

Our research has highlighted that blind persons have problems in understanding the spatial relations between sounds (Gori et al., 2014; 2015; Cappagli et al., 2015) and tactile information about object orientation (Gori et al., 2010), and in encoding sound motion (Finocchietti et al., 2015). Early onset of blindness adversely affects psychomotor, social and emotional development. In 2002, about 1.4 million children below 15 years of age worldwide had a visual impairment (Resnikoff et al., 2002). To date, most of the available technology (e.g. Kajimoto et al., 2003) is not suitable for young children with visual disability. We therefore developed a rehabilitative device for very young visually disabled children: the ABBI device (Audio Bracelet for Blind Interaction; www.abbiproject.eu). ABBI is a new rehabilitative solution to improve spatial, mobility and social skills in visually impaired children. It is based on the idea that audio feedback related to body movement can be used to improve spatial cognition. We performed a three-month longitudinal study in 24 children and a one-day study in 20 adults with visual disability. Our results suggest that the association of audio and motor signals provided by ABBI can improve the spatial cognition of visually impaired children and adults.

[21S201] Colour Physiology in Subcortical Pathways

Paul R Martin

Save Sight Institute and Centre for Integrative Brain Function, University of Sydney

Convergent results from anatomy, physiology, and molecular biology suggest that red-green colour vision is a relatively recent addition to the sensory capacity of primates, having emerged subsequent to the evolution of high-acuity foveal vision. Signals serving red-green colour vision are carried together with high-acuity spatial signals on the midget-parvocellular pathway. The primordial blue-yellow axis of colour vision has, by contrast, poor spatial acuity and is served by the evolutionarily primitive koniocellular visual pathway. In this symposium presentation I will review our studies of the spatial and chromatic properties of parvocellular and koniocellular pathways, and show recent results concerning the influence of brain rhythms on blue-yellow signals.

Funding: Australian National Health and Medical Research Council Grants 1081441, Australian Research Council grant CE140100007.

[21S202] Color as a tool to uncover the organizational principles of object cortex in monkeys and humans

Rosa Lafer-Sousa, Nancy Kanwisher and Bevil Conway

Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA

The existence of color-processing regions in extrastriate cortex of humans and macaque monkeys is well established. But their location in the cortex relative to other functional regions, their selectivity for color compared with other properties (shape or category), and their relationship across species remain unclear. I will discuss recent imaging work in humans (Lafer-Sousa et al., 2016) using color as a tool to test a functional organizational plan across primate species for object-processing cortex, as suggested by imaging data in monkeys (Lafer-Sousa and Conway, 2013). I will argue that a comparison of the monkey and human results suggests a broad homology that validates the use of macaques as a model for human vision, provides insight into the computational goals of object cortex, and suggests how it evolved. The results are consistent with a model of object cortex characterized by parallel multi-stage processing of color, shapes and places, and support the idea that inferior temporal cortex can be carved into 3 or 4 somewhat separate areas. The work uncovers an extensive network of higher-order brain regions processing color. I will speculate about the functional role of these regions, informed by psychophysical observations and data available from the patient literature.

Funding: NIH: EY13455, EY023322, 5T32GM007484-38. NSF: CCF-1231216, 1353571, GRFP

[21S203] Neural processing of color in higher cortical areas

Hidehiko Komatsu

Division of Sensory and Cognitive Information, National Institute for Physiological Sciences

Human clinical observations suggest that higher visual areas play critical roles in color perception. We are gradually assembling the pieces of information needed to build an integrated view of the functional organization of color processing in these higher areas. A significant step appears to occur in the primary visual cortex, where a nonlinear transformation converts the two-axis representation of color into a multi-axis representation in which neurons tuned to various directions in color space are formed. Such multi-axis color representation appears to be a universal principle of color representation across the visual cortical areas and is elaborated in higher areas. In the inferior temporal cortex of the macaque monkey, neurons tuned to a small range of hue exhibit properties closely associated with color perception. Neural mapping and fMRI studies in macaques are gradually revealing a detailed picture of the functional organization of the higher cortical areas in relation to color, in which a constellation of multiple subregions is observed. Our recent study (Namima et al., J Neurosci 2014) has shown important differences between these subregions in the way color signals are represented.

Funding: JSPS KAKENHI Grant, JST COI Program

[21S204] Understanding color preferences: from cone-contrasts to ecological associations

Karen B Schloss

Cognitive, Linguistic, and Psychological Sciences, Brown University

Fundamental questions in color cognition concern how and why colors influence thoughts, feelings, and behavior. Much of the research has focused on color preferences, but we are only beginning to understand why color preferences exist and how they are formed. The central role of cone-opponency in color perception makes it an appealing framework for investigating color preference. Hurlbert and Ling (2007) led this approach, predicting hue preferences with weights along the cone-contrast axes (L-M, S-LM) and sex differences with differential weighting along the L-M axis (attributed to an evolutionary division of labor in hunter-gatherer societies). However, subsequent studies have challenged this approach, demonstrating its weaker performance for broader samples of colors (Ling & Hurlbert, 2009; Palmer & Schloss, 2010), the lack of L-M sex differences in infants (Franklin et al., 2010) and other cultures (Taylor et al., 2013; Yokosawa et al., 2015), and its comparable performance to models with non-biologically based axes (e.g., CIExyY) (Sorokowski et al., 2014). Although the cone-contrast model can describe color preferences, it does not provide a causal explanation (Schloss et al., 2015). An alternative is that color preferences are determined by ecological experiences with colored objects/entities (Ecological Valence Theory; Palmer & Schloss, 2010).

[21S205] Color Psychophysics in the Distal Stimulus

David Brainard

Department of Psychology, University of Pennsylvania

The scene illumination and the surface reflectances of objects in the scene both influence the spectrum of the reflected light: information about these distal scene factors is confounded in the retinal image. To provide a stable perceptual representation of object color thus requires that the visual system make perceptual inferences from the inherently ambiguous proximal stimulus. This has been widely studied using adjustment methods that characterize object color appearance across changes in illumination. To gain additional traction, we have (in collaboration with Hurlbert’s lab) measured thresholds for discriminating changes in scene illumination. On each trial, subjects choose which of two test scenes is illuminated differently from a reference scene, with the illumination change governed by a staircase procedure, and thresholds are extracted from the data. Two findings emerge: i) thresholds in different illuminant-change chromatic directions vary systematically with the ensemble of surface reflectances present in the scene; ii) shuffling the locations of the surfaces as the illumination changes elevates thresholds, but does not make the task impossible. Measurement of illumination discrimination thresholds will allow determination of how efficiently the visual system uses the information available at various sites along the early visual pathways to make discriminations about the color properties of the distal stimulus.
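
The abstract does not specify the staircase rule, so the sketch below assumes a standard 1-up/2-down rule (converging near 70.7% correct); all names are hypothetical:

    # Illustrative sketch (assumed 1-up/2-down rule, not necessarily the
    # procedure used in the experiments).
    import numpy as np

    class Staircase:
        def __init__(self, start_level, step):
            self.level = start_level          # illuminant-change magnitude
            self.step = step
            self.n_correct = 0                # consecutive correct responses
            self.last_direction = 0
            self.reversals = []

        def update(self, correct):
            if correct:
                self.n_correct += 1
                if self.n_correct == 2:       # two correct: make it harder
                    self._move(-1)
            else:                             # one error: make it easier
                self._move(+1)

        def _move(self, direction):
            if self.last_direction and direction != self.last_direction:
                self.reversals.append(self.level)   # log reversal points
            self.last_direction = direction
            self.n_correct = 0
            self.level = max(0.0, self.level + direction * self.step)

        def threshold(self):
            return float(np.mean(self.reversals[-6:]))

The threshold is then taken as the mean illuminant-change magnitude over the last few reversal points.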

Funding: EY R01 10016

[21T301] Fast figure-ground organization in visual cortex for complex natural scenes

Rüdiger von der Heydt1 and Jonathan R. Williford2

1Krieger Mind/Brain Institute, Johns Hopkins University

2Netherlands Institute for Neuroscience

Assignment of border-ownership is essential for understanding images of 3D scenes. To see if the visual cortex can perform this task in the real world we studied neurons in monkey cortical area V2 with static images of natural scenes. Each neuron was tested with a large sample of scenes. Contrast borders corresponding to object contours were presented in the receptive fields of edge selective neurons at the proper orientation. Responses were analyzed by regression and border-ownership selectivity was defined as the effect of side of object (object location relative to the contour). About half of the neurons showed a significant main effect of border ownership, and the mean border-ownership signal emerged at ∼70 ms, only about 30 ms after the onset of responses in V2. But how consistent are the neural signals across scenes? We calculated the distribution of border-ownership signals for each recorded neuron and corrected for effects of random response variation. We found that a substantial proportion of neurons were over 80% consistent across scenes and some were over 90% consistent. Thus, the visual cortex seems to understand the scene structure even in complex natural images. How it performs this task so fast remains a puzzle.

Funding: ONR N000141010278, NIH EY02966, NIH EY016281

[21T302] Serial dependence in context: the role of summary statistics

Mauro Manassi, Wesley Chaney, Alina Liberman and David Whitney

Department of Psychology, University of California, Berkeley, CA, USA

We experience the visual world as a continuous and stable environment, despite rapidly changing retinal input due to eye movements and noise. Recent studies have shown that orientation and face perception are biased toward previously seen stimuli (Fisher & Whitney, 2014; Liberman et al., 2014). This serial dependence effect was proposed as a mechanism to facilitate perceptual stability, compensating for variability in visual input. Although serial dependence was shown to occur between single objects, it remains unknown whether serial dependence can occur in the complex environment we experience in everyday life. Here, we tested whether serial dependence can occur between summary statistical representations of multiple objects. We presented a 3x3 array of nine Gabors with random local orientations, and asked observers to adjust a bar’s orientation to match the ensemble orientation. We found evidence for serial dependence: the reported ensemble orientation was pulled toward the orientation of the previous Gabor array. Further controls showed that serial dependence occurred at the ensemble level, and that observers averaged ∼60% of the Gabors per trial. Our results show that serial dependence can occur between summary statistical representations and, hence, provide a mechanism through which serial dependence can maintain perceptual stability in complex environments.
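
A common way to quantify such a pull, sketched here under assumed analysis details (the abstract does not give the fitting procedure), is to fit a derivative-of-Gaussian to adjustment errors as a function of the previous minus the current ensemble orientation:

    # Illustrative sketch of a standard serial-dependence analysis
    # (assumed here, not taken from the paper).
    import numpy as np
    from scipy.optimize import curve_fit

    def wrap(x, period=180.0):
        """Wrap orientation differences into (-period/2, period/2]."""
        return (x + period / 2) % period - period / 2

    def dog(delta, amplitude, width):
        """Derivative-of-Gaussian: peak `amplitude` (deg) of the pull
        toward the previous trial, with tuning `width` (deg)."""
        return amplitude * delta * np.exp(-(delta / width) ** 2)

    def serial_dependence(reported, ensemble):
        """reported, ensemble: per-trial orientations in degrees."""
        error = wrap(np.asarray(reported) - np.asarray(ensemble))
        delta = wrap(np.roll(ensemble, 1) - np.asarray(ensemble))
        params, _ = curve_fit(dog, delta[1:], error[1:], p0=(2.0, 20.0))
        return params   # positive amplitude = pull toward previous array

A reliably positive fitted amplitude indicates that reports are attracted toward the preceding array, i.e. serial dependence at the ensemble level.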

Funding: Mauro Manassi was supported by the Swiss National Science Foundation fellowship P2ELP3_158876.

[21T303] Responses of macaque ganglion cells to natural scenes: spatial and temporal factors

Barry Lee and Manuel Schottdorf

Neurobiology, MPIBPC

We have previously described responses of macaque ganglion cells to stimuli derived from natural scenes (van Hateren et al., J. Neurosci., 2002), using a simplified stimulus modulated only in time and color. Here we compare responses to a full spatiotemporal video with responses to the simplified stimulus. A flower-show video, centered over the receptive field, was played back to ganglion cells and repeated up to six times (150 frames/sec, 256x256 pixels). For the simplified stimulus, the average of the central pixels was displayed as a uniform field. Coherence (bit rate) functions of responses were very similar for the full and simplified stimuli, and impulse trains under the two conditions were highly correlated. Receptive fields (RFs) derived from reverse correlation (luminance and chromatic) showed little indication of center-surround structure. For MC cells, the temporal MTF is strongly bandpass: high spatial-frequency components move at high temporal frequencies, amplifying the response to fine detail, and this swamps any effect of RF structure on responses. For PC cells, the response was largely driven by the |L-M| signal and no spatial opponency was present. We conclude that cell responses to natural scenes seem driven by temporal modulation as the eye scans the scene, rather than by spatial structure in the stimulus or RF.
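
Reverse correlation here refers to the standard spike-triggered-average technique; a minimal sketch (our illustration, with hypothetical names):

    # Illustrative sketch of spike-triggered averaging, the standard
    # reverse-correlation estimate of a receptive field.
    import numpy as np

    def spike_triggered_average(frames, spike_counts, lag=5):
        """frames: (T, H, W) stimulus movie; spike_counts: (T,) response.
        Returns the average frame `lag` samples before each spike."""
        sta = np.zeros(frames.shape[1:])
        total = 0
        for t in range(lag, len(frames)):
            if spike_counts[t] > 0:
                sta += spike_counts[t] * frames[t - lag]
                total += spike_counts[t]
        return sta / max(total, 1)

An STA without a clear antagonistic surround, as reported above, indicates little recoverable center-surround structure under these viewing conditions.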

Funding: NEI 13115

[21T304] Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks

Thomas S Wallis, Christina M. Funke, Alexander S. Ecker, Leon A. Gatys, Felix A. Wichmann and Matthias Bethge

Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen

Distortions of image structure can go unnoticed in the visual periphery, and objects can be harder to identify (crowding). Is it possible to create equivalence classes of images that discard and distort image structure but appear the same as the original images? Here we use deep convolutional neural networks (CNNs) to study peripheral representations that are texture-like, in that summary statistics within some pooling region are preserved but local position is lost. Building on our previous work generating textures by matching CNN responses, we first show that while CNN textures are difficult to discriminate from many natural textures, they fail to match the appearance of scenes at a range of eccentricities and sizes. Because texturising scenes discards long range correlations over too large an area, we next generate images that match CNN features within overlapping pooling regions (see also Freeman and Simoncelli, 2011). These images are more difficult to discriminate from the original scenes, indicating that constraining features by their neighbouring pooling regions provides greater perceptual fidelity. Our ultimate goal is to determine the minimal set of deep CNN features that produce metameric stimuli by varying the feature complexity and pooling regions used to represent the image.
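
The core operation in this family of models is matching Gram matrices of CNN feature maps. The sketch below (generic PyTorch, not the authors' implementation; pooling sizes are hypothetical) restricts the matching to overlapping pooling windows so that longer-range correlations are retained:

    # Illustrative sketch: Gram-matrix matching within overlapping
    # pooling regions (cf. Freeman & Simoncelli, 2011).
    import torch

    def gram(features):
        """features: (C, H, W) CNN activations -> (C, C) Gram matrix."""
        c, h, w = features.shape
        f = features.reshape(c, h * w)
        return f @ f.T / (h * w)

    def pooled_gram_loss(feat_syn, feat_orig, pool=32, stride=16):
        """Summed mismatch of local Gram matrices between a synthesized
        and an original image's feature maps."""
        loss = 0.0
        _, h, w = feat_orig.shape
        for y in range(0, h - pool + 1, stride):
            for x in range(0, w - pool + 1, stride):
                g_s = gram(feat_syn[:, y:y + pool, x:x + pool])
                g_o = gram(feat_orig[:, y:y + pool, x:x + pool])
                loss = loss + torch.mean((g_s - g_o) ** 2)
        return loss

Overlapping windows are what distinguish this from plain texturisation: each region's statistics are constrained by its neighbours, which is why the resulting images are harder to discriminate from the originals.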

Funding: Funded in part by the Alexander von Humboldt Stiftung, German Federal Ministry of Education and Research (BMBF), and the German Science Foundation (DFG)

[21T305] A numerosity-processing network throughout human association cortex

Ben Harvey1 and Serge Dumoulin2

1Faculty of Psychology and Education Sciences, University of Coimbra

2Spinoza Centre for Neuroimaging Amsterdam

Perception of numerosity (the number of visual objects in a set) and other quantities is implicated in cognitive functions including foraging, attention control, decision-making and mathematics. We hypothesize that numerosity-selective responses are widely distributed throughout human association cortices, to allow interactions with multiple cognitive systems. Using ultra-high-field (7 T) fMRI and neural model-based population-receptive field analyses, we describe numerosity-selective neural populations organized into six widely separated topographic maps in each hemisphere. These were found in visually responsive areas implicated in object recognition and motion perception (occipito-temporal cortex), attention control (parietal cortex), and decision-making and mathematics (prefrontal cortex). Left hemisphere maps typically contained more low numerosity preferences, with more high numerosity preferences in the right hemisphere. Within each hemisphere, anterior maps contained a smaller proportion of high numerosity preferences than posterior maps, and maps differed considerably in size. Unlike sensory topographic maps such as visual field maps, numerosity tuning widths were very similar between these numerosity maps. All numerosity maps were in visually-responsive areas, but their placement and organization did not follow that of particular visual field maps. This similar representation of numerosity in many brain areas suggests a broad role for quantity processing in supporting many perceptual and cognitive functions.
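
The population-receptive-field models typically fitted to such data assume Gaussian tuning on a logarithmic numerosity axis; a minimal sketch (illustrative only, with hypothetical parameter values):

    # Illustrative sketch of a log-Gaussian numerosity tuning model.
    import numpy as np

    def numerosity_response(n, preferred_n, log_sigma):
        """Predicted response of a population tuned to `preferred_n`,
        with Gaussian tuning in log(numerosity) of width `log_sigma`."""
        return np.exp(-(np.log(n) - np.log(preferred_n)) ** 2
                      / (2 * log_sigma ** 2))

    # e.g. a population preferring 3 items responds less to 7 items:
    ns = np.array([1, 2, 3, 5, 7])
    print(numerosity_response(ns, preferred_n=3, log_sigma=0.4))

The similar tuning widths across maps noted above correspond to similar fitted values of the width parameter, in contrast to visual field maps where pRF sizes vary systematically.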

Funding: Supported by Netherlands Organization for Scientific Research grants #452.08.008 and #433.09.223 to SD, and by Portuguese Foundation for Science and Technology grant #IF/01405/2014 to BH.

Lucy J Spencer, Alex Wade and Karla Evans

Department of Psychology, University of York, UK

Humans can rapidly (∼13 ms) extract ‘gist’ (global image and summary statistics, including semantic categories) from visual scenes. This allows rapid extraction of information for multiple categories, but these outputs can interfere destructively depending on the task at hand (Evans et al., 2011). We investigated the neural correlates of gist processing using a rapid event-related fMRI design, presenting (200 msec) a linear combination of noise masks and ‘face’ and/or ‘place’ images in four quadrants of the visual field simultaneously. Observers’ task was to indicate the presence and quadrant of a pre-defined target category. We measured responses in pre-localised cortical regions and conducted additional whole-brain analyses. Category-selective activation in extrastriate areas supports the involvement of ‘face’ and ‘place’ areas in gist perception. No top-down-driven activation at target locations was observed in V1, consistent with the observation of gist extraction without the ability to localize the target. Signal-detection analysis indicates that activity in place-selective areas predicts target perception (hits, false-alarms) while activity in face areas predicts the presence of the target itself (hits, misses). Finally, activity in the frontal and place-selective areas was suppressed when an additional distractor stimulus was present, reflecting the destructive signal collision observed behaviourally.

[21T307] Giessen's hyperspectral images of fruits and vegetables database (GHIFVD)

Robert Ennis, Matteo Toscani, Florian Schiller, Thorsten Hansen and Karl Gegenfurtner

General Psychology, Justus-Liebig University Giessen, Germany

Vision is tuned to the environment in which we evolved. For example, it has been hypothesized that color vision is closely adapted to the spectral properties of our environment. Since foods like fruits and vegetables presumably played a major role in evolution, we have developed a hyperspectral database of 29 fruits and vegetables. Both the outside (skin) and inside (fruit) of the objects were imaged. We used a Specim VNIR HS-CL-30-V8E-OEM mirror-scanning hyperspectral camera and took pictures at a spatial resolution of ∼57 px/deg over 800 pixels, at a wavelength resolution of ∼1.12 nanometers. A broadband LED illuminant, metameric to D65, was used. A first analysis of these images showed that (1) the frequency distribution of fruit/vegetable skin colors followed a power law, similar to natural scenes, (2) the skins were darker than the insides, and (3) inside and skin colors were closely correlated. More importantly, we found (4) a significant correlation (0.73) between the orientations of the chromaticity distributions of our fruits/vegetables and the orientations of the nearest MacAdam discrimination ellipses. This indicates a close relationship between sensory processing and the characteristics of the objects in our environment.
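
For finding (1), a standard check (sketched here with hypothetical names; the abstract does not detail the analysis) is a straight-line fit in log-log coordinates, where a power law appears as a line whose slope is the exponent:

    # Illustrative sketch: estimate a power-law exponent from a
    # frequency distribution via a log-log linear fit.
    import numpy as np

    def power_law_exponent(values, n_bins=50):
        """Fit log(count) = slope * log(bin center) + c; the slope
        estimates the power-law exponent."""
        counts, edges = np.histogram(values, bins=n_bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        keep = (counts > 0) & (centers > 0)
        slope, _ = np.polyfit(np.log(centers[keep]),
                              np.log(counts[keep]), 1)
        return slope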

[21T308] Semantic integration without semantics? Meaningless synthesized scenes elicit N400 responses to semantically inconsistent objects

Melissa Võ, Tim Lauer and Tim Cornelissen

Scene Grammar Lab, Goethe University Frankfurt

Seeing an object that is semantically inconsistent with its scene context —like a toaster on the beach— elicits semantic integration effects, seen in scalp ERPs as an increased N400 response. What visual information from a scene is sufficient to modulate object processing and trigger an N400 response? To approach this question, we created a synthesized texture from each scene containing identical summary statistics but without providing any obvious semantic meaning. We then presented objects on either images of scenes, their texture versions, or a color control background. To create semantic inconsistencies, we paired indoor and outdoor scenes with either a consistent or inconsistent object thumbnail. We found a pronounced N400 response for inconsistent versus consistent objects on real-world scenes. Interestingly, objects on inconsistent texture backgrounds also elicited an N400 response with a similar time-course and topography, though less pronounced. A color control condition, however, showed no such response. At least for indoor versus outdoor scenes, our data suggest that even without direct access to the meaning of a scene, seeing its summary statistics might be sufficient to affect the semantic processing of an object. Whether this holds for more subtle inconsistencies —a toaster in a bedroom— remains to be seen.

Funding: This work was funded by DFG grant VO 1683/2-1 to MLV.

[22T101] Confidence levels during perceptual decision-making are discrete

Andrei Gorea, Matteo Lisi and Gianluigi Mongillo

Laboratoire Psychologie de la Perception, Université Paris Descartes & CNRS, France

Are our decisions – and hence our confidence in them – based on full knowledge of the prior and likelihood probability functions, as required by normative Bayesian theory? We answer this negatively by means of a new experimental paradigm. Each trial consisted of two consecutive decisions on whether a given signal was above or below some reference value. The first decision was made on a signal randomly drawn from a uniform distribution. Correct/incorrect responses resulted in signals randomly drawn from the positive/negative sub-intervals, respectively, to be judged when making the second decision. Subjects were told so. A non-Bayesian observer was designed to have discrete confidence levels instantiated by one, two or three second-decision criteria representing different levels of the point-estimates of the evoked neural response.

Synthetic “expression space”

David Bimler and John Kirkland

School of Psychology, Massey University, New Zealand

If distinctions between the emotional content of facial expressions (FEs) are conveyed by cues within some specific region of the face, masking that region should distort the pattern of perceived emotional similarities among pairs of expressions. We generated four sets of 54 FEs by masking the eye region and mouth region of a male-poser and a female-poser set (WF and MO). Each unmasked set contained six pure emotion prototypes plus Neutral, and 47 interpolated emotional blends. Ten subjects provided triadic similarity judgements for each set, for comparison with a pool of judgements previously collected for unmasked stimuli. The results were compatible with a four-dimensional geometrical model of “expression space”, compressed in specific directions corresponding to the absence of emotion-distinguishing cues, but these directions were not always the dimensions of experiential emotion. For instance, for both posers, the eye-masked condition was equivalent to compression along a direction where eye-region cues dominated expressive variance, separating Fear and Surprise at one extreme from Sad at the other. The mouth-masked condition expanded both posers’ models (increasing dissimilarities) along a direction with extremes of Fear/Surprise and Sad, and compressed the MO model, but not the WF model, along a Happiness direction.

[3P097] Traditional Islamic Headdress and Facial Features Unconsciously Elicit Negative Emotions

Trevor J Hine and Sarah Bhutto

School of Applied Psychology/Menzies Health Institute Queensland, Griffith University

There has been an increasing amount of negative media coverage in the West against Muslims and Islam, leading to an increase in implicit negative feelings towards those who identify as Muslim through wearing traditional dress. The dot-probe and Continuous Flash Suppression (CFS) techniques were used to elicit an emotional response without awareness. Thirty-five participants in a dot-probe experiment were shown (16 msec) images of male or female faces, with unfriendly, neutral or friendly facial expressions and either Muslim or Western headcovering and features. There were significantly slower reaction times to upright faces than inverted faces, especially for some of the Muslim faces. The same faces were shown to 21 participants during CFS. Afterwards, the participants were required to rate a visible neutral face as unfriendly, neutral or friendly. A significant two-way interaction was found for Orientation (normal vs inverted) × Headdress (Muslim vs Western): the neutral face was rated as significantly more unfriendly after unconscious exposure to Muslim faces as opposed to Western faces. These results indicate that faces displaying the traditional headdress of the Islamic faith, along with other facial features, unconsciously elicit a negative emotional response in Westerners when compared to Western faces.

[3P098] Crossmodal integration of emotional sounds and faces depends on the degree of autistic traits

Arno Koning, Lena Mielke and Rob van Lier

Donders Institute for Brain Cognition and Behaviour, Radboud University, The Netherlands

We studied the influence of emotional sounds on the judged friendliness of faces. The faces were all computer generated and had a neutral expression, whereas the sounds could be qualified as happy, scary, or neutral, comprising 3 seconds of laughter, screams or noise, respectively. All faces were shown in two viewing directions: frontal (with the line of sight towards the observer) or sideways (with the line of sight facing away by approx. 45 degrees). Participants were non-autistic, but completed a standardized AQ test. There was a main effect of sound on the judged friendliness of the faces and an interaction effect of sound and viewing direction of the faces. In particular, participants with relatively low AQ scores (fewer autistic traits) rated frontal faces as more friendly when happy sounds were presented (as compared to sideways faces), but rated the sideways faces as more friendly when scary sounds were presented (as compared to frontal faces). Additionally, when the emotional sounds were presented, the higher the AQ score, the smaller the difference between the judged friendliness of the two face orientations (frontal versus sideways). The data suggest that cross-modal sensitivity was highest for participants with the lowest AQ scores.

[3P099] Reading the Mind in the Blink of an Eye - A novel database for facial expressions

Gunnar Schmidtmann, Daria Sleiman, Jordan Pollack and Ian Gold

Department of Ophthalmology, McGill University

The ability to infer emotions or mental states of others, referred to as theory of mind (ToM), has traditionally been understood as a slow, conscious process. It has recently been suggested that some aspects of ToM occur automatically. We aimed to investigate this with respect to specific emotional states by using the ‘Reading the Mind in the Eyes’ Test (Baron-Cohen et al., 2001). A 4-AFC paradigm was employed to test the ability to correctly judge the emotional state from people’s eye regions at different presentation times (12.5–400 ms and indefinite). Sensitivity to the stimuli increases with presentation time up to about 75% correct responses at 400 ms, but does not increase for longer presentation times. Moreover, despite participants’ consistent reports of guessing, they performed well above chance. These results suggest that judging complex facial expressions from just the eye regions is an automatic and unconscious process. Additionally, we introduce a completely new database of 96 different facial expressions, based on the terms used by Baron-Cohen et al. (2001). Two professional actors were recruited to interpret the facial expressions. High-quality pictures were taken under controlled lighting and perspective conditions.

[3P100] Is there a correlation between psychophysical visual surround suppression and IQ?

Sandra Arranz-Paraíso and Ignacio Serrano-Pedraza

Faculty of Psychology, Complutense University of Madrid, Spain

People take longer to discriminate the direction of motion of a high-contrast stimulus when it is large than when it is small. This paradoxical “visual surround suppression” is believed to reflect normal visual inhibitory mechanisms. There is growing interest in the study of these mechanisms, given the reduced visual surround suppression found in different clinical populations and the strong link with reduced GABA concentration. Melnick et al. (2013), using a motion discrimination task, showed that intelligence strongly correlates with visual suppression (r = 0.71). Our aim was to determine whether these results extend to other IQ measurements (RIAS test) and other visual suppression tasks (see Petrov et al., 2005). We tested 27 participants (age range 20–31 y). Our results showed that: a) intelligence does not correlate with either visual suppression task: motion suppression index vs. general intelligence RIAS (r = 0.22, p = 0.27); spatial suppression index vs. RIAS (r = −0.26, p = 0.177). b) Duration thresholds for a small, moving, high-contrast stimulus showed a significantly high correlation with non-verbal intelligence (r = −0.54, p = 0.0036) and the general intelligence RIAS index (r = −0.46, p = 0.015), but a small, non-significant correlation with verbal intelligence (r = −0.25, p = 0.199). Our results suggest that speed processing, and not visual surround suppression, is related to IQ.
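
For reference, suppression indices in this literature are typically defined as a log ratio of duration thresholds; a minimal sketch under that assumption (variable names are ours):

    # Illustrative sketch (assumed definition, following Tadin-style
    # spatial suppression work): log ratio of duration thresholds.
    import numpy as np
    from scipy.stats import pearsonr

    def suppression_index(threshold_large_ms, threshold_small_ms):
        """Larger values = stronger surround suppression."""
        return np.log10(np.asarray(threshold_large_ms) /
                        np.asarray(threshold_small_ms))

    # Correlation with IQ (one value per participant), e.g.:
    # r, p = pearsonr(suppression_index(large, small), rias_scores)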

Funding: [Supported by PSI2014-51960-P from Ministerio de Economía y Competitividad, Spain]

Agustín P Décima, Andrés Martín and José Barraza

Instituto de Investigación en Luz, Ambiente y Visión, Universidad Nacional de Tucumán, Argentina

Velocity is always defined relative to a frame of reference. The visual system shifts the reference frame from which it processes motion signals according to situation and context, e.g. the Duncker illusion. We hypothesize that the visual system not only employs contextual stimuli to establish novel reference frames, but also needs them to improve velocity computation; i.e., velocity precision is impoverished when motion signals must be computed in isolated environments, even when the reference system remains known. To test this hypothesis we conducted speed discrimination tests under landmarked and isolated conditions. Results show a significant increase in Weber fractions when stimuli were displayed in isolation. These results indicate that contextual information may not only help establish novel coordinate systems when needed, but may also help refine motion estimates under ‘normal’ (i.e. retinotopic) coordinate systems.

[3P102] Anodal and cathodal electrical stimulation over v5 improves motion perception by signal enhancement and noise reduction

Luca Battaglini and Clara Casco

General Psychology, University of Padova

The effects of transcranial direct current stimulation (tDCS) and perceptual learning (PL) on the discrimination of coherent motion (CM), i.e. dots moving coherently (signal) in a field of dots moving randomly (noise), can be accounted for by either noise reduction or signal enhancement. To distinguish between the two mechanisms we measured correct CM direction discrimination as a function of coherence level (psychophysical method of constant stimuli). Rather than having opposite effects on CM discriminability, tDCS of both positive and negative polarity over V5 enhanced discriminability, but in different ways: anodal tDCS (a-tDCS) reduced the coherence level needed to reach threshold (75% accuracy), whereas cathodal tDCS (c-tDCS) improved discriminability at subthreshold signal-to-noise levels. Moreover, results show that late-PL also reduces the CM threshold, as a-tDCS does. These results suggest a dissociation between the neural mechanisms responsible for enhanced CM discriminability: either depression of the noisy uncorrelated motion, by c-tDCS, or increased activation of weak correlated motion signals, by a-tDCS and late-PL.

Louise O'Hare

School of Psychology, University of Lincoln

The human visual system is believed to make use of orientation detectors to augment direction discrimination of motion stimuli through the use of “motion streaks” (Geisler, 1999). The effect relies on the speed of the moving object relative to the length of the temporal integration window of the observer. Therefore, the size of the motion streak effect could potentially be used as a proxy for estimating individual differences in temporal integration processes. Migraine groups consistently show poorer performance on global motion tasks compared to controls (e.g. Ditchfield et al., 2006), and it has been shown that this is not due to inadequate sampling (Shepherd et al., 2012) or increased internal noise levels (Tibber et al., 2014). Global motion processing relies on sampling effectiveness, level of internal noise and ability to integrate motion signals (Dakin et al., 2005). This study investigated whether temporal integration processes are different in migraine and control groups, using a motion streak masking task. Results suggest a trend towards slightly elevated thresholds for the motion streak effect for those with migraine compared to those without.

[3P104] Further observations of the “Witch Ring” illusion

David A. Phillips, Priscilla Heard1 and Thomas Ryan1

1University of the West of England

We report further characteristics of illusions of expansion or contraction seen in “magic” novelty illusion rings, known historically as Witch Rings. In an earlier study we attributed the illusions to movement of the reflections the rings present when rotated (Heard and Phillips, 2015, Perception, 44(1), 103–106), noting that the illusion was reduced and replaced by depth effects when animated variants of the stimuli were given size and acceleration perspective depth cues. We now report that the illusion and its reduction with perspective persist when the stimuli are reduced to just V-shaped patterns of streaming dots. The illusion is also sharply reduced by steady fixation. We analyse experimental data from 18 participants recorded with an ASL eye-tracker. We consider a possible relationship with similar effects of expansion and contraction seen when a rigid V-shaped fan of lines is raised or lowered rapidly in an observer's field of view. We call this the scrolling illusion, since it appears when such shapes happen to be present amongst rapidly scrolled computer screen content. We demonstrate, however, that this effect is attributable to the aperture problem, noting that it does not appear with dot patterns. It therefore seems unlikely to be related to the Witch Ring illusion.

[3P105] Direction perception in center-surround multi-element configurations with varying contrast and velocity

Miroslava Stefanova, Nadejda Bocheva, Bilyana Genova and Simeon Stefanov

Institute of Neurobiology, Bulgarian Academy of Science

We examined how differences in contrast, velocity, and orientation of moving elongated elements in a central and a surround field affect the apparent direction of motion. The stimuli consisted of Gabor elements moving either parallel or orthogonal to their orientation at two different speeds. The surround motion direction varied from 0° to 315° in steps of 45°. The relative contrast of the center and periphery was varied. The subjects’ task was to discriminate whether the central motion was to the left or to the right of vertical downward. The results suggest a significant interaction between surround motion direction and the relative contrast, velocity, and orientation of the elements. The perceived direction in the center was repelled away from the surround motion direction most when the motions in the two fields of the configuration were orthogonal. The directional repulsion decreased with increasing speed and when the surround contrast was lower than that in the center. The angular difference between the center and surround motion directions had a stronger effect when the motion trajectory was parallel to the orientation of the elements. The functional significance of the observed effects on the integration of motion information for coding object speed and direction is discussed.

Funding: Supported by Grant №173/14.07.2014 of the Ministry of Education and Science, Bulgaria

[3P106] Size of motion display affects precision of motion perception

Yoshiaki Tsushima, Yuichi Sakano and Hiroshi Ando

Universal Communication Research Institute, National Institute of Information and Communication Technology, Japan

We enjoy visual images on displays of different sizes, from a laptop to a theatre screen. How are our perceptual experiences altered by display size? Here, we conducted several motion perception experiments to investigate how perceptual experiences are influenced by the size of visual images. Participants viewed a moving-dots display for 500 msec in perceptual fields of different sizes: 80 (large), 50 (middle), and 20 (small) degrees of visual angle. There were two types of coherently moving dots, expanding and contracting. The ratios of expanding to contracting (or contracting to expanding) dots were 10, 30, 40, 50, 60, 70, and 90%. Dot size and dot density were fixed across display sizes. Participants were asked to report the global motion direction, expansion or contraction, at each ratio. As a result, the precision of motion perception was higher at the larger displays than at the smaller ones. In addition, the variability of behavioral performance among participants decreased at the larger motion displays. This might indicate that visual images on larger displays provide us not only with more precise information but also with more unified perceptual experiences.

[3P107] The effect of temporal duration on the integration of local motion in the discrimination of global speed, in the absence of visual awareness

Charles Y Chung, Sieu Khuu and Kirsten Challinor

School of Optometry and Vision Science, University of New South Wales

We examined the contribution of visual awareness to the spatial and temporal integration of local motion for the discrimination of global speed. Speed discrimination thresholds for rotational motion were measured using an annulus of moving Gabors in which the number of elements (2–8) and their temporal duration were varied. Experiment 1 showed that at brief stimulus durations (<0.8 s), speed discrimination improved with the number of elements, but not at longer durations, demonstrating that spatial summation is more effective at brief stimulus presentations. In Experiment 2, we investigated the minimum temporal duration required for local motion to be integrated to discriminate global speed. A subset of Gabor elements was presented asynchronously, appearing and disappearing at different temporal intervals. We find that transient Gabors were integrated over a temporal window of 150 ms to influence speed discrimination. In Experiment 3, to investigate the role of visual awareness, we repeated Experiment 2 and used Continuous Flash Suppression (CFS) to suppress the transient Gabors from awareness. We find that suppressed transient Gabors contributed to global speed discrimination, but needed to be presented earlier to influence performance. This suggests that motion integration can occur without visual awareness, but that this process is slower than under conscious vision.

[3P108] Second-order apparent motion perception traversing horizontal and vertical meridians

Hidetoshi Kanaya1 and Takao Sato2

1Faculty of Human Informatics, Aichi Shukutoku University

2Ritsumeikan University

We previously reported that, when classical apparent motion stimuli consisting of two discs were successively presented within or across hemifields (right/left or upper/lower), the rate of motion perception markedly declined at shorter ISIs in cross-hemifield conditions relative to within-hemifield conditions (Sato, Kanaya, & Fujita, VSS 2013). These results suggest that classical apparent motion is partially mediated by a lower-level motion mechanism, e.g., a first-order motion mechanism. To further clarify this point, we examined the effect of second-order motion on within/cross-hemifield classical apparent motion; the first-order motion mechanism is thought to be unable to detect second-order motion (Cavanagh & Mather, 1989). Two rectangular objects defined by a first-order attribute (luminance) or one of two second-order attributes (contrast or dot size) were successively presented within or across hemifields. ISI was varied in seven steps between 0 and 533.3 msec. Four observers judged whether motion was perceived or not. Results showed that apparent motion perception was much the same for first- and second-order motion, with a tendency similar to that of Sato et al. (2013). These results suggest that motion mechanisms other than the first-order mechanism that can detect second-order motion, e.g., the long-range process (Braddick, 1974, 1980), mediate classical apparent motion.

[3P109] Effects of different electrical brain stimulations over V5/MT on global motion processing

Filippo Ghin, George Mather and Andrea Pavan

School Of Psychology, University of Lincoln

Transcranial electrical stimulation (tES) is a well-established neuromodulatory technique. To date, evidence on the behavioural effects of tES on global motion processing is fragmentary, since previous studies employed different stimuli, stimulation regimes, stimulation sites and behavioural tasks. The aim of this study was to investigate the effect of different stimulation regimes (anodal tDCS, cathodal tDCS, high-frequency tRNS, and sham) on global motion processing by stimulating left V5/MT. Participants performed a motion direction discrimination task (8AFC). The stimuli consisted of globally moving dots inside a circular window, displayed either in the right visual hemi-field (i.e., contralateral to the stimulation site) or in the left visual hemi-field (i.e., ipsilateral to the stimulation site). Results showed a significantly lower normalized coherence threshold for the contralateral than the ipsilateral visual hemi-field only when stimulating with anodal tDCS (M = 1.27 [SEM = 0.2] vs. M = 0.9 [SEM = 0.14], respectively). These results provide additional confirmation of V5/MT as a crucial area for global motion processing and further evidence that anodal tDCS can produce an excitatory effect at the behavioural level. The results suggest that anodal tDCS may increase the signal-to-noise ratio for globally moving patterns.

[3P110] The window of simultaneity widens around the time of an active or passive action

Belkis Ezgi Arikan1, Bianca M. van Kemenade1, Benjamin Straube1, Laurence Harris2 and Tilo Kircher1

1Medicine, Philipps University Marburg

2York University, Canada

Research has shown distortions of the perceived timing of voluntary actions and their consequences, mostly focusing on unimodal action consequences. However, voluntary actions mostly have multisensory consequences. In two studies we investigated simultaneity perception for stimuli triggered by self-generated actions by assessing the window of subjective simultaneity (WSS) for audiovisual (AV) stimulus pairs triggered by button presses. We manipulated the temporal predictability of the action consequences by introducing delays between the button press and the AV pair. We found a widened WSS when the action-effect relationship was as predicted; introducing a delay led to a tightening of the WSS. In a second experiment, we included a passive condition using a passively depressed button. We replicated the widened WSS around the time of action for both active and passive movements, and delays led to a tightening of the WSS for both. We also found that the psychometric slopes in the active condition were steeper than those in the passive condition. Our results suggest that: 1) changes in the WSS may be explained by shifts or compressions in perceived timing; 2) causality seems to be crucial in perceiving simultaneity between actions and consequences; and 3) movement intentionality seems to aid in achieving more precise perception of simultaneity.
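
One standard way to estimate a WSS, assumed here since the abstract does not give the fitting details, is to fit a Gaussian to the proportion of "simultaneous" responses across stimulus-onset asynchronies and read the window off the fitted width:

    # Illustrative sketch (assumed analysis): Gaussian fit to
    # simultaneity-judgement data.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, peak, mu, sigma):
        """Proportion 'simultaneous' as a function of SOA (ms)."""
        return peak * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    def fit_wss(soas_ms, p_simultaneous):
        """Returns the centre of the window and a width-based WSS."""
        (peak, mu, sigma), _ = curve_fit(
            gaussian, soas_ms, p_simultaneous, p0=(1.0, 0.0, 100.0))
        return mu, 2 * sigma

Under this parameterization, a widened WSS corresponds to a larger fitted sigma, and steeper psychometric slopes correspond to sharper transitions at the window's edges.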

Funding: This project is funded by IRTG-1901 BrainAct (DFG) and SFB/TRR 135

[3P111] The influence of effector movement on the spatial coding of somatosensory reach targets: From gaze-independent to gaze-dependent coding

Stefanie Mueller and Katja Fiehler

Experimental Psychology, Justus-Liebig University, Giessen, Germany

Previous research has consistently shown that visual stimuli for reaching are coded in a gaze-dependent reference frame, but the coding scheme for proprioceptive stimuli is less clear. Some studies suggest that proprioceptive reach targets are coded with respect to gaze, similar to visual targets, while others found gaze-independent coding. In study 1, we investigated whether an effector movement intervening between the presentation of the proprioceptive target and reaching towards it accounts for the inconsistent results. Subjects reached to somatosensory targets while gaze direction was varied. Additionally, we manipulated the presence of an effector movement (eyes or arm) between target presentation and reaching. Reach errors varied with gaze direction only when the eyes or the arm were moved before reaching, indicating gaze-dependent coding. In study 2, we examined whether such a gaze-dependent representation after an effector movement replaced a gaze-independent one, or whether it was used in addition to a gaze-independent, presumably body-centered, representation. Hence, we used a similar paradigm as in study 1 but now varied the movement vector (start to target location) relative to gaze direction and to the body midline. Results suggest mixed body- and gaze-centered coding when an effector movement intervened before reaching.

Funding: Fi 1567/4-2

[3P112] Turning down the noise in interceptive timing

Oscar T Giles, Richard Wilkie, Peter Culmer, Ray Holt, James Tresilian and Mark Mon-Williams

School of Psychology, University of Leeds, UK

Humans show higher temporal precision when they generate faster movements to intercept moving targets (Tresilian & Plooy, 2006; Brenner & Smeets, 2015). We systematically added noise to participants' launch trajectories to determine whether this would cause them to alter their movement speed. Participants used a 1-DoF manipulandum to launch a virtual puck at a target travelling at a constant speed. Initial baseline trials (n = 100) had no added noise, with the puck moving at the strike speed. Participants then completed a block of trials (n = 200) in which the puck moved at the strike speed plus noise drawn from a Gaussian distribution of a specified standard deviation (SD). There were three groups: i) a no-noise group (SD = 0 mm/sec); ii) a low-noise group (SD = 100 mm/sec); and iii) a high-noise group (SD = 200 mm/sec). The presence of noise increased temporal errors, but participants responded by increasing their movement speeds: the high-noise group hit the puck at higher velocities than the low-noise group, and the low-noise group at higher velocities than the no-noise group. These results suggest that people naturally offset changes in motor noise by systematically changing their movement speed.
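
The noise manipulation itself is compact; a sketch with hypothetical names (all speeds in mm/sec):

    # Illustrative sketch of the speed perturbation (parameter names
    # are ours, not from the paper).
    import numpy as np
    rng = np.random.default_rng(seed=1)

    def puck_speed(strike_speed, noise_sd):
        """Puck launch speed: measured strike speed plus zero-mean
        Gaussian noise; noise_sd is 0, 100 or 200 by group."""
        return strike_speed + rng.normal(0.0, noise_sd)

Because a faster strike shortens the puck's travel time, a given speed perturbation translates into a smaller arrival-time error, which is why speeding up offsets the added noise.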

[3P113] The lack of effect of a visual size illusion on grip aperture is independent of object size

Jeroen Smeets and Eli Brenner

Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands

There is an extensive literature debating whether visual size illusions influence peak grip aperture in grasping. We found no effect for 7 cm diameter disks in a Ponzo illusion (Brenner & Smeets, EBR, 1996), and interpreted this as evidence that grasping is not based on a visual estimate of size. Most studies that did find an influence of illusions used approximately 3 cm diameter disks embedded in an Ebbinghaus illusion. Could it be that people do use size information for smaller objects because they combine information to maximise precision, and for smaller objects size judgments are no longer less precise than judgments of position (Smeets & Brenner, Current Biology, 2008)? To avoid any possibility of parts of the illusion being interpreted as obstacles, we tested this possibility using a modified diagonal illusion. Participants grasped both small (1.5–2.5 cm) and slightly larger (4–5 cm) objects. The illusion had an effect of more than 10% on perceptual judgements, irrespective of object size. For peak aperture during grasping movements, the effect of the illusion was negligible (<0.5%), again independent of object size. We conclude that the reported disagreement on the effect of illusions is not due to the use of differently sized objects.

[3P114] Repeated Search with Arm and Body Movements

Christof Körner1, Margit Höfler1 and Iain Gilchrist2

1Institute of Psychology, Universität Graz

2University of Bristol

When we search the same display repeatedly for different targets with covert attention (i.e., without eye movements), search does not benefit from repetition. Here, we investigated whether increasing the cost of the search would result in a repetition benefit. In two experiments participants searched repeatedly for different target letters among distractor letters. In Experiment 1 participants searched circular arrays of film canisters that were arranged on a board. Participants had to reach out and turn over the canisters to make the search letters visible. In the repeated search condition the array did not change between searches; in the unrepeated search condition there was a new array for each search. In Experiment 2 participants searched in a room amidst circular arrays of computer monitors. Participants had to walk from one monitor to the next and to press a button to make the search letters appear. We found that search rates (based on search times and the number of search steps necessary to find the target) improved dramatically in the repeated compared to the unrepeated search condition in both experiments. This suggests that participants used memory to improve search in the same environment if the cost of searching made memory usage worthwhile.

[3P115] Gaze when grasping a glass of milk or water

Eli Brenner1, Dimitris Voudouris2, Katja Fiehler2 and Jeroen B.J. Smeets1

1Human Movement Sciences, Vrije Universiteit, Amsterdam

2Justus-Liebig-Universität Giessen

Most studies agree that people look close to where their index finger will touch an object when reaching to grasp it. Various factors modulate this tendency, but the only clear exception is for objects at eye height. To examine whether this is because objects at eye height occlude the whole last part of the index finger’s trajectory, we compared gaze patterns when grasping a glass of milk or water. When the objects were at eye height, people could see their finger move behind the glass of water, but could not see anything through the milk. When the objects were below eye height, people could see both digits approach the glass. To encourage a precise grasp, the glass was placed on a small surface and people were to lift it and pour the liquid into another glass. Surprisingly, most participants looked closer to their thumb’s endpoint while reaching for a glass that was below eye height, and looked below the endpoints altogether when the glass was at eye height, irrespective of the kind of liquid in the glass. This suggests that where people look when reaching for an object depends on more than only the requirements of the reaching movement.

Dimitris Voudouris and Katja Fiehler

Experimental Psychology, Justus-Liebig University Giessen

The perception of tactile stimuli on a moving limb is generally suppressed. This is paradoxical, as somatosensory signals are essential when moving. To examine whether tactile perception depends on the relevance of the expected somatosensory information, participants reached with their unseen right hand either to a visual or to a somatosensory target (digits of the unseen left hand). Two vibrotactile stimuli were presented simultaneously: a reference, either on the little finger of the static left hand or on the sternum, and a comparison on the index finger of the moving right hand. Participants discriminated which stimulus felt stronger. We determined the point of subjective equality (PSE), which was higher during somatosensory than visual reaching, but only when the reference was on the target hand, suggesting enhanced tactile perception at the task-relevant location. We then examined whether tactile enhancement is target-specific. Participants reached to their left thumb or little finger and discriminated the intensity of a stimulus presented on one of these target digits from another stimulus presented on the sternum. Stimuli on the target digits were perceived as stronger during reaching compared to a no-movement baseline, suggesting that somatosensory perception is enhanced at the target hand. However, this enhancement was not specific to the movement goal.

Funding: This work was supported by the German Research Foundation (DFG) TRR 135.

[3P117] Sensory-based versus memory-based selection in well-practiced sensorimotor sequences

Rebecca M Foerster and Werner X. Schneider

Neurocognitive Psychology, Bielefeld University

When performing an object-based sensorimotor sequence, humans attend to the sensory environmental information that specifies the next manual action step (sensory-based selection). In well-practiced sequential actions, long-term memory (LTM) can directly control the sequence of attention, gaze, and manual movements (Foerster et al., 2012) (memory-based selection). We found that individuals differ in their use of sensory-based versus memory-based manual selection after training. Participants practiced a computerized version of the number-connection test while their eye movements were measured (Foerster et al., 2015). They clicked in ascending order on spatially distributed numbers 0–8 on the screen. LTM was built up over 65 trials with a constant visuospatial arrangement. In 20 subsequent change-trials, numbers 3–8 switched to 4–9, introducing a change in the sequence-defining visual features. The required spatial-motor sequence remained the same. In 15 reversion-trials, the original numbers reappeared. During the first change trials, about half of the participants clicked more slowly, made more errors and fixations, and exhibited longer cursor- and scan-paths. The remaining participants were hardly affected by the trajectory-irrelevant number change. The different gaze and hand patterns did not correlate with performance measures prior to the change. Thus, the selection mode applied was independent of the level of expertise.

Funding: This research was supported by the Cluster of Excellence Cognitive Interaction Technology ‘CITEC' (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG)

[3P118] How do people steer a car to intercept a moving target: Flexibility in the visual control of locomotor interception

Huaiyong Zhao, Dominik Straub and Constantin Rothkopf

Institute of Psychology, Technical University Darmstadt

Numerous studies have found evidence that humans use the constant bearing angle (CBA) strategy in locomotor interception. However, participants in these studies controlled only locomotion speed while moving along a fixed straight path. Under that task constraint, any change in bearing angle is equivalent to a change in target-heading angle, so these studies cannot discriminate between the CBA strategy and a constant target-heading angle strategy. To examine the strategy used in locomotor interception, we asked participants (N = 12) to steer a car to intercept a moving target in two virtual environments: one containing only a textured ground plane and one also containing textured objects. The car moved at a constant speed of 7 m/s and participants controlled only its steering; across trials the target moved in a constant direction (left or right) at one of three constant speeds (4, 5, or 6 m/s). Our results indicate that the bearing angle changed continuously during interception, inconsistent with the CBA strategy. In contrast, the target-heading angle initially diverged and then remained constant until interception. This flexible constant target-heading angle strategy is consistent with a hybrid account of visual control of action that combines an off-line strategy with information-based control, reflecting the flexibility of the visual control of locomotor interception.
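
The two candidate control variables differ only in their reference direction: the bearing angle is measured from a fixed allocentric axis, the target-heading angle from the observer's current heading, which is why the two are confounded on a fixed straight path. A minimal sketch of the distinction, in Python; the function and variable names are illustrative, not taken from the study:

    import numpy as np

    def bearing_angle(pos, target):
        # Direction of the observer-to-target line, measured from a fixed
        # allocentric reference axis (here the world x-axis), in degrees.
        d = np.asarray(target, float) - np.asarray(pos, float)
        return np.degrees(np.arctan2(d[1], d[0]))

    def target_heading_angle(pos, heading_deg, target):
        # Angle between the observer's current heading and the line to the
        # target (egocentric), wrapped to [-180, 180).
        return (bearing_angle(pos, target) - heading_deg + 180) % 360 - 180

    # With a fixed heading (straight path), any change in bearing angle equals
    # the change in target-heading angle; only when the observer steers
    # (heading_deg varies) can the two strategies be dissociated.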

[3P119] Exploring the role of actions in calibrating audio-visual events in time

Nara Ikumi1 and Salvador Soto-Faraco1,2

1Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona

2Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona

Perception in multi-sensory environments requires both grouping and segregation processes across modalities. Temporal coincidence is often considered a cue that helps resolve multisensory perception. However, differences in physical transmission time and neural processing time among modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether actions might serve as anchors for calibrating audio-visual events in time. Participants were tested on an audio-visual simultaneity judgment task following an adaptation phase in which they were asked to synchronize actions with audio-visual pairs presented at a fixed asynchrony (flash either leading or lagging). Our analysis focused on the magnitude of cross-modal recalibration as a function of the nature of the actions, which fostered either grouping or segregation. Greater temporal adjustments were found when actions promoted cross-modal grouping. A control experiment suggested that cognitive load and action demands could reasonably explain the obtained effect, above and beyond sensory-motor grouping/segregation. Contrary to the view that cross-modal time adaptation is driven only by stimulus parameters, we speculate that perceptual adjustments depend strongly on the observer's inner state, such as motor and cognitive demands.

Funding: This research was supported by the Ministerio de Economia y Competitividad (PSI2013-42626-P), AGAUR Generalitat de Catalunya (2014SGR856), and the European Research Council (StG-2010 263145).

[3P120] Effects of visual feedback of virtual hand on proprioceptive drift

Hiroaki Shigemasu and Takuya Kawamura

School of Information, Kochi University of Technology

As proprioceptive drift has been investigated mainly with the rubber hand illusion, little is known about the effect of visual feedback on the drift induced by a fake hand that is slightly shifted relative to the real hand. In this study, a virtual fake hand was presented on a 3D display placed over the participant's own hand. This setup made it possible to manipulate the visual factors that may influence the drift. Experiment 1 used horizontal movement of the participant's hand to examine the effects of (1) congruence of depth position between the fake and real hand, (2) synchronized active motion, and (3) the temporal change in the amount of drift. Experiment 2 used vertical movement to examine the effects of (1) congruence of the visual size cue and (2) the temporal change of the drift. When horizontal or vertical motion of the virtual fake hand was presented in synchrony with the participant's hand, the drift was significantly larger than in the no-movement condition, irrespective of the congruence of depth position or the visual size cue. Although the drift induced by horizontal movement persisted for at least 25 s, the drift induced by vertical movement decreased significantly over the 0–25 s range, suggesting an anisotropy in proprioceptive drift.

[3P121] Motor activity associated with perceived objects depends on their location in space and previous interactions: an EEG study

Alice Cartaud1, Yannick Wamain1, Ana Pinheiro2 and Yann Coello1

1Psychologie, Université Lille 3, France

2Universidade do Minho, Portugal

Previous studies have revealed the existence of a motor neural network that is activated during the perception of objects, in particular when they are at a reachable distance. Yet it is unclear whether the activity of this network is modulated by previous associations between objects and specific motor responses. The aim of this study was to address this issue. Participants first learned a new association between an object and a motor response (left- or right-hand grasping). We then recorded their cerebral responses with electroencephalography while they judged the reachability of the object (with the right hand) presented at different distances. Analysis of the EEG signals showed that early components reflecting sensory processing were sensitive to the incongruence of the hand-object relationship (i.e., seeing an object associated with the left hand), whereas late components were sensitive to the distance of the object. This study reveals that motor-related brain activation in the presence of a visual object depends not only on its position in space but also on the specificity of the effector-object relationship gained from previous experience.

[3P123] Foreperiod beta-power correlates with the degree of temporal adaptation

Clara Cámara, Josep Marco-Pallarés and Joan López-Moliner

Departament de Cognició i Desenvolupament & Institut de Neurociències, Universitat de Barcelona, Catalonia

When exposed to sensorimotor visual delays, people learn to intercept targets successfully. We hypothesize that this adaptation is related to the processing of some kind of execution error. If so, the degree of adaptation should correlate with beta-power activity before movement onset (the foreperiod), which has been related to error processing in spatial visuomotor adaptation. We measured EEG (32 channels) while subjects performed an interception task. The behavioral experiment was divided into 4 phases: full-vision (FV), no-vision (NV), adaptation (A), and NV. In the adaptation phase, we incrementally increased (1 ms/trial) the temporal offset between the hand and the cursor movements. For the analysis, the adaptation phase was divided into early adaptation (EA) and late adaptation (LA). We examined beta-band activity (15–25 Hz) during the preparation of the interceptive timing task in the FV, EA, and LA conditions. Interestingly, only LA showed a foreperiod beta-activity pattern that was parametrically modulated by the size of the temporal adaptation to the delays: adaptation and beta-power were positively correlated. This suggests that foreperiod beta-power is related to the temporal adjustments leading to adaptation, much as it is for spatial errors.

Funding: Supported by grants 2014 SGR-079 and PSI2013-41568-P
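
As a rough illustration of the dependent measure, per-trial foreperiod beta power can be estimated with a standard spectral method and then correlated with the behavioural adaptation measure. A minimal sketch in Python; the sampling rate, epoch layout, and variable names are assumptions, not details taken from the study:

    import numpy as np
    from scipy.signal import welch

    def beta_power(epoch, fs, band=(15.0, 25.0)):
        # Mean power spectral density in the beta band for one foreperiod epoch.
        freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
        sel = (freqs >= band[0]) & (freqs <= band[1])
        return psd[sel].mean()

    # Hypothetical per-trial correlation with the behavioural measure:
    # r = np.corrcoef(beta_per_trial, adaptation_per_trial)[0, 1]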

Elisabeth Knelange and Joan López-Moliner

Departament de Cognició i Desenvolupament & Institut de Neurociències, Universitat de Barcelona, Catalonia

People are able to predict the sensory consequences of their motor commands, which allows them to compensate for delays in sensorimotor loops. Adaptation of this prediction is driven by the error on a trial, i.e. the difference between the predicted and the perceived outcome. To see whether people can use cross-modal information to drive adaptation, we used different types of sensory error information in an adaptation task. A target on a screen moved from left to right towards a reference line (at different speeds/distances). In the bi-modal condition, subjects were instructed to match a high-pitched tone to the moment the ball crossed the reference line. In the uni-modal condition, subjects were instructed to match the high-pitched tone to a low-pitched tone that was presented when the ball crossed the line. In both conditions, the consequence of this action was delayed by 1 ms per trial. The results show that subjects were unable to adapt to the delay in the bi-modal condition, but did adapt somewhat in the uni-modal condition. This suggests that subjects cannot integrate an auditory consequence with a visual reference (the reference line), and thus that uni-modal error signals need to be present to adapt to sensory delays.

Funding: project PACE H2020-MSCA-ITN-2014 Ref. 642961

[3P125] Motor learning that demands novel dynamic control and perceptual-motor integration: An fMRI study of training voluntary smooth circular eye movements

Raimund Kleiser1, Cornelia Stadler1, Sibylle Wimmer1, Thomas Matyas2 and Rüdiger Seitz3

1Kepler University Klinikum

2La Trobe University Melbourne

3LVR Klinikum Düsseldorf

Despite a large number of studies, the promise of fMRI methods to produce valuable insights into motor skill learning has largely been restricted to sequence learning or manual training paradigms in which a relatively advanced capacity for sensory-motor integration and effector coordination already exists. We therefore obtained fMRI data from 16 subjects trained in a new paradigm that demanded voluntary smooth circular eye movements without a moving target. The aim was to monitor neural activation during two possible motor learning processes: (a) the smooth pursuit control system develops a new perceptual-motor relationship and becomes involved in a voluntary action in which it is not normally engaged; or (b) the saccadic system normally used for voluntary eye motion develops new dynamic coordinative control capable of smooth circular movement. Participants were able to reduce the number of corrective saccades within half an hour. Activity in the inferior premotor cortex was significantly modulated, decreasing as learning progressed. In contrast, activations in dorsal premotor and parietal cortex along the intraparietal sulcus, the supplementary eye field, and the anterior cerebellum did not change during training. Thus, the decrease of activity in inferior premotor cortex was critically related to the learning progress in visuospatial eye movement control.

[3P126] Optic flow speed modulates guidance level control: new insights into two-level steering

Jac Billington, Callum Mole, Georgios Kountouriotis and Richard Wilkie

School of Psychology & Neuroscience, University of Leeds, UK

Responding to changes in the road ahead is essential for successful driving. Steering control can be modelled using two complementary mechanisms: guidance control (to anticipate future steering requirements) and compensatory control (to stabilise position-in-lane). Influential models of steering capture many steering behaviours using just ‘far' and ‘near' road regions to inform guidance and compensatory control respectively (Salvucci & Gray, 2004). However, optic flow can influence steering even when road-edges are visible (Kountouriotis et al., 2013). Two experiments assessed whether flow selectively interacted with compensatory and/or guidance levels of steering control. Optic flow speed was manipulated independent of the veridical road-edges so that use of flow would lead to predictable understeering or oversteering. Steering was found to systematically vary according to flow speed, but crucially the Flow-Induced Steering Bias (FISB) magnitude depended on which road-edge components were visible. The presence of a guidance signal increased the influence of flow, with the largest FISB in ‘Far' and ‘Complete' road conditions, whereas the smallest FISB was observed when only ‘Near' road-edges were visible. Overall the experiments demonstrate that optic flow can act indirectly upon steering control by modulating the guidance signal provided by a demarcated path.

Funding: Remedi.org, PhD studentship.

[3P127] Predicting the trajectory of a ball from the kinematics of a throwing action

Antonella Maselli, Aishwar Dhawan, Benedetta Cesqui, Andrea d'Avella and Francesco Lacquaniti

Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy

In several competitive sports, as often in everyday life, humans catch objects thrown by a teammate or a competitor. When information about the ball trajectory is limited (because of sparse visibility or the short duration of the visible trajectory), the catcher may rely on the throwing kinematics to predict the outgoing ball trajectory and improve performance. In this study, we explored the information about the outgoing ball trajectory contained in the whole-body kinematics of the throwing action. We collected data from twenty subjects throwing at four different targets. We then collected kinematics from two subjects simultaneously, adding a second subject whose task was to catch the ball. Applying a combination of decomposition and classification techniques, we used the throwing data (i) to characterize different throwing styles across subjects, (ii) to characterize the target-dependent intra-individual variability in the throwing kinematics, and (iii) to quantify the discriminability of the outgoing ball trajectory as a function of the percentage of the action completed. This analysis lays the groundwork for assessing, in a subsequent step, to what extent humans are able to extract useful information from the view of a throwing action and exploit it to improve catching performance.

Funding: CogIMon (Horizon 2020 robotics program ICT-23-2014, grant agreement 644727)

Hua-Chun Sun, Curtis Baker and Frederick Kingdom

Department of Ophthalmology, McGill University, Canada

Simultaneous density contrast (SDC) is the phenomenon in which the perceived density of a textured region is altered by a surround of different density (Mackay, 1973). However, SDC has never been systematically examined. Here we measured dot-texture SDC using surround densities varying from very sparse to very dense (0–76.8 dots/deg2), with a 2AFC staircase procedure in which observers compared the perceived density of a test-plus-surround texture with that of a comparison texture with no surround. Psychometric functions were fitted to obtain the point of subjective equality (PSE). Unexpectedly, we found that 4 of 5 observers showed a bidirectional SDC: not only does a denser surround make the test region appear less dense than otherwise (as expected), but a sparser surround makes the test appear more dense than does no surround. The latter result runs contrary to reports that with the density aftereffect (in which the perceived density of a region is altered by adaptation to a different density) adaptation only ever reduces perceived density (Durgin & Huk, 1997). Additional experiments and analyses ruled out mediation of SDC by contrast or spatial frequency. Our results are consistent with the presence of multiple channels selective for texture density.

Funding: NSERC grant, OPG0001978
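
For readers unfamiliar with the procedure, the PSE is the density at which the comparison is chosen on half the trials, obtained by fitting a cumulative Gaussian to the 2AFC choice proportions. A minimal sketch in Python; the data values are invented for illustration, and fitting on log density is an assumption, not the authors' analysis:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical comparison densities (dots/deg2) and the proportion of
    # trials on which the comparison was judged denser than the test.
    density  = np.array([4.8, 9.6, 19.2, 38.4, 76.8])
    p_denser = np.array([0.05, 0.20, 0.55, 0.85, 0.97])

    def cum_gauss(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(cum_gauss, np.log(density), p_denser,
                               p0=[3.0, 1.0])
    print(f"PSE = {np.exp(mu):.1f} dots/deg2")  # density judged equal to the test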

[3P129] The neural response to visual symmetry in each hemisphere

Damien Wright, Alexis Makin and Marco Bertamini

Psychological Sciences, University of Liverpool

The human visual system is specially tuned to processing reflection symmetry. Electrophysiological work on symmetry perception has identified an ERP component termed the Sustained Posterior Negativity (SPN): Amplitude is more negative for symmetrical than random patterns from around 200 ms after stimulus onset (Bertamini & Makin, 2014). Presentation in central vision produces activity in both hemispheres. To examine activity in the two hemispheres separately and their interaction, we presented patterns in the left and right hemifields. In Experiment 1, a reflection and a random dot pattern were presented either side of fixation. In Experiment 2, just one hemifield contained a pattern whilst the opposite hemifield remained empty. In Experiment 3, both hemifields contained matching patterns. For each experiment, participants had to choose whether the patterns were light or dark red in colour (thus there was no need to classify the symmetry of the patterns). The SPN was found independently in each hemisphere, and it was unaffected by the stimuli presented in the opposite hemifield. We conclude that symmetry processing does not require activation of both hemispheres; instead each hemisphere has its own symmetry sensitive network.

Funding: This work was partly sponsored by an Economic and Social Research Council grant (ES/K000187/1) awarded to Marco Bertamini, and partly by a Leverhulme Trust Early Career Fellowship (ECF-2012-721) awarded to Alexis Makin.

[3P130] Identifying Semantic Attributes for Procedural Textures

Qitao Wu1, Jun Liu2, Lina Wang1, Ying Gao1 and Junyu Dong1

1Department of Computer Science and Technology, Ocean University of China

2Qingdao Agricultural University, China

Perceptual attributes of visual textures are important for texture generation, annotation and retrieval. However, it has been shown that perceptual attributes are not sufficient for discriminating a variety of textures, except for near-regular ones. Recently, semantic attributes have attracted significant interest in object recognition and scene understanding. This work focuses on identifying semantic attributes of procedural textures and discusses whether they can provide deeper insight into texture perception. We first generated 450 textures with 23 procedural models. Twenty observers were then asked to group these textures and to describe the textures in each group using semantic terms. We analyzed these terms and merged similar ones, finally identifying 43 semantic attributes from the 98 semantic terms introduced by Bhushan (1997). We applied Hierarchical Cluster Analysis (HCA) to the similarity matrix of the procedural models and clustered the models into 10 classes. We then tested classification performance with the semantic attributes as features. Cross-validation accuracy was 83.66% over all classes and 99.43% for near-regular ones, outperforming the 12 perceptual features defined in the literature by more than 20% and 10%, respectively. These results demonstrate that semantic attributes are more effective in discriminating procedural textures and more consistent with human perception.

Funding: National Natural Science Foundation of China (NSFC) (No. 61271405); The Ph.D. Program Foundation of Ministry of Education of China (No. 20120132110018)
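
The clustering step can be reproduced in outline with standard tools: hierarchical cluster analysis of a model-by-model distance matrix, cut at 10 classes. A minimal sketch in Python; the similarity matrix here is random stand-in data, not the observers' grouping data:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Stand-in similarity matrix between the 23 procedural texture models.
    rng = np.random.default_rng(0)
    S = rng.random((23, 23))
    S = (S + S.T) / 2                                  # make it symmetric

    D = 1.0 - S                                        # similarity -> distance
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D), method='average')       # HCA
    labels = fcluster(Z, t=10, criterion='maxclust')   # cut into 10 classes
    print(labels)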

[3P131] The role of motion in mirror-symmetry perception

Rebecca J Sharman and Elena Gheorghiu

Psychology, University of Stirling, Scotland

The human visual system has specialised mechanisms for encoding mirror-symmetry (Gheorghiu, Bell & Kingdom, 2014, Journal of Vision, 14(10):63). Here we investigate the role of motion in symmetry detection and whether symmetrical motion is processed differently to translational motion. Stimuli were random-dot patterns with different amounts of mirror-symmetry about the vertical axis or no symmetry. In the ‘symmetry and motion’ conditions, patterns contained both position and motion-symmetry and the matched pairs moved inwards, outwards, in random directions or were static. We manipulated the amount of positional symmetry by varying the proportion of symmetrical dots and measured symmetry detection thresholds using a 2IFC procedure. In the ‘motion only’ condition, the dots moved either symmetrically (inwards/outwards) or horizontally (left/right). We varied the proportion of coherently moving dots and measured both direction discrimination and motion coherence thresholds. We found that symmetry detection thresholds were higher for static patterns compared to moving patterns, but were comparable for all moving conditions. Motion coherence thresholds were higher for symmetrical than horizontal motion. Direction discrimination thresholds were higher for outwards motion than inwards, left or right motion. Motion does contribute to mirror-symmetry perception, by reducing thresholds, although this is not influenced by motion direction.

Funding: The Wellcome Trust Research Grant

[3P132] Dynamically adjusted surround contrast enhances boundary detection

Arash Akbarinia and Alejandro Parraga

Centre de Visió per Computador, Universitat Autònoma de Barcelona, Catalonia

It has been shown that edges contribute significantly to visual perception and object recognition. In the visual cortex, orientation-selective receptive fields capture intensity discontinuities at a local level (Hubel & Wiesel, 1962), and neurophysiological studies reveal that the output of a receptive field is strongly influenced by its surrounding regions. In this work, we propose a boundary detection model based on the first derivative of the Gaussian kernel, resembling the double-opponent cells in V1 known to respond to colour edges (Shapley & Hawken, 2011). We account for four different types of surround stimulation (Loffler, 2008): (i) full, with a broad isotropic Gaussian; (ii) far, through different image scales weighted according to distance; and (iii) iso- and (iv) orthogonal-orientation, via orientation-specific long, narrow Gaussians. We dynamically adjust the surround weight of facilitation or inhibition according to the contrast of the receptive field. These signals are filtered at V2 with a centre-surround kernel orthogonal to the V1 orientations (Poirier & Wilson, 2006). Furthermore, we introduce a feedback connection from higher-level areas (V4) to lower ones (V1). Our preliminary results on two benchmark datasets show a large improvement over state-of-the-art (non-learning) computational algorithms.
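
The V1 front end of such a model (oriented first-derivative-of-Gaussian filtering) is compact enough to sketch; the contrast-dependent surround, the V2 centre-surround stage, and the V4 feedback described above are omitted. A sketch under those simplifications in Python, not the authors' implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def oriented_edge_response(img, sigma=2.0, n_orient=8):
        # First derivative of Gaussian along y (axis 0) and x (axis 1).
        gy = gaussian_filter(img, sigma, order=[1, 0])
        gx = gaussian_filter(img, sigma, order=[0, 1])
        # The first Gaussian derivative is steerable: the response at angle
        # theta is a cosine/sine combination of the two basis filters.
        thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
        resp = [np.abs(np.cos(t) * gx + np.sin(t) * gy) for t in thetas]
        return np.max(resp, axis=0)  # strongest response across orientations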

[3P133] From grouping to coupling: A new perceptual organization beyond Gestalt grouping

Katia Deiana and Baingio Pinna

Department of Humanities and Social Sciences, University of Sassari, Italy

In this work, perceptual organization was studied through new conditions that cannot be explained in terms of classical grouping principles. Perceptual grouping is the process by which the visual system builds integrated elements on the basis of the maximal homogeneity among the components of the stimulus pattern. Using experimental phenomenology, our results demonstrated the inconsistency of organization by grouping and, more particularly, of the similarity principle. Instead, they suggest a unique role for dissimilarity among elements, which behaves like an accent or a visual emphasis within a whole. The accentuation principle is here considered as imparting a directional structure to the elements and to the whole object, thereby creating new phenomena. The salience of the resulting phenomena reveals the primacy of dissimilarity over similarity and shows that it belongs to a further perceptual organization dynamic that we call “coupling”.

[3P134] Probing Attentional Deployment to Foreground and Background regions

Adel Ferrari and Marco Bertamini

School of Psychology, University of Liverpool

Attention is allocated to locations in space. Previous studies (e.g., Nelson & Palmer, 2007) reported a foreground advantage; however, how this depends on figure-ground organisation and boundary ownership is not clear. Observers were shown a scene of rectangular surfaces using Random Dot Stereograms (RDS). There was a central (target) rectangle and four surrounding rectangles. In one condition (Foreground) the surrounding rectangles were behind the target; in another condition (Background) they were in front of it. Figure-ground stratification was given solely by the disparity of the surrounding regions, not of the target, and was always task-irrelevant. Parameters were adjusted for each individual using an adaptive procedure. Observers performed four tasks: (a) judging whether the target surface was horizontal/vertical (Aspect-Ratio), (b) judging which direction a Landolt-C gap was facing (Acuity), (c) judging whether a darker patch of the target surface was positioned in the upper/lower half of the surface (Displacement), and (d) judging whether the surround was closer/farther than the target (Control task). Only in the Aspect-Ratio task (a) was there a figural advantage in speed and accuracy. We take this as evidence that contour assignment is critical for generating a foreground advantage.

Funding: ECVP2015

[3P135] Perception of the Kanizsa illusion by pigeons when using different inducers

Tomokazu Ushitani and Shiori Mochizuki

Faculty of Letters, Chiba University

One problem in investigating animal perception of illusory modal figures, such as the Kanizsa illusion, is that results often cannot be interpreted unambiguously in terms of discriminative cues: the animal might discriminate specific local features formed by the real contours of the inducers rather than the illusory contours that are assumed to be perceived. The current study therefore examined whether pigeons perceive the Kanizsa illusion, using a variety of inducers to prevent the birds from relying on specific features. Specifically, we trained pigeons to search for an illusory triangle among illusory squares, or vice versa. Both figures were formed on 15 textures consisting of distributions of many small figures. Pigeons successfully learned to search for the target, suggesting that they perceived the illusion. To rule out the possibility that they had merely rote-learned numerous cues, we introduced five new textures as inducers. The pigeons' search behavior transferred to these novel inducers without any difficulty, strongly supporting the perception of illusory figures by pigeons. We discuss how these results contribute to studies of the Kanizsa illusion in humans.

[3P136] Depth perception affects figure-ground organization by symmetry under inattention

Einat Rashal1, Ruth Kimchi2 and Johan Wagemans1

1Brain and Cognition, KU Leuven, Belgium

2University of Haifa Israel

Figure-ground organizations entail depth relations: the figure is what lies in front of the ground, and the ground consequently continues behind the figure. Symmetrical regions are frequently perceived as figures. However, symmetry is considered a relatively weak figural cue, since it has been shown to be overpowered by other cues when in competition. In this study we tested whether symmetry can be an effective figural cue when attention is not available. Specifically, we examined the relationship between symmetry and depth information through shading, in the process of figure-ground organization, using an inattention paradigm. The results showed that when symmetrical and asymmetrical areas were defined by line borders, figure-ground organization could not be achieved under inattention. However, when depth by shading was added, figure-ground organization was accomplished under inattention when symmetry and shading were compatible (i.e., both leading to the same figure-ground organization), but not when they were incompatible (i.e., both factors leading to competing figure-ground organizations). These results suggest that figure-ground organization due to symmetry can be achieved without focused attention when depth information is available.

[3P137] Measuring Plaid-Selective Responses Across Contrast Using the Intermodulation Response

Darren G Cunningham and Jonathan Peirce

School of Psychology, University of Nottingham, UK

Relatively little is known about the signal combinations carried out by mid-level neural mechanisms to encode conjunctions of low-level visual features. We studied these using frequency-tagging in steady-state EEG recordings. This allows us to measure responses to the components as well as the “intermodulation” responses, which indicate nonlinearities at or after the point of signal combination, providing a rich dataset with which to examine cross-orientation suppression and other nonlinearities, such as super-additive summation. Two grating components (1 cpd and 3 cpd) were orthogonally combined to form spatial-frequency-matched (‘coherent’) and non-matched (‘non-coherent’) plaids. In particular, we explored the contrasts at which nonlinear responses occurred. At the fundamental component frequencies, significant responses were found from Michelson contrasts of 0.02–0.08 upwards. When the grating components formed a non-coherent plaid, there was no significant intermodulation response at any contrast. For coherent plaids, however, there was a significant intermodulation response at a Michelson contrast of 0.32, but not at 0.16 and below. This difference between coherent and non-coherent plaids did not appear to be accounted for by cross-orientation suppression, as component suppression was greater in the non-coherent plaid condition, suggesting that an additional selective nonlinearity was occurring for the coherent combination.

Funding: EPSRC
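
Frequency tagging makes the signature of nonlinear combination explicit: if the two gratings flicker at frequencies f1 and f2, a response at |m*f1 + n*f2| with both m and n nonzero can only arise after the two signals have been combined nonlinearly. A minimal sketch of locating such components in a spectrum, in Python; the tagging frequencies are invented, since the abstract does not state them:

    import numpy as np

    f1, f2 = 5.0, 7.0   # hypothetical component tagging frequencies (Hz)

    # Low-order intermodulation frequencies |m*f1 + n*f2|, m and n nonzero.
    im_freqs = sorted({abs(m * f1 + n * f2)
                       for m in range(-2, 3) for n in range(-2, 3)
                       if m != 0 and n != 0})
    print(im_freqs)     # includes 2.0 (f2 - f1) and 12.0 (f1 + f2)

    def amplitude_at(x, freq, fs):
        # Amplitude of the Fourier component nearest the requested frequency.
        spec = 2.0 * np.abs(np.fft.rfft(x)) / len(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        return spec[np.argmin(np.abs(freqs - freq))]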

[3P138] Different mechanisms mediating interpolation of illusory and partly-occluded contours

Bat-Sheva Hadad

Edmond J. Safra Brain Research Center, University of Haifa

Continuous objects often project disparate fragments onto the retina, yet in most cases humans perceive continuous and coherent objects. A set of experiments was carried out to determine the spatial factors limiting interpolation, their development, and whether the same process underlies interpolation in different cases of fragmentation. Experiment 1 examined the effects of contour geometry, specifically scale-dependent (retinal size) and scale-independent (support ratio, SR) factors. For both illusory and occluded contours, interpolation was affected more by SR than by size; however, illusory contours were interpolated less precisely and were affected more by SR. Experiment 2 traced the development of interpolation, demonstrating a general improvement with age, with the two types of contours equally affected by spatial constraints during childhood. However, while interpolation of occluded contours became more precise with age and less dependent on SR, illusory contours improved less and remained more tied to SR by adulthood. Experiment 3 presented the two parts of each display to the same eye or to different eyes. Superior performance for monocular over binocular presentation was found mainly for illusory contours, indicating the involvement of relatively low, prestriate portions of the visual system. Consistent with their different appearances, different mechanisms underlie the interpolation of illusory and occluded contours.

Funding: Israel Science Foundation (ISF) #967/14

[3P139] The time compression induced by visual masking

Riku Asaoka and Jiro Gyoba

Department of Psychology, Graduate School of Arts and Letters, Tohoku University, Japan

The present study examined the effect of visual masking on the perceived duration of visual stimuli using a time reproduction task. Black-and-white checkerboards were presented for 50 ms as mask stimuli. A black unfilled circle or triangle was presented for 300, 500, 700, or 900 ms as the target stimulus. Both types of stimuli were presented at the center of the display on a gray background. Participants were asked to reproduce the perceived duration of the target stimulus by pressing a key. In Experiment 1, the checkerboards were presented before and after the target stimuli at several inter-stimulus intervals. The reproduced duration was shorter in the 0 ms inter-stimulus interval condition than in the target-only condition, indicating that visual masking compressed the perceived duration. Experiment 2 tested the effects of a forward or backward mask alone on the reproduced duration; time compression was not observed in either the forward or the backward masking condition. Experiment 3 demonstrated that time compression did not occur when the mask and target appeared in different positions. These results indicate that spatial and temporal factors modulate the effect of visual masking on visual time perception.

[3P140] Emotions evoked by viewing pictures may affect perceived duration and temporal resolution of visual processing

Makoto Ichikawa and Misa Kobayashi

Department of Psychology, Chiba University

We investigated how impressions evoked by viewing a picture affect the temporal resolution of visual processing and the perceived duration of the picture. In Experiment 1, as an index of temporal resolution, we measured the minimum noticeable duration of a monochrome picture presented after a color picture, using the method of constant stimuli. In Experiment 2, we measured the perceived duration of the picture presentation. We found that the minimum duration at which observers could notice the monochrome image was shorter when viewing dangerous pictures than when viewing safe pictures. We also found that observers overestimated the duration of the picture presentation when viewing dangerous pictures. There was no significant correlation between the results of the two experiments. This suggests that the basis for the improvement of temporal resolution in visual processing differs from that for the lengthening of perceived duration. The present results suggest that enhancement of temporal resolution requires strong emotional arousal, whereas lengthening of perceived duration is driven not only by emotional arousal but also by emotional valence and by salient impressions on the dangerous and unpleasant dimensions.

Funding: JSPS Grants-in-Aid for Scientific Research (#25285197)

[3P141] The effects of spatial attention on temporal integration

Ilanit Hochmitz1, Marc M. Lauffs2, Michael H. Herzog2 and Yaffa Yeshurun1

1Psychology, University of Haifa

2EPFL Lausanne Switzerland

Feature fusion reflects temporal integration. Previous studies mostly employed foveal presentation with no attention manipulation. Here we examined the effects of sustained spatial attention on temporal integration using feature fusion with peripheral presentation. We used a typical feature-fusion display: a vernier and an anti-vernier (a vernier with offset in the direction opposite to the first) were presented in rapid succession in one of two possible locations, at 2° of eccentricity. In the attended condition, endogenous attention was manipulated by holding the location of the stimuli constant for the whole block (i.e., the stimuli were always presented to the right of fixation), so there was no spatial uncertainty. In the unattended condition, the stimuli could appear to the right or left of fixation with equal probability, generating spatial uncertainty. We found considerable feature fusion in the attended condition, showing that feature fusion can also occur with peripheral presentation. However, no feature fusion was found without attention (i.e., when there was uncertainty about the stimulus location), suggesting that spatial attention improves temporal integration. We are currently conducting similar experiments using different attentional cues to manipulate transient attention.

Elena A Parozzi1, Luca Tommasi2 and Rossana Actis-Grosso1

1Department of Psychology, University of Milano-Bicocca

2University of Chieti-Pescara

Cuts between different focal lengths (such as transitions from long shot to close-up) are widely used in video editing. Whether and how cuts alter the perception of temporal continuity remains an open question. To investigate this issue, two experiments were carried out with animations representing Michotte-type launching and entraining effects as prototypical events: two colliding squares underwent a sudden change in size (as from a close-up to a long shot or vice versa, with 1.5 and 3 as the two possible Degrees of Magnification, DM); the temporal continuity of the displayed event could also be altered, from a flashback to a flash-forward. In Experiment 1, participants (n = 15) rated the perceived temporal continuity of the displayed events; Experiment 2 was set up to resemble a movie theatre, with small groups of participants (3 groups of 5 persons each) watching the animations projected onto a white wall 3 m from the audience. Interestingly, the results show, among other things, an effect of DM (p < 0.001), indicating a general preference for cuts from long shot to close-up and, more generally, that a temporally discontinuous edit can be made to appear continuous.

[3P143] Unpacking the prediction-motion literature

Alexis D Makin

Psychological Sciences, University of Liverpool

In a Prediction Motion (PM) task, participants observe a moving target disappear behind an occluder, and press when it reaches a goal. I have attempted to review and consolidate the fragmented PM literature by elucidating four theoretical dichotomies, and answering them with new data. Dichotomy 1) Do people track the occluded target with spatial attention (tracking strategy), or estimate time-to-contact before occlusion, then delay a motor response (clocking strategy)? Answer: Tracking and clocking are both viable strategies. Dichotomy 2) Is PM mediated by mental imagery, or the oculomotor system? Answer: Neither. Dichotomy 3) People can do PM tasks in physical space and feature space. They may thus update mental representations, both without sensory input, and at the right speed. Do we have a common rate controller in the brain, which can be functionally coupled to different sensory maps, or does each map have its own local rate control circuitry? Answer: common rate controller. Dichotomy 4) Do people run a rate controlled simulation of the occluded process, or do they use a clocking strategy? Answer: common rate controller. This synthesis helps unpack the PM literature, but also offers a new way of understanding fundamental mechanisms involved in controlling thought and action.

Funding: Leverhulme Trust Early Career Fellowship (ECF 721-2012)

[3P144] Effect of stimulus placement and presentation on duration discrimination

Charlotte Harrison1, Nicola Binetti1, Isabelle Mareschal2 and Alan Johnston3

1Department of Experimental Psychology, University College London

2Queen Mary University of London

3University of Nottingham

Discrimination performance in psychophysical tasks is better when the reference stimulus is presented in the first interval of two-interval trials (Nachmias, 2006). We investigate the spatial dependence of this effect. In the current experiment, two circular sinusoidal gratings (reference and test) were shown sequentially and participants judged which was presented for a longer duration. The reference lasted 600 ms, and the test had one of seven values (300–900 ms). Stimulus-order was randomised between trials. The second stimulus could appear in the same retinotopic, spatiotopic, or ‘combined’ region of the screen. There were control conditions for each trial in which equivalent eye movements were made. While there were no observed effects of visuospatial memory type, stimulus-order effects were stronger in the ‘spatiotopic control’, compared to other control conditions. In this condition, the test duration was overestimated by a significantly larger amount when the reference stimulus was presented second. Participants were also less sensitive to differences in duration overall when the reference stimulus was second. While the findings suggest that stimulus-order biases duration discrimination, there is less evidence for a spatial component to the effect.

Funding: Leverhulme Trust

[3P145] The effect of awareness of temporal lag on motor-visual temporal recalibration varies with judgment tasks

Masaki Tsujita, Koichiro Yamada and Makoto Ichikawa

Faculty of Letters, Chiba University

Subjective judgment of the temporal relationship between a voluntary action and a perceived visual event is adaptively recalibrated after repeated exposure to a temporal lag between the action and its visual feedback. A recent study demonstrated that motor–visual temporal recalibration in temporal order judgment depends upon awareness of the motor–visual temporal lag during adaptation (Tsujita & Ichikawa, 2016, Frontiers in Integrative Neuroscience). In this study, we examined whether awareness of the temporal lag is required for motor–visual temporal recalibration in simultaneity judgment. We allocated observers to one of two conditions. In the unaware-of-lag condition, we first introduced a slight temporal lag and gradually increased it during adaptation. In the aware-of-lag condition, we introduced a substantial temporal lag throughout adaptation and informed observers of its introduction before adaptation. We found significant recalibration in both conditions. These results suggest that motor–visual temporal recalibration in simultaneity judgment is independent of awareness of the motor–visual temporal lag and is based upon automatic, unaware processing. We will discuss differences in the basis of motor–visual temporal recalibration between temporal order judgment and simultaneity judgment.

[3P146] Illusion and reality: the case of apparent duration

David Rose

School of Psychology, University of Surrey

What are ‘illusions’ and how ubiquitous are they? A common definition states that they are deviations of our percepts from a veridical representation of physical reality. But to know whether an illusion is occurring we have to have access to that reality independently of our percepts, which phenomenologists regard as impossible. Here, I show how realist science defends its belief in objective reality, and I give one example of a clear illusion: the ‘stopped-clock’ effect. In our experiments, the first flash of a series of four (each of physically equal duration, 667 ms) appears longer than the second, whereas the second and third appear, veridically, to have the same duration. By varying the physical duration of the first flash it can be made to appear the same duration as the second, which occurs when their ‘real’ durations are in the ratio 2:3 (so a first flash of about 445 ms appears equal to a second flash of 667 ms; Rose & Summers, Perception 24, 1177-1187, 1995). Several different controls were used to verify that the physical durations were as specified. The general relevance of such manipulations will be explained, illustrating how science can establish with high plausibility the objective reality of any stimulus, by searching for convergence between the results obtained with highly diverse methods.

[31S101] The Continuity Field (CF): a mechanism for perceptual stability via serial dependence

David Whitney, Wesley Chaney, Jason Fischer, Alina Liberman, Mauro Manassi and Ye Xia

Psychology, UC Berkeley, USA

A critical function of vision is to stabilize perception, so objects look the same from moment to moment. This is a challenge because visual input is noisy and discontinuous. Though a classic question, the mechanism that links the perceived identity and properties of an object from moment to moment is unknown. We recently discovered the Continuity Field (CF), a mechanism of object constancy built on serial dependence; an object's present appearance is captured by what was perceived over the last several seconds. Serial dependence occurs in the perception of orientation, facial expression and identity, biological motion, object attractiveness, and perceived object position. This perceptual attraction extends over several seconds, and displays clear tuning to the difference (in orientation, facial expression, etc) between the sequential stimuli. It is spatially specific and selective to the attended object within a scene, even if that object passes behind an occluder. Perceptual serial dependence cannot be explained by effects of priming, known hysteresis effects, or visual short-term memory. Our results reveal a novel mechanism—the Continuity Field—a spatiotemporally tuned operator that helps maintain stable object and scene representations in the face of a dynamic and noisy environment.

Funding: NIH

[31S102] Humans exploit serial dependencies optimally

Guido Marco Cicchini1 and David Burr1,2

1Institute of Neuroscience, National Research Council

2Department of Neuroscience, Psychology, Pharmacology and Child Health (NEUROFARBA), University of Florence, Florence, Italy

Natural scenes contain strong serial correlations, as the world tends to be stable from moment to moment. Similar stimuli often require similar behavioral responses, so it is highly likely that the brain has developed strategies to leverage these regularities. On the other hand, we often need to be sensitive to change, for which opposite strategies (adaptation) are more appropriate. In this talk we will review some of our recent research on these issues, showing that: 1 - Strong serial dependencies occur for many stimulus attributes, including numerosity, leading to an apparently logarithmic encoding of number. 2 - Serial dependencies occur at both the perceptual and response stages. The weight given to current or past information depends on the reliability of both stimuli, as well as their similarity, and is well modeled by a Kalman filter that predicts the current state of the world depending on the quality of information and the occurrence of novel events. 3 - For attributes that change slowly over time (such as gender), serial correlations are positive and strong; but for changeable features, like expression, where the change itself is important, negative after-effects occur. Both processes operate at the same time, in the same stimuli.

Funding: ERC-FP7 ECSPLAIN (Grant No. 338866), Italian Ministry of Research FIRB (RBFR1332DJ)
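
The reliability weighting in point 2 is the standard scalar Kalman-filter update: the current estimate is a precision-weighted mix of the previous estimate and the new observation, with a process-noise term controlling how quickly past information is discounted. A minimal sketch in Python; parameter names are illustrative, not the authors' model code:

    def kalman_update(prev_mean, prev_var, obs, obs_var, process_var=0.0):
        # Predict: allow the state of the world to drift between observations.
        pred_var = prev_var + process_var
        # Update: weight prediction and observation by their reliabilities.
        gain = pred_var / (pred_var + obs_var)      # Kalman gain in [0, 1]
        mean = prev_mean + gain * (obs - prev_mean)
        var = (1.0 - gain) * pred_var
        return mean, var

    # A large process_var (e.g. after a detected novel event) drives the gain
    # toward 1, so the estimate follows the current stimulus and the serial
    # dependence on past stimuli vanishes.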

[31S103] Choice-induced aftereffects in visual perception

Alan A Stocker

Department of Psychology, University of Pennsylvania

Adaptation aftereffects are striking examples of how perception depends on stimulus history. Here, I present evidence that suggests that perception also depends on the observer's interpretation of the previous stimuli (i.e. a subjective stimulus history). More specifically, I present results of some recent psychophysical experiments in which subjects had to perform sequential discrimination tasks about the relative orientations of serially presented visual stimuli. The results indicate that subjects' decisions were biased by their previous judgements in the task sequence in a way that is orthogonal to the influence of the actual stimulus sequence. These findings cannot be explained by current models of perceptual adaptation. However, the data are well described with a probabilistic observer model that is constrained to remain self-consistent with regard to its interpretation of sensory evidence over time. I discuss the degree to which this novel observer model can account for other aspects of perceptual and cognitive behavior, and how it might help us to better understand the intricate computational principles underlying adaptive sensory processing in the brain.

Funding: NSF

[31S104] Sequential integration in color identification

Qasim Zaidi and Zhehao Huang

Graduate Center for Vision Research, State University of New York

Sequential dependencies in visual perception have been attributed to observers integrating current input with past information, based on the prior assumption that constancy of object properties is generic in the physical world, so that integration can discount environmental transitions. We show that temporal color integration is an inherent aspect of estimating surface colors when illumination change is easily discernable, but controls are needed for adaptation and induction effects. When the illumination is obviously constant, but the surface color is changing in a similar fashion as above, integration is not needed. The results show a lack of integration and adaptation, but indicate that color memory is accurate for these tasks. A second control uses an illumination change condition that penalizes temporal integration and controls for induction effects, but observers still use temporal integration to estimate surface colors. We also compare color integration for directional shifts between natural lights (Sunlight-Skylight), between frequently encountered artificial lights (Tungsten-Fluorescent), and between unusual artificial lights (Kodak Red CC50R – Cyan CC50C). There are differences between magnitudes of temporal integration depending on the nature of the illumination shift. These may be related to the familiarity of the final illuminant’s color, suggesting complexities in the control of sequential integration.

Funding: EY007556 & EY013312

[31S105] Varieties of serial dependency in perceptual state variables

Mark Wexler

Laboratoire Psychologie de la Perception CNRS, Université Paris Descartes

We have recently shown that the perception of certain families of visual stimuli is governed by strong biases that differ vastly from one observer to the next. For example, with optic flow that can be perceived as a depth-slanted surface having one of two opposite tilts (because of depth reversals), perceptual decisions for different tilts are not independent: observers nearly always perceive tilts lying within 90 degrees of a preferred tilt direction, but the preferred tilt varies widely among observers (demos: goo.gl/4cukhq and goo.gl/YaytLF). These bias directions, which we call perceptual state variables, have population distributions that are non-uniform and that are different and independent for different stimulus families. The state variables vary not only between observers but also over time. When measured in the same observer in long time series over minutes, hours, or months, the variables exhibit a variety of dynamical behaviors. In nearly all observers, the time series show autocorrelations, demonstrating a non-trivial form of perceptual memory. Part of their dynamics can be described by a simple random-walk model. Comparison of series sampled at different time scales suggests the existence of an additional internal variable, analogous to a temperature: more frequent presentations lead to higher volatility in the state variables.
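
The random-walk description is easy to make concrete: a state variable that takes independent Gaussian steps from presentation to presentation shows exactly the slowly decaying autocorrelation described above, with the step size playing the role of the temperature-like volatility. A minimal sketch in Python, treating the tilt preference as a linear rather than circular variable for simplicity:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_walk(n, step_sd):
        # State variable taking an independent Gaussian step per presentation;
        # step_sd plays the role of the temperature-like volatility parameter.
        return np.cumsum(rng.normal(0.0, step_sd, n))

    def autocorr(x, max_lag):
        x = x - x.mean()
        return [np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag + 1)]

    series = random_walk(1000, step_sd=5.0)
    print(autocorr(series, 5))   # large, slowly decaying autocorrelations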

[31S106] Change-related weighting of statistical information in visual sequential decision making

József Fiser1, József Arató1, Abbas Khani2 and Gregor Rainer2

1Department of Cognitive Science, Central European University, Budapest

2Department of Physiology/Medicine, University of Fribourg, Fribourg, Switzerland

There is a complex interaction between the short- and long-term statistics of earlier percepts in modulating perceptual decisions, yet this interaction is not well understood. We conducted experiments in which we independently manipulated the appearance probabilities (APs) of abstract shapes over short and long time ranges, and also tested the effect of dynamically changing these probabilities. We found that, instead of simply being primed by earlier APs, subjects made decisions that reduced the discrepancy between recent and earlier APs. Paradoxically, this leads them to favor the less frequent recent event if it was more frequent in the distant past. Moreover, this compensatory mechanism did not take effect when the difference in APs between the distant past and recent times was introduced gradually rather than abruptly. This produces a paradoxical, false and lasting negative compensation under uniform APs after a momentary abrupt shift followed by a gradual return. We replicated our key human finding with behaving rats, demonstrating that these effects do not rely on explicit reasoning. Thus, instead of simply following gradually collected event statistics, perceptual decision making is influenced by a complex process in which statistics are weighted by significance according to detected changes in the environment.

[31T201] Differential attenuation of auditory and visual evoked potentials for sensations generated by hand and eye movements

Nathan Mifsud, Tom Beesley and Thomas Whitford

School of Psychology, UNSW Australia

Our ability to distinguish sensations caused by our own actions from those caused by the external world is reflected at a neurophysiological level by the reduction of event-related potentials (ERPs) to self-initiated stimuli. These ERP differences have been taken as evidence for a predictive model in which motor command copies suppress sensory representations of incoming stimuli, but they could also be attributed to learned associations. We tested the cross-modal generalizability of the predictive account by comparing auditory and visual responses in the same cohort, and tested the associative account with a novel task in which eye motor output produced auditory sensory input, an action-sensation contingency which no participant could have previously experienced. We measured the electroencephalogram (EEG) of 33 participants as they produced auditory (pure tone) and visual (unstructured flash) stimuli with either button-presses or volitional saccades. We found that attenuation of self-initiated sensations, indexed by auditory and visual N1-components, significantly differed by sensory domain and motor area, and was strongest for natural associations between action and sensation (i.e., hand-auditory and eye-visual). Our results suggest that predictive and associative mechanisms interact to dampen self-initiated stimuli, serving to facilitate self-awareness and efficient sensory processing.

[31T202] Action video game play improves eye-hand coordination in visuomotor control

Li Li and Rongrong Chen

Department of Psychology, The University of Hong Kong

We recently found that action video game play improves visuomotor control in driving. Here we examined whether action video game play improves eye-hand coordination in visuomotor control. We tested 13 action gamers and 13 gender-matched non-gamers on a classic closed-loop visuomotor control task consisting of two conditions. In the eye-hand condition, the display (40°H × 30°V) presented a cyan Gaussian target (σ = 0.6°) that appeared to move randomly along the horizontal axis. Participants tracked the target motion with their eyes while using their dominant hand to move a high-precision mouse to vertically align a red Gaussian cursor (8° below) with the cyan target. In the eye-alone condition, the display replayed the target and cursor positions recorded in the eye-hand condition and participants were instructed only to track the target with their eyes. Action gamers and non-gamers did not differ in their eye-tracking performance in the eye-alone condition. However, in the eye-hand condition, action gamers showed better tracking precision, larger response amplitudes, and shorter response lags for both eye and hand tracking than did non-gamers. Our findings provide the first empirical evidence that action video game play improves eye-hand coordination in visuomotor control tasks such as driving.

Funding: HKU 7460/13H & NYU SH Seed Research Funding

[31T203] Depth cues for allocentric coding of objects for reaching in virtual reality

Mathias Klinghammer1, Immo Schütz2, Gunnar Blohm3 and Katja Fiehler1

1Experimental Psychology, Justus-Liebig-University Giessen

2Technische Universität Chemnitz, Chemnitz, Germany

3Queen's University, Kingston, Canada

Previous research has demonstrated that humans use allocentric information when reaching to visual targets, but most studies are limited to 2D space. In two virtual reality experiments, we investigated the use of allocentric information for reaching in depth and the role of different depth cues (vergence/retinal disparity, object size) for coding object locations in 3D space. We presented participants with a scene of virtual objects on a table, located at different distances from the observer, which served as reach targets or allocentric cues. After visual exploration and a short delay the scene reappeared, but with one object (the reach target) missing. In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing it. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, indicating the use of allocentric information. This deviation was independent of observer-object distance and dependent on object size, suggesting that both vergence/retinal disparity and object size provide reliable depth cues when coding reach targets in an allocentric reference frame in reachable 3D space.

Funding: This research was funded by the International Research Training Group (IRTG) 1901 “The Brain in Action” by the German Research Foundation (DFG)

[31T204] Different usage of visual information for cursor and target in a target-tracking task

Loes C van Dam1, Dan Li2 and Marc Ernst3

1Department of Psychology, University of Essex

2Istituto Italiano di Tecnologia

3Ulm University

When tracking a moving target with a cursor, we need to know the target's current position and predict where it is going next. This means the visual system needs to collect target position and movement information over time. The same could be said for the cursor, even though we have direct control over its movement. Thus, we hypothesized that visual target and cursor information would both be integrated over time. Two tasks were performed. In the target-judgement task, a target moved horizontally for variable durations. Participants judged whether the last target position was left or right of a comparison stimulus shown after target disappearance. In the cursor-judgement task, the target was shown only briefly. Participants followed the target using the cursor and kept moving at the same speed after target disappearance. In this case, participants judged the last cursor position. Results show that for target judgements, participants integrated position information over time, leading to a perceived lag of the target and increased perceptual precision over time. For cursor judgements, no lag or increase in perceptual precision was observed, suggesting that only the current cursor position was taken into account. We conclude that visual information is processed in fundamentally different ways for target and cursor.

Funding: This work was carried out at Bielefeld University and funded by the DFG Cluster of Excellence: Cognitive Interaction Technology ‘CITEC’ (EXC 277)

[31T205] Relative timing of visual and haptic information determines the size-weight illusion

Myrthe A. Plaisier, Irene Kuling, Eli Brenner and Jeroen B.J. Smeets

Human Movement Sciences, Vrije Universiteit, Amsterdam

Although weight is perceived through the haptic modality, it can be influenced by visual information. This is demonstrated by the size-weight illusion: the effect that small objects feel heavier than larger ones of the same mass. It has been suggested that this illusion is caused by a mismatch between the expected and actual masses of objects. If so, we predict that size information needs to be available before lifting for the size-weight illusion to occur. We investigated this in an experiment in which size could only be perceived through vision. In each trial, we made vision available for a 200 ms interval starting at various times, ranging from 200 ms prior to lift onset until the moment maximum lifting height was reached. As predicted, the size-weight illusion was strongly reduced when visual information became available later, but this decrease only occurred when vision became available about 300 ms or more after lift-off. This shows that the relative timing of visual size and haptic weight information is crucial, but that size information does not need to be available prior to the onset of the lifting action.

Funding: NWO VENI grant for MP (MaGW 451-12-040)

[31T206] A shared numerical representation for action and perception

Irene Togoli, Giovanni Anobile, Roberto Arrighi and David Burr

NEUROFARBA, University of Florence

Much evidence has accumulated to suggest that in many animals, including young human infants, there exists a neural mechanism dedicated to estimating approximate quantity: a sense of number. Most research has concentrated on spatial arrays of objects, but there is also good evidence that temporal sequences of number are encoded by similar mechanisms (Arrighi et al., Proc. Roy. Soc., 2014). Processing of numerical information is also fundamental for the motor system to program sequences of self-movement. Here we use an adaptation technique to show a clear interaction between the number of self-produced actions and the perceived numerosity of subsequent visual stimuli, both spatial arrays and temporal sequences. A short period of rapid finger-tapping (without sensory feedback) caused subjects to under-estimate the number of visual stimuli presented around the tapping region; and a period of slow tapping caused over-estimation. The distortions occurred both for stimuli presented sequentially, and for simultaneous clouds of dots. Our results sit well with neurophysiological studies showing links between number perception and action. We extend these findings by demonstrating that vision and action share mechanisms that encode numbers, and that the ‘number sense’ serves to estimate both self-generated and external events.

[31T207] Revealing nudging effects of floor patterns on walking trajectories in the real world

Ute Leonards, Hazel Doughty and Dima Damen

Experimental Psychology, University of Bristol, UK

Recently, we showed that floor patterns such as tiling orientation can influence our sense of walking "straight ahead" in laboratory settings (Leonards et al., 2015). Here, we investigated whether similar pattern-induced lateral veering occurs in the real world. Using an automatic tracking algorithm we developed to extract walking trajectories from CCTV footage and rescale them to the given environment, we recorded the walking trajectories of passers-by walking down a main corridor in a large university building. Seven different floor patterns were presented twice each in random order, for 2-3 working days at a time, covering a section of 14.2 x 2.8 m. Patterns consisted of stripes of different orientations (horizontal, diagonal left or diagonal right) and spatial frequencies (frequent, infrequent). The corridor without stripes acted as control. Both the direction and the extent of lateral veering depended directly on the orientation and spatial frequency of the patterns. Oblique patterns, as compared to horizontal patterns and the no-pattern control, induced veering of up to 1 m over the analysed travel distance of 12 meters. The results will be discussed with regard to the possible impact of patterns in man-made environments on everyday walking, and to new technologies that allow us to transfer lab-based experimental settings into the real world.

[31T208] How your actions are coupled with mine: Adaptation aftereffects indicate shared representation of complementary actions

Dong-Seon Chang, Leonid Fedorov, Martin Giese, Heinrich Bülthoff and Stephan de la Rosa

Dept. Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tuebingen

Previous research has shown that humans share numerous cognitive processes when they interact, such as representations of tasks, goals, intentions, and space. However, little is known about the perceptual representation of complementary actions, in particular actions in an interaction that are often observed together. We examined behavioral correlates of potentially shared neural representations for human actions that are visually dissimilar but contingent on accumulated previous observations in spatiotemporal proximity. Specifically, we measured visual adaptation aftereffects in 25 participants for perceiving the actions Throwing and Giving after prolonged exposure to the actions Catching and Taking, and vice versa, in a completely crossed design. We found significant adaptation aftereffects for all tested actions (p < 0.001), including the complementary actions: the overall adaptation aftereffect for the disambiguation of Catching from Taking was significant after prolonged exposure (adaptation) to Throwing and Giving (p < 0.001), as was that for the disambiguation of Throwing from Giving when Catching and Taking were used as adaptors (p = 0.002). These results support the hypothesis that the processes involved in the recognition of complementary actions might employ a shared neural representation.

Funding: BMBF, FKZ: 01GQ1002A, ABC PITN-GA-011-290011, AMARSi-FP7-248311, CogIMon H2020 ICT-644727; HBP FP7-604102; Koroibot FP7-611909, DFG GI 305/4-1

[32T101] Serial dependence of multisensory relative timing judgements is not sensory adaptation

Warrick Roseboom, Darren Rhodes and Anil Seth

Sackler Centre for Consciousness Science, School of Engineering and Informatics, University of Sussex, UK

Recent sensory experience affects subsequent experience. For example, following repeated exposure to audio events leading visual events by ∼200 ms, smaller audio-leading-visual offsets are more likely to be reported as synchronous, consistent with classic negative sensory aftereffects. Recent studies have reported that a single such exposure can alter audiovisual synchrony judgements (SJ), producing aftereffects similar to extended exposure. These results have been interpreted as evidence for rapid adaptation of audiovisual timing. If this were true, similar negative aftereffects for relative timing should be detectable in tasks other than SJ. We examined the influence of a single audiovisual relative-timing presentation on subsequent judgements (serial dependence of relative timing) for SJ, temporal order judgements (TOJ), and magnitude estimation (ME). We found serial dependence for SJ consistent with previous results, producing apparently negative aftereffects for subjective synchrony, but the opposite direction of dependency for apparent timing estimates derived from TOJ and ME. To reconcile these conflicting results, we propose that serial dependence for SJ is a dependence of synchrony decision criteria, not of relative timing. This interpretation is consistent with Bayesian descriptions of the influence of recent experience on decisions, and inconsistent with serial dependence for relative timing being equivalent to sensory adaptation.

Funding: Funded by EU FET Proactive grant TIMESTORM: Mind and Time: Investigation of the Temporal Traits of Human-Machine Convergence, and the Dr. Mortimer and Dame Theresa Sackler Foundation, supporting the Sackler Centre for Consciousness Science

[32T102] What smell? Loading visual attention can induce inattentional anosmia

Sophie Forster

School of Psychology, University of Sussex

The human sense of smell can provide important information, alerting us to potential dangers (e.g. the smell of smoke) or rewards (e.g. food) in our environment. Nevertheless, people often fail to attend to olfactory stimuli. Within the visual attention literature it is well established that high perceptual task load reduces processing of task-irrelevant stimuli, leading to phenomena such as “inattentional blindness”, whereby a clearly visible stimulus is apparently not perceived when attention is engaged in a visually demanding task. The present study sought to establish whether the extent to which visual attention is loaded is also an important determinant of the ability to notice odors. Participants were exposed to an olfactory stimulus (coffee beans) while performing a computerized visual search task with either a high or low level of perceptual load. Across two experiments, the olfactory stimulus was subsequently reported by the majority of participants in the low load condition, but only approximately 25% of those in the high load condition. This study establishes the phenomenon of inattentional anosmia, an olfactory analogue of inattentional blindness. The findings demonstrate that the effects of loading visual attention extend to the olfactory sense, and have important applied implications.

[32T103] Rapid recalibration to audiovisual asynchronies occurs unconsciously

Erik Van der Burg1, David Alais2 and John Cass3

1Dept. Experimental and Applied Psychology, Vrije Universiteit Amsterdam

2University of Sydney

3University of Western Sydney

In natural scenes, audiovisual events deriving from the same source are synchronized at origin. However, from the perspective of the observer, there are likely to be significant multisensory delays due to physical and neurological differences. Fortunately, our brain appears to compensate for the resulting latency differences by rapidly adapting to asynchronous audiovisual events. Here, we examine whether rapid recalibration to asynchronous signals occurs unconsciously. On every trial, a brief tone pip and flash were presented across a range of stimulus onset asynchronies (SOAs). Participants were required to perform two tasks in alternating order. On adapter trials, participants judged the order of the audiovisual events; here, audition either led or lagged vision with a fixed SOA (150 ms). On test trials the SOA as well as the modality order varied randomly, and participants judged whether the events were synchronized or not. For test trials, we show that the point of subjective simultaneity (PSS) follows the physical rather than the perceived (reported) modality order of the preceding trial. These results suggest that rapid temporal recalibration occurs unconsciously.
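
The PSS in a synchrony-judgement task is typically read off as the peak of a curve fitted to the proportion of "synchronous" responses across SOAs. The sketch below illustrates one common approach (a Gaussian fit via SciPy); the SOA values and response proportions are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SOAs (ms; negative = audio leads) and the proportion of
# "synchronous" responses at each SOA (placeholder values).
soas = np.array([-300, -200, -100, 0, 100, 200, 300])
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.75, 0.30, 0.08])

def gaussian(soa, pss, sigma, amp):
    """Synchrony-response curve; its peak location is the PSS."""
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

(pss, sigma, amp), _ = curve_fit(gaussian, soas, p_sync, p0=(0, 100, 1))
# The shift of the fitted PSS after audio-leading vs. visual-leading
# adapter trials indexes the rapid recalibration effect.
print(f"PSS = {pss:.1f} ms")
```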

[32T104] Integration of visual and tactile information during interval timing: Implications for internal clocks

Kielan Yarrow1, Daniel Ball1 and Derek Arnold2

1Department of Psychology, City University London

2The University of Queensland

With two sources of independent information, people often do better. For example, dual depth cues (within vision) and size cues (between vision and touch) can enhance the precision of perceptual judgements. Because this enhancement most likely rests on the independence of noise affecting contributing estimates, it can speak to the underlying neurocognitive architecture. Humans may possess a single (amodal) internal clock, or multiple clocks tied to different sensory modalities. Statistical enhancement for duration would suggest the existence of multiple clocks, but evidence from visual and auditory modalities is mixed. Here, we assessed the integration of visual and tactile intervals. In a first experiment, 12 musicians and 12 non-musicians judged durations of 300 and 600 ms compared to test values that spanned these standards. Bimodal precision increased relative to unimodal conditions, but not as much as optimal integration models predict. In a second experiment, a smaller sample judged six standards ranging from 100 to 600 ms in duration. While musicians showed evidence of near optimal integration at longer durations, non-musicians did not, and there was no evidence of integration at the shortest intervals for either group. These data provide partial support for the existence of separate visual and tactile clocks.
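
Under the standard optimal (maximum-likelihood) integration model referenced here, the bimodal visual-tactile estimate should have variance sigma_vt^2 = sigma_v^2 * sigma_t^2 / (sigma_v^2 + sigma_t^2), always at or below the better unimodal variance. A minimal sketch of the prediction, with purely illustrative unimodal thresholds:

```python
import numpy as np

def mle_bimodal_sigma(sigma_v, sigma_t):
    """Predicted bimodal discrimination threshold under optimal
    (maximum-likelihood) integration of visual and tactile interval
    estimates: sigma_vt^2 = sigma_v^2 sigma_t^2 / (sigma_v^2 + sigma_t^2)."""
    return np.sqrt(sigma_v**2 * sigma_t**2 / (sigma_v**2 + sigma_t**2))

# Hypothetical unimodal JNDs (ms) for a 300 ms standard, not the study's
# data; the observed bimodal JND is compared against this prediction.
print(mle_bimodal_sigma(60.0, 80.0))  # -> 48.0 ms, below either unimodal JND
```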

[32T105] Effect of blue light on audiovisual integration

Su-Ling Yeh1, Yi-Chuan Chen2 and Li Chu1

1Psychology, National Taiwan University

2University of Oxford, UK

A subset of retinal ganglion cells expresses melanopsin, a photo-pigment with an absorption spectrum peaking at 480 nm (i.e., blue light). These cells can respond directly to light even without the classic photoreceptor rods and cones, and are named intrinsically photosensitive retinal ganglion cells (ipRGCs). Past animal research revealed that the Superior Colliculus (SC), a locus where multisensory signals are preliminarily integrated, also receives inputs from ipRGCs. We therefore aimed to evaluate the potential influence of blue-light-elicited ipRGC signals, compared with light of other colors, on human multisensory perception. We examined blue light's effect using an audiovisual simultaneity judgement task in which a flash and a beep were presented at various SOAs, and participants were asked to judge whether the visual and auditory stimuli were presented simultaneously under a blue- or red-light background. Results showed that participants' audiovisual simultaneity perception was more precise, especially in the visual-leading conditions, in the blue-light than in the red-light background. Our results suggest that ipRGCs may project to the SC (or other cortical areas involved in audiovisual integration) in humans as well. This is the first attempt to directly explore the impact of blue light on multisensory perception, shedding light on how modern technology affects human perception.

Funding: This research was supported by grants from Taiwan’s Ministry of Science and Technology to S.Y. (MOST 104-2410-H-002-61 MY3)

[32T106] Spatiotemporal dynamics of visual and auditory attention revealed by combined representational similarity analysis of EEG and fMRI

Viljami R Salmela, Emma Salo, Juha Salmi and Kimmo Alho

Institute of Behavioral Sciences, University of Helsinki

The cortical attention networks have been extensively studied with functional magnetic resonance imaging (fMRI), but the temporal dynamics of these networks are not well understood. In order to bypass the low temporal resolution of fMRI, we used multivariate pattern analysis to combine fMRI data and event-related potentials (ERPs) from electroencephalography (EEG) recorded in identical experiments, in which participants performed grating orientation and/or tone pitch discrimination tasks in multiple conditions. We varied target modality (visual or auditory), attention mode (selective or divided attention, or control), and distractor type (intra-modal, cross-modal or no distractor). Using representational similarity analysis (RSA), we parsed time-averaged fMRI pattern activity into distinct spatial maps that each corresponded, in representational structure, to a short temporal segment of the ERPs. Multidimensional scaling of the temporal profiles of cortical regions suggested eight clusters. Discriminant analysis based on the eight clusters revealed four attention components that were spatially distributed and had multiple temporal phases. The spatiotemporal attention components were related to stimulus- and distractor-triggered activation, top-down attentional control, motor responses, and shifting between brain states. The results suggest that time-averaged fMRI activation patterns may contain recoverable information from multiple time points, and demonstrate the complex spatiotemporal dynamics of visual-auditory attentional control in cortical networks.
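
As a rough illustration of the RSA logic described here, one can build a representational dissimilarity matrix (RDM) from condition-wise fMRI patterns and correlate it with RDMs computed over successive short ERP segments; the segment whose representational structure best matches a given spatial map indicates that map's temporal profile. The sketch below uses random placeholder data and illustrative dimensions, not the study's analysis code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: condition-by-voxel fMRI patterns and
# condition-by-timepoint ERP waveforms for the same 12 conditions.
rng = np.random.default_rng(0)
fmri = rng.normal(size=(12, 500))   # 12 conditions x 500 voxels
erp = rng.normal(size=(12, 300))    # 12 conditions x 300 time samples

rdm_fmri = pdist(fmri, metric="correlation")  # condition dissimilarities

# Correlate the fMRI RDM with the RDM of each short ERP segment to find
# when the EEG representation matches this spatial activation pattern.
window = 20
for start in range(0, erp.shape[1] - window, window):
    rdm_eeg = pdist(erp[:, start:start + window], metric="correlation")
    rho, _ = spearmanr(rdm_fmri, rdm_eeg)
    print(f"samples {start:3d}-{start + window:3d}: rho = {rho:+.2f}")
```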

Funding: This research was supported by the Academy of Finland (grant #260054).

[32T201] Neural model for adaptation effects in shape-selective neurons in area IT

Martin A Giese1, Pradeep Kuravi2 and Rufin Vogels2

1CIN & HIH, Department of Cognitive Neurology, University Clinic Tübingen, Germany, Section Computational Sensomotorics

2Lab. Neuro en Psychofysiologie Dept. Neuroscience KU Leuven Belgium

Neurons in higher-level visual cortex show adaptation effects, which likely influence repetition suppression in fMRI studies and the formation of high-level after-effects. A variety of theoretical explanations has been discussed that are difficult to distinguish without detailed electrophysiological data. Recent electrophysiological experiments on the adaptation of shape-selective neurons in inferotemporal cortex (area IT) provide constraints that narrow down the possible computational mechanisms. We propose a biophysically plausible neurodynamical model that reproduces these results. METHODS: Our model consists of a neural field of shape-selective neurons that is augmented by the following adaptive processes: (i) spike-rate adaptation; (ii) input fatigue adaptation, modeling adaptation in earlier hierarchy levels or afferent synapses; (iii) firing-rate fatigue, depending on the output firing rates of the neurons. RESULTS: The model reproduces the following experimental results: (i) the shape of typical PSTHs of IT neurons; (ii) the temporal decay of adaptation with the number of stimulus repetitions; (iii) the dependence of adaptation on effective and ineffective adaptor stimuli, which stimulate the neuron strongly or only moderately; (iv) the dependence of the strength of the adaptation effect on the duration of the adaptor. CONCLUSIONS: A mean-field model including several adaptive processes provides a unifying account for the observed experimental effects.
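
For intuition, the sketch below simulates a single firing-rate unit with two of the adaptive processes listed above (input fatigue and firing-rate fatigue) driven by repeated adaptor presentations; all time constants and the Euler integration scheme are illustrative choices, not the fitted model.

```python
import numpy as np

def simulate_adaptation(n_reps=8, t_on=300.0, t_off=300.0, dt=1.0):
    """Single-unit sketch of a rate model with slow input fatigue and
    firing-rate fatigue; parameters (ms) are illustrative, not fitted."""
    tau_r, tau_in, tau_out = 20.0, 2000.0, 1500.0
    r, a_in, a_out = 0.0, 0.0, 0.0
    rates = []
    for rep in range(n_reps):
        for step in range(int((t_on + t_off) / dt)):
            stim = 1.0 if step * dt < t_on else 0.0
            drive = stim * (1.0 - a_in) - a_out   # fatigued afferent drive
            r += dt / tau_r * (-r + max(drive, 0.0))
            a_in += dt / tau_in * (-a_in + stim)  # input fatigue
            a_out += dt / tau_out * (-a_out + r)  # firing-rate fatigue
            rates.append(r)
    return np.array(rates)

# Because both fatigue variables outlast a single trial, the peak
# response decays over successive repetitions, mimicking adaptation.
responses = simulate_adaptation()
```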

Funding: Supported by EC Fp7-PEOPLE-2011-ITN PITN-GA-011-290011 (ABC), FP7-ICT-2013-FET-F/604102 (HBP), FP7-ICT-2013-10/611909 (Koroibot), BMBF, FKZ: 01GQ1002A, DFG GI 305/4-1 + KA 1258/15-1.

[32T202] How the Brain Represents Statistical Properties

Shaul Hochstein

ELSC & Life Sciences Institute, Hebrew University

Intensive research has uncovered diverse dimensions of summary statistic perception, including simple and complex dimensions (from circle size and Gabor orientation to face emotion and attractiveness), the type of statistics acquired (mean, variance, range), and our ability to summarize elements presented simultaneously or sequentially and to divide displays into groups, detecting the statistics of each. How does the brain compute scene summary statistics without first attaining knowledge of each scene element? One possible solution is that the brain uses implicit individual-element information to compute summary statistics, which become consciously accessible first. I show that this added step is superfluous. Direct acquisition of summary statistics is unsurprising; novel computational principles aren't required. A simple population code, as found for single elements, may be scaled up for group mean values. With a broader range of neurons, the computation is identical for sets as for one element. Population codes add power, determining which elements are to be included, or excluded as outliers triggering pop-out attention, and naturally dividing between sets. As suggested by Reverse Hierarchy Theory, conscious perception may begin with summary statistics and only later focus attention on individual elements. A similar population code representation may underlie categorization, including both the category prototype and its boundaries.
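
The scaled-up population-code idea can be illustrated in a few lines: a bank of size-tuned units responds to all elements of a set at once, and a standard population-vector readout recovers the set mean without any per-element stage. Tuning preferences, widths, and the example set below are all invented for illustration.

```python
import numpy as np

# Hypothetical bank of units tuned to circle size (degrees of visual angle).
preferred = np.linspace(0.5, 3.0, 50)   # preferred sizes of 50 units
sigma = 0.4                             # illustrative tuning width

def population_response(sizes):
    """Summed Gaussian-tuned responses of all units to a whole set."""
    d = preferred[:, None] - sizes[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)

sizes = np.array([0.9, 1.4, 1.8, 2.3, 2.7])
resp = population_response(sizes)
decoded_mean = (preferred * resp).sum() / resp.sum()
# The decoded mean tracks the true set mean with no per-element readout.
print(decoded_mean, sizes.mean())
```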

Funding: supported by Israel Science Foundation ISF

[32T203] Inhibitory function and its contribution to cortical hyperexcitability and visual discomfort as assessed by a computational model of cortical function

Olivier Penacchio1, Arnold J. Wilkins2, Xavier Otazu3 and Julie M. Harris1

1School of Psychology & Neuroscience, University of St Andrews, Scotland

2University of Essex, UK

3Computer Vision Center Universitat Autònoma de Barcelona, Barcelona, Catalonia

Irrespective of what they represent, some visual stimuli are consistently reported as uncomfortable to look at. Recent evidence from brain imaging suggests that uncomfortable stimuli are responsible for abnormal, excessive cortical activity, especially in clinical populations. A long-standing hypothesis is that reduced inhibition, potentially caused by a lowered availability of gamma-aminobutyric acid (GABA) neurotransmitter, may drive susceptibility to visual discomfort. We have shown in previous work that the sparsity of the network response of a computational model of the visual cortex based on Gabor-like receptive fields and both excitatory and inhibitory lateral connections is a good predictor of observers’ judgments of discomfort [Penacchio, et al., (2015) Perception, 44, 67–68]. To test the former hypothesis we assessed computationally how the distribution of firing rates evolves when the strength of inhibition is modified. We found that, as inhibitory strength is progressively reduced, the sparsity of the population response decreases. A winner-take-all process emerges whereby the spatial distribution of excitation in the network becomes similar to the response to typically aversive stimuli such as stripes. These findings back up recent suggestions that abnormal inhibitory activity is involved in the generation of visual discomfort, an abnormality that may be extreme in clinical populations.
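
Sparsity of a model's population response can be quantified in several ways; the sketch below uses the Hoyer (2004) sparseness index as a plausible stand-in for the measure used in this line of work (an assumption, not the authors' exact metric). Stripe-like stimuli that drive many units strongly yield low sparseness, the regime associated here with discomfort.

```python
import numpy as np

def hoyer_sparseness(rates):
    """Hoyer (2004) sparseness index: 1 for a one-hot response vector,
    0 for a perfectly uniform one."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    l1, l2 = np.abs(rates).sum(), np.sqrt((rates ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# Illustrative comparison: a response with few strongly active units
# (typical for natural images) vs. near-uniform strong activation
# (the winner-take-all-like spread described for aversive stripes).
print(hoyer_sparseness(np.r_[np.zeros(90), np.ones(10)]))  # high (~0.76)
print(hoyer_sparseness(np.ones(100) * 0.8))                # low (0.0)
```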

[32T204] Contextual interactions in grating plaid configurations are explained by natural image statistics and neural modeling

Udo A Ernst1, Alina Schiffer1, Malte Persike2 and Günter Meinhardt2

1Institute for Theoretical Physics, University of Bremen

2University of Mainz

The processing and analysis of natural scenes requires the visual system to integrate localized, distributed image features into global, coherent percepts. A central hypothesis states that our brain hereby uses statistical dependencies in visual scenes: when represented in neural interactions between elementary feature detectors, they will enhance the processing of certain feature conjunctions, while suppressing others. By combining psychophysical experiments with computational modeling and image analysis, we investigated which interaction structures underlie feature integration in visual cortex, and how perception and neural interactions relate to natural scene statistics. As stimuli we used grating patch configurations (plaids) comprising four patches with varying orientations, spatial frequencies, and inter-patch distances. Human detection thresholds for plaids were strongly modulated by inter-patch distance, number of orientation- and frequency-aligned patches and spatial frequency content (low, high, mixed). For large inter-patch distances, detection thresholds for the plaids were inversely related to their likelihood of occurrence in natural images. Using a structurally simplistic cortical model comprising orientation columns connected by horizontal axons, we were able to reproduce detection thresholds for all configurations quantitatively. In addition to medium-range inhibition and long-range, orientation-specific excitation, model and experiment predict a novel form of strong orientation-specific, inhibitory interactions between different spatial frequencies.

Funding: This work was supported by the BMBF (Bernstein Award Udo Ernst, grant no. 01GQ1106) and the DFG (Priority Program 1665, grant ER 324/3).

[32T205] Reinforcement learning: the effect of environment

He Xu and Michael Herzog

Brain Mind Institute, Laboratory of Psychophysics, EPFL, Switzerland

Reinforcement learning (RL) can be viewed as a type of supervised learning in which the feedback (reward) is sparse and delayed. For example, in chess a series of moves is made until a sparse reward (win, loss) is issued, which makes it impossible to evaluate the value of a single move directly. Still, there are powerful algorithms that can learn from delayed and sparse feedback. In order to investigate how visual reinforcement learning is determined by the structure of the RL problem, we designed a new paradigm in which we presented an image and asked human observers to choose an action (pushing one of a number of buttons). The chosen action leads to the next image, until observers reach a goal image. Different learning situations are determined by the image-action matrix, which defines a so-called environment. We first tested whether humans can use information learned from a simple environment to solve more complex ones. Results showed no evidence supporting this hypothesis. We then tested our paradigm on several environments with different graph-theoretical features, such as regular vs. irregular environments. We found that humans performed better in environments containing fewer image-action pairs on the path to the goal. We tested various RL algorithms and found that they performed worse than humans.
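
The paradigm maps naturally onto a tabular RL formulation: images are states, buttons are actions, the image-action matrix is the transition function, and reward arrives only at the goal image. The sketch below runs standard Q-learning on a toy three-image environment; the environment, learning parameters, and reward scheme are illustrative, and this is just one of the many algorithms that could be compared against human performance.

```python
import numpy as np

# Toy image-action matrix ("environment"): entry [s, a] is the image
# reached from image s by pressing button a. Image 2 is the goal.
env = np.array([[1, 0],    # from image 0: button 0 -> 1, button 1 -> 0
                [2, 0],    # from image 1: button 0 -> goal, button 1 -> 0
                [2, 2]])   # goal image (absorbing)
GOAL, N_STATES, N_ACTIONS = 2, 3, 2

q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection over the current Q-values.
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(q[s].argmax())
        s_next = env[s, a]
        reward = 1.0 if s_next == GOAL else 0.0   # sparse, delayed feedback
        q[s, a] += alpha * (reward + gamma * q[s_next].max() - q[s, a])
        s = s_next

print(q)  # learned values favor the shortest image-action path to the goal
```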

Funding: Learning from delayed and sparse feedback, Sinergia project of the Swiss National Science Foundation

[32T206] Combining sensory and top-down effects in a cortical model of lightness computation

Michael E Rudd

Department of Physiology and Biophysics, University of Washington

Recently, there has been an interest in differentiating high-level, interpretative effects on lightness judgments from sensory effects (Blakeslee & McCourt, 2015). For some time, I have been modeling data from simple lightness matching paradigms to discover what perceptual factors are required to fully account for them. That task has proved surprisingly complex. A successful model requires the correct quantitative amalgam of sensory, midlevel, and top-down factors. One important factor is likely sensory: incremental luminance steps are weighted only 1/3 as much as decremental steps by the visual system (Rudd, 2013). Another is midlevel: cortical mechanisms integrate luminance steps in object-centered coordinates. Two other factors are high-level. One is the observer's propensity to classify ambiguous edges as resulting from either a reflectance change or an illumination change (Rudd, 2010, 2014). The other is the size of an adjustable attentional window over which the observer evaluates the effects of spatial context. Here, I explain how these factors combine to account for lightness in these simple paradigms. My findings are consistent with other recent results demonstrating strategic effects in lightness (Economou, ECVP 2011) and color (Radonjić & Brainard, 2016) judgments and against strictly […]

[…] (< 1°). A database of eye movements recorded on 168 natural images at 3 different levels of blur in 64 persons was used. Data were obtained with the ETL 400 ISCAN eye tracker. Defocus was simulated by applying homogeneous blur to the images in steps of one quarter of a diopter (0.25, 0.50 and 0.75 D). Participants had normal vision and were asked to free-view only one level of blur per image for 3 s. Results showed that, on average, as blur increased the number of fixations decreased, while the number of large saccades and microsaccade events increased. Moreover, the average microsaccade amplitude also rose with the induced blur. These results suggest that perceived visual blur contributes to adjusting eye movements at both coarse and fine scales, which could have potential clinical applications in the evaluation of subjective refraction.

Funding: This research was supported by the Spanish Ministry of Economy and Competitiveness under the grant DPI2014-56850-R, the European Union and DAVALOR. Carles Otero and Clara Mestre would like to thank the Generalitat de Catalunya for a PhD studentship award.

[4P062] CHAP: An Open Source Software for Processing and Analyzing Pupillometry Data

Ronen Hershman1, Noga Cohen2 and Avishai Henik1

1Cognitive neuroscience, Ben-Gurion University of the Negev

2Columbia University

Pupil dilation is an effective indicator of cognitive load. Many eye-tracker systems on the market provide effective solutions for measuring pupil dilation, which can be used to assess different cognitive and affective processes. However, there is a lack of tools for processing and analyzing the data these systems provide. For this reason, we developed CHAP, an open-source software package written in Matlab. The software provides a user-friendly graphical interface for processing and analyzing pupillometry data. It takes as input the standard output file of the EyeLink eye tracker (an EDF file) and provides both pre-processing and initial analysis of the data. CHAP creates uniform conventions for building and analyzing pupillometry experiments, and provides a quick and easy-to-implement solution for researchers interested in pupillometry.
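
For readers unfamiliar with the processing steps such a tool automates, the sketch below shows two typical ones, blink interpolation and baseline correction, in Python. This is explicitly not CHAP's API (CHAP itself is Matlab-based); the function name, sample rate, and baseline window are illustrative assumptions.

```python
import numpy as np

def preprocess_pupil(samples, srate=500, baseline_ms=200):
    """Generic pupillometry preprocessing sketch: samples recorded as
    zeros or NaNs during blinks are linearly interpolated, and the trace
    is baseline-corrected against the first `baseline_ms` ms."""
    x = np.asarray(samples, dtype=float)
    x[x <= 0] = np.nan                        # treat blinks as missing
    idx = np.arange(x.size)
    good = ~np.isnan(x)
    x = np.interp(idx, idx[good], x[good])    # interpolate across blinks
    n_base = int(baseline_ms * srate / 1000)
    return x - x[:n_base].mean()              # baseline correction
```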

[4P063] Classification of Expertise in Photoediting based on Eye Movements

Tandra Ghose1, Kartikeya Karnatak1, Yannik Schelske1 and Takeshi Suzuki2

1Psychology, University of Kaiserslautern

2Ricoh Institute of Technology Japan

Can expert knowledge be modeled by machine-learning algorithms based on eye-movement data (EM) in the domain of photoediting? To investigate this question we recorded EM from 4 experts and 4 novices during two photoediting tasks: setting either (1) the contrast or (2) the color of a given image to the most aesthetically pleasing value. The stimuli were images acquired from expert photographers that were degraded either in contrast or in color along the blue-yellow axis. Clustering of the adjusted contrast and adjusted color values showed two distinct groups corresponding to the experts and the novices; for the experts, the adjusted value was closer to that of the original image. A support-vector machine was trained to classify EM-based features (luminance at fixation, luminance variance in a small (3x3 px) or large (51x51 px) region around fixation, color at fixation, color variance in a small/large region) as belonging to experts or novices. Classification accuracy was significantly higher for the contrast (60%) than for the color (52%) adjustment task. Luminance features were significantly more discriminative during contrast than during color adjustment, and vice versa for color features. Overall, luminance features were more discriminative (60% accuracy) than color features (54%). Based on this EM-based classification of observer expertise we conclude that EM encode task-relevant information (increased discriminability of color/luminance features in color/luminance-based tasks, respectively).
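
A minimal sketch of the classification step using scikit-learn's SVC: the feature-matrix layout, feature count, and cross-validation scheme are assumptions for illustration, and the random placeholder data will of course score near chance rather than the 52-60% reported above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per fixation, with the six
# feature types listed above (luminance at fixation, luminance variance
# at two scales, color at fixation, color variance at two scales).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))        # 400 fixations x 6 features
y = rng.integers(0, 2, size=400)     # 0 = novice, 1 = expert

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # near 0.5 here, since the labels are random
```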

[4P064] The testing of motion sickness resistance in virtual reality using eye tracking

Oksana A Klimova and Artem Ivanovich

Department of Psychology, Lomonosov Moscow State University

Prolonged exposure to moving images in virtual reality systems can cause virtual reality induced motion sickness (VIMS). Resistance to motion sickness may be associated with the level of vestibular function development. The aim of the present research was to study oculomotor characteristics during the observation of moving virtual environments that cause the VIMS effect. We hypothesized that people who have robust vestibular function as a result of their professional activity are less susceptible to VIMS than people without such training. 30 figure skaters, 30 football players, 30 wushu fighters and 20 non-trained people were tested. The CAVE virtual reality system was used to induce the VIMS effect. Three virtual scenes were constructed, consisting of many bright balls moving as a whole around the observer. The scenes differed in the width of the visual field: the balls subtended 45°, 90° or 180°. The results showed more active eye movements for athletes compared to non-trained people: an increase in blink, fixation and saccade counts. A decrease in saccadic amplitudes was revealed for figure skaters. These characteristics are considered individual indicators of athletes' resistance to motion sickness.

Funding: 16-06-00312

[4P065] Gaze behavior in real-world driving: cognitive and neurobiological foundations

Otto Lappi

Institute of Behavioural Sciences, University of Helsinki

Driving is a ubiquitous visual task, and in many ways an attractive model system for studying skilled visuomotor actions in the real world. In curve driving, several steering models have been proposed to account for the way drivers invariably orient gaze towards the future path (FP) and/or the tangent point (TP). For twenty years 'steering by the tangent point' (Land & Lee, Nature, 1994) has been the dominant hypothesis for interpreting driving gaze data, and the textbook account of vision in curve negotiation. However, using some novel methods for analyzing real world gaze data, a number of studies from our group have recently undermined the generality of the TP hypothesis, supporting instead the FP models (Lappi et al., J Vis, 2013; Lappi, Pekkanen & Itkonen, PLOS ONE, 2013; Itkonen, Pekkanen & Lappi, PLOS ONE, 2015; review: Lappi, J Vis, 2014). This presentation integrates the findings of these experiments, with some previously unpublished results, and presents on that basis a theoretical framework of oculomotor control in visually oriented locomotion. The neurobiologically grounded theory is consistent with current computational theories of spatial memory and visuomotor control and neuroimaging work on the neural substrates of driving (Lappi, Front Hum Neurosci, 2015).

Funding: Supported by Finnish Cultural Foundation grant 00150514.

[4P066] The Other Race Effect on contextual face recognition

Fatima M Felisberti and James John

Psychology Department, Kingston University London

We are a social species, and face recognition plays an important role in our cooperative exchanges. The level of exposure to faces of a given race can affect their recognition, as observers tend to recognize faces of their own race better than faces of other races: the Other Race Effect (ORE). This study investigated the ORE in the recognition of briefly encoded unfamiliar faces. Participants (n = 80; 47.5% Caucasian, 52.5% Afro-Caribbean) were asked to encode three groups of faces tagged with different moral reputations (trustworthy, untrustworthy or neutral; 50% Caucasian, 50% Afro-Caribbean). The recognition test used a two-alternative forced-choice paradigm (2AFC: "old-new"). The findings showed significantly higher sensitivity (d') for trustworthy and neutral faces than for untrustworthy faces. In addition, Caucasian participants were significantly more sensitive (and faster) at recognizing faces of their own race than Afro-Caribbean ones (and vice versa, although the amplitude of the difference was smaller). The findings confirm previous studies and extend them by showing that the ORE modulation of face recognition is sensitive to the moral information associated with the encoded faces.
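
Sensitivity (d') in an old-new recognition test is computed from hit and false-alarm rates. Below is a short sketch with a standard log-linear correction for extreme rates; the trial counts and rates are invented for illustration, not the study's data.

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_old, n_new):
    """Recognition sensitivity from hit and false-alarm rates, with a
    log-linear correction to avoid infinite z-scores at rates of 0 or 1.
    n_old and n_new are the numbers of old and new trials."""
    h = (hit_rate * n_old + 0.5) / (n_old + 1)
    f = (fa_rate * n_new + 0.5) / (n_new + 1)
    return norm.ppf(h) - norm.ppf(f)

# Illustrative values only:
print(d_prime(0.85, 0.20, 48, 48))   # e.g., own-race faces
print(d_prime(0.70, 0.30, 48, 48))   # e.g., other-race faces
```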

[4P067] Incidental learning of trust does not result in distorted memory for the physical features of faces

James Strachan and Steven Tipper

Psychology, University of York

People can use their eyes to direct others’ attention towards objects and features in their environment. A person who consistently looks away from targets is later judged to be less trustworthy than one that consistently looks towards targets, even when the person is a background distractor that viewers are instructed to ignore. This has been shown in many experiments using trustworthiness ratings, but one outstanding question is whether these systematic changes in trustworthiness reflect perceptual distortions in the stored memory representations of these faces, such that faces that provide valid cues are remembered as looking more trustworthy than they actually were, and vice versa for invalid faces. We test this in two experiments, one where we gave participants the opportunity to morph the faces along a continuum of trustworthiness and asked them to report the image they had seen during the experiment, and one where we presented the original face images morphed to appear more or less trustworthy and asked participants to select from the two. The results of these two experiments suggest that this incidental learning of gaze contingencies does not result in distorted memory for the faces, despite robust and reliable changes in trustworthiness ratings.

[4P068] Pupillary response reflects the effect of facial color on expression

Satoshi Nakakoga, Yuji Nihei, Shigeki Nakauchi and Tetsuto Minami

Department of Computer Science and Engineering, Toyohashi University of Technology

Changes in facial color and expression reflect our physical condition. Previous behavioral studies indicated an interaction between facial color and expression; however, it is not clear how facial color affects expression recognition. Our study investigated the contribution of facial color to expression recognition in blurred images, measuring both behavior and pupillometry. In the experiment, face stimuli in two facial colors (natural and reddish) with different expressions (neutral and anger) at 3 blur levels were presented, and participants performed an expression identification task. Behavioral results showed that accuracy in the reddish-colored face condition was higher than in the natural-colored face condition, and this advantage increased significantly with blur level. The ratio of peak pupil size between the expression conditions was computed. The ratio in the natural-color condition significantly increased in proportion to the blur level, whereas the ratio in the reddish-color condition remained substantially constant regardless of blur level. This result indicates that the reddish color provided the information necessary to identify anger. Together, these results show that the contribution of facial color increases as blur level increases, in both the psychophysical and the pupillary data, suggesting that facial color emphasizes the characteristics of specific expressions.

Robin S Kramer, Rob Jenkins, Andrew Young and Mike Burton

Department of Psychology, University of York

We learn new faces throughout life, for example in everyday settings like watching TV. Recent research suggests that image variability is key to this ability: if we learn a new face over highly variable images, we are better able to recognise that person in novel pictures. Here we asked people to watch TV shows they had not seen before, and then tested their ability to recognise the actors. Some participants watched TV shows in the conventional manner, whereas others watched them upside down or contrast-reversed. Image variability is equivalent across these conditions, and yet we observed that viewers were unable to learn the faces upside down or contrast-reversed – even when tested in the same format as learning. We conclude that variability is a necessary, but not sufficient condition for face learning. Instead, mechanisms underlying this process are tuned to extract useful information from variability falling within a critical range.

Funding: ERC and ESRC, UK

[4P070] Precise Representation of Personally, but not Visually, Familiar Faces

Duangkamol Srismith, Mintao Zhao and Isabelle Bülthoff

Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics

People are good at recognising faces, particularly familiar faces. However, little is known about how precisely familiar faces are represented and how increasing familiarity improves the precision of face representation. Here we investigated the precision of face representation for two types of familiar faces: personally familiar faces (i.e. faces of colleagues) and visually familiar faces (i.e. faces learned from viewing photographs). For each familiar face, participants were asked to select the original face among an array of faces, which varied from highly caricatured (+50%) to highly anti-caricatured (−50%) along the facial shape dimension. We found that for personally familiar faces, participants selected the original faces more often than any other faces. In contrast, for visually familiar faces, the highly anti-caricatured (−50%) faces were selected more often than others, including the original faces. Participants also favoured anti-caricatured faces more than caricatured faces for both types of familiar faces. These results indicate that people form very precise representations of personally familiar faces, but not of visually familiar faces. Moreover, the more familiar a face is, the more its corresponding representation shifts from a region close to the average face (i.e. anti-caricatured) toward its veridical location in face space.

Funding: Max Planck Society

[4P071] The effect of facial familiarity on the assessment of facial attractiveness

Yuan Zhou

Department of Mechanical Engineering, University of Shanghai for Science and Technology

Both facial familiarity and physical facial attractiveness play important roles in human social communication. Here, we investigated whether facial familiarity influences the perception of facial attractiveness in Asian female faces. Fifty subjects were divided into two groups, and familiar faces were selected for the participants in one of the groups. The selected faces were morphed with novel faces to create artificial faces at different familiarity levels. The subjects in both groups were instructed to rate facial attractiveness and facial naturalness using the method of paired comparison. The results indicate that familiarity decreased the rated attractiveness of highly attractive faces, but had no significant effect on faces of average attractiveness. These results are somewhat inconsistent with the halo effect and indicate a complex interaction between facial attractiveness and familiarity.

Funding: Natural Science Foundation of Shanghai (No. 13ZR1458500)

[4P072] Uncanny Valley: social distance and prosopagnosia

Marija Cmiljanović1 and Sunčica Zdravković1,2,3

1Laboratory for Experimental Psychology, Novi Sad (Serbia), University of Novi Sad

2Department of Psychology Faculty of Philosophy University of Novi Sad

3Laboratory for Experimental Psychology University of Belgrade

People prefer humanlike characteristics, looks and motion in toys, robots and avatars, as long as the likeness is not too compelling. This sudden dip in preference is labeled the uncanny valley. One way to understand this interesting phenomenon in face perception is to investigate people with prosopagnosia. Two groups of Psychology students, controls (18, age 19–21, 5 males) and prosopagnosics (6, age 21–24, 4 males), rated faces for familiarity and social distance (using a Bogardus-inspired scale). In the first experiment, human and robot faces were morphed (8 levels). Controls demonstrated the standard decrease in familiarity as more robot characteristics were added (F(9,733) = 20.11, p < 0.0001), while this tendency was much smaller in prosopagnosics (F(9,230) = 2.23, p < 0.021). However, this perceptual effect did not influence social distance in prosopagnosics (F(9,230) = 11.58, p < 0.0001) vs. controls (F(9,733) = 11.59, p < 0.0001). In the second experiment, human, robot and symmetrical human faces were compared. Controls demonstrated the expected preference for the unchanged human face (F(3,301) = 33.559, p < 0.0001), while prosopagnosics made no distinction (F(3,92) = 1.31, p < 0.27). Again, the perceptual effect did not influence social distance in prosopagnosics (F(3,92) = 5.933, p < 0.0001) vs. controls (F(3,301) = 15.503, p < 0.0001). In this study we obtained the uncanny valley effect by measuring it through social distance, and by testing people with prosopagnosia we showed the exclusively perceptual side of the phenomenon.

[4P073] Trustworthiness judgement from facial images and its relationship to outcome of political contest in South Asia

Garga Chatterjee, Avisek Gupta and Ishan Sahu

Computer Vision and Pattern Recognition Unit, Indian Statistical Institute

Humans use facial information to judge or infer various aspects of a person. Such inferences or judgements may or may not reflect real information about those aspects. Studies have shown that personality-trait judgements from facial images have predictive value in various scenarios, such as differentiating between candidates from different political parties (Rule & Ambady, 2010) and predicting corporate performance from the facial appearance of CEOs (Rule & Ambady, 2008). In the present study, we investigated whether inferences of trustworthiness based solely on the facial appearance of election candidates for political office in South Asia bear any relation to the actual outcome of the election. Candidates were selected from closely contested elections in various regions. Photographs of the winner and first runner-up of an election were presented simultaneously to the participants, who had to indicate which of the two faces appeared more trustworthy. The results showed that for 60% of the candidate pairs, the face-based trustworthiness judgement was not related to electoral success. Facial-appearance-based trustworthiness thus does not seem to matter for electoral contest outcomes in South Asia.

[4P074] Changes in the eye expression of a model affect the perception of facial expression

Elizaveta Luniakova and Jahan Ganizada

Faculty of Psychology, Lomonosov Moscow State University

Recognition of facial expressions of emotion was studied using eye-tracking technology. Three types of stimuli were used: (A) a set of photos of 2 male and 2 female faces, each displaying six basic facial expressions; and two sets of composite photos (B and C). To construct the "B" stimuli, the eyes in a photo displaying one of the basic emotional expressions (anger, fear, disgust, happiness, or sadness) were replaced by the eyes from a photo of the same person posing a neutral expression. The "C" stimuli were composed in the same way from portraits displaying neutrality, with eyes taken from photos of the same person posing one of the five emotional expressions. The results showed no significant differences between photos "A" and "B" in expression recognition or in the proportion of fixations on the various internal parts of the faces, except for the fear expression: a fearful face with neutral eyes was not perceived as fearful, the number of fixations on the eyes, nose and mouth increased, and fixation durations became shorter. Facial expressions in photos "C" were not recognised as the basic emotions posed in the original photos and were instead described as "concentration", "contempt", "distrust" and, rarely, "neutral expression". Fixation time on the eye region increased.

[4P075] Pupillary response to face-like processing

Yuji Nihei, Tetsuto Minami and Shigeki Nakauchi

Computer science and engineering, Toyohashi University of Technology

Most people have experienced the phenomenon of perceiving faces in various non-face objects, called "face pareidolia". Face pareidolia has been investigated in several studies using measures of brain activity. In the present study, we investigated face pareidolia using the pupillary response, which has been suggested to be influenced by high-level cognition (e.g., preference and interest). We therefore predicted that changes in pupil diameter may be induced by face pareidolia. We measured pupil diameter while stimuli were perceived as faces. The stimuli consisted of five circles: a big circle, presented at the center of the display as a face outline, with four small circles arranged at random inside it. Subjects performed two tasks on the same stimuli in a block design: in the face-like block they made a face-like/not judgment, and in the symmetry block they made a symmetric/not judgment. Pupil dilation in the face-like task differed depending on the behavioral responses, whereas pupil dilation in the symmetry task showed no differences. These results suggest that this pupillary effect is specific to face-like processing.

Funding: Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (Grant numbers 25330169 and 26240043).

[4P076] Disruption of face detection and individuation in schizophrenia: links with deficits of visual perception and selective attention

William Comfort1, Thiago Fernandes2, Natanael dos Santos2 and Yossi Zana3

1Center for Health and Biological Sciences, Mackenzie Presbyterian University

2Universidade Federal de Paraiba

3Universidade Federal do ABC

Previous evidence (Butler et al., 2008; Chen et al., 2009) points to a greater impairment of face detection than of face recognition in schizophrenia. To further investigate this distinction, we employed a set of tasks derived from Or & Wilson (2010), which target specific aspects of face perception with separate tasks for the detection and identification of faces. The tasks were based on a 2AFC discrimination and categorisation design, with presentation times varying in steps of one frame between 16 and 3006 milliseconds (ms). Bayesian adaptive estimation was employed to estimate the threshold of mean presentation time needed to achieve 75% accuracy on each task. Following an attentional evaluation, participants in the control and schizophrenia groups completed both detection and individuation tasks. Subjects with schizophrenia required a significantly greater mean presentation time than controls to achieve 75% accuracy on the face detection task, but not on the face identification task. In addition, this increase in presentation time was significantly correlated with the individual scores of the schizophrenia group on the attentional evaluation. These results indicate a link between impaired face detection in schizophrenia and the severity of negative symptoms, including inattention and lack of inhibition.
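
Bayesian adaptive estimation of a 75%-accuracy threshold typically maintains a posterior over candidate thresholds, places each trial at the current estimate, and updates the posterior with the trial outcome. The sketch below is a QUEST-style grid implementation with an assumed psychometric function; the function shape, slope, and lapse rate are illustrative, not the parameters used in the study.

```python
import numpy as np

# Candidate presentation-time thresholds (ms) and a flat prior over them.
durations = np.linspace(16, 3006, 200)
prior = np.ones_like(durations) / durations.size

def p_correct(duration, threshold, slope=0.01, guess=0.5, lapse=0.02):
    """Assumed Weibull-like psychometric function for a 2AFC task."""
    p = 1 - np.exp(-np.exp(slope * (duration - threshold)))
    return guess + (1 - guess - lapse) * p

def update(posterior, duration, correct):
    """Bayes update of the threshold posterior after one trial."""
    likelihood = p_correct(duration, durations)
    posterior = posterior * (likelihood if correct else 1 - likelihood)
    return posterior / posterior.sum()

# Simulated observer with a hypothetical true threshold of 400 ms:
rng = np.random.default_rng(1)
posterior = prior
for trial in range(40):
    d = durations @ posterior                 # test at the posterior mean
    correct = rng.random() < p_correct(d, 400.0)
    posterior = update(posterior, d, correct)
print(durations @ posterior)  # estimate converges toward the threshold
```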

Elena Nikitina

Laboratory of Developmental Psychology, Institute of Psychology, Russian Academy of Sciences

Peculiarities in the perception of faces turned to the left or to the right have been studied repeatedly in psychology and art. Some authors explain the observed effects by general principles of perception (Gross & Bornstein; Hufschmidt), and some by objective differences between the right and left sides of faces (Kean; Schirillo). In this study we tested two hypotheses: (1) right-directed faces are perceived as more masculine than left-directed faces; (2) the demonstration of the right or left side of a real face is not related to gender recognition but can be taken into account during the attribution of personal qualities. We used photos of right- and left-directed faces of 7-year-olds, 20-year-olds and older people, and their mirror images. We found that a leftward direction of female face images significantly improved their gender identification. No difference between original photos and mirror images was observed. However, our participants showed a tendency to attribute more courage and intelligence, and less honesty, to mirror photos turned right.

[4P078] The speed of continuous face detection in a gaze-contingent paradigm

Jacob G Martin1, Maximilian Riesenhuber2 and Simon J. Thorpe1

1Center for brain and cognition research, CNRS CerCO

2Georgetown University Medical Center

We report that the continuous detection of small faces by the human visual system is remarkably fast and resistant to fatigue. Subjects detected 500 faces presented on a blank background in only 107 seconds on average (N = 24). Moreover, only an average of 27 additional seconds was required to detect 500 faces hidden in cluttered background photos. The fastest subjects processed 500 faces in 100 s with no background and required an average of only 114 s with a background. Inverting the faces significantly decreased subjects' accuracy and speed, as would be expected if the visual hierarchy prioritized the detection of objects according to experience or utility. These inversion effects were present at eccentricities ranging from 4° to 20°, both with and without hiding the faces in a background.

Funding: Funded by ERC Advanced Grant N° 323711 (M4) and R01EY024161

[4P079] Stepwise dimensional discrimination of compound visual stimuli by pigeons

Olga Vyazovska1, V.M. Navarro2 and E.A. Wasserman2

1Department of general practice-family medicine, V. N. Karazin Kharkiv National University, Ukraine

2The University of Iowa IA United States

To document the dynamics of discrimination learning involving increasingly complex visual stimuli, we trained six pigeons in a stagewise Multiple Necessary Cues (MNC) go/no-go task. The compound stimuli were composed from 4 dimensions, each of which could assume either of two extreme values or the intermediate value between them. Starting with a stimulus composed entirely from intermediate values, we replaced those values with each of the two extreme dimensional values in four successive stages, thereby increasing the discriminative stimulus set from 2 in Stage 1 to 16 in Stage 4. In each stage, only one combination of dimension values signaled the availability of food (S+), whereas the remaining combinations did not (S-s). In each stage, training continued until the response rate to each of the S-s was less than 20% of the response rate to the S+. All pigeons acquired the final MNC discrimination. Despite the novelty of this stagewise design, we successfully replicated the key results of prior MNC discrimination studies: (1) speed of learning was negatively related to compound complexity, (2) speed of learning was negatively related to the similarity between S+ and S− compounds, and (3) attentional tradeoffs between dimensions were observed, especially in later training stages.

[4P080] Perceptual learning for global motion is tuned for spatial frequency

Jordi M Asher, Vincenzo Romei and Paul B Hibbard

Psychology, University of Essex

Perceptual learning research routinely demonstrates that improvements exhibit a high degree of specificity to the trained stimulus. A recent study by Levi, Shaked, Tadin, & Huxlin (2015) observed an improvement in contrast sensitivity as a result of training on global motion stimuli. This study sought to further investigate this generalisation of learning. Participants trained daily, for five consecutive days, on one of three global motion tasks (broadband, low- or high-frequency random-dot Gabors) with auditory trial-by-trial feedback. Additionally, participants completed pre- and post-training assessments consisting of all three levels of global motion (without feedback) as well as high and low spatial frequency contrast sensitivity tasks. Perceptual learning over the five days of training occurred for the low-frequency and, to a lesser extent, the broadband condition, but no improvement was found in the high-frequency condition. Comparisons of pre- and post-training assessments found improvement exclusively in the low-frequency global motion condition. Furthermore, there was no transfer of learning between global motion stimuli. Finally, there was no improvement in contrast sensitivity for any trained frequency. This suggests that global motion training may not improve contrast sensitivity, and that improvements occur only with low-frequency global motion tasks.

Funding: This research was financially supported by a grant from ESSEXLab

[4P081] tRNS over the parietal lobe inhibits perceptual learning of task-irrelevant stimuli

Federica Contò1, Sarah Christine Tyler2 and Lorella Battelli3

1Center for Mind/Brain Sciences & Istituto Italiano di Tecnologia, University of Trento

2Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto

3Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia & Berenson-Allen Center for Noninvasive Brain Stimulation and Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA (USA)

Attention helps selectively sensitize the brain towards important visual information. However, unattended information can also influence learning; the brain can unconsciously adapt to visual features (Watanabe, 2001). In our study, we measured whether transcranial random noise stimulation (tRNS) facilitates this sensitization, or whether distracting information can inhibit the process. Subjects were divided into a training group (TG) and a no-training group (NG). Both groups completed an orientation discrimination (OD) task on Day 1 and Day 5. On Days 2–4, the TG was presented with the same stimuli but performed a temporal order judgment (TOJ) task on the Gabors used in the OD task; the NG performed no task during Days 2–4. We hypothesized that stimulation would help process the unattended orientation and lead to increased performance on Day 5. Subjects were stimulated over hMT+, over parietal cortex (Par), with sham, or not at all (behavioral only), yielding 8 groups, two per condition. tRNS was administered during Days 2–4. The NG-Par subjects performed significantly better on the OD task on Day 5. Conversely, the TG-Par subjects, who underwent training, performed significantly worse on the OD task on Day 5. When subjects are stimulated while performing an irrelevant TOJ task, inhibition of the cortical processes involved in task-irrelevant learning occurs.

[4P082] Transcranial Random Noise Stimulation (tRNS) Modulates Cortical Excitability of the Visual Cortex in Healthy Adults

Florian S Herpich1,2, Martijn van Konigsbruggen2 and Lorella Battelli2,3

1Cimec - Center for Mind/Brain Sciences and Istituto Italiano di Tecnologia, University of Trento

2Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto

3Berenson-Allen Center for Noninvasive Brain Stimulation and Department of Neurology Beth Israel Deaconess Medical Center Harvard Medical School MA USA

tRNS can induce long-term increases in the corticospinal excitability of the motor cortex (Terney et al., 2008). Moreover, tRNS over the parietal cortex can improve mathematical skills. However, it is unclear whether tRNS over other areas causes similar changes in excitability. Our aim was to investigate whether tRNS over the visual cortex leads to increases in excitability similar to those in the motor cortex. In Experiment 1 we tested 12 participants in a within-subject design. To quantify the magnitude of cortical excitability changes, we measured phosphene thresholds using an objective staircase method. Single-pulse TMS was used to elicit phosphenes before, immediately after, and every 10 minutes up to 1 hour after the end of 20 min of tRNS or sham. In Experiment 2, 8 subjects underwent the same procedure but were followed up to 2 hours post-tRNS. In Experiment 1 we found a significant effect of stimulation immediately after and up to 60 minutes after the end of the stimulation. We replicated and extended these findings in Experiment 2, showing that the phosphene threshold returns to baseline at 90 minutes post-tRNS. Our findings demonstrate that tRNS can modulate the excitability of the visual cortex, and that the effect is sustained and long lasting.
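
The abstract does not detail the staircase; as an illustration only, a generic adaptive staircase for estimating phosphene threshold might look like the following sketch (the function name, parameters and stopping rule are all assumptions, not the authors' method):

```python
# Illustrative 1-up/1-down staircase on TMS intensity (% maximal stimulator output).
def staircase(respond, start=70.0, step=5.0, reversals_needed=8):
    """`respond(intensity)` should return True if the subject reports a phosphene.
    The threshold is estimated as the mean intensity at reversal points."""
    intensity, last_direction, reversals = start, None, []
    while len(reversals) < reversals_needed:
        direction = -1 if respond(intensity) else +1   # seen -> down, unseen -> up
        if last_direction is not None and direction != last_direction:
            reversals.append(intensity)                # staircase changed direction
        last_direction = direction
        intensity = max(0.0, min(100.0, intensity + direction * step))
    return sum(reversals) / len(reversals)
```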

[4P083] Perceiving novel objects: The effect of learning on repetition blindness

Idy W Chou and Dorita H. F. Chang

Department of Psychology, The University of Hong Kong

Repetition blindness (RB) refers to the failure to detect the second occurrence of a repeated stimulus when a series of stimuli is presented in rapid succession. For familiar objects, RB is observed even when the stimuli differ in orientation, suggesting that RB must involve some view-invariant source. Here, we probed the source of RB in object perception by testing changes in RB across orientations before and after training with novel objects. In the RB task, novel-object streams were presented in an RSVP paradigm and contained either repeated or non-repeated objects with varying orientation differences. Observers were asked to judge whether or not they saw a repeated item in the stream. In a second object discrimination task, two different or identical objects were presented in separate intervals and observers were asked to judge whether the objects were the same or different. Participants (N = 14) were tested on both tasks before and after training. Results indicated significant RB for novel objects across orientations even before training. Training reduced the overall magnitude of RB uniformly across orientations. These findings support a view-invariant source of RB with a locus earlier than object recognition, perhaps at the stage of featural processing and organization.

[4P084] Dichoptic perceptual training in juvenile amblyopes with or without patching history

JunYun Zhang1, Xiang-Yun Liu2 and Cong Yu1

1Department of Psychology, Peking University

2Department of Ophthalmology Tengzhou Central People’s Hospital Tengzhou Shandong Province China

Dichoptic training is a popular tool in amblyopia treatment. Here we investigated the effects of dichoptic training on juvenile amblyopes who were no longer responsive to patching treatment (PT group) or had never been patch treated (NPT group). Training consisted of three stages. (1) 10 PT and 10 NPT amblyopes (8-17 years) received dichoptic de-masking training for 40 hours. They used their amblyopic eyes (AEs) to practice contrast discrimination of Gabors that were dichoptically masked by a band-filtered noise pattern simultaneously presented to the non-amblyopic eyes (NAEs). Training improved the maximal tolerable noise contrast for AE contrast discrimination by 350% in PT and 480% in NPT, which translated into stereoacuity improvements of 4.6 lines in PT and 3.0 lines in NPT, and AE visual acuity improvements of 1.3 lines in PT and 2.1 lines in NPT. (2) The amblyopes then received stereopsis training for another 40 hours. Training improved stereoacuity by 2.4 lines in PT and 0.5 lines in NPT, and AE acuity by 0 lines in PT and 0.5 lines in NPT. Seven PT amblyopes regained normal stereoacuity after two stages of training. (3) Extra monocular AE grating acuity training (30 hours) failed to improve visual acuity and stereoacuity in either group. Our study confirms the effectiveness of dichoptic training approaches in the treatment of juvenile amblyopia.

Funding: Natural Science Foundation of China Grants 31470975 and 31230030

[4P085] Learning when (and when not) to integrate audiovisual signals

Neil Roach1, Eugenie Roudaia2, Fiona Newell2 and David McGovern2

1School of Psychology, The University of Nottingham

2Trinity College Dublin

To promote effective interaction with the environment, the brain combines information received from different sensory modalities. Integration of cues relating to a common source can improve the precision of sensory estimates, but these benefits must be tempered against the costs of integrating cues relating to independent sources. To balance these competing demands, the brain restricts multisensory integration to cues that are proximal in space and time. Rather than being fixed, recent research suggests that the tolerance of audiovisual integration to temporal discrepancies can be altered via training. However, little is known about the mechanisms underlying these changes. Here, we measure the temporal and spatial tolerance of the ventriloquist effect before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific reduction in the tolerance to discrepancies in time but not space, and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimates, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
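
The Bayesian account invoked here can be illustrated with the standard causal-inference computation (a minimal sketch in the spirit of such models, not the authors' implementation; all parameter values are illustrative):

```python
# Posterior probability that auditory and visual signals share a common cause,
# given their measured temporal disparity (causal-inference style computation).
import math

def gauss(x, sigma):
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def p_common(disparity, sigma=0.10, spread=0.50, prior=0.5):
    """sigma: timing noise under a common cause; spread: disparity range under
    independent causes; prior: prior probability of a common source."""
    like_c1 = gauss(disparity, sigma)    # likelihood, common cause
    like_c2 = gauss(disparity, spread)   # likelihood, independent causes
    return like_c1 * prior / (like_c1 * prior + like_c2 * (1 - prior))

# Training that sharpens timing estimates (smaller sigma) and lowers the
# common-cause prior reduces integration, as the abstract describes:
print(p_common(0.2))                          # ~0.42
print(p_common(0.2, sigma=0.05, prior=0.3))   # ~0.002
```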

[4P086] Visual perceptual learning of a task-irrelevant feature of the stimulus

Jessica Galliussi1, Lukasz Grzeczkowski2, Walter Gerbino1, Michael Herzog2 and Paolo Bernardis1

1Department of Life Sciences, University of Trieste

2École Polytechnique Fédérale de Lausanne

Awareness, focused attention, and task-relevance were long thought to be necessary for perceptual learning (PL): a Feature of the Stimulus (FoS) on which participants perform a task is learned, while a task-irrelevant FoS is not. This view has been challenged by the discovery of task-irrelevant PL, occurring for subthreshold task-irrelevant stimuli presented at an unattended, peripheral location. Here, we provide further evidence for task-irrelevant PL by showing that it can occur for a subthreshold task-irrelevant FoS presented in the fovea (hence spatially attended). Our experiment was divided into 3 stages: pre-test, training, and post-test. During pre- and post-tests, participants performed a 3-dot Vernier task and a 3-dot bisection task. During training, participants performed an unrelated task (luminance discrimination) on the same stimulus. The task-irrelevant FoS manipulated during training was the position of the middle dot: either a subthreshold left/right offset (Experimental Group) or perfect alignment with the outer dots (Control Group). The Experimental Group showed performance improvements in the Vernier task but not in the bisection task, while the Control Group showed no change in performance in either task. We suggest that PL can occur through mere exposure to a subthreshold task-irrelevant FoS that is spatially attended.

[4P087] Broad learning transfer in visual hierarchical processing

Kenji C Lau and Dorita H. F. Chang

Department of Psychology, The University of Hong Kong

The literature has yielded mixed conclusions as to whether there is hemispheric specialization for the perception of hierarchical stimuli (e.g., Navon-type figures), with some findings indicating enhanced processing of local configurations in the left hemisphere and enhanced global (holistic) processing in the right hemisphere. Here, we tested hierarchical processing of stimuli in the two visual fields (left/right) to probe hemispheric specialization with perceptual learning. Participants (N = 16) were presented with letter-based, congruent and incongruent Navon figures and asked to judge the identity of either the global or the local structure. Participants were tested in four conditions consisting of all combinations of the two visual fields and two tasks (global/local), before and after training on one of the four conditions. Results showed no interaction between the two visual fields for the two tasks. More importantly, training improved performance and speed (RT) for all conditions, regardless of the trained task or visual field. The results suggest that hierarchical stimuli are processed comparably well by both hemispheres for both tasks and demonstrate extensive learning transfer between trained locations and tasks.

[4P088] Training transfer: from augmented virtual reality to real task performance

Georg Meyer1, Natalia Cooper1, Mark White1, Fernando Milella2 and Iain Cant2

1Psychology, Liverpool University

2VEC Daresbury

The aim of this study was to investigate whether augmented cues in VR, which have previously been shown to enhance performance and user satisfaction in VR training, translate into performance improvements in real environments. Subjects were randomly allocated to 3 groups. Group 1 learned to perform real tyre changes, group 2 was trained in a conventional VR setting, and group 3 was trained in VR with augmented cues, such as colour, sound and vibration changes signalling task-relevant events or states. After training, participants were tested on a real tyre change task. Overall time to completion was recorded as an objective measure; subjective ratings of presence, perceived workload and discomfort were recorded using questionnaires. Overall, participants who received VR training performed the real task significantly faster than participants who had practiced the real tyre change only. Participants trained with augmented cues performed the real task with fewer errors than participants in the minimal-cue training group. Systematic differences in subjective ratings that mirrored objective performance were also observed. The results suggest that the use of virtual reality as a training platform for real tasks should be encouraged and further evaluated.

Funding: ESRC CASE

[4P089] The role of vestibular inputs in self-motion perception by cutaneous sensation (2): Does the active motion of the perceiver facilitate or inhibit perceived self-motion by cutaneous sensation?

Hidemi Komatsu1, Kayoko Murata2, Yasushi Nakano1, Shigeru Ichihara3, Naoe Masuda1 and Masami Ishihara2

1Faculty of Business and Commerce, Keio University

2Tokyo Metropolitan University

3MEDIA EYE

Perceived self-motion has been investigated extensively in vision. However, Murata et al. (2014) reported that wind, as a cutaneous stimulus, combined with vibration, as a vestibular stimulus, also elicited perceived self-motion, which they called “cutaneous vection”. The authors of this study have previously compared perceived self-motion in cutaneous vection with actual body transfer. The present study compared active motion and passive motion. The experiment manipulated three factors (with or without wind, transfer or vibration, and active or passive motion). We used a bladeless fan to deliver the cutaneous stimulus to the participant’s face and an AC motor to apply vibration to the participant’s body. The participant sat astride an aerobike. The fan and the aerobike were installed on a platform that could itself move to and fro. In the active motion condition, the participant pedaled the bike. Onset latency, accumulated duration and ratings of subjective strength were measured. Latency was longer for active motion than for passive motion regardless of the other factors (F(1,14) = 11.29, p = .00). There was no significant difference in duration. Ratings were higher with wind than without wind regardless of the other factors (F(1,14) = 9.43, p = .01). In this experiment, the active motion of the perceiver inhibited the occurrence of perceived self-motion through cutaneous sensation.

[4P090] Coordination of eye-head movements and the amount of twist of the body while jumping with turn

Yusuke Sato1, Shuko Torii2 and Masaharu Sasaki3

1College of Commerce, Nihon University

2The University of Tokyo

3Hirosaki Gakuin University

Gymnasts stabilize their gaze before and during landing in jumps with half or full turns while coordinating eye and head movements. The aim of this study was to compare eye and head movements during jumps with a half turn and jumps with a full turn. The participants were skilled male gymnasts. They performed jumps with a half turn (180 degrees) and a full turn (360 degrees). Horizontal eye movement during these jumps was measured using electrooculography. Head movement was simultaneously recorded by a high-speed digital camera. Gaze was determined from the obtained eye and head movement data. We found two main results: 1) when jumping with a half turn, the gymnasts started gaze stabilization earlier than when jumping with a full turn; 2) there was a correlation between the initiation times of gaze stabilization during the two kinds of jump. These results indicate that the amount of body twist during a jump with a turn affects the timing of gaze stabilization before landing. Furthermore, gymnasts who are late in stabilizing their gaze before landing during jumps with a half turn tend also to be late during jumps with a full turn.

Funding: This research was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B), Grant Number 26750280.

Takeharu Seno1, Ken-ichi Sawai2, Masaki Ogawa1, Toshihiro Wakebe3, Hidetoshi Kanaya4, Juno Kim5 and Stephen Palmisano6

1Institute for Advanced Study, Kyushu University

2University of Tokyo

3Fukuoka Jo Gakuin University

4Aichi Shukutoku University

5University of New South Wales

6University of Wollongong

In vection studies, three main indices are used to gauge the illusory experience of self-motion (latency, duration and magnitude). We sought to identify the combination of these indices that best describes vection. The vection indices were taken from seven of our previous experiments, using 317 […]

[…] of drinks before consumption. However, it is necessary to conduct fundamental investigations of the relationships between color and aroma, in particular in the case of one color with different strengths, or aromas with different strengths. We used four types of milk beverages colored in red as visual stimuli. As olfactory stimuli, we used eight types of flavor samples: two concentration conditions each of strawberry, peach, blueberry and mint. These stimuli were evaluated by twenty participants in their twenties. Each visual stimulus was in a plastic-wrapped glass, and each olfactory stimulus was in a brown bottle. In the visual evaluation experiment, participants observed one milk beverage without any olfactory stimulus. In the olfactory evaluation experiment, they smelled a flavor sample without any visual stimulus. Finally, in the visual-olfactory evaluation experiment, they observed one of the milk beverages while smelling a flavor sample. Evaluated items were “predicted sweetness, sourness, umami taste, hot flavor” and “predicted palatability”. The results show that the weighting factor of color in evaluating the “predicted palatability” of red-colored milk beverages was much smaller than that of the aroma.

[4P110] The Effects of Multisensory Cues on the Sense of Presence and Task Performance in a Virtual Reality Environment

Natalia Cooper1, Fernando Milella2, Carlo Pinto2, Iain Cant2, Mark White1 and Georg Meyer1

1Department of Psychological Sciences, University of Liverpool

2Virtual Engineering Centre

The aim of this study was to evaluate the effect of visual, haptic and audio sensory cues on participants’ sense of presence and task performance in a highly immersive virtual environment. Participants were required to change a wheel of a (virtual) racing car in the 3D environment. Auditory, haptic and visual cues signalling critical events in the simulation were manipulated in a factorial design. Participants wore 3D glasses for visual cues, headphones for audio feedback and vibration gloves for tactile feedback, and held a physical pneumatic tool. Data were collected in two blocks, each containing all eight sensory cue combinations. All participants completed all 16 conditions in a pseudorandom sequence to control for order and learning effects. Subjective ratings of presence and discomfort were recorded using questionnaires after each condition. The time taken to complete the task was used as an objective performance measure. Participants performed best when all cues were present. Significant main effects of audio and tactile cue presentation were found on task performance and also on participants' presence ratings. We also found a significant negative effect of environment motion on task performance and participants' discomfort ratings.

Funding: ESRC funding body

[4P111] Auditory and tactile frequency representations overlap in parietal operculum

Alexis Pérez-Bellido, Kelly A. Barnes and Jeffrey M. Yau

Department of Neuroscience, Baylor College of Medicine

Traditional models of sensory cortex organization segregate auditory and somatosensory information in modality-specific cortical systems. Recent studies have shown that spatially overlapping regions of sensory cortex respond to both auditory and tactile stimulation, but whether they support common functions for audition and touch is unknown. In the present functional magnetic resonance imaging (fMRI) study we employed univariate and multivariate analysis approaches to characterize human cortical responses to auditory and tactile frequency information. Participants received auditory and tactile stimulation (75, 130, 195, 270 and 355 Hz) in separate scans as they performed an attention-demanding frequency discrimination task. This design enabled us to quantify BOLD signal changes and spatial activation patterns to identical stimulus frequencies presented separately by audition or touch. Our univariate and multivariate analyses showed that primary sensory areas display specific response patterns to auditory and tactile frequency, consistent with traditional sensory processing models. Interestingly, higher-order somatosensory areas in the parietal operculum exhibited frequency-specific responses to both auditory and tactile stimulation. Our results provide further evidence for the notion that overlapping cortical systems support audition and touch. Moreover, our findings highlight the potential role of higher-order somatosensory cortex, rather than auditory cortex, in representing auditory and tactile temporal frequency information.

[4P112] Colour associations in synaesthetes and nonsynaesthetes: A large-scale study in Dutch

Tessa M van Leeuwen1 and Mark Dingemanse2

1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen

2Max Planck Institute for Psycholinguistics Nijmegen, The Netherlands

For synaesthetes, specific sensory stimuli lead to additional experiences, e.g. letters evoke colours. An interesting question is whether crossmodal associations in synaesthesia are similar to crossmodal associations in the wider population. We performed a large-scale online survey consisting of multiple crossmodal association tasks (>19,000 participants; >30,000 completed tests). Synaesthetic associations are consistent over time; using consistency scores we classified 1128 synaesthetes. We mostly observed coloured days or months (N = ∼450) and grapheme-colour synaesthesia (N = ∼650). We compared synaesthetes’ and nonsynaesthetes’ colour choices. In letter-colour and number-colour association tests, grapheme frequency influenced colour associations. For letters, nonsynaesthetes (N = 6178) as well as synaesthetes (N = 344) chose high-frequency colours for high-frequency letters (p < .001 for both groups) and numbers (p < .01), but in both cases the correlation coefficients were higher for synaesthetes. Certain colour associations were made significantly more often, e.g. A was red (p < .001) and X was gray (p < .05) for both nonsynaesthetes and synaesthetes. In the music-colour task (N = 1864, 101 synaesthetes), musical mood affected colour choices. Synaesthetes chose different colours for different instruments. Additional tests included vowel-colour, Cyrillic letter-colour, weekday-colour, and month-colour associations. Comparing crossmodal associations in large samples of synaesthetes and nonsynaesthetes, the results so far suggest that synaesthetic associations are similar to nonsynaesthetes’ associations.

Funding: This research was supported by the Netherlands Organisation for Scientific Research (NWO) and by Dutch broadcasting company NTR

[4P113] Effects of object-specific sounds on haptic scene recognition

Simon Hazenberg and Rob van Lier

Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen

In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects placed on a platform. In half of the trials, object-specific sounds were played when objects were touched (bimodal condition); in the other half, sounds were turned off (unimodal condition). After exploring the scene, two objects were swapped and the task was to report which objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2 the objects were toy animals matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but a preliminary task preceded the experiment in which participants learned to identify the objects based on tactile and auditory input. In all experiments, the results revealed a performance increase only after the switch from bimodal to unimodal trials. Thus, release from bimodal identification (from audio-tactile to tactile-only) produces a benefit that is not achieved in the reversed order, in which sound was added after haptic-only trials. We discuss these findings and relate them to task-related factors other than mere bimodal identification.

[4P114] The interactions with own avatars may improve the presence effect in virtual environments

Natalya Krasilshchikova and Galina Menshikova

Psychology, Lomonosov Moscow State University

Various studies have shown that the use of natural interfaces able to exploit body movements can help increase the feeling of presence in virtual environments. We investigated subjects’ interactions with their own avatars through the use of virtual embodiment technology. The experiment consisted of two parts. In the first part, participants were asked to carry out body movements that were exactly copied by their own avatar, displayed in front of them, and to repeat these motor actions several times to verify that the avatar was their own. In the second part, while performing body movements, the participant could at some moment perceive avatar movements that did not correspond to their own. To measure the level of interaction between participants and their own avatars, physiological reactions (EMG activity) were recorded during the performance. Participants then filled out a questionnaire (IPQ) assessing the presence effect. The results showed that summary EMG activity was significantly higher when the consistency of body movements was broken. The results also revealed a correlation between physiological reactions and presence scores. Our data support the interaction paradigm that is based on maximizing the match between visual data and proprioception.

Funding: The study was funded by Russian Scientific Fund project № 15-18-00109

[4P115] Crossmodal transfer of emotion by music is greater for social compared to non-social visual cues: an event-related potential (ERP) study

Neil Harrison and Linda Jakubowiczová

Department of Psychology, Liverpool Hope University

Studies have shown that music can influence the processing of subsequently presented visual stimuli. However, in many of these affective priming experiments a very wide range of visual stimuli has been used as cues. Given that music is a highly social form of communication, we tested whether music would have a greater influence on the emotional perception of visual stimuli containing social cues compared to those containing no social cues. In an emotional priming experiment, participants listened to excerpts of music (happy, sad, or neutral) before an emotionally neutral picture from the IAPS database was presented. The picture could be either social (containing humans) or non-social (without humans). Participants had to rate the valence of the picture. ERP results showed that sad music primes elicited a higher P300 amplitude for social versus non-social visual cues. For neutral music there was no difference in P300 amplitude for social versus non-social cues. Importantly, the difference in P300 amplitude between social and non-social cues was larger for sad music than for neutral music. These results offer novel insights into the influence of social content on the capacity of music to transfer emotion to the visual modality.

[4P116] Eye-Fixation Related Potentials evidence for incongruent object processing during scene exploration

Hélène Devillez, Randall C. O'Reilly and Tim Curran

Department of Psychology and Neuroscience, University of Colorado Boulder

Object processing is affected by the gist of the scene within which it is embedded. Incongruent objects result in prolonged and more frequent eye-fixations than congruent objects. In parallel, previous event-related potential (ERP) research has suggested that the congruency effect is reflected by a late ERP resembling the N300/N400 effect. The present study investigates the effect of semantic congruency on scene processing using eye-fixation related potentials (EFRPs). We simultaneously registered electroencephalographic (EEG) and eye-tracking signals of participants exploring natural scenes in preparation for a recognition memory test. We compared EFRPs evoked by congruent vs. incongruent eye-fixations (e.g., a fork in a kitchen vs. the same fork in a bathroom). First, we replicated previous eye movement results, showing that incongruent objects were fixated more and longer than congruent objects. Second, the EFRP analysis revealed that the P1 EFRP and a later EFRP emerging around 260 ms after the fixation onset were modulated by semantic congruency. The top-down encoding of the scene was built during the first eye fixations; a mismatch between the semantic knowledge of objects and the features of the scene affected scene exploration. These results suggest that top-down information influences early object processing during natural viewing.

Funding: This research was supported by Grants N00014-14-1-0670 and N00014-16-1-2128 from the Office of Naval Research (ONR).

[4P117] Reducing the impact of a restricted field of view when watching movies

Francisco M Costela and Russell Woods

Harvard Medical School, Schepens Eye Research Institute

Magnification is commonly used to reduce the impact of impaired central vision (e.g. macular degeneration). However, magnification limits the field of view (FoV); e.g. only 11% (1/9) of the original is visible with 3x magnification. This reduction is expected to make it difficult to follow the story. Most people with normal vision look in about the same place, the center of interest (COI), most of the time when watching “Hollywood” movies, presumably because it is most informative. We hypothesized that if the FoV was centered on the COI, it would provide more useful information than using the original image center or an unrelated view location (the COI of a different clip). To measure the ability to follow the story, subjects described twenty 30-second clips in natural language. A computational linguistics approach was used to measure information acquisition (IA). Significant reductions in IA were found as FoV decreased (11% and 6%; the highest magnifications) for the original-center and unrelated-COI view locations. A FoV centered on the COI yielded higher IA than the other conditions for the 11% and 6% FoVs. Thus, magnifying around the COI may serve as a video enhancement approach, potentially applicable to people with central vision loss.
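
The FoV arithmetic generalizes simply: linear magnification by a factor m leaves 1/m² of the original image area visible.

```python
# Fraction of the original image area visible at a given linear magnification.
def visible_fraction(magnification):
    return 1.0 / magnification**2

print(visible_fraction(3))  # ~0.11, the 11% FoV cited above
print(visible_fraction(4))  # ~0.06, the 6% FoV condition
```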

[4P118] The interactive role of the nCRF on the CRF in cat primary visual cortex under natural stimuli

Ling Wang, Lipeng Zhang, Zhengqiang Dai and Jiaojiao Yin

Key Laboratory for Neuroinformation of Ministry of Education, Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China

From studies with simple grating stimuli, it is known that neuronal responses in the classical receptive field (CRF) of primary visual cortex are affected by stimuli outside it, in the non-classical receptive field (nCRF). But in natural environments, visual stimuli are far more complicated than simple gratings, and the interactive role of the nCRF on the CRF in primary visual cortex under natural stimulation remains unclear. Using several natural movies, we compared neuronal responses to stimulation of the CRF alone and of the CRF + nCRF to explore the role of the nCRF. Generally, responses to natural stimuli were weaker than to simple grating stimuli. Further, we found that the responses depended on the type of the outer nCRF: 1) with an inhibitory nCRF, responses were inhibited and the selectivity to specific natural features became stronger at the same latency as in the CRF-alone condition; 2) with a facilitative nCRF, responses were facilitated and the selectivity became stronger at a delayed latency; 3) with a mixed nCRF, responses were inhomogeneous (e.g., first facilitated, then inhibited) and the selectivity became weaker and more diffuse. The results illustrate that, in primary visual cortex, the nCRF plays an important, stimulus-dependent interactive role on the CRF in extracting natural features.

Funding: This work is supported by grants from the NSFC (61375115, 31000492, 61573080), 973 Project (2013CB329401) and the Fundamental Research Funds for the Central Universities (ZYGX2015J092).

[4P119] Awareness level modulates ERPs for evolutionarily threatening images: investigating the snake detection hypothesis

Simone Grassini, Suvi Holm, Henry Railo and Mika Koivisto

Department of Psychology, University of Turku, Finland

The snake detection theory claims that snake stimuli have shaped the human visual system. According to this theory, the predatory pressure exerted by snakes selected individuals who were better able to recognize snakes, transferring this skill to their offspring. The snake detection hypothesis has previously been tested using event-related potentials (ERPs). These studies have found that snake images produce an enhanced amplitude around 225-230 ms after stimulus onset (early posterior negativity, EPN) compared to other animal stimuli. As the snake detection advantage might be of evolutionary origin, it has been suggested that it may not depend on subjective awareness. The present study tested the hypothesis that the electrophysiological advantage for snake images is independent of aware perception. In our experiment, images of snakes, spiders, butterflies and birds were presented in five different conditions in which awareness was modulated using backward masking. Our results showed that snake images provoked an enhanced ERP amplitude compared to the other animal images in unmasked conditions. However, the difference became smaller in conditions of reduced awareness, and disappeared in the most challenging perceptual conditions. Thus, the results show that the stimulus must be consciously perceived before the enhanced EPN for snakes emerges.

Funding: This study was supported by the Academy of Finland (project no. 269156).

[4P120] Interaction of perceptibility and emotional arousal in modulating pupil size. fMRI study

Kinga Wołoszyn, Joanna Pilarczyk, Aleksandra Domagalik and Michał Kuniecki

Jagiellonian University

Pupil size indexes emotional arousal. An EEG study revealed that emotional arousal, indexed by the early posterior negativity, is modulated by the perceptibility of stimuli, with larger amplitudes for more visible arousing stimuli. Our study investigates the relation between perceptibility and pupillary changes while viewing emotionally arousing vs. non-arousing pictures in different signal-to-noise conditions, as well as the brain activity related to pupillary changes in different arousal conditions. Twenty healthy participants underwent fMRI scans while performing a free-viewing task. Stimuli were shown through goggles with an eye-tracking camera. 50 arousing and non-arousing natural scenes were selected from emotional picture databases. To each original image, pink noise was added in the following proportions: 0, 60, 70, 80, 100%. Change in pupil diameter was modulated by both noise level, F(4, 68) = 5.43, p = .001, and arousal, F(1,17) = 42.74, p < .001. The interaction between these factors was also significant, F(4,68) = 9.31, p < .001. With decreasing noise level, highly arousing stimuli increased pupil diameter while low-arousal stimuli decreased it. Pupil changes were related to activations in primary visual areas, bilateral amygdala and cingulate sulcus in the high arousal condition, and in bilateral insula and superior frontal sulcus in the low arousal condition, suggesting different mechanisms of pupil size regulation depending on arousal level.
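
As an illustration of the noise manipulation (our sketch; the authors' exact procedure is not specified), pink (1/f) spatial noise can be generated spectrally and mixed into an image at a given proportion:

```python
# Generate 1/f ("pink") spatial noise and mix it into a normalized grayscale image.
import numpy as np

def pink_noise(shape, rng=np.random.default_rng(0)):
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                                  # avoid division by zero at DC
    spectrum = np.fft.fft2(rng.standard_normal(shape)) / f
    noise = np.real(np.fft.ifft2(spectrum))
    return (noise - noise.mean()) / noise.std()    # zero mean, unit SD

def add_noise(image, proportion):
    """proportion = 0.0 (intact) ... 1.0 (pure noise), as in the 0-100% conditions."""
    return (1 - proportion) * image + proportion * pink_noise(image.shape)
```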

Funding: This work was supported by the Polish National Science Centre (grant number 2012/07/E/HS6/01046).

[4P121] Automatic analysis of smooth pursuit episodes in dynamic natural scenes

Michael Dorr, Ioannis Agtzidis and Mikhail Startsev

Institute for Human-Machine Communication, Technical University Munich

We investigated smooth pursuit behaviour in dynamic natural scenes. Smooth pursuits (SP) are difficult to detect in noisy eye-tracking signals because of their potentially slow speed and short duration, and thus labour-intensive hand-labelling is still considered the gold standard. In order to facilitate (automatic) analysis of potentially huge gaze corpora, we recently developed an algorithm for SP detection (Agtzidis et al., ETRA 2016) that combines information from the gaze traces of multiple observers and thus achieves much better precision and recall than state-of-the-art algorithms. We applied our algorithm to a publicly available data set recorded on Hollywood video clips (Mathe et al., ECCV 2012), which comprises more than 5 hours of gaze data each for 16 subjects. With these professionally produced stimuli, where often at least one object of interest is moving on the screen, subjects performed a considerable amount of SP: SP rates ranged from 0% to 45% per video (mean = 9.5%) and from 6.7% to 12.5% per observer (mean = 9.7%). SPs had a long-tailed speed distribution with a peak at about 3 deg/s (median = 3.82 deg/s), and showed a marked anisotropy towards the horizontal axis.
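
For orientation, a naive single-observer, speed-based labelling is sketched below; note that this is only an illustration, not the multi-observer algorithm of Agtzidis et al. (2016), and the thresholds are arbitrary:

```python
# Label gaze samples whose speed lies between fixation noise and saccadic
# speeds as candidate smooth pursuit.
import numpy as np

def candidate_pursuit(x_deg, y_deg, t_s, lo=2.0, hi=40.0):
    """x_deg, y_deg: gaze position (degrees); t_s: sample times (seconds)."""
    vx = np.gradient(x_deg, t_s)
    vy = np.gradient(y_deg, t_s)
    speed = np.hypot(vx, vy)          # deg/s
    return (speed > lo) & (speed < hi)
```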

Funding: Research funded by the Elite Network Bavaria.

[4P122] The Influence of Detailed illustrations on Comprehension Monitoring and Positive Emotions

Yu Ying Lin, Kiyofumi Miyoshi and Hiroshi Ashida

Graduate School of Letters, Kyoto University

Illustrations containing colors and realistic details are often used to accompany the text in science textbooks. The present study examined the effect of detailed illustrations on positive emotions and comprehension monitoring accuracy. In the experiment, students studied six human anatomy lessons with either detailed or simplified illustrations, judged how well they understood each lesson, and completed tests for each lesson. Students rated their positive emotions before and after learning the lessons. Monitoring accuracy was computed as the intra-individual correlation between judgments and test performance. The results showed that students who learned with detailed illustrations were less accurate in judging their comprehension than students who learned with simplified illustrations. Moreover, the positive emotions of students who learned with detailed illustrations decreased significantly after studying the lessons, whereas the positive emotions of students who learned with simplified illustrations did not change significantly. The results support the idea that adding irrelevant details to instructional illustrations may prompt students to rely on invalid cues when predicting their own level of comprehension, resulting in poor monitoring accuracy. Detailed illustrations may also have a negative impact on students' positive emotions, possibly due to the realistic anatomical details contained in the illustrations.

[4P123] Constructing scenes from objects: holistic representation of object arrangements in the parahippocampal place area

Daniel Kaiser and Marius V. Peelen

Center for Mind/Brain Sciences, University of Trento, Italy

The prevailing view is that scenes and objects are processed in separate neural pathways. However, scenes are built from objects; when do objects become scenes? Here we used fMRI to test whether scene-selective regions represent object groups holistically, as scenes. Participants viewed images of two objects that usually appear together in a scene (e.g., a car and a traffic light). These object pairs were presented either in their regular spatial arrangement or with the two objects interchanged. Additionally, every single object was presented centrally and in isolation. We modeled multi-voxel response patterns evoked by the object pairs by averaging the response patterns evoked by the two single objects forming the pair. We hypothesized that this approximation should work well when an object pair is represented as two separate objects, but should be significantly reduced when an object pair is represented holistically. The scene-selective parahippocampal place area (PPA) showed good approximation for irregularly arranged object pairs. Importantly, this approximation was significantly reduced for regularly arranged object pairs. No such difference was found in control regions, including object-selective cortex. These results indicate that object groups, when regularly arranged, are represented holistically in the PPA, revealing a transition from object to scene representations.
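
The pattern-averaging logic can be expressed compactly (a sketch under our own simplifying assumptions, not the authors' analysis pipeline):

```python
# Approximate the multi-voxel pattern of an object pair by the mean of the
# patterns of its two constituent objects, then score the approximation.
import numpy as np

def approximation_quality(pair_pattern, single_a, single_b):
    """Correlation between the observed pair pattern and the averaged
    single-object patterns; lower values indicate a more holistic
    (non-linear) representation, as reported for regular arrangements."""
    predicted = (single_a + single_b) / 2.0
    return np.corrcoef(pair_pattern, predicted)[0, 1]
```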

Funding: The research was funded by the Autonomous Province of Trento, Call “Grandi Progetti 2012”, project “Characterizing and improving brain mechanisms of attention - ATTEND”.

[4P124] Visual processing of emotional information in natural surfaces

Isamu Motoyoshi and Shiori Mori

Department of Life Sciences, The University of Tokyo

The visual appearance of a surface tells us not only what it is made from, but also what it means for us. In nature, some surfaces look beautiful and attract us whereas others look ugly and repel us. To investigate the visual processes underlying such emotional judgments of surfaces, we asked observers to rate comfortableness and unpleasantness for a variety of natural and artificial surfaces (193 images). Analyzing the relationship between the human rating data and the image statistics of the surfaces, we found that unpleasantness was correlated with the SD at mid-spatial-frequency bands and the cross-orientation energy correlation at high-spatial-frequency bands (p < 0.01). Comfortableness was specifically related to the luminance vs. color correlation at high-spatial-frequency bands (p < 0.01). Similar patterns of results were obtained for statistically synthesized texture images (r > 0.75), and for stimuli presented for a duration (50 ms) too short to recognize the surface category (r > 0.8). These results indicate the existence of a rapid and implicit neural process that utilizes low-level image statistics to directly summon emotional reactions to surfaces, independently of the recognition of material.
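
One of the statistics named above, the SD within a spatial-frequency band, can be approximated with a simple difference-of-Gaussians filter (our sketch; the band limits are illustrative and a grayscale float image is assumed):

```python
# Standard deviation of a mid-spatial-frequency band of an image.
import numpy as np
from scipy import ndimage

def bandpass_sd(image, low_sigma=2.0, high_sigma=8.0):
    band = (ndimage.gaussian_filter(image, low_sigma)
            - ndimage.gaussian_filter(image, high_sigma))  # difference of Gaussians
    return band.std()
```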

Funding: Supported by KAKENHI 15K12042 and 15H05916

[4P125] Encoding basic visual attributes of naturalistic complex stimuli

Jozsef Fiser1, Jeppe Christensen2 and Peter Bex3

1Department of Cognitive Science, Central European University

2University of Copenhagen

3Northeastern University

Despite numerous studies with simple stimuli, little is known about how low-level feature information of complex images is represented. We examined sensitivity to the orientation and position of Gabor patches constituting stimuli from three classes according to their image type: familiar natural objects, unfamiliar fractal patterns, and simple circular patterns. All images were generated by re-synthesizing an equal number of Gabor patches, hence equating all low-level statistics across image types but retaining the higher-order configuration of the original images. Just-noticeable differences of perturbations in either the orientation or position of the Gabor patches were measured by 2-AFC on varying pedestals. We found that while sensitivity patterns resembled those reported earlier with simple, isolated Gabor patches, sensitivity exhibited a systematic stimulus-class dependency, which could not be accounted for by current feedforward computational accounts of vision. Furthermore, by directly comparing the effects of orientation and position perturbations, we demonstrated that these attributes are encoded very differently despite similar visual appearance. We explain our results in a Bayesian framework that relies on experience-based perceptual priors of the expected local feature information, and speculate that orientation processing is dominated by within-hyper-column computations, while position processing is based on aggregating information across hyper-columns.

[4P126] Perceptual Organization of Badminton Shots in Experts and Novices

Ehsan Shahdoust, Thomas H Morris, Dennis Gleber, Tandra Ghose and Arne Güllich

Perception Psychology, TU Kaiserslautern

Expertise leads to perceptual learning that is not present in novices. Here we investigate whether such differences in perceptual organization can be measured by an event-segmentation task. To date, there are no published data on event segmentation of racket sports. We used videos of three different badminton shots (clear, smash, drop). Participants (5 experts, 5 novices) performed a classic button-pressing task (Newtson, 1976) and segmented the video clips (60 shots, 1.25 seconds/shot, presented in random order). Overall, novices and experts had high event-segmentation agreement (R2 = 0.620). Nevertheless, during the initial 0.5 s period (the “movement” phase) there was no agreement between experts and novices (R2 = 0.045): experts did not mark foot movements as significant events. A repeated measures ANOVA (expertise*time*shot) revealed a significant shot*time interaction effect (p = .037) but no effect of expertise. Time-point analysis revealed that this interaction effect was seen only 0.5–0.75 s before shuttle contact (p = 0.05). We conclude that (a) each shot type has a distinct temporal event-segmentation sequence; and (b) experts only segment during the “shot” phase and not during the “movement” phase. These findings help in understanding the perceptual processing responsible for anticipation skills in badminton.

[4P127] Effects upon magnitude estimation of the choices of modulus’ values

Adsson Magalhaes1,2, Marcelo Costa1 and Balazs Nagy1

1Institute of Psychology, University of Sao Paulo

2Stockholms Universitet

We used magnitude estimation to investigate how the range of stimuli influences visual perception by changing the modulus value. Nineteen subjects with normal or corrected-to-normal visual acuity (mean age = 25.7 yrs; SD = 3.9) were tested. The procedure used two gray circles with a luminance of 165 cd/m2, 18.3 degrees apart from each other. On the left side was the reference circle (visual angle of 4.5 deg), to which one of four arbitrary modulus values was assigned: (1) 20, (2) 50, (3) 100 and (4) 500. The subjects' task was to judge the size of the circle on the right side of the screen, assigning a number proportional to its size relative to the reference circle (modulus) on the left. In each trial, ten circle sizes (1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0 degrees of visual angle at 50 cm) were presented randomly. Our results show a high correlation between circle size judgments and the different modulus values (R = 0.9718, R = 0.9858, R = 0.9965 and R = 0.9904). The power-law exponents were (1) 1.28, (2) 1.34, (3) 1.29 and (4) 1.40. The larger the modulus value, the bigger the exponent, due to the wider range of numbers available to judge the sizes.
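
The exponents follow from Stevens' power law, ψ = k·φⁿ, so n is the slope of a log-log regression; a runnable sketch with hypothetical estimates:

```python
# Fit the Stevens power-law exponent from magnitude estimates.
import numpy as np

sizes = np.array([1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0])  # deg
estimates = 20.0 * sizes**1.3   # hypothetical data following psi = k * phi**1.3

n, log_k = np.polyfit(np.log(sizes), np.log(estimates), 1)
print(n)  # -> 1.3; the study reports exponents of 1.28-1.40 across moduli
```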

[4P128] How does information from low and high spatial frequencies interact during scene categorization?

Louise Kauffmann, Alexia Roux-Sibilon, Dorante Miler, Brice Beffara, Martial Mermillod and Carole Peyrin

Department of Psychology, LPNC (CNRS UMR5105) University Grenoble Alpes

Current models of visual perception suggest that during scene categorization, low spatial frequencies (LSF) are processed rapidly and activate plausible interpretations of the visual input. This coarse analysis would then be used to guide subsequent processing of high spatial frequencies (HSF). The present study aimed to further examine how information from LSF and HSF interacts during scene categorization. We used hybrid scenes as stimuli, combining the LSF and HSF of two different scenes that were semantically similar or dissimilar. Hybrids were presented for 100 or 1000 ms and participants had to attend to and categorize either the LSF or the HSF scene. Results showed impaired categorization when the two scenes were semantically dissimilar, indicating that the semantic content of the unattended scene interfered with the categorization of the attended one. At the short exposure duration (100 ms), this semantic interference effect was greater when participants attended HSF than LSF scenes, suggesting that LSF information interfered more with HSF categorization than the reverse. This reversed at the longer exposure duration (1000 ms), where HSF interfered more with LSF categorization. These results suggest that the relative weight of LSF and HSF content varies over time during scene categorization, in accordance with a coarse-to-fine processing sequence.
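
A hybrid of this kind can be built by low-pass filtering one scene and high-pass filtering the other (a sketch under our own assumptions; the cutoff is illustrative, not the authors' parameter):

```python
# Combine the low spatial frequencies of one grayscale scene with the high
# spatial frequencies of another.
import numpy as np
from scipy import ndimage

def make_hybrid(scene_for_lsf, scene_for_hsf, sigma=6.0):
    low = ndimage.gaussian_filter(scene_for_lsf, sigma)                   # LSF content
    high = scene_for_hsf - ndimage.gaussian_filter(scene_for_hsf, sigma)  # HSF content
    return low + high
```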

[4P129] Large-scale human intracranial LFPs related to scene cuts in the TV series “Friends”

Evelina Thunell1, Sébastien M. Crouzet1, Luc Valton2, Jean-Christophe Sol1, Emmanuel J. Barbeau2 and Simon J. Thorpe1

1Centre de Recherche Cerveau et Cognition (CerCo), Université de Toulouse, Centre National de la Recherche Scientifique (CNRS)

2Explorations Neurophysiologiques, Hôpital Pierre Paul Riquet, Centre Hospitalier de Toulouse, France

Movies and TV series offer an opportunity to study the brain in conditions that closely match natural audio-visual settings. We recorded intracranial neural activity (LFPs) from epileptic patients as they watched episodes of the TV series “Friends”. To characterize the responsiveness of each intracranial electrode, we analyzed the response induced by scene cuts, which constitute major audio-visual events. We found scene cut-related activity throughout the brain, in visual and auditory cortices but also in higher-order areas, which was consistent across hemispheres and patients (N = 3). In the occipito-temporal cortex, the responses resemble typical visual evoked activity, with peaks from around 100 to 600 ms depending on electrode location. In the hippocampus, perirhinal cortex, and temporal pole, we found activity starting already 400 ms before the scene cut and lasting at least 400 to 700 ms after it. The pre-cut activity might reflect auditory responses to pre-cut changes in the sound atmosphere, or anticipation of the scene cuts based on implicit knowledge of typical scene lengths and scene evolution. Electrodes in the frontal lobe show distinct responses between 100 and 700 ms. In summary, the scene cuts elicited clear exogenous and feed-forward, but perhaps also endogenous and top-down, responses in many different brain areas.

Funding: This research was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement nr 323711

[4P130] Experimental cognitive toponymy: what’s in a (place) name?

David R Simmons and Leslie Spence

School of Psychology, University of Glasgow

How do we decide what to call a landmark or geographical feature? The new field of cognitive toponymy focuses on the role of cognitive psychology in place-name choices. To investigate the role of visual perception in place naming, we asked 19 young observers to verbally describe photographs of 60 geographical features. These photographs were deliberately chosen to be of distinctive landscapes, located in Scotland, which have existing names (unknown to the participants) based on their distinctive appearance. Full-colour images were presented on a computer screen at a viewing distance of 40 cm for one minute. The responses were recorded, transcribed and analysed to identify categories and sub-categories. Colour terms featured approximately 60% more often in descriptions than the next most common category, geographical classification (e.g. “hill”, “mountain”), and more than twice as often as the other common categories: composition (“dusty”, “rocky”), shape (body parts, animals), slope (“flat”, “steep”), texture (“bumpy”, “jagged”) and vegetation (“grassy”, “mossy”). Surprisingly, there was very little (<10%) correspondence between the recorded descriptions and the historical names of these features. This method allows us to begin to unravel the cognitive processes underlying place-name decisions and to gain insight into historical naming puzzles.

Funding: Royal Society of Edinburgh Cognitive Toponymy Research Network

[4P131] What is actually measured in the rapid number estimation task?

Yulia M Stakina and Igor S. Utochkin

Psychology, National Research University Higher School of Economics (HSE)

Our brain is able to estimate the number of objects rapidly, even without counting them. It is still unclear whether visual estimation requires an accurate representation of the objects themselves, or whether processing the features of the sample is sufficient. To answer this question, we appealed to feature integration theory, and particularly to the phenomenon of asymmetry in visual search [Treisman, A., Gelade, G., 1980. Cognitive Psychology, 12, 97–136]. If rapid number estimation is based on features, we should expect greater accuracy for stimuli with additional features than for stimuli without them. Conversely, if the estimation is based on object representations, all stimuli should be processed equally effectively, regardless of the number of features. In our experiment, subjects were presented with images of objects consisting of different numbers or kinds of elements (such as O, Q, I, X) and were asked to estimate the number of certain items. In some trials, the target objects were cued before the stimulus display; in other trials, the cue occurred after it. We found that estimation was based on feature representations, but only in the “target before” condition, while in the “target after” condition estimations were based on object representations.

Funding: The study was implemented in the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) in 2016, TZ-64.

[4P132] The effects of spatial dividers on counting and numerosity estimation

Qi Li, Ryoichi Nakashima and Kazuhiko Yokosawa

Graduate School of Humanities and Sociology, The University of Tokyo

Nakashima and Yokosawa (2013) reported that frames subdividing search displays facilitate serial but not parallel search. They suggested that frame enclosures of multiple items induce a grouping effect that facilitates the allocation of focused attention (grouping hypothesis). The purpose of this study was to further examine the grouping hypothesis by comparing the effects of spatial dividers on counting and numerosity estimation. It is known that counting is related to focused attention while numerosity estimation depends on distributed attention (Chong & Evans, 2011). In the counting task, participants counted the number of circles in the display. The circles were presented until a response was made. In the numerosity estimation task, participants provided a quick estimate of the number of circles in the briefly presented display. The number of frames subdividing the stimulus displays was manipulated in both tasks. The grouping hypothesis predicts a facilitative effect of spatial dividers only on the counting task, which requires focused attention. The results revealed that spatial dividers facilitated counting, but had little impact on numerosity estimation. These findings confirm the grouping hypothesis, and extend previous work by showing that the facilitative effect of spatial dividers on serial search generalizes to other tasks involving focused attention.

[4P133] Serial dependence for perception of visual variance

Marta Suarez-Pinilla, Warrick Roseboom and Anil Seth

Sackler Centre for Consciousness Science, University of Sussex, UK

Despite being a crucial descriptor of the environment, variance has been largely overlooked in ensemble perception research. Here we explored whether visual variance is directly perceived and applied to subsequent perceptual decisions. In Experiment 1, participants scored the ‘randomness’ of motion of a cloud of dots whose individual trajectories were drawn from a circular Gaussian distribution with random mean (0–359°) and one of six standard deviations (5° to 60°). In Experiment 2, one-third of trials did not require a response. Experiment 3 demanded confidence reports alongside randomness judgements. We analysed participants’ responses for serial dependence and observed a significant influence of the previous trial, with larger standard deviations in trial n-1 eliciting larger ‘randomness’ responses in the current trial, irrespective of whether the previous trial demanded a response, or whether the average motion direction was similar. This effect was larger when participants reported higher confidence in trial n-1, was not observed for trials n-2 to n-4, and was reversed for trials n-5 to n-10, with smaller standard deviations driving larger trial-n responses. These results suggest that visual variance is transferred across observations irrespective of response processes, with recent sensory history exerting an attractive Bayesian-like bias and negative aftereffects appearing at a longer timescale.
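
The serial-dependence analysis described here lends itself to a compact illustration. The sketch below simulates trial sequences and regresses the current ‘randomness’ report on the previous trial’s standard deviation while controlling for the current one; the toy observer model and all parameter values are assumptions for illustration, not the authors’ pipeline.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 300
    # six standard deviations spanning 5-60 deg, as in the abstract
    sds = rng.choice(np.linspace(5, 60, 6), size=n_trials)

    # toy observer: the report tracks the current SD plus an attractive
    # pull from the previous trial (the serial dependence of interest)
    reports = sds + 0.2 * np.roll(sds, 1) + rng.normal(0, 5, n_trials)

    # regress the current report on the current and previous SDs
    X = np.column_stack([np.ones(n_trials - 1), sds[1:], sds[:-1]])
    beta, *_ = np.linalg.lstsq(X, reports[1:], rcond=None)
    print(f"previous-trial weight: {beta[2]:.2f}")  # > 0 -> attractive bias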

[4P134] Visual-auditory interaction in perception of the variance

Sachiyo Ueda1, Ayane Mizuguchi1, Reiko Yakushijin2 and Akira Ishiguchi1

1Humanities and Sciences, Ochanomizu University

2Aoyama Gakuin University

Human observers perceive statistical information about the environment in order to interpret it efficiently. Variances are particularly important because they can serve as a basis for perceiving the variety of objects in the environment. It has been shown that human observers can perceive variance efficiently within a single stimulus property. In real life, however, it is common to evaluate variance across several properties or modalities belonging to a single object. In this study, we used visual-auditory stimuli to explore whether cross-modally corresponding variance facilitates perception of the variance of a selected modality. We used sequential visual-auditory stimuli that varied in the size of the visual stimuli and the pitch of the auditory stimuli. Participants estimated the magnitude of the variance of the size or the pitch. Cross-modally corresponding stimuli had no effect on the precision of variance estimation. Constant-pitch stimuli, however, produced underestimation of the size variance in the size-selected condition, suggesting that the variance magnitude of the ignored auditory stimuli affected the perceived variance of the visual stimuli. This result points to a visual-auditory interactive mechanism in the perception of variance.

[4P135] Automatic detection of orientation variance within scenes

Szonya Durant1, Istvan Sulykos2,3 and Istvan Czigler2,3

1Department of Psychology, Royal Holloway, University of London

2Institute of Cognitive Neuroscience, Research Centre for Natural Sciences, Hungarian Academy of Sciences

3Eötvös Loránd University Budapest

Visual Mismatch Negativity (vMMN) is an early ERP component that has been suggested to reflect automatic pattern detection over a sequence of images. It has been suggested that scene statistics are automatically detected. We tested whether orientation variance within scenes can elicit vMMN. We presented a set of Gabor patches of a given (random) mean orientation on each presentation in a sequence, and varied the variance of the orientations of the patches, so that some sets all had a similar orientation (ordered) or the individual orientations were random (disordered). These two types of sets of Gabor patches formed the standards and deviants in an unattended oddball paradigm. We found that a more disordered set of stimuli elicited a vMMN amongst ordered stimuli, but not vice versa. This suggested that the visual system was able to build up an expectation about a certain level of order, but not able to pick up on a pattern from disordered stimuli. Furthermore, in a test of ability to discriminate between ordered and disordered stimuli, we found that better discrimination corresponded with a larger vMMN amplitude. Automatic detection of the variance of orientations within a scene was shown and vMMN magnitude correlated with discrimination ability.

Funding: Royal Society International Exchange Grant IE140473

[4P136] Anchoring predictions in scenes: Electrophysiological and behavioral evidence for a hierarchical structure in scenes

Sage E Boettcher and Melissa L.-H. Võ

Scene Grammar Lab, Goethe University Frankfurt

Real-world scenes follow certain rules, “scene-grammar”, which are not well understood. We propose that scene-grammar is hierarchically structured, with three levels: scenes, anchors, and objects. Anchors are distinguishable from other objects in that they are large, salient, diagnostic of a scene, and importantly hold spatial information. In a behavioral experiment, participants clicked on the most likely location of an object (e.g. soap) relative to an anchor (shower) or another object (shampoo). We found that anchors show a smaller spatial distribution of clicks — i.e. stronger spatial predictions — compared to their object counterparts. An additional EEG experiment focused on understanding the interplay between these levels. Participants viewed two images presented in succession and determined their relationship. The first image – the prime – was either a scene (kitchen) or an object (pot). The second image was always an anchor, either consistent (stove) or inconsistent (bed) with the prime. The N400 ERP-component reflects semantic integration costs. A stronger semantic prediction activated by the prime should result in a greater violation and N400. We found a larger N400 when observers were primed with objects compared to scenes. This indicates that objects generate stronger predictions for anchors compared to the scenes containing them.

Funding: This work was supported by DFG grant VO 1683/2-1 to MLV and DAAD grant #91544131 to SEB

[4P137] Neural substrates of early data reduction in the visual system

Laura Palmieri and Maria Michela Del Viva

Department NEUROFARBA, University of Florence

The visual system needs to rapidly extract the most important elements of the external world from a large flux of information, for survival purposes. Capacity limitations of the brain for processing visual information require an early, strong data reduction, obtained by creating a compact summary of relevant information (a primal sketch) to be handled by further levels of processing. Recently we formulated a model of early vision allowing a much stronger data reduction than existing vision models based on redundancy reduction. Optimizing this model for best information preservation under tight constraints on computational resources yields surprisingly specific a-priori predictions for the shape of physiological receptive fields in primary visual areas, and for experimental observations on fast detection of salient visual features by human observers. Here we investigate the anatomical substrates of these fast vision processes by adopting a flicker adaptation paradigm, which has been shown to selectively impair the contrast sensitivity of the magnocellular pathway. Results show that thresholds for discrimination of briefly presented sketches, obtained according to the model, increase after adapting to a uniform flickering field, while contrast thresholds for orientation discrimination of gratings do not change, suggesting the involvement of the MC pathway in this compressive visual stage.

[4P138] Simple reaction times as an implicit measure of the development of size constancy

Carmen Fisher and Irene Sperandio

Psychology, University of East Anglia

It has been suggested that simple reaction times (SRTs) can be used as a measure of perceived size, whereby faster SRTs reflect perceptually bigger objects (Sperandio, Savazzi, Gregory, & Marzi, 2009). As many developmental studies are limited by the child’s comprehension of task instructions, the implicit nature of the aforementioned paradigm was deemed particularly appropriate for the investigation of size perception in children. Whether size constancy is an innate ability or develops with age is still under debate. A simplified version of Sperandio et al.’s (2009) paradigm was used to examine the detection profile in four age groups: 5–6, 9–10 and 12–13 year-olds, and adults. Participants were presented with pictures of tennis balls on a screen placed at two viewing distances. The stimulus size was adjusted so that the visual angle subtended by the stimulus was constant across distances. Luminance was also matched. It was found that SRTs responded to retinal and not perceived size in the 5–6-year-old group only, whilst older children and adults produced SRTs that were modulated by perceived size, demonstrating that size constancy was operating in these age groups. Thus, size constancy develops with age and is not fully accomplished until after 5–6 years of age.

[4P139] Dynamics of perceived space under self-induced motion perception

Tatsuya Yoshizawa, Shun Yamazaki and Kasumi Sasaki

Research Laboratory for Affective Design Engineering, Kanazawa Institute of Technology

[Purpose] It is known that perceived space is non-uniform in Cartesian coordinates and is not the same as physical space. However, the dynamics of the perceived space surrounding us during movement have not yet been clearly characterized; we therefore measured the uniformity of perceived space under self-induced motion perception. [Methods] To give observers a condition visually equivalent to physical movement, we used self-induced motion perception, because the visual stimulus during self-induced motion perception is the same as when we are actually moving. Observers performed the following two tasks separately. They were asked whether three pairs of vertical bars presented against an optic-flow background were arranged in parallel, and whether all pairs had the same distance between the bars. [Results] In both tasks, all observers showed the same pattern: the perceived dimensions produced by the three pairs of bars did not correspond to the physical dimensions. These results indicate that perceived space is non-uniform regardless of whether the observer is moving.

[4P140] The perceived size and shape of objects in the peripheral visual field

Robert Pepperell, Nicole Ruta, Alistair Burleigh and Joseph Baldwin

Cardiff School of Art, Cardiff Metropolitan University

Observations made during an artistic study of visual space led us to hypothesise that objects seen in the visual periphery can appear smaller and more compressed than those seen in central vision. To test this we conducted three experiments: In Experiment 1 participants drew a set of discs presented in the peripheral visual field without constraints on eye movement or exposure time. In Experiment 2 we used the Method of Constant Stimuli to test the perceived size of discs at four different eccentricities while eye movements were controlled. In Experiment 3 participants reported the perceived shape of objects presented briefly in the periphery, also with eye movements controlled. In Experiment 1 the peripheral discs were reported as appearing significantly smaller than the central disc, and as having an elliptical and polygonal shape. In Experiment 2 participants judged the size of peripheral discs as being significantly smaller when compared to a centrally viewed disc, and in Experiment 3 participants were quite accurate in reporting the peripheral object shape, except in the far periphery. These results suggest objects in the visual periphery appear diminished when presented for long and brief exposures but only undergo shape distortions when presented for longer times.
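
With the Method of Constant Stimuli used in Experiment 2, perceived size would typically be summarized by fitting a psychometric function and reading off the point of subjective equality (PSE). A minimal sketch with hypothetical response proportions (not the study’s data):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # comparison disc sizes relative to the central standard, and the
    # proportion of "peripheral looks bigger" responses (made-up numbers)
    sizes = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
    p_bigger = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

    def cum_gauss(x, pse, sigma):
        # cumulative Gaussian psychometric function
        return norm.cdf(x, loc=pse, scale=sigma)

    (pse, sigma), _ = curve_fit(cum_gauss, sizes, p_bigger, p0=[1.0, 0.1])
    # pse > 1 means the peripheral disc must be physically larger to match
    # the central one, i.e. it is perceived as smaller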

[4P141] Spatial phase discrimination in visual textures

Endel Poder

Institute of Psychology, University of Tartu

We can easily discriminate certain phase relations in visual textures but not others. However, the mechanisms of spatial phase perception are not well understood. This study attempts to reveal the role of local luminance cues in phase discrimination using histogram equalization of texture elements. From two to five texture patches were presented briefly around the fixation point. Observers searched for an odd (different) patch, which was present with probability 0.5. Textures were composed of either simple Gabor waveforms (0 vs 90 deg phases) or compound Gabors (first plus third harmonics, edge vs bar phases). Both original and histogram-equalized versions of the patterns were used. The results show that phase is more easily seen in compound than in simple Gabors, and that histogram equalization heavily reduces the discriminability of phase differences. There was no effect of set size (number of texture patches). We conclude that local luminance cues play an important role in spatial phase discrimination in textures; that there are some low-level mechanisms that discriminate edges from bars; and that division of attention does not affect performance of the task used in this study.

Funding: Supported by Estonian Research Council grant PUT663
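
The stimuli described above have a simple closed form: a first plus third harmonic under a Gaussian envelope, with sine phases giving an edge-like profile and cosine phases a bar-like one. A minimal generator (all parameter values are assumptions, not those of the study):

    import numpy as np

    def compound_gabor(phase="edge", sigma=1.0, n=256):
        # first plus third harmonic under a Gaussian envelope;
        # "edge": sin(x) + sin(3x)/3, "bar": cos(x) + cos(3x)/3
        x = np.linspace(-np.pi, np.pi, n)
        if phase == "edge":
            carrier = np.sin(x) + np.sin(3 * x) / 3
        else:
            carrier = np.cos(x) + np.cos(3 * x) / 3
        return carrier * np.exp(-x**2 / (2 * sigma**2))

    # histogram equalization of the resulting patches would then remove
    # the local luminance cues whose role the study probes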

[4P142] Colour discrimination, coloured backgrounds, non-verbal IQ and global and local shape perception

Alex J Shepherd1, Ardian Dumani1 and Geddes Wyatt2

1Psychological Sciences, Birkbeck

2Roehampton

Background: Associations between global- and local-shape perception and colour discrimination were investigated. The shape tasks were completed on different background colours, although background colour was irrelevant to either task. The colours selected were tailored to the cone-opponent pathways early in the visual system. This study extends one presented at the 2015 ECVP by including non-verbal components of the WAIS, and Raven’s Progressive Matrices Plus, to assess general ability. Method: Participants were presented with either a global square made up of small square (congruent) or diamond (incongruent) local elements, or a global diamond made up of small diamond (congruent) or square (incongruent) local elements. Each display was presented on five coloured backgrounds (tritan/S-cone: purple, neutral, yellow; L(M)-cone: red, neutral, green). Results: Participants were more accurate on the global task than the local task and responded more quickly on congruent than incongruent trials, as expected. There were no significant differences between performance on any of the coloured backgrounds. There were, however, significant correlations between colour discrimination (Farnsworth-Munsell scores) and some of the shape tasks. Participants with poorer colour discrimination were less accurate, despite colour being irrelevant to the shape tasks. There were also significant correlations between performance on the Farnsworth-Munsell, local-global shape, and the WAIS and progressive matrices tasks.

[4P143] Examining the spatial extent of orientation-tuned contextual modulation in human V1 with fMRI

Susan G Wardle and Kiley Seymour

Department of Cognitive Science, Macquarie University

Recent results in macaque V1 and human psychophysics suggest that the orientation-tuning of surround suppression is modulated by the spatial extent of the surround [Shushruth et al., 2013, J. Neuroscience]. Here we examine the spatial range of orientation-tuned surround suppression in human V1 with fMRI. Both target and surround were 1 c/deg 4 Hz counterphasing sinewave gratings at one of four orientations [0, 90, 45, 135 deg] presented in a block design. In each 20s block, the inner target annulus [radius: 1.5–3.5 deg] and a near [radius: 3.5–6 deg] or far [radius: 6–9.5 deg] surround annulus was presented for the first 10s, followed by the target alone for 10s. The orientation of the target and surround gratings was either parallel or orthogonal. Voxels responding preferentially to the target were isolated using independent localiser runs. Consistent with previous results, greater suppression of the BOLD response to the target occurred for a parallel near surround than for an orthogonal near surround. The results differed for the far-surround, and facilitation of the BOLD response to the target occurred for the orthogonal far-surround. The results suggest that the orientation-tuning of contextual modulation in human V1 is modulated by the spatial extent of the surround.

[4P144] Wish I was here – anisotropy of egocentric distances and perceived self-location

Oliver Tošković

Laboratory for experimental psychology, Faculty of philosophy University of Belgrade

Perceived egocentric distances are anisotropic: vertical distances are perceived as larger than horizontal ones. This is explained by the action-perception relation: enlargement of perceived vertical distances helps us in performing actions, because actions in that direction require more effort. Surprisingly, in previous studies we did not find perceived distance anisotropy in near space, which would be expected according to this explanation. We performed three experiments in which participants sat on the floor in a dark room and were instructed to equalize the egocentric distances of two stimuli in the vertical and horizontal directions. In the first experiment, standard stimulus distances were 1 m, 3 m and 5 m; in the second, 0.4 m, 0.6 m, 0.8 m and 1 m; and in the third, 0.8 m and 1 m. In the first two experiments participants matched stimulus distances from themselves, while in the third we varied the instruction in three ways: match the distances from yourself, from your right eye, or from your right shoulder. Perceived distance anisotropy exists in near and far space, since in all experiments vertical distances were perceived as larger. Variations in instruction did not change the results, meaning that participants do not locate themselves at some exact point, such as the eye or shoulder, in this type of experiment.

[4P145] Visual discrimination of surface attitude from texture

Samy Blusseau1, Wendy Adams1, James Elder2, Erich Graf1 and Arthur Lugtigheid1

1Psychology, University of Southampton, UK

2York University, Canada

Judgement of surface attitude is important for a broad range of visual tasks. The visual system employs a number of cues to make such estimates, including projective distortion in surface texture. While egocentric surface attitude comprises two variables (slant, tilt), prior surface attitude discrimination studies have focused almost exclusively on slant. Here we employ novel methods to estimate the full 2D discrimination function, using a real textured surface mounted on a pan/tilt unit and viewed monocularly through a 3° aperture. Our experiment was a 3-interval match-to-sample task in which the observer indicated which of the 2nd or 3rd (test) attitudes matched the 1st (standard) attitude. The stimuli could vary in both slant and tilt. We found that thresholds for tilt decrease with slant, as geometry predicts; however, thresholds for slant remain relatively invariant with slant. This latter finding is inconsistent with the results of Knill & Saunders (2003), who measured slant thresholds for varying slant but fixed tilt. In contrast, our task required observers to simultaneously estimate both slant and tilt. We discuss the implications of this result for models of surface attitude estimation.

Funding: Work supported by EPSRC (UK) grant EP/K005952/1

[4P146] Shape discrimination. Why is a square better than a triangle for a jumping spider?

Massimo De Agrò1, Iacopo Cassai1, Ilaria Fraccaroli1, Enzo Moretto2 and Lucia Regolin1

1General Psychology Department, University of Padua

2Esapolis Living Insects Museum of Padova Province and of the MicroMegaMondo of Butterfly Arc (Padua, Italy)

The ability to discriminate shapes is typically present in animals endowed with complex eyes, such as Salticidae, a family of spiders that navigate through their environment actively hunting for various prey (mostly other arthropods). We trained 36 juveniles or sub-adults (before sexual identification) of the Salticidae species Phidippus regius on a triangle vs. square discrimination. Spiders were individually placed in a T-shaped maze and could choose between the two shapes, matched for total area. A prey item was placed behind S+ (either square or triangle), whose left/right position varied semi-randomly. Spiders were then tested in an unrewarded discrimination trial. Spiders did not show a significant preference for the trained shape overall; however, subjects that had been trained on the triangle showed a preference for the square (2 vs. 11, p = 0.022), whereas spiders trained on the square did not exhibit any preference (8 vs. 8). The tendency of subjects trained on the triangle to choose the novel stimulus was confirmed when restricting the analysis to spiders who passed a subsequent motivation test (0 vs. 8, p = 0.0078). This effect could depend on some peculiar features of the square, which may act as a super-stimulus with respect to the triangle. Future investigation will focus on discrimination of alternative shapes, such as circle vs. square.

[4P147] Measures of orientation-tuned inhibition in human primary visual cortex agree with psychophysics

Kiley Seymour1, Timo Stein2, Colin Clifford3 and Philipp Sterzer4

1ARC Centre of Excellence in Cognition and its Disorders (CCD), Macquarie University

2Center for Mind/Brain Sciences (CIMeC), University of Trento

3UNSW Australia

4Charité Universitätsmedizin Berlin

The perceived tilt of an oriented grating is influenced by its spatial and temporal context. The Tilt Illusion (TI) and the Tilt After Effect (TAE) are clear demonstrations of this. Both illusions are thought to depend on shared neural mechanisms involving orientation-tuned inhibitory circuits in primary visual cortex (V1). However, there is little functional evidence to support this. We measured local functional inhibitory circuitry in human V1 of 11 participants using functional magnetic resonance imaging (fMRI). Specifically, we examined surround suppression of V1 responses, indexed as an orientation-specific reduction in Blood-Oxygenation-Level Dependent (BOLD) signal in regions of the cortical map being inhibited. In a separate session, we also measured TI and TAE magnitudes for each participant. We found that the level of surround suppression measured in V1 correlated with the magnitude of participants' TI and TAE. This good quantitative agreement between perception and suppression of V1 responses suggests that shared inhibitory mechanisms in V1 might mediate both spatial and temporal contextual effects on orientation perception in human vision.

Funding: This research was supported by a grant awarded to KS from the Alexander von Humboldt Foundation

[41S201] Healthy ageing and visual motion perception

Karin S Pilz

School of Psychology, University of Aberdeen

Motion perception is a fundamental property of our visual system and being able to perceive motion is essential for us to navigate through the environment and interact in social environments. It has been shown that low-level motion perception is strongly affected by healthy ageing. Older adults, for example, have difficulties detecting and discriminating motion from random-dot displays. We have shown that these low-level motion deficits can be attributed to a decline in the ability to integrate visual information across space and time. Interestingly, in recent studies, we found large differences for motion direction discrimination across different axes of motion, which indicates that an age-related decline in low-level visual motion perception is not absolute. Despite age-related changes in low-level motion perception, the processing of higher-level visual motion is less affected by age. We recently showed that older adults are able to discriminate biological motion as well as younger subjects. However, they seem to rely on the global form rather than local motion information, which suggests that high-level motion perception in healthy ageing is aided by mechanisms that are not necessarily involved in the processing of low-level motion. I will discuss the findings within the context of age-related changes in neural mechanisms.

Funding: BBSRC New Investigator Grant (BB/K007173/1)

[41S202] How does age limit learning on perceptual tasks?

Ben Webb, Paul McGraw and Andrew Astle

Visual Neuroscience Group, The University of Nottingham, UK

Ageing appears to place biological limits on sensory systems, since sensory thresholds typically increase (i.e. visual performance deteriorates) on many perceptual tasks with age. Here we ask whether ageing places concomitant limits on the amount an organism can learn on perceptual tasks. We will argue that ageing should be considered alongside a range of other factors (e.g. visual crowding) that limit sensory thresholds on perceptual tasks, and that this initial performance level is what determines the magnitude of learning. Consistent with this view, we will show that initial sensory thresholds are inversely related to how much participants learn on perceptual tasks: poorer initial perceptual performance predicts more learning. Since sensory performance deteriorates with age, it follows that the magnitude of learning on perceptual tasks should increase with age. And this is exactly what we find: learning is larger and faster in older adults than in younger adults on a crowded word-identification task in the peripheral visual field. Moreover, the magnitude of learning was a constant proportion of the initial threshold level. This Weber-like law for perceptual learning suggests that we should be able to predict the degree of perceptual improvement achievable at different ages via sensory training.

Funding: Funded by the Wellcome Trust and National Institute for Health Research
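
The reported “Weber-like law” can be stated compactly. In the notation below (ours, not the authors’), T0 is the initial threshold, T_post the post-training threshold, and k a fitted constant of proportionality:

    \Delta T = T_0 - T_{\mathrm{post}} = k \, T_0, \qquad 0 < k < 1,

so the predicted improvement scales with the starting threshold, and age enters the prediction only through T0.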

[41S203] Is there a common cause for perceptual decline in the aging brain?

Michael H Herzog1, Karin Pilz2, Aaron Clarke3, Marina Kunchulia4 and Albulena Shaqiri1

1BMI, EPFL Switzerland

2University of Aberdeen, UK

3Bilkent University, Turkey

4Beritashvili Institute, Georgia

Even in the absence of neurodegenerative disease, aging strongly affects vision. Whereas optical deficits are well documented, much less is known about perceptual deficits. In most perceptual studies, one paradigm is tested, and it is usually found that older participants perform worse than younger participants. Implicitly, these results are taken as evidence that all visual functions of an individual decline together, determined by a single factor, with some individuals aging more severely than others. However, this is not true. We tested 131 older participants (mean age 70 years) and 108 younger participants (mean age 22 years) on 14 perceptual tests (including motion perception, contrast and orientation sensitivity, and biological motion perception) and 3 cognitive tasks (WCST, verbal fluency and digit span). Young participants performed better than older participants on almost all of the tests. However, within the group of older participants, age did not predict performance; i.e., a participant could have good results in biological motion perception but poor results in orientation discrimination. It seems that there is not a single “aging” factor but many.

Funding: Velux Foundation

[41S204] Keeping focused: Selective attention and its effect on visual processing in healthy old age

Cliodhna Quigley1, Søren Andersen2 and Matthias Müller3

1Cognitive Neuroscience Laboratory, German Primate Center – Leibniz Institute for Primate Research

2School of Psychology University of Aberdeen

3Institute for Psychology, Leipzig University

Selective attention prioritises relevant over irrelevant input by means of top-down modulation of feed-forward signals, from the early stages of cortical sensory processing onwards. Although attentional ageing is an active area of research, it is not yet clear which aspects of selective attention decline with age, or at which processing stage deficits occur. In a series of experiments examining spatial and feature-selective visual attention, we compared older (60–75 years) with younger adults (20–30 years). Frequency tagging of stimuli and the electroencephalogram were used to measure attentional modulation of early visual processing. I will present the results of experiments requiring either covert spatial selection of spatially separated stimuli, or feature-based selection of overlapping stimuli. The results point to a differential decline in the modulatory effect of selective attention on early visual processing. The relative enhancement of visual processing by spatial attention seems essentially unchanged in healthy old age, while the effects of feature-selective attention show a decline that goes beyond slowed processing. The commonalities and differences in the mechanisms underlying spatial and feature-selective attention are still under debate, and this differentiated pattern of age-related change in tasks relying on spatial vs. non-spatial selection may contribute to our understanding of them.

Funding: Research was supported by the Deutsche Forschungsgemeinschaft (DFG), graduate program “Function of Attention in Cognition”. CQ was supported by the DFG, Collaborative Research Centre 889 “Cellular Mechanisms of Sensory Processing”

[41S205] Eye movements as a window to decline and stability across adult lifespan

Jutta Billino

Experimental Psychology, Justus-Liebig-Universität

Current theories of age-related functional changes assume a general reduction in processing resources and global decline across the lifespan. However, recent studies, in particular on the ageing of visual perception, have highlighted that a detailed differentiation between general decline and specific vulnerabilities is clearly needed. Eye movements provide the opportunity to study closely interwoven perceptual, motor, and cognitive processes. Thus, we suggest that they allow unique insights into the decline and stability of specific processes across the lifespan. We studied a battery of different eye movement tasks in a large sample of 60 subjects ranging in age from 20 to 80 years. The battery included saccade as well as smooth-pursuit paradigms that involved varying cognitive demands, e.g. learning, memory, and anticipation. We analyzed age-related changes in standard parameters of the different tasks. Our results corroborate the well-documented deterioration of processing speed with age, but at the same time reveal surprisingly preserved capacities to integrate bottom-up and top-down processes for efficient oculomotor control. These robust resources point out the often-ignored complexity of age-related functional changes and emphasize that compensational mechanisms during healthy ageing might have been underestimated so far.

Funding: German Research Foundation, SFB/TRR 135

[41T101] Attention is allocated ahead of the target during smooth pursuit eye movements: evidence from EEG frequency tagging

Jing Chen, Matteo Valsecchi and Karl Gegenfurtner

Department of Psychology, Justus-Liebig-University Giessen, Germany

It is under debate whether attention during smooth pursuit is centered on the pursuit target or allocated preferentially ahead of it. Attentional deployment was previously assessed through an additional task for probing attention. This might have altered attention allocation, leading to inconsistent findings. We used EEG frequency tagging to measure attention allocation in the absence of any secondary probing task. Observers pursued a moving dot while stimuli flickering at different frequencies were presented at various locations ahead of or behind the pursuit target. In Experiment 1 (N = 12), we observed a significant 11.7% increase in EEG power at the flicker frequency of the stimulus in front of the pursuit target, compared to that behind it. In Experiment 2 (N = 12), we tested many different locations and found that the enhancement was present up to about 1.5 deg ahead (16.1% increase in power), but was absent at 3.5 deg. In a control experiment using attentional cueing during fixation, we did observe an enhanced EEG response to stimuli at this eccentricity. Overall, we showed that attention is allocated ahead of the pursuit target. EEG frequency tagging appears to be a powerful technique for implicitly investigating attention and perception when an overt task would be disruptive.

Funding: Jing Chen was supported by the DFG International Research Training Group, IRTG 1901, “The Brain in Action”. Matteo Valsecchi was supported by DFG SFB TRR 135 “Cardinal mechanisms of perception” and by the EU Marie Curie Initial Training Network
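
Frequency tagging quantifies attentional allocation as the EEG power at each stimulus’ flicker frequency. A minimal sketch of that measurement; the sampling rate, frequencies, and variable names are assumptions, not the authors’ analysis code:

    import numpy as np

    def tagged_power(epoch, fs, f_tag):
        # power of the steady-state response at the tagging frequency,
        # read from the windowed amplitude spectrum of one epoch
        spectrum = np.fft.rfft(epoch * np.hanning(len(epoch)))
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
        return np.abs(spectrum[np.argmin(np.abs(freqs - f_tag))]) ** 2

    # e.g. compare a stimulus flickering at 12 Hz ahead of the pursuit
    # target with one at 15 Hz behind it:
    # tagged_power(epoch, fs=500, f_tag=12) vs. tagged_power(epoch, 500, 15)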

[41T102] Super-fast endogenous allocation of temporal attention

Yaffa Yeshurun and Shira Tkacz-Domb

Psychology, University of Haifa

It is well known that we can voluntarily allocate attention to a specific point in time at which we expect a relevant event to occur. Here, we employed the constant-foreperiod and temporal-orienting paradigms to examine the time course of this endogenous temporal attention. With both paradigms, the task was to identify a letter presented for a brief duration (16 ms), preceded by a warning signal. Unlike previous studies, we included a wide range of foreperiods (i.e., the duration of the foreperiod – the interval between the warning signal and the target – varied between 75 and 2400 ms). Critically, to avoid effects of exogenous temporal attention, the warning signal did not include an intensity change. In comparison to a non-informative warning signal, identification accuracy was significantly higher when the warning signal indicated the most likely foreperiod. Importantly, such effects of temporal attention were found even with the shortest foreperiod – 75 ms. Given that letter identification was not speeded, we can conclude that the allocation of temporal attention to a specific point in time improved perceptual processing. Moreover, this allocation of endogenous temporal attention was extremely fast, considerably faster than the allocation of endogenous spatial attention.

Funding: Israel Science Foundation

[41T103] Using the pupillary light response to track visual attention during pro- and antisaccades

Sebastiaan Mathôt1, Nicki Anderson2 and Mieke Donk2

1Laboratoire de Psychologie Cognitive, CNRS/Aix-Marseille Université

2VU University Amsterdam

How is visual attention distributed when preparing an antisaccade: a saccade away from a salient visual stimulus (cue) toward a non-salient saccade goal? Do you first attend to the cue, and only later to the saccade goal? Or do you simultaneously attend to both? We addressed this question with a novel pupillometric technique. Participants fixated the center of a gray display with a bright stimulus on one side, and a dark stimulus on the other. One stimulus (the cue) rotated briefly. Participants made a saccade toward (prosaccade trials) or away from (antisaccade trials) the cue. In prosaccade trials, a pupillary light response to the brightness of the saccade goal emerged almost immediately after the saccade. Given the high latency of the pupillary light response (±250 ms), this early response must have been prepared along with the saccade itself. However, in antisaccade trials the pattern was very different: The pupil initially responded mostly to the cue's brightness; only long (±350 ms) after the saccade did the pupil respond mostly to the brightness of the saccade goal. This suggests that, during preparation of antisaccades, attention is focused more strongly (but likely not exclusively) on the cue than on the saccade goal.

Funding: Marie Curie ACTPUP 622738

[41T104] Hierarchical binding and illusory part conjunctions

Ed Vul

Psychology, University of California, San Diego

Illusory conjunctions and binding errors are typically construed as misattribution of features to objects. Here we show that such binding errors are not limited to basic features, but occur along multiple levels of a hierarchical parse of a scene into objects and their constituent parts. In a series of experiments we show that for many multipart objects, features can be bound to object parts, while (correctly bound) object parts are subject to illusory conjunctions into larger objects. These results indicate that binding errors are not attributable to the difficulty of aligning feature maps, but rather to the uncertainty inherent in constructing a hierarchical parse of a scene.

[41T105] Neural mechanisms of divided feature-selective attention to colour

Jasna Martinovic1, Sophie Wuerger2, Steven Hillyard3, Matthias Mueller4 and Soren Andersen1

1School of Psychology, University of Aberdeen

2University of Liverpool

3University of California San Diego

4University of Leipzig

Recent studies have delineated the neural mechanisms of concurrent attention to spatial location and colour, as well as colour and orientation, but mechanisms that implement concurrent attention to two different feature values belonging to the same continuous feature dimension still remain relatively unknown. We tested if neural attentional resources can be divided between colours that spatially overlap and if such concurrent selection depends on linear separability of targets and distractors in colour space. In two experiments, human observers attended concurrently to dots of two different colours contained within fully overlapping random dot kinematograms of four different colours. Stimulus processing was measured using steady-state visual evoked potentials (SSVEPs). When attended colours could be separated from the distractors by a single line in hue space, neural markers of attentional selection were observed in the SSVEPs. We also modelled how between-colour differences related to SSVEP attentional effects. Colour selection was found to sample information from colour mechanisms in an adaptive, context-dependent fashion. Thus, at least in the case of colour, division of early neural resources between features within a single continuous dimension does not need to be mediated by spatial attention and depends on target-distractor dissimilarities within the respective feature space.

[41T106] Learning to attend and ignore: The influence of reward learning on attentional capture and suppression

Daniel Pearson, Thomas Whitford and Mike Le Pelley

School of Psychology, UNSW Australia

Recent studies have demonstrated that pairing a stimulus with reward increases the extent to which it will automatically capture attention. We have shown that this value-modulated attentional capture effect holds for stimuli that have never been task-relevant. In a visual search task, the colour of a salient distractor stimulus determined the magnitude of reward available for a rapid eye-movement to a shape-defined target. Thus, while the distractor signalled the reward available for the trial, responding to it was never required in order to receive that reward. Indeed, if any gaze was captured by the distractor, the reward that would have been obtained was instead omitted. Nevertheless, distractors signalling the availability of high reward produced more capture than those signalling low reward, even though this resulted in the loss of more high-value rewards. We have demonstrated that this value-modulated capture effect is immune to volitional cognitive control, in that the effect persists when participants are explicitly informed of the omission contingency. However, training on the task allows participants to partially suppress capture by the high-value distractor. This suggests that reward learning processes simultaneously augment the attentional priority of stimuli in our environment, as well as our ability to suppress said stimuli.

[41T107] Macaque monkey use of categorical target templates to search for real-world objects

Bonnie Cooper1, Hossein Adeli2, Greg Zelinsky2 and Robert McPeek1

1Department of Biological Sciences, SUNY College of Optometry

2SUNY Stony Brook

Humans use categorical target templates to guide their search, but is the same true for macaques? Here we consider two tasks that differ in how targets were designated, one by showing a picture preview of a category exemplar (exemplar) and the other by using a category-specific symbolic cue (categorical). Stimuli were images of real-world objects arranged into object arrays of variable set size. The target was randomly present on half of the trials. The objects first fixated during search were identified and compared to the first-fixated objects from an image-based model of attention in the superior colliculus (MASC). On target-present trials, the proportion of immediate target fixations was well above chance and changed only minimally with set size. Critically, these patterns were nearly identical for exemplar and categorical search. Moreover, when a novel, never-before-seen exemplar of the target category was presented, strong and immediate guidance was again observed. On target-absent trials we found that the distractors that were more/less preferentially fixated by the macaque were similarly more/less fixated by MASC. We conclude that monkeys, much like humans, can form and use categorical target templates to guide their search.

[41T108] Moving beyond the single target paradigm: Set-size effects in visual foraging

Arni Kristjansson1, Ian M Thornton2 and Tómas Kristjánsson1

1Psychology, University of Iceland

2University of Malta

Set-size effects in visual search play a key role in theories of visual attention. But such single-target visual search tasks may capture only limited aspects of the function of human visual attention. Foraging tasks with multiple targets of different types arguably provide a closer analogy to everyday attentional processing. We used an iPad foraging task to measure the effects of absolute and relative set size on foraging patterns during “feature-based” foraging (targets differed from distractors on a single feature) and “conjunction” foraging, where targets were distinguished from distractors by two features. Participants tapped all stimuli from two stimulus categories while avoiding two other categories. Patterns of runs of same-target-type selection were similar regardless of set size: long sequential runs during conjunction foraging but short runs during feature foraging, reflecting rapid switching between target types. There were, however, strong effects of absolute and relative set size on response times within trials and on the cost of switching between target categories, as well as an interaction between relative set size and switch cost, with lower target proportions yielding larger switch costs. We discuss how foraging strategies are affected by set-size manipulations, perhaps through changes in saliency and crowding.

Funding: Funded by the Icelandic Research Fund (IRF)
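
Run statistics of the kind reported here come straight from the within-trial selection sequence. A small, hypothetical helper for illustration:

    def run_lengths(selections):
        # lengths of consecutive same-target-type selections,
        # e.g. ['A', 'A', 'B', 'B', 'B', 'A'] -> [2, 3, 1]
        if not selections:
            return []
        runs, count = [], 1
        for prev, cur in zip(selections, selections[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return runs

    print(run_lengths(list("AABBBA")))  # [2, 3, 1]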

[42T101] Neural responses to partially occluded symmetry

Marco Bertamini, Giulia Rampone, Adel Ferrari and Alexis Makin

Department of Psychological Sciences, University of Liverpool, UK

The Sustained Posterior Negativity (SPN) is a visual symmetry-related EEG component starting 250 ms after stimulus onset. The amplitude of the SPN is well predicted by the degree of regularity in the image. We studied whether the SPN can also emerge when information about symmetry is only revealed over time by dynamic occlusion. We used abstract shapes with vertical reflection symmetry, compared to random configurations. A light-grey vertical bar occluded half of the shape; 500 ms after stimulus onset, the bar shifted to the other side. The previously visible half became occluded, whilst the previously occluded half was uncovered and remained visible for 1000 ms. Participants perceived the whole shape and classified it as reflection or random with greater than 90% accuracy. This shows that parts can be correctly combined into wholes across time. ERP analysis showed an early SPN, from 250 ms to 500 ms after presentation of the second half of the pattern. Interestingly, this effect was right-lateralized. These results suggest that (a) symmetry can be computed by integration of transient and partial information, generating a response in a symmetry-sensitive network; (b) the right hemisphere plays a major role in this process; and (c) the short SPN latency indicates an early symmetry-detection sub-component of the classic SPN.

Funding: ES/K000187/1

[42T102] Grid-texture mechanisms in human vision: contrast detection of regular sparse micro-patterns requires specialist templates

Tim S Meese1 and Daniel Baker2

1School of Life and Health Sciences, Aston University

2University of York

Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness, and rejected a model involving probability summation across all elements. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for various regular texture densities. By assuming probability summation across these mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, we have revealed texture density mechanisms for the first time in human vision at contrast detection threshold (where the fitted level of intrinsic uncertainty was low and the only free parameter).

Funding: EPSRC Grant EP/H000038/1
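
The square-and-sum integrator model discussed above can be written in a few lines. A toy two-interval decision rule under assumed contrast and noise values (a sketch of the model class, not the authors’ fitted model):

    import numpy as np

    rng = np.random.default_rng(1)

    def integrator_response(element_contrasts, noise_sd=0.05):
        # contrast energy: square and sum over all texture elements,
        # plus late additive internal noise
        return np.sum(np.square(element_contrasts)) + rng.normal(0, noise_sd)

    # 2IFC detection: the model picks the interval with the larger response
    target = np.full(32, 0.05)   # 32 visible micro-patterns at 5% contrast
    blank = np.zeros(32)
    correct = integrator_response(target) > integrator_response(blank)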

[42T103]

Malte Persike and Guenter Meinhardt

Psychological Institute, University of Mainz, Germany

Contour integration refers to the ability of the visual system to bind disjoint local elements into coherent global shapes. In cluttered images containing randomly oriented elements a contour becomes salient when its elements are coaligned with a smooth global trajectory, in line with the Gestalt law of good continuation. One of the hallmarks of human contour integration is its susceptibility to curvature. Abrupt changes of contour curvature strongly diminish contour salience down to the point of invisibility. We show that this visibility decline can be easily remedied. By inserting local corner elements at points of angular discontinuity, a jagged contour becomes as salient as a straight one. We report results from a series of experiments for contours with and without corner elements which indicate their psychophysical equivalence. This presents a significant challenge to the notion that contour integration mostly relies on local interactions between neurons tuned to single orientations, and suggests that a site where single orientations and more complex local features are combined constitutes the early basis of contour and 2D shape processing.

[42T104] Binding feedforward and horizontal waves in V1 requires spatio-temporal synergy

Xoana G Troncoso, Benoit Le Bec, Marc Pananceau, Christophe Desbois, Florian Gerard-Mercier and Yves Fregnac

UNIC, CNRS, France

Long-distance horizontal connections within primary visual cortex (V1) are hypothesized to mediate a neural propagation process that binds cells with similar functional preferences across the visual field. Intracellular experiments in the anesthetized cat were designed to dissect out the spatial synergy and temporal coherence requirements necessary for synaptic integration operating beyond the receptive field (RF) extent (Gerard-Mercier et al., 2016). Here we used 6-stroke apparent motion (AM) concentric sequences of Gabor patches at saccadic speeds, centered on the subthreshold RF and extending up to 25° into the periphery. The response to stimulation of the RF center alone was compared to the response to the AM sequence, which was either centripetal or centrifugal, with the orientation of the individual elements either collinear or cross-oriented to the motion path. We demonstrate supra-linear surround-center input summation, and a non-linear boosting of the neuronal discharge when the RF stimulation was preceded by the activation of a horizontal propagation wave. Filling-in/predictive responses were also induced by the periphery alone. This is consistent with our hypothesis that cooperative “Gestalt-like” interactions are triggered when the visual input carries a sufficient level of spatio-temporal coherence, matching in its geometry and temporal signature the underlying V1 connectivity.

Funding: European Union’s Horizon 2020 Marie Skłodowska-Curie Grant 659593 (ProactionPerception), European Union’s FP7 Marie Curie Grant 302215 (BrainPercepts), Paris-Saclay Idex Icode and NeurosSaclay, CNRS

[42T105] Mandatory feature integration across retinotopic locations

Leila Drissi Daoudi1, Haluk Öğmen2 and Michael H. Herzog1

1Laboratory of Psychophysics, Brain Mind Institute, EPFL

2Department of Electrical and Computer Engineering, Center for NeuroEngineering and Cognitive Science, University of Houston, Houston TX, USA

Although visual integration is often thought to be retinotopic, visual features can be integrated across retinotopic locations. For example, when a Vernier is followed by a sequence of flanking lines on either side, a percept of two diverging motion streams is elicited. Even though the central Vernier is invisible due to metacontrast masking, its offset is visible in the following elements. If an offset is introduced to one of the flanking lines, the two offsets combine (Otto et al., 2006). Here, by varying the number of flanking lines and the position of the flank offset, we show that this integration lasts up to 450 ms. Furthermore, the process is mandatory, i.e., observers are not able to consciously access the individual lines and change their decision. These results suggest that the contents of consciousness can be modulated by an unconscious memory process wherein information is integrated for up to 450 ms.

Funding: This project was financed with a grant from the Swiss SystemsX.ch initiative, evaluated by the Swiss National Science Foundation.

[42T106] Perception of global object motion without integration of local motion signals

Rémy Allard and Angelo Arleo

Institut de la Vision, Université Pierre et Marie Curie, France

The global motion direction of an object can be correctly perceived even if local motion directions diverge from the global direction (e.g., the edges of a rotating diamond perceived through apertures). The perception of such global object motion is generally attributed to the integration of local motion signals. The fact that the spatial configuration can strongly influence global motion perception at fixation has been interpreted as reflecting interactions between form and motion integration. To challenge this interpretation, the ability to perceive global object motion was evaluated for various spatial configurations, when neutralizing energy-based motion processing (stroboscopic motion) and when neutralizing the tracking motion system (peripheral viewing). As expected from previous studies, the perception of global object motion depended on the spatial configuration at fixation, but not in the periphery. Moreover, neutralizing energy-based motion processing did not impair the perception of global object motion at fixation, whereas it severely impaired it in the periphery. These results suggest no substantial interaction between form and motion integration: integration of local energy-based motion signals (without form integration) in the periphery, and tracking of a global object after form integration at fixation (without integration of local energy-based motion signals).

Funding: This research was supported by ANR – Essilor SilverSight Chair

[42T201] Predicting behavior from decoded searchlight representations shows where decodable information relates to behavior

Tijl Grootswagers1, Radoslaw Cichy2 and Thomas Carlson1

1Cognitive Science, Macquarie University

2Free University Berlin, Germany

An implicit assumption often made in the interpretation of brain decoding studies is that if information is decodable from a brain region, then the brain is using this information for behavior (but see Williams et al., 2007). In the present study, we sought to examine the dissociation between “decodability” and neural correlates of behavior. We used a support vector machine classifier and searchlight analysis to first identify regions of the brain that could decode whether visually presented objects were animate or inanimate, in two fMRI datasets (that used different stimuli). A second searchlight analysis was then used on the same data to identify regions whose activity correlates with reaction times for the same animate/inanimate categorization task in humans. We found decodable information along the entire ventral-temporal pathway, but regions that correlated with RT behavior were restricted to inferior temporal cortex (ITC). These results support ITC’s important role in object categorization behavior, consistent with previous region-of-interest based findings (Carlson et al., 2014). Our results further show that our behavioral RT-searchlight method complements standard decoding analyses by differentiating between information that is merely decodable and information that is more directly related to behavior.
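
The two-step logic — decodability versus behavioural relevance — can be sketched for a single searchlight sphere. Everything below (data shapes, the distance-to-boundary statistic, the Spearman correlation) is an assumed illustration rather than the authors’ exact pipeline:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))      # trials x voxels in one sphere
    y = rng.integers(0, 2, size=200)    # animate (1) vs. inanimate (0)
    rt = rng.normal(600.0, 80.0, 200)   # behavioural RTs in ms

    # (1) decodability: cross-validated classification accuracy
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()

    # (2) behavioural relevance: does classifier evidence predict RT?
    clf = SVC(kernel="linear").fit(X, y)
    evidence = np.abs(clf.decision_function(X))
    rho, p = spearmanr(evidence, rt)    # rho < 0 expected where decoded
                                        # information drives behaviour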

[42T202] Why cognitive scientists should abandon the analysis of mean RT using ANOVA and switch to event history analysis

Sven Panis

Fachbereich Sozialwissenschaften, Fachgebiet Allgemeine Psychologie, University of Kaiserslautern

During the last decades it has become clear that cognitive processes do not unfold as assumed by the discrete stages model. Instead, they unfold concurrently, interactively, and decisions consume time. Nevertheless, most cognitive scientists still treat behavioral latency data using the techniques compatible with the discrete stages model: the analysis of (trimmed) mean correct RT using ANOVA (and the analysis of mean error rate using ANOVA). I present different experimental data sets that illustrate why cognitive scientists should abandon the analysis of means and switch to the statistical technique for analysing time-to-event data that is standard in many scientific disciplines: event history analysis; also known as survival analysis, hazard analysis, duration analysis, failure-time analysis, and transition analysis. Discrete time event history analysis is a distributional method that takes the passage of time explicitly into account, and allows one to study the within-trial (and across-trial) time course of the effect of an experimental manipulation on the hazard (or conditional probability) distribution of response occurrence in detection paradigms, and also on the conditional accuracy distribution of emitted responses in choice paradigms. Event history analysis can reveal what the mean conceals, and can deal with time-varying predictors and right-censored observations.
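
For readers unfamiliar with the approach, the discrete-time hazard — the conditional probability of a response in time bin t, given no response before t — can be estimated in a few lines. A minimal sketch; the bin width and variable names are assumptions:

    import numpy as np

    def discrete_time_hazard(rts, edges):
        # h(t) = P(response in bin t | no response before bin t);
        # RTs beyond the last edge are treated as right-censored
        bins = np.digitize(rts, edges)          # bin index per trial
        hazard, at_risk = [], len(rts)
        for t in range(1, len(edges)):
            events = int(np.sum(bins == t))     # responses emitted in bin t
            hazard.append(events / at_risk if at_risk else np.nan)
            at_risk -= events                   # responders leave the risk set
        return np.array(hazard)

    # e.g. 50-ms bins from 0 to 1 s:
    # h = discrete_time_hazard(rt_seconds, np.arange(0.0, 1.05, 0.05))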

[42T203] Practice lowers contrast thresholds for detection, not sensory thresholds

Joshua A Solomon and Christopher Tyler

Centre for Applied Vision Research, City University London

In an m-alternative forced-choice detection task, observers respond incorrectly for one of two reasons. One possibility is that the observer hallucinated one alternative, and that hallucination was more intense than the actual stimulus. The other possibility is that the observer didn’t see anything, was forced to guess, and guessed wrong. This second option (the 'sensory threshold' hypothesis) is reported to be inconsistent with the contrast-detection behaviour of experienced psychophysical observers. Inexperienced observers, however, require relatively higher contrast levels for detection. To determine whether these raised detection levels were caused by a higher sensory threshold or some other factor, we asked inexperienced observers to detect a briefly flashed Gabor pattern, both in the presence and in the absence of full-field, dynamic noise. Noise elevated detection levels by ∼6 dB. Nonetheless, over the course of 5 days' testing, detection levels dropped ∼3 dB in both conditions, implying that this practice effect was not due to a change in any sensory threshold. Instead, our measurements of detection psychometric functions are consistent with an effect of practice on two components of hallucination-inducing noise: one whose variance increases with that of the external noise, and one whose variance is independent of it.

Funding: BBSRC grant #BB/K01479X/1
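
For illustration, a minimal simulation (in Python, with assumed parameters; our sketch, not the authors' model) of the two accounts of errors in m-alternative forced-choice detection described above.

import numpy as np
rng = np.random.default_rng(1)

def pc_signal_detection(d, m, n=100_000):
    # No sensory threshold: the observer picks the interval with the
    # largest internal response, so noise alone can "hallucinate" a
    # response more intense than the actual stimulus.
    signal = rng.normal(d, 1.0, n)
    noise = rng.normal(0.0, 1.0, (n, m - 1))
    return np.mean(signal > noise.max(axis=1))

def pc_high_threshold(p_see, m):
    # Sensory threshold: the stimulus is either seen (probability p_see),
    # or nothing is seen and the observer guesses among m alternatives.
    return p_see + (1.0 - p_see) / m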

[42T204] The impact of feedback in pupil-based biofeedback applications

Jan Ehlers and Anke Huckauf

General Psychology, Ulm University, Germany

Pupil diameter is at any moment determined by the antagonistic interplay of two muscle groups governed by the sympathetic and parasympathetic parts of the autonomic nervous system. Consequently, pupil size changes provide a direct impression of the user’s cognitive and affective state and may serve as a reference value in classical biofeedback settings. Consistent with this, we recently demonstrated that sympathetic activity indexed by pupil dynamics can be specifically manipulated by means of simple cognitive strategies (Ehlers et al., 2015). Subjects received visual real-time feedback on pupil size changes and successfully expanded diameter beyond baseline variations, albeit with varying degrees of success and only over brief periods. The current investigation addresses feedback type as a key criterion in pupil-based biofeedback performance. Three experimental groups used various cognitive strategies to increase/decrease sympathetic arousal (indexed by pupil diameter), with each group performing on the basis of a different feedback mechanism (continuous, discrete, or no feedback). Results indicate only a slight increase in pupil size in the absence of feedback, whereas discrete and continuous feedback enable strong self-induced pupil enlargement. Furthermore, continuous visualization of dynamics seems to trigger sympathetic activity and leads to increased pupil sizes even when a reduction of diameter/arousal is intended.

[42T205] Effects of flight duration, expertise, and arousal on eye movements in aviators

Stephen L Macknik1, Adriana Nozima1, Leandro Di Stasi1, Susana Martinez-Conde1, Michael McCamy1, Ellis Gayles1, Alexander Cole1, Michael Foster1, Brad Hoare1, Francesco Tenore1, M Sage Jessee1, Eric Pohlmeyer1, Mark Chevillet1, Andrés Catena2 and Wânia C de Souza1

1Downstate Medical Center, State University of New York

2University of Granada, Spain

Eye movements can reveal where, what, why, and how the brain processes the constant flow of visual information from the world. Here, we measured the eye movements of United States Marine Corps (USMC) combat aviators to understand the oculomotor effects of fatigue as a function of time-on-flight (TOF), expertise, and arousal during flight training. Saccadic velocities decreased after flights lasting 1 h or more, suggesting that saccadic velocity could serve as a biomarker of aviator fatigue. A follow-on study set out to determine, via oculomotor measures, whether TOF affected the aviators’ cognitive processes. We examined oculomotor dynamics in response to emergency procedures in flight, and found that the effects of TOF on eye movements were alleviated temporarily by the addition of high-arousal stressful conditions. Finally, we tasked novice pilots with repeatedly resolving a serious emergency procedure (a dual engine failure cascade), followed by watching a video of an expert solving the same emergency procedures. Half of the novices saw the video with the expert’s eye position indicated, and the other half watched the video without eye movements superimposed. Pilots who were given the expert eye movement information performed better subsequently, and specifically incorporated the expert’s eye movement strategies into their behavior.
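
For illustration, a minimal sketch (in Python; our assumptions: 1 kHz gaze samples in degrees, with the trace beginning and ending during fixation) of the fatigue marker used above, peak saccadic velocity, extracted with a simple velocity-threshold criterion.

import numpy as np

def peak_saccade_velocities(x, y, fs=1000.0, vthresh=30.0):
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)                      # deg/s
    above = speed > vthresh
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    return [speed[s:e].max() for s, e in zip(starts, ends)]

A downward drift in the distribution of these peaks over time-on-flight is the signature reported above.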

[42T206] A new 3D virtual reality system to assess visual function and to perform visual therapy

Jaume Pujol, Juan C. Ondategui-Parra, Rosa Borràs, Mikel Aldaba, Fernando Díaz-Douton, Carlos E. García-Guerra, Meritxell Vilaseca, Carles Otero, Josselin Gautier, Clara Mestre and Marta Salvador

DAVALOR Research Center - Universitat Politècnica de Catalunya

Assessment of visual function in a clinical optometric examination is carried out through a battery of subjective tests. A complete examination is time-consuming, leading to patient fatigue, and the results can be influenced by the optometrist. Vision therapy procedures take even longer sessions and are also dependent on subjective patient responses. A new 3D virtual reality system with matching accommodation and convergence planes has been developed (Eye and Vision Analyzer, EVA, DAVALOR, Spain). While the patient plays a short videogame (<5 min), objective and fast measurements of most optometric parameters are obtained. The system generates 3D images on two displays. Vergence is induced through image disparity and accommodation is stimulated using a varifocal optical system. EVA also incorporates a Hartmann-Shack autorefractometer and an eye-tracker. Measurements are repeated until a high confidence level is reached, and patient collaboration is also measured. A clinical validation of the system was performed in a group of 250 patients. Optometric parameters related to refraction (objective and subjective), accommodation (amplitude, accommodative facility) and vergence (cover test, near point of convergence, fusional vergence and vergence facility) were obtained with EVA and compared to conventional clinical procedures. Results showed good correlation, and the differences obtained were always within clinical tolerance.

Funding: This research was supported by the Spanish Ministry of Economy and Competitiveness under the grant DPI2014-56850-R, the European Union and DAVALOR. Carles Otero and Clara Mestre would like to thank the Generalitat de Catalunya for a PhD studentship award.

[43T101] Breaking shape-from-shading inference through body form and countershading camouflage

Julie M Harris1, Olivier Penacchio1, P. George Lovell2 and Innes Cuthill3

1School of Psychology and Neuroscience, University of St Andrews

2Abertay University

3University of Bristol, UK

Humans, and possibly many other animals, use shading as a cue to object shape. Countershading, one of the most widely observed colour patterns in animals, can disrupt these cues to identification. This is a shading pattern on the body that compensates for directional illumination: the body is darker on the side exposed to higher light intensity (typically, a dark back and a light belly). To function effectively, countershading must be tuned to 3D form, but natural countershaded reflectance patterns have never been measured while taking shape into account. Here we tested whether the countershading pattern on prey animals could be predicted from their shape, a key test of the camouflage-as-adaptation theory. We measured both reflectance and shape for several species of caterpillar. Shape was measured using an optical 3D scanner, and reflectance was recovered by measuring outgoing radiance and correcting for local shape. We compared the measured reflectance pattern with that predicted from the measured geometrical shape and known illumination. We found that reflectance was well predicted by shape for some countershaded species. The results suggest that body shape and colour can both evolve to counter shape-from-shading inference.

Funding: BBSRC
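
For illustration, a minimal sketch (in Python) of the prediction tested above, under the simplifying assumption of a Lambertian surface (our reading, not the authors' measurement pipeline): the countershading that exactly cancels shading from a known directional light.

import numpy as np

def optimal_countershading(normals, light, mean_albedo=0.5):
    # Lambertian image intensity: I = albedo * max(0, n . l). Making the
    # albedo proportional to 1 / (n . l) renders I constant, so shading
    # no longer reveals the body's 3D form.
    ndotl = np.clip(normals @ light, 1e-3, None)   # normals: (N, 3) array
    albedo = 1.0 / ndotl
    return albedo * (mean_albedo / albedo.mean())  # normalise mean albedo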

[43T102] The role of projective consistency in perceiving 3D shape from motion and contour

Manish Singh, Xiaoli He and Jacob Feldman

Center for Cognitive Science, Rutgers University, NJ

Observers spontaneously perceive 3D structure in motion displays that are projectively consistent with rotation in depth. They can, however, also perceive 3D structure in displays that are projectively inconsistent with a 3D interpretation, such as the “rotating columns” display (Froyen et al., JOV 2013; Tanrikulu et al., JOV 2016) containing multiple alternating regions. We examined the role of projective consistency in standard structure-from-motion (SFM) displays by manipulating (i) the degree of symmetry of the occluding contours; (ii) the speed profile of the dots, varying from constant speed (α = 0) to a full cosine speed profile (α = 1). For each level of symmetry, we used the method of constant stimuli to obtain psychometric curves for the proportion of “volumetric” responses as a function of α. Results: 1) Observers’ α thresholds were around 0.4-0.6, where the speed profile deviates substantially from projective consistency; 2) Degree of asymmetry had a surprisingly small effect: the α thresholds for the asymmetric displays were only slightly higher. The results show that 3D percepts are surprisingly robust to deviations from projective consistency—both of the speed profile, and of occluding contours. They argue for a more nuanced view of 3D perception in which projective consistency plays a less prominent role than in conventional accounts.

Funding: NIH EY021494; NSF DGE 0549115 (IGERT: Interdisciplinary Training in Perceptual Science)
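
For illustration, a minimal sketch (in Python, our notation) of the speed manipulation: α blends a constant profile (α = 0) into the cosine profile that rigid rotation in depth projects to (α = 1), where theta is a dot's angular position on the rotating surface.

import numpy as np

def dot_speed(theta, alpha, v0=1.0):
    # alpha = 1 is projectively consistent with rotation in depth;
    # alpha = 0 is maximally inconsistent (constant speed).
    return (1.0 - alpha) * v0 + alpha * v0 * np.abs(np.cos(theta))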

[43T103] Is the Müller-Lyer illusion a perspective-based illusion?

Dejan Todorović

Psychology, University of Belgrade

An influential general view of visual illusions is that they arise when the visual system applies to 2D images certain processing strategies that are appropriate for handling 3D scenes. A prominent example is the idea that the Müller-Lyer illusion is a consequence of the visual system interpreting certain 2D configurations as projections of portions of objects in 3D. Such perspective-based theories readily acknowledge that the illusion-inducing 2D configurations do not evoke conscious 3D percepts, but claim that they nevertheless contain strong 3D cues which unconsciously trigger processing mechanisms that would be appropriate for real 3D scenes. One way to test this idea is to study alternative 2D configurations which contain the basic Müller-Lyer ‘motif’ but lack 3D cues, or contain cues that evoke conscious 3D interpretations different from those assumed by perspective-based theories. In five experiments using various alternative 2D configurations, it was found that they all evoked Müller-Lyer-type illusory effects whose structure was very similar to the effects evoked by the classical configuration, though generally somewhat weaker. Such findings are difficult to explain by perspective-based theories and challenge the notion that perspective interpretations provide a major contribution to the Müller-Lyer illusion.

Funding: This work was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, Grant ON179033.

[43T104]

Brian J Rogers1 and Jan Koenderink2

1Experimental Psychology, University of Oxford

2KU Leuven, Belgium

Viewing pictures or photographs of 3-D objects and scenes through a synopter - which has the effect of locating the two eyes at coincident spatial positions - has been shown to produce an enhanced depth percept (Koenderink et al., 1994). The accepted explanation is that synoptic viewing creates a vanishing disparity field, which is consistent with viewing a scene at infinity. Synoptic viewing creates both a vergence demand of zero (parallel lines of sight) and a vertical disparity field indicating viewing at infinity. To investigate how these factors influence the enhanced depth effect, we manipulated the vergence demand and the vertical disparity field independently using a wide field (26°×20°) ‘virtual synopter’ display. Test images ranged from perspective line drawings to complex pictorial paintings which were viewed under three different conditions: monocular, binocular and synoptic. Observers judged the amount and vividness of the perceived depth before matching the perceived slant of the depicted surfaces using a gauge figure. Synoptic viewing produced the greatest depth enhancement and vergence demand was more effective than vertical disparity manipulations. Depth enhancement was greater for images containing only shading information (such as sculptures) compared to images that contained strong linear perspective cues.
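
For illustration, the worked geometry behind the two factors manipulated above (our sketch, with assumed values): the vergence demand for an interocular distance ipd (in metres) at viewing distance D. A synopter sets the effective ipd to zero, so vergence demand and binocular disparities vanish, as for a real scene at infinity.

import numpy as np

def vergence_demand_deg(ipd=0.064, D=0.57):
    return np.degrees(2.0 * np.arctan(ipd / (2.0 * D)))

# vergence_demand_deg()        -> about 6.4 deg at 57 cm
# vergence_demand_deg(ipd=0.0) -> 0 deg (synoptic viewing)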

[43T105] The Bologna tower paradox: a dynamic architectural zoom-lens illusion in framed visual perception

Leonardo Bonetti and Marco Costa

Department of Psychology, University of Bologna, Italy

In the former monastic complex of San Michele in Bosco in Bologna (Italy), a 162.26 m monumental corridor is perfectly aligned in elevation and azimuth with the top of the medieval main tower of the city (1407 m away). Walking backward and forward along the corridor, it is possible to experience a remarkable size illusion of the tower. Near the window facing the city at the north end of the corridor, the tower is perceived as quite small, being included in the wide urban view. Going backward, the window frame progressively zooms in on the tower top, which is perceived to enlarge and to become closer. The phenomenon generalizes to all situations in which an object, placed quite far from the observer, is perceived through a frame while moving backward and forward. To test the phenomenon, 9 pictures were taken along the corridor at 20 m intervals. Thirty-six picture pairs were presented to 144 participants, who had to evaluate in which of the two pictures the tower was perceived as larger and closer, using a two-alternative forced-choice paradigm for both evaluations. Results confirmed that the tower was perceived as significantly and progressively larger and closer in the more distant shots.
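
For illustration, the worked arithmetic behind the effect (our sketch in Python; a tower height of 97 m and a 2 m window aperture are assumed): stepping back along the corridor barely changes the tower's visual angle but sharply shrinks the window's, so the tower fills an ever larger fraction of the frame.

import numpy as np

def visual_angle_deg(size, dist):
    return np.degrees(2.0 * np.arctan(size / (2.0 * dist)))

for d in (1.0, 80.0, 160.0):                     # metres behind the window
    tower = visual_angle_deg(97.0, 1407.0 + d)   # tower hardly changes
    frame = visual_angle_deg(2.0, d)             # frame collapses
    print(f"{d:5.1f} m: tower {tower:.2f} deg, frame {frame:.2f} deg, "
          f"ratio {tower / frame:.2f}")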

[43T106] Stereo disparity modulates evoked potentials associated with the perceptual classification of 3D object shape: a high-density ERP study

Charles Leek1, Mark Roberts1 and Alan Pegna2

1Psychology, Bangor University

2University of Queensland

One unresolved theoretical issue is whether the perceptual analysis of 3D object shape is modulated by stereo visual input. Some current models of object recognition attribute no functional significance to stereo input, and strongly predict that stereo should not modulate shape classification. To test this hypothesis, high-density (256-channel) EEG was used to record the temporal dynamics and perceptual sensitivity of shape processing under conditions of mono and stereo viewing. On each trial, observers made image classification (‘Same’/’Different’) judgements to two briefly presented, multi-part, novel objects. Analyses using mass univariate contrasts showed that the earliest sensitivity to mono versus stereo viewing appeared as a negative deflection over posterior locations on the N1 component between 160–220 ms post stimulus onset. Later ERP modulations during the N2 time window between 240–370 ms were linked to image classification. N2 activity reflected two distinct components – an early N2 (240–290 ms) and a late N2 (290–370 ms) – that showed different patterns of responses to mono and stereo input, and differential sensitivity to 3D object structure. The results show that stereo input modulates the perception of 3D object shape, and challenge current theories that attribute no functional role to stereo input during 3D shape perception.

Funding: Royal Society (UK), Swiss National Science Foundation

[43T201] EEG frequency tagging reveals distinct visual processing of moving humans and motion synchrony

Nihan Alp, Andrey R. Nikolaev, Johan Wagemans and Naoki Kogo

Brain and Cognition, KU Leuven, Belgium

The brain has dedicated networks to process interacting human bodies and readily attributes interpersonal interaction to synchronous motion, even in spatially scrambled displays. However, it is not clear to what extent the neural processes underlying the perception of human bodies differ from those of other objects moving in synchrony. Here we show a clear delineation of these processes by manipulating the motion synchrony and biological nature of point-light displays (PLDs) in a 2 (synchronous vs asynchronous) × 2 (human vs non-human) experiment. We applied the frequency-tagging technique by giving continuous contrast modulations to the individual PLDs at different frequencies (f1 and f2). Then, in the frequency spectrum of the steady-state EEG recording, we looked for emergent frequency components (e.g., f1 + f2, 2f1 + f2), which indicate integrated brain responses due to nonlinear summation of neural activity. We found two emergent components which signify distinct levels of interaction between the moving objects: the first component indicates the perception of human-like point-light dancers; the second indicates the degree of motion synchrony. These findings dissociate the visual processing of moving human bodies from the processing of motion synchrony in general, suggesting that social interactions have shaped a dedicated visual mechanism for the processing of moving human bodies.

Funding: FWO - Research Foundation Flanders: PhD Fellowship
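
For illustration, a minimal simulation (in Python, with assumed tagging frequencies) of why intermodulation components index integration: a multiplicative interaction between two frequency-tagged signals creates power at f1 + f2 that a purely linear sum of the same signals does not contain.

import numpy as np

fs, T, f1, f2 = 500.0, 60.0, 5.0, 7.0     # sample rate (Hz), duration (s), tags (Hz)
t = np.arange(0.0, T, 1.0 / fs)
a = np.sin(2 * np.pi * f1 * t)
b = np.sin(2 * np.pi * f2 * t)
linear = a + b                            # independent processing
nonlinear = (1 + a) * (1 + b)             # integration: contains a*b terms

freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
k = np.argmin(np.abs(freqs - (f1 + f2)))  # spectral bin at f1 + f2
amp = lambda x: np.abs(np.fft.rfft(x)) / len(x)
print(amp(linear)[k], amp(nonlinear)[k])  # ~0 versus clearly > 0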

[43T202] Perception of grasping of biological movement in typical and autistic children

Marco Turi1, Francesca Tinelli2, David Burr3,4, Giulio Sandini5 and Maria Concetta Morrone1,2

1Department of Translational Research On New Technologies in Medicine and Surgery, University of Pisa

2Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa Italy

3Department of Neuroscience Psychology Pharmacology and Child Health, University of Florence, Italy

4School of Psychology, University of Western Australia, Perth, Australia

5Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy

We investigated the ability of children (both typical and with ASD) to discriminate shape (a cylinder from a cube) by observing a point-light movie of an actor grasping the object, either from an allocentric or an egocentric viewpoint (observing the action of others or of oneself). For adults, sensitivity is slightly greater for the egocentric than the allocentric viewpoint (Campanella et al., Proc Roy Soc, 2010). Typically developing children younger than 7 years could not do the task, and then improved gradually up to 16 years. Sensitivity was initially equal for the two viewpoints; the egocentric viewpoint then became more sensitive at about 16 years. High-functioning autistic children showed a strong selective impairment only in the allocentric condition, where thresholds were twice as high: egocentric thresholds were similar to those of age- and ability-matched controls. Furthermore, the magnitude of the impairment correlated strongly with the degree of symptomology (R2 = 0.5). The results suggest that children with ASD are impaired in their ability to predict and infer the consequences of the movements of others, which could be related to the social-communicative deficits often reported in autism.

Funding: Early cortical sensory plasticity and adaptability in human adults (ESCPLAIN - 338866)

[43T203] Engaging facial muscular activity biases the emotion recognition of point-light biological walkers

Aiko Murata1, Fernando Marmolejo-Ramos2, Michał Parzuchowski3, Carlos Tirado2 and Katsumi Watanabe1

1School of Fundamental Science and Engineering, Waseda University

2Stockholm University

3SWPS University of Social Sciences and Humanities in Sopot

Blaesi and Wilson (2010) showed that the recognition of others’ facial expressions is biased by the observer’s facial muscular activity. It has been hypothesized that holding chopsticks between the teeth engages the same muscles used for smiling and therefore biases expression recognition toward happiness or a more positive mood. However, it is not yet known whether such modulatory effects are also observed for the recognition of dynamic bodily expressions. We therefore investigated the emotion recognition of point-light biological walkers, along with that of static face stimuli, while subjects either held chopsticks between their teeth or did not. In the holding-chopsticks condition, subjects tended to see happy expressions in the facial stimuli compared to the no-chopsticks condition, in concordance with the aforementioned study. Interestingly, a similar effect was found for the biological motion stimuli as well. These results indicate that facial muscular activity alters not only the recognition of facial expressions but also that of bodily expressions.

Funding: CREST, Japan Science and Technology Agency

[43T204] Model for the integration of form and shading cues in multi-stable body motion perception

Leonid A Fedorov and Martin Giese

IMPRS

Dept. Cognitive Neurology CIN & HIH, University of Tuebingen

Body motion perception from impoverished stimuli, such as point-light motion, shows interesting multistability, which disappears in the presence of shading cues that suggest depth. Existing models of body motion perception account neither for multistability nor for the specific influence of such shading cues. We propose a new model that captures these phenomena. METHOD: We extended a classical hierarchical recognition model for body motion by: (i) a two-dimensional dynamic neural field of snapshot neurons, which reproduces decisions about walking direction and their multistability; (ii) a novel hierarchical pathway that specifically processes intrinsic luminance gradients, while being invariant against the strong contrast edges on the boundary of the body. RESULTS: We show that the model reproduces the observed multistability of perception for unshaded walker silhouettes. The addition of intrinsic shading cues results in monostable, unambiguous perception in the model, consistent with psychophysical results. In addition, we show that the novel shading pathway is necessary to accomplish robust recognition of the relevant intrinsic luminance gradients. CONCLUSIONS: By straightforward extensions of a classical physiologically-inspired model for body motion recognition, we were able to account for the perceptual multistability of body motion perception and for its dependence on shading cues.

Funding: BMBF, FKZ: 01GQ1002A, ABC PITN-GA-011-290011, CogIMon H2020 ICT-644727; HBP FP7-604102; Koroibot FP7-611909, DFG GI 305/4-1;
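
For readers unfamiliar with dynamic neural fields: such a field typically follows the Amari form (our sketch; the authors' exact equations may differ),

\tau \, \dot{u}(\mathbf{x}, t) = -u(\mathbf{x}, t) + \int w(\mathbf{x} - \mathbf{x}') \, f\!\left(u(\mathbf{x}', t)\right) d\mathbf{x}' + s(\mathbf{x}, t),

where u is the activation over the two field dimensions (here, snapshot number and walking direction), w is a local-excitation/surround-inhibition kernel whose competition can sustain rival attractors (hence multistability), f is a threshold nonlinearity, and s is the feedforward input from the form and shading pathways.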

[43T205] A computational model of biological motion detection based on motor invariants

Alessia Vignolo1, Nicoletta Noceti2, Francesca Odone2, Francesco Rea3, Alessandra Sciutti3 and Giulio Sandini3

1RBCS - Robotics, Brain and Cognitive Sciences (IIT)

2DIBRIS - Department of Computer Science, Bioengineering, Robotics and System Engineering (Unige), Università di Genova

3Istituto Italiano di Tecnologia – Robotics Brain and Cognitive Sciences

Despite advances in providing artificial agents with human-like capabilities, robots still lack the ability to visually perceive the subtle communication cues embedded in natural human movements, which make human interaction so efficient. The long-term goal of our research is to endow the humanoid robot iCub with human-like perception and action capabilities, triggering the mutual understanding proper to human collaborations. Here we investigate how the visual signatures of the regularities of biological motion can be exploited as implicit communication messages during social interaction. Specifically, we explore the use of low-level visual motion features for detecting potentially interacting agents on the basis of biologically plausible dynamics occurring in the scene. To do this we adopt a descriptor based on the so-called “Two-Thirds Power Law”, a well-known invariant of end-point human movements, for which we provide a thorough experimental analysis that clarifies the constraints under which the law remains readable from visual data. We then validate the computational model by implementing it on the robot and showing that it enables the visual detection of human activity even in the presence of severe occlusions. This achievement paves the way for HRI-based applications in various contexts, ranging from personal robots to physical, social and cognitive rehabilitation.

Funding: The research presented here has been supported by the European CODEFROR project (FP7-PIRSES-2013-612555)
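
For illustration, a minimal sketch (in Python, our formulation) of the invariant the descriptor builds on, the two-thirds power law: end-point speed v scales with path curvature kappa as v = K * kappa**(-1/3), so movements slow down in high-curvature segments. The inputs x, y are an assumed trajectory sampled at fs Hz.

import numpy as np

def speed_and_curvature(x, y, fs=100.0):
    dx, dy = np.gradient(x) * fs, np.gradient(y) * fs
    ddx, ddy = np.gradient(dx) * fs, np.gradient(dy) * fs
    v = np.hypot(dx, dy)
    kappa = np.abs(dx * ddy - dy * ddx) / np.maximum(v, 1e-9) ** 3
    return v, kappa

# For biological end-point motion, log v against log kappa is close to a
# line of slope -1/3; large deviations suggest non-biological dynamics.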

[43T206] Evidence for norm-based coding of human movement speed

George Mather1, Rebecca Sharman2 and Todd Parsons1

1School of Psychology, University of Lincoln

2University of Stirling

Estimates of the visual speed of human movements such as hand gestures, facial expressions and locomotion are important during social interactions because they can be used to infer mood and intention. However, it is not clear how observers use retinal signals to estimate real-world movement speed. We conducted a series of experiments to investigate adaptation-induced changes in apparent human locomotion speed, to test whether the changes show repulsion of similar speeds or global re-normalisation of all apparent speeds. Participants adapted to videos of walking or running figures at various playback speeds, and then judged the apparent movement speed of subsequently presented test clips. Their task was to report whether each test clip appeared faster or slower than a ‘natural’ speed. After adaptation to a slow-motion or fast-forward video, psychometric functions showed that the apparent speed of all test clips changed, becoming faster or slower respectively, consistent with global re-normalisation rather than with repulsion of test speeds close to the adapting speed. The adaptation effect depended on the retinal speed of the adapting stimulus but did not require recognizably human movements.

Funding: ESRC grant ES/K006088

A

Abbatecola, Clement – 4P107

Abeles, Dekel – 2P063

Actis-Grosso, Rossana – 3P142

Adamian, Nika – 1P111, 23T105

Adams, Mark – 2P107

Adams, Wendy – 2P011, 4P145, 11S302

Adeli, Hossein – 11T103, 41T107

Aggius-Vella, Elena – 4P104

Agnew, Hannah – 4P003

Agosta, Sara – 13T104

Agostini, Tiziano – 2P082

Agtzidis, Ioannis – 4P121

Aguado, Borja – 3P038

Aguilar-Lleyda, David – 3P076

Aitken, Fraser – 3P050

Aivar, M. Pilar – 1P007

Akbarinia, Arash – 3P132

Akshoomoff, Natacha – 23T101

Alais, David – 12T101, 32T103

Alavedra-Ortiz, Carme – 2P030

Albonico, Andrea – 3P066

Aldaba, Mikel – 2P030, 42T206

Alekseeva, Daria – 1P110

Alexander, Nathan – 4P041

Alexeeva, Svetlana – 3P070

Alho, Kimmo – 32T106

Allard, Rémy – 42T106

Allen, John – 1P013

Allen, William – 3P037

Almeida, Jorge – 2P005

Alonso, Jose-Manuel – 2P065

Alp, Nihan – 43T201

Álvaro, Leticia – 3P042, 3P044, 3P047

Amado, Catarina – 2P001

Amir, Afsana – 3P046

Amiri, Sabrina – 2P059

Amit, Roy – 2P063

Ananyeva, Kristina – 1P042

Andermane, Nora – 2P024

Andersen, Søren – 13T201, 13T205, 41S204, 4P024, 3P013, 41T105

Anderson, Grace – 12T302

Anderson, Nicki – 41T103

Ando, Hiroshi – 2P104, 3P106

Andrew, Morag – 12T202

Anobile, Giovanni – 31T206

Ansell, Joe – 3P023

Ansorge, Ulrich – 1P014

Anstis, Stuart – 2P101

Antonov, Plamen – 13T205

Aparajeya, Prashant – 1P088

Arai, Takeyuki – 4P009

Arató, József – 31S106, 22T102

Ardasheva, Liubov – 1P015, 1P017

Ardestani, Mohammad – 1P097

Arikan, Belkis – 3P110

Arina, Galina – 4P103

Arleo, Angelo – 42T106

Arnold, Derek – 2P013, 32T104

Arranz-Paraíso, Sandra – 3P100

Arrighi, Roberto – 31T206

Artemenkov, Sergei – 1P141

Asano, Takashi – 4P015

Asaoka, Riku – 3P139

Asher, Jordi – 4P080

Ashida, Hiroshi – 4P122

Astle, Andrew – 41S202

Aston, Stacey – 3P035

Atkinson, Janette – 12T202, 23T101

Atmanspacher, Harald – 2P122

Avery, Mark – 2P051

Aviezer, Hillel – 3P094

Awazitani, Tomoko – 2P025

Ayhan, Inci – 3P021

Azañón, Elena – 1P146

Aznar-Casanova, J. – 2P125, 2P128

B

Babenko, Vitaly – 1P110

Babu, Ramesh – 4P054

Bach, Michael – 4P018

Bachmann, Talis – 1P018

Badcock, Nicholas – 2P136, 3P085

Baddeley, Roland – 3P058

Baker, Curtis – 3P128

Baker, Daniel – 3P079, 4P093, 42T102

Bakshi, Ashish – 2P076

Baldson, Tarryn – 2P116

Baldwin, Joseph – 4P140

Ball, Daniel – 32T104

Ban, Hiroshi – 4P028

Bandettini, Peter – 1P050

Bando, Toshihiro – 3P064

Banerjee, Surajit – 4P041

Bannai, Yuichi – 4P094

Bar-Gad, Izhar – 2P063

Barabanschikov, Vladimir – 1P058

Baranyi, Peter – 1P124

Barbeau, Emmanuel – 4P129

Barbosa, João – 2P143, 2P138

Barbot, Antoine – 11T101

Barla, Pascal – 12T306

Barnes, Kelly – 4P111

Barraza Bernal, Maria J – 4P039

Barraza, José – 3P101, 2P080

Barsingerhorn, Annemiek – 12T204

Bartels, Andreas – 2P120

Bartlett, Laura – 2P011

Bartolo, Angela – 4P004

Barton, Jason – 1P032, 33T105

Baseler, Heidi – 4P051, 4P052

Basyul, Ivan – 1P042

Batmaz, Anil – 4P014

Battaglini, Luca – 3P102

Battelli, Lorella – 4P081, 4P082, 13T104

Bauch, Sebastian – 1P006

Baud-Bovy, Gabriel – 21S106

Beaudot, William – 2P097

Becker, Nicolas – 1P143

Bednarik, Roman – 2P057

Beesley, Tom – 31T201

Beffara, Brice – 4P128

Behler, Christian – 4P016

Bell, Jason – 21S102

Belokopytov, Alexander – 4P046

Ben-Shahar, Ohad – 1P118

Bender, Abigail – 3P057

Bendixen, Alexandra – 33T203

Benjamin, Alex – 4P093

Benosman, Ryad – 2P093

Benton, Chris – 3P090

Berardi, Nicoletta – 4P012

Beraru, Andreea – 4P059

Berga, David – 1P102

Berggren, Nick – 1P001, 1P002, 3P015

Bergsma, D.P. – 1P030

Bermeitinger, Christina – 4P029

Bernadí, Marta – 4P061

Bernard, Jean-Baptiste – 3P074

Bernardis, Paolo – 4P086

Berry, Jacquelyn – 1P011

Bertalmío, Marcelo – 1P103, 13T302

Bertamini, Marco – 1P025, 1P067, 3P129, 3P134, 42T101

Besprozvannaya, Irina – 1P058

Bestue, David – 2P143

Bethge, Matthias – 21T304

Bevilacqua, Frederic – 2P105

Bex, Peter – 2P083, 4P125, 11T106

Biagi, Laura – 22T201

Bijlstra, Gijs – 3P094

Billington, Jac – 3P126

Billino, Jutta – 4P001, 41S205

Bimler, David – 3P092, 3P096

Binda, Paola – 2P067, 11T102

Binetti, Nicola – 3P144, 13T103

Bingham, Geoffrey – 1P112

Bingham, Ned – 1P112

Black, Michael – 1P108

Blangero, Annabelle – 1P035

Blinnikova, Irina – 1P012, 1P048

Blohm, Gunnar – 31T203

Bloj, Marina – 1P098, 2P086, 2P092, 3P023

Blum, Sebastian – 1P095

Blusseau, Samy – 4P145

Bocheva, Nadejda – 2P090, 3P105

Boduroglu, Aysecan – 2P140

Boettcher, Sage – 4P136

Bolshakov, Andrey – 4P047

Bond, Nikki – 2P067

Bondarenko, Yakov – 3P092

Bonetti, Leonardo – 43T105

Bonner, Jack – 1P055

Bonnotte, Isabelle – 4P004

Boonstra, Nienke – 12T204

Borra, Tobias – 1P066

Borràs, Rosa – 42T206

Bos-Roubos, Anja – 3P007

Bosten, Jenny – 12T206

Botsch, Mario – 4P016

Botzer, Assaf – 2P055

Boucher, Vincent – 2P073

Bowns, Linda – 2P091

Boyer, Eric – 2P105

Braddick, Oliver – 12T202, 23T101

Brainard, David – 13T303, 21S205

Brand, Andreas – 4P002

Bratton, Luke – 4P049

Braun, Jochen – 2P123

Brayda, Luca – 4P105

Brecher, Kenneth – 1P079

Breitschaft, Stefan – 2P102

Brenner, Eli – 3P038, 3P113, 3P115, 31T205, 21S104

Bricolo, Emanuela – 3P066

Brielmann, Aenne – 3P005

Brittenham, Chloe – 3P046

Brooks, Kevin – 12T106

Brown, Christopher – 1P001

Brown, Holly – 4P052

Brucker, Sara – 2P145

Bruno, Nicola – 2P007

Budai, Anna – 4P043

Bülthoff, Heinrich – 1P134, 4P032, 31T208

Bülthoff, Isabelle – 1P052, 4P070

Bulut, Tara – 3P008

Burleigh, Alistair – 4P140

Burn, David – 4P037

Burr, David – 12T101, 31T206, 43T202, 31S102

Burton, Mike – 4P069

Burton, Nichola – 1P055

Butila, Eugen – 4P059

Butt, Asmaa – 3P046

Buzás, Péter – 2P029

C

Cacciamani, Laura – 2P139

Cairns, Patrick – 3P057

Cajar, Anke – 2P060

Caldara, Roberto – 12T205

Calvo-Merino, Beatriz – 1P127

Cámara, Clara – 3P123

Campana, Gianluca – 33T104

Campus, Claudio – 4P104, 4P105, 13T106

Canessa, Andrea – 11T107

Cant, Iain – 4P088, 4P110

Cappagli, Giulia – 3P087, 21S106

Carbon, Claus-Christian – 1P074, 1P076, 1P078, 1P126, 3P003, 3P052, 3P062, 12T305, 2P102

Carlson, Thomas – 42T201

Carrasco, Marisa – 11T101, 21S101

Carrigan, Susan – 1P068

Cartaud, Alice – 3P121

Carvalho, Joana – 22T203

Casco, Clara – 3P102

Cass, John – 2P116, 32T103

Cassai, Iacopo – 4P146

Castaldi, Elisa – 22T201

Casteau, Soazig – 1P043

Castet, Eric – 3P074, 13T202

Castro, Leonor – 2P108

Catena, Andrés – 42T205

Cavanagh, Patrick – 1P111, 2P101, 23T105

Caziot, Baptiste – 2P114

Cecchetto, Stefano – 1P109

Cen, Danlu – 21S105

Cerdá-Company, Xim – 1P106

Cesqui, Benedetta – 3P127

Chakravarthi, Ramakrishna – 3P075, 13T201, 13T205

Challinor, Kirsten – 3P107

Chambers, Alison – 2P094

Chan, Jason – 2P098

Chan, Norine – 3P046

Chaney, Wesley – 21T302, 31S101

Chang, Dong-Seon – 31T208

Chang, Dorita H. F. – 1P023, 4P017, 4P028, 4P083, 4P087

Chanovas, Jordi – 1P026

Charvillat, Agnès – 1P039

Chatterjee, Garga – 4P073

Chauhan, Tushar – 2P003, 3P056, 4P101

Chavane, Frédéric – 1P092

Chemla, Sandrine – 1P092

Chen, Chien-chung – 2P023, 2P040, 1P071, 3P029

Chen, Jing – 41T101

Chen, Ke – 1P090

Chen, Pei-Yin – 2P040

Chen, Rongrong – 31T202

Chen, Siyi – 2P147

Chen, Yi-Chuan – 2P115, 32T105

Chernavina, Elena – 1P044

Chetverikov, Andrey – 2P061, 33T104

Chevillet, Mark – 42T205

Chima, Akash – 4P048

Chkonia, Eka – 1P034, 4P002, 4P036

Chou, Idy – 4P083

Christensen, Jeppe – 4P125

Christianson, Grant – 2P039

Christova, Christina – 3P043

Chu, Li – 32T105

Chung, Charles – 3P107

Churches, Owen – 2P009

Cicchini, Guido – 22T201, 31S102

Cichy, Radoslaw – 42T201

Cinelli, Laura – 22T201

Cintoli, Simona – 4P012

Clark, Kait – 2P095, 4P049

Clarke, Aaron – 1P077, 3P065, 41S203

Clifford, Colin – 4P147

Cmiljanović, Marija – 4P072

Coates, Daniel – 3P071, 13T203

Cocchi, Elena – 3P087, 21S106

Coello, Yann – 1P131, 3P121

Coffelt, Mary – 12T104

Cohen, Haggar – 2P146

Cohen, Noga – 4P062

Čokorilo, Vanja – 3P008

Cole, Alexander – 42T205

Colé, Pascale – 13T202

Collado, José – 3P042

Collette, Cynthia – 4P004

Collier, Elizabeth – 4P101

Collin, Charles – 1P054

Collins, Thérèse – 2P058, 2P059, 12T102

Coltheart, Veronika – 2P136

Comfort, William – 4P076

Compte, Albert – 2P138, 2P143

Conci, Markus – 2P147

Constantinidis, Christos – 2P138

Contemori, Giulio – 33T102

Conti, Martina – 4P012

Contò, Federica – 4P081

Conway, Bevil – 21S202

Cooper, Bonnie – 41T107

Cooper, Natalia – 4P088, 4P110

Cornelissen, Frans – 1P022, 1P100, 1P121, 3P073, 22T203

Cornelissen, Tim – 21T308

Corrow, Jeffrey – 1P032, 33T105

Corrow, Sherryse – 1P032, 33T105

Costa, Marcelo – 4P127

Costa, Marco – 43T105

Costela, Francisco – 4P117, 12T104

Cottereau, Benoit – 4P095

Coussens, Scott – 2P009

Coutrot, Antoine – 2P048

Cowan, Jane – 3P034

Coşkun, Turgut – 2P140

Cristino, Filipe – 3P030

Crognale, Michael – 23T201

Crouzet, Sébastien – 4P129

Crucilla, Sarah – 2P083

Csepe, Valeria – 1P124

Culmer, Peter – 3P112

Cunningham, Darren – 3P137

Curley, Lauren – 23T101

Curran, Tim – 4P116

Cuthill, Innes – 1P114, 3P058, 43T101

Cuturi, Luigi – 1P031

Czigler, András – 2P029, 4P043

Czigler, Istvan – 4P135

D

d'Avella, Andrea – 3P127

d'Avossa, Giovanni – 12T203

da Cruz, Janir Nuno – 4P036

Dai, Zhengqiang – 4P118

Daini, Roberta – 2P014, 2P066, 3P066

Dale, Anders – 23T101

Damasse, Jean-Bernard – 2P044, 22T106

Damen, Dima – 31T207

Daneyko, Olga – 2P007

Danilova, Marina – 23T205

Daoudi, Leila – 42T105

Davies-Thompson, Jodie – 33T105

de Agrò, Massimo – 4P146

de Boer, Minke – 1P121

de la Malla, Cristina – 21S104

de la Rosa, Stephan – 1P134, 4P032, 31T208

de Mathelin, Michel – 4P014

de Ridder, Huib – 2P068, 13T306

de Sa Teixeira, Nuno A. – 1P038

de Souza, Wania – 42T205

de Tommaso, Matteo – 1P005

Deal, Nele – 3P041

Dechterenko, Filip – 2P016

Décima, Agustín – 3P101

Dehaene, Stanislas – 1P140

Deiana, Katia – 3P133

Del Viva, Maria Michela – 4P012, 4P137

Delevoye-Turrell, Yvonne – 4P031

Delicato, Louise – 4P037

Demay, Anaïs – 4P031

Demidov, Alexander – 1P042

Dendramis, Aris – 22T204

Denis-Noël, Ambre – 13T202

Deny, Stephane – 11S205

Derry-Sumner, Hannah – 4P049

Desbois, Christophe – 42T104

Descamps, Marine – 13T202

Desebrock, Clea – 1P135

de’Sperati, Claudio – 2P020

Devillez, Hélène – 4P116

Devyatko, Dina – 1P070

Dhawan, Aishwar – 3P127

di Luca, Massimiliano – 2P118

di Stasi, Leandro – 42T205

Díaz-Douton, Fernando – 42T206

Dimitriadis, Alexandros – 2P071

Dingemanse, Mark – 4P112

Do Carmo Blanco, Noelia – 1P013

Dobs, Katharina – 12T303

Doerig, Adrien – 3P065, 13T204

Doerschner, Katja – 1P118, 2P089

Domagalik, Aleksandra – 3P014, 4P120

Domijan, Drazen – 1P105

Donato, Maria – 4P012

Dong, Junyu – 3P130

Donk, Mieke – 41T103

Donners, Maurice – 2P077

Doré-Mazars, Karine – 1P039, 4P056

Dorer, Marcel – 1P123

Dorr, Michael – 4P121

dos Santos, Natanael – 4P076

Dotsch, Ron – 3P094

Doughty, Hazel – 31T207

Dovencioglu, Dicle – 1P118

Draschkow, Dejan – 1P136

Dresp-Langley, Birgitta – 4P014

Drewes, Jan – 2P127, 13T102

Drewing, Knut – 2P099, 2P117

Driver, Meagan – 1P007

Du, Huiyun – 1P064

Dubuc, Constance – 3P037

Duchaine, Brad – 1P032, 33T105

Duguleana, Mihai – 4P059

Dul, Mitchell – 2P065

Dúll, Andrea – 1P142

Dumani, Ardian – 4P142

Dumitru, Adrian – 4P059

Dumitru, Magda – 1P113

Dumoulin, Serge – 1P100, 2P005, 21T305

Durand, Jean-Baptiste – 4P095

Durant, Szonya – 4P135

Duyck, Marianne – 23T105

Dzhelyova, Milena – 12T301

E

Eberhardt, Lisa – 13T206

Ecker, Alexander – 21T304

Eckstein, Miguel – 11T108

Economou, Elias – 2P071

Eger, Evelyn – 1P140

Ehinger, Benedikt – 2P053

Ehlers, Jan – 42T204

Eimer, Martin – 1P002, 3P015

Einhäuser, Wolfgang – 4P057, 22T103, 33T203

Elbaum, Tomer – 2P055

Elder, James – 4P145

Elshout, J.A. – 1P030

Elzein, Yasmeenah – 4P098

Emery, Kara – 23T201

Engbert, Ralf – 2P060, 2P062

Engel, Stephen – 2P004

Ennis, Robert – 21T307

Erb, Michael – 4P033, 4P034

Erkelens, Casper – 3P001

Ernst, Daniel – 3P020

Ernst, Marc – 2P111, 31T204

Ernst, Udo – 1P072, 32T204

Eskew, Rhea – 23T206

Espósito, Flavia – 3P089

Etzi, Roberta – 2P112

Evans, Karla – 21T306

Evers, Kris – 12T201

F

Facchin, Alessio – 2P066

Fademrecht, Laura – 1P134

Faghel-Soubeyrand, Simon – 1P059

Faivre, Nathan – 13T201

Fantoni, Carlo – 1P132

Farkas, Attila – 1P063

Fast, Elizabeth – 2P004

Faubert, Jocelyn – 4P026

Faure, Sylvane – 2P137

Fayel, Alexandra – 1P039

Fedorov, Leonid – 31T208, 43T204

Feldman, Jacob – 43T102

Felisberti, Fatima – 4P066, 4P100

Fendrich, Robert – 1P081

Fennell, John – 3P058

Fernandes, Thiago – 4P076

Ferrari, Adel – 3P134, 42T101

Ferrari, Ulisse – 11S205

Ferstl, Ylva – 4P032

Feuerriegel, Daniel – 2P009

Fiehler, Katja – 3P111, 3P115, 3P116, 4P030, 31T203

Field, David – 2P038

Figueiredo, Patrícia – 4P036

Fink, Bernhard – 1P126

Finocchietti, Sara – 3P087, 4P104, 21S106

Fischer, Andreas – 3P002

Fischer, Jason – 31S101

Fischer, Uwe – 3P003

Fiser, József – 22T102, 33T101, 31S106, 4P125, 3P078

Fisher, Carmen – 4P138

Fleming, Roland – 1P099, 12T304, 12T306

Fletcher, Kimberley – 33T105

Foerster, Rebecca – 3P117, 4P016

Formankiewicz, Monika A. – 2P119, 4P048

Forschack, Norman – 3P013

Forster, Bettina – 1P127

Forster, Sophie – 1P001, 32T102

Foster, Michael – 42T205

Foxwell, Matthew – 2P012

Fracasso, Alessio – 2P005

Fraccaroli, Ilaria – 4P146

Frackowiak, Richard – 4P034

Francis, Greg – 3P065

Franklin, Anna – 12T206

Franklish, Clive – 3P034

Frasson, Eleonora – 3P066

Freeman, Tom – 23T103

Fregnac, Yves – 42T104

Freitag, Christine – 2P098

Frielink-Loing, Andrea – 4P025

Friston, Karl – 4P034

Fronius, Maria – 22T202

Fuchs, Philippe – 2P027

Fujie, Ryuto – 2P008

Fujii, Kazuki – 3P017

Fujita, Kinya – 4P009

Fülöp, Diána – 2P029

Funke, Christina – 21T304

Furuhashi, Seiichi – 4P015

Furukawa, Shihori – 2P097

G

Galambos, Peter – 1P124

Galán, Borja – 1P093, 4P058

Gale, Richard – 4P051, 4P052

Gallace, Alberto – 2P112

Gallego, Emma – 1P026

Galliussi, Jessica – 4P086

Galmonte, Alessandra – 2P082

Gamkrelidze, Tinatin – 1P034

Ganizada, Jahan – 4P074

Gao, Ying – 3P130

Garcia-Guerra, Carlos-Enrique – 4P061, 42T206

Garcia-Zurdo, Ruben – 1P049

Gardella, Christophe – 11S205

Garofalo, Gioacchino – 2P007

Gatys, Leon – 21T304

Gatzidis, Christos – 1P009

Gautier, Josselin – 2P049, 4P060, 4P061, 42T206

Gayles, Ellis – 42T205

Geerdinck, Leonie – 2P077

Geers, Laurie – 1P057

Gegenfurtner, Karl – 2P056, 3P049, 4P001, 13T301, 21T307, 41T101, 21S103

Geier, János – 2P085

Geisler, Wilson – 2P016

Gekas, Nikos – 2P006

Genova, Bilyana – 2P090, 3P105

Georgiev, Stilian – 3P043

Gerard-Mercier, Florian – 42T104

Gerardin, Peggy – 4P107

Gerbino, Walter – 1P132, 4P086

Germeys, Filip – 4P040

Gert, Anna – 1P060

Gertz, Hanna – 4P030

Geuss, Michael – 1P108

Geuzebroek, Anna – 4P035

Gheorghiu, Elena – 3P131

Ghin, Filippo – 3P109

Ghose, Tandra – 4P063, 4P126

Ghosh, Kuntal – 2P076

Gibaldi, Agostino – 11T106, 11T107

Giel, Katrin – 1P108

Giese, Martin – 1P097, 2P092, 31T208, 32T201, 43T204

Giesel, Martin – 2P092

Gilchrist, Alan – 2P071, 11S301

Gilchrist, Iain – 1P006, 2P130, 3P114

Giles, Oscar – 3P112

Gintner, Tímea – 1P088

Girbacia, Florin – 4P059

Girbacia, Teodora – 4P059

Glennerster, Andrew – 2P107

Glowania, Catharina – 2P111

Godde, Anaïs – 1P036

Goetschalckx, Lore – 1P138

Goettker, Alexander – 2P056

Goffaux, Valerie – 1P057

Gold, Ian – 3P099

Golubickis, Marius – 2P124

Gonzalez, Fernando – 3P042

Gonzalez-García, Fran – 1P099

Goodale, Melvyn – 2P013

Goodship, Nicola – 2P034

Goodwin, Charlotte – 2P038

Goossens, Jeroen – 12T204, 2P077

Gordienko, Ekaterina – 1P101

Gordon, Gael – 4P006

Gordon, James – 3P046

Gorea, Andrei – 22T101

Gori, Monica – 1P031, 3P087, 4P104, 4P105, 13T106, 21S106

Gosselin, Frédéric – 1P059

Goutcher, Ross – 3P031

Gouws, Andre – 1P116, 1P117, 4P051, 4P052

Gracheva, Maria – 4P044, 4P047

Graf, Erich – 2P011, 4P145

Grassi, Pablo – 2P120

Grassini, Simone – 1P119, 4P119

Grasso, Giuseppina – 3P066

Gravel, Nicolás – 1P100

Gregory, Samantha – 2P131

Gremmler, Svenja – 2P043

Grillini, Alessandro – 3P073

Grootswagers, Tijl – 42T201

Grzeczkowski, Lukasz – 1P077, 4P086

Grzymisch, Axel – 1P072

Guadron, Leslie – 2P077

Güllich, Arne – 4P126

Gulyás, Levente – 1P142

Gupta, Avisek – 4P073

Gyoba, Jiro – 3P139

H

Hadad, Bat-Sheva – 3P138

Hagberg, Gisela – 4P033

Haladjian, Harry – 2P101

Halbertsma, Hinke – 1P121

Hall, Joanna – 1P114

Handwerker, Daniel – 1P050

Hanke, Sarah – 2P111

Hansen, Thorsten – 21T307

Hansmann-Roth, Sabrina – 2P069, 23T204

Harada, Toshinori – 2P074

Hardiess, Gregor – 1P123

Harris, Julie – 1P098, 2P086, 2P092, 3P023, 32T203, 43T101

Harris, Laurence – 3P110, 4P098

Harrison, Charlotte – 3P144, 13T103

Harrison, Neil – 4P115

Harvey, Ben – 1P100, 2P005, 21T305

Harvey, Monika – 4P010

Harwood, Mark – 1P035

Hashimoto, Sho – 1P115

Hassan, Syed – 3P046

Hauta-Kasari, Markku – 2P057

Havelka, Jelena – 3P041

Hayashi, Ryusuke – 2P087

Hayn-Leichsenring, Gregor – 2P078

Hazenberg, Simon – 4P113

He, Xiaoli – 43T102

Heard, Priscilla – 1P045, 3P104

Hecht, Heiko – 1P038, 3P063

Hegele, Mathias – 4P030

Hein, Elisabeth – 1P065

Heinrich, Sven – 4P018

Henderson, Audrey – 3P057

Henik, Avishai – 4P062

Henning, Bruce – 23T203

Henriksen, Mark – 3P045

Herbert, William – 2P034

Herbik, Anne – 22T205

Hermann, Petra – 1P053

Hermans, Erno – 3P018

Herpich, Florian – 4P082

Hershman, Ronen – 4P062

Herwig, Arvid – 2P050

Herzog, Michael – 1P034, 1P077, 3P065, 3P141, 4P002, 4P007, 4P036, 4P086, 13T204, 23T104, 32T205, 42T105, 41S203

Hesse, Constanze – 1P024

Hibbard, Paul – 2P072, 3P027, 3P031, 4P080

Hiebel, Hannah – 1P003, 1P004

Higashi, Hiroshi – 2P088

Higashiyama, Atsuki – 2P028

Higham, James – 3P037

Hilano, Teluhiko – 1P084, 1P085

Hilger, Maximilian – 4P030

Hills, Charlotte – 33T105

Hine, Kyoko – 1P144

Hine, Trevor – 3P097

Hiramatsu, Chihiro – 3P037

Hiroe, Nobuo – 2P134

Hirose, Hideaki – 3P039

Hjemdahl, Rebecca – 3P057

Hoare, Chad – 42T205

Hochmitz, Ilanit – 3P141

Hochstein, Shaul – 1P145, 32T202

Hodzhev, Yordan – 1P027

Hoffmann, Michael – 22T205

Höfler, Margit – 1P003, 1P004, 1P006, 2P130, 3P114

Hofmann, Lukas – 4P041

Hold, Ray – 3P112

Holm, Linus – 2P004

Holm, Suvi – 4P119

Holmes, Tim – 3P010, 3P011

Holubova, Martina – 4P019

Honbolygo, Ferenc – 1P124

Horsfall, Ryan – 2P106

Horstmann, Gernot – 3P020

Horváth, Gábor – 2P029

Hoshino, Yukiko – 3P017

Hossner, Ernst-Joachim – 2P042

Hoyng, C.B. – 1P030

Hristov, Ivan – 3P043

Hu, Zhaoqi – 2P096

Huang, Jing – 4P001

Huang, Zhehao – 31S104

Huckauf, Anke – 3P067, 13T206, 42T204

Hudák, Mariann – 2P085

Hughes, Anna – 3P068

Hunt, Amelia – 1P024

Hunter, David – 3P027

Hurlbert, Anya – 3P035

I

I-Ping, Chen – 3P009

Iannantuoni, Luisa – 4P002

Ianni, Geena – 1P050

Ichihara, Shigeru – 4P089, 4P096

Ichikawa, Makoto – 1P130, 3P140, 3P145

Ikeda, Hanako – 1P028

Ikeda, Takashi – 2P132

Ikumi, Nara – 3P119

Imai, Akira – 2P103

Imura, Ayasa – 2P135

Imura, Tomoko – 3P084

Inomata, Kentaro – 3P082, 4P015

Ioumpa OU Iuba, Kalliopi – 4P108

Ischebeck, Anja – 1P003, 1P004, 2P130

Ishiguchi, Akira – 4P134

Ishihara, Masami – 4P089, 4P096

Ishii, Michitomo – 3P064

Ivanov, Vladimir – 1P021, 3P006

Ivanovich, Artem – 4P064

Iwami, Masato – 4P099

Izmalkova, Anna – 1P012, 1P048

J

Jackson, Margaret – 2P131, 2P141

Jacquemont, Charlotte – 4P004

Jaén, Mirta – 1P010

Jagini, Kishore – 3P012

Jakovljev, Ivana – 3P040

Jakubowiczová, Linda – 4P115

Jalali, Sepehr – 22T104

Jandó, Gábor – 2P029, 4P043

Jansonius, Nomdo – 3P073, 22T203

Jarmolowska, Joanna – 1P132

Jean-Charles, Geraldine – 12T205

Jeffery, Linda – 1P055

Jehee, Janneke – 11S203

Jellinek, Sára – 33T101

Jenderny, Sascha – 1P066

Jenkins, Michael – 1P002

Jenkins, Rob – 4P069

Jernigan, Terry – 23T101

Jessee, M. – 42T205

Jiang, Yi – 2P096, 3P019, 33T205

Jicol, Crescent – 2P100

Jin, Jianzhong – 2P065

Joergensen, Gitte – 1P113

John, James – 4P066

Johnston, Alan – 3P144, 13T103, 23T102

Johnston, Richard – 22T206

Joly-Mascheroni, Ramiro – 1P127

Jonauskaite, Domicele – 3P041

Joos, Ellen – 2P129

Joosten, Eva – 2P058

Jordan, Gabriele – 3P035

Joswik, Kamila – 3P028

Jovanovic, Ljubica – 2P110

Jozefowiez, Jeremie – 1P013

Juhasz, Petra – 4P043

Jünemann, Kristin – 22T205

K

Kaasinen, Valtteri – 13T105

Kaestner, Milena – 1P098, 3P023

Kaiser, Daniel – 4P123

Kaiser, Jochen – 2P098

Kalenine, Solene – 3P077, 4P004

Kalia, Amy – 2P083

Kanaya, Hidetoshi – 3P108, 4P091

Kane, David – 13T302

Kaneko, Hiroshi – 2P008

Kanwisher, Nancy – 21S202

Karnatak, Kartikeya – 4P063

Karpinskaia, Valeriia – 1P075

Kartashova, Tatiana – 13T306

Katahira, Kenji – 1P115

Katsube, Maki – 3P017

Kauffmann, Louise – 4P128

Kaufhold, Lilli – 2P053

Kawamura, Takuya – 3P120

Kawasaki, Keigo – 4P015

Kay, Kendrick – 11S202

Kaye, Helen – 4P008

Keage, Hannah – 2P009

Keefe, Bruce – 1P116, 1P117

Keil, Matthias – 2P084, 13T304

Kellman, Philip – 1P068

Kelly, Kristina – 1P037

Kennedy, Henry – 4P107

Kerridge, Jon – 4P021

Kerzel, Dirk – 1P019

Ketkar, Madhura – 33T201

Khalid, Shah – 1P147

Khani, Abbas – 31S106

Khoei, Mina – 2P093

Khuu, Sieu – 3P107

Kietzmann, Tim – 1P060

Kikuchi, Kouki – 1P085

Kikuchi, Masayuki – 1P073

Kim, Jihyun – 1P103

Kim, Juno – 4P091

Kim, Minjung – 11S304

Kim, Yeon – 13T305

Kimchi, Ruth – 1P070, 3P136

Kimura, Atsushi – 3P004

Kimura, Ayano – 1P040

Kimura, Chisato – 4P050

Kimura, Eiji – 1P130, 3P059

Kingdom, Frederick – 3P128

Kingstone, Alan – 4P023

Kircher, Tilo – 3P110

Kiritani, Yoshie – 3P055

Kirkland, John – 3P096

Kirsanova, Sofia – 1P048

Kiryu, Tohru – 1P104

Kiselev, Sergey – 3P088

Kjernsmo, Karin – 1P114

Klanke, Jan-Nikolas – 33T204

Kleinschmidt, Andreas – 1P140

Kleiser, Raimund – 3P125

Klimova, Oksana – 4P064

Klinghammer, Mathias – 31T203

Klostermann, André – 2P042

Knakker, Balázs – 1P053

Knelange, Elisabeth – 3P124

Knoblauch, Kenneth – 4P107

Kobayashi, Misa – 3P140

Kobayashi, Yuki – 2P075

Kobor, Andrea – 1P124

Koenderink, Jan – 43T104

Koenig, Stephan – 22T103

Kogo, Naoki – 2P121, 43T201

Koivisto, Mika – 1P119, 4P119

Kojima, Haruyuki – 2P135

Komatsu, Hidehiko – 21S203

Komatsu, Hidemi – 4P089, 4P096

Komura, Norio – 4P015

Kondo, Aki – 2P025, 2P074

König, Peter – 1P060, 2P053, 4P023, 22T205

Konina, Alena – 3P070

Koning, Arno – 3P007, 3P098, 4P025

Konkina, S.A. – 4P053

Körner, Christof – 1P003, 1P004, 1P006, 2P130, 3P114

Kornmeier, Jürgen – 2P122, 2P129, 3P002

Korolkova, Olga – 3P091

Koskin, Sergey – 4P038

Kotaria, Nato – 4P007

Kotorashvili, Adam – 4P007

Kouider, Sid – 13T201

Kountouriotis, Georgios – 3P126

Kovalev, Artem – 4P097

Kovács, Gyula – 1P053, 2P001

Kovács, Ilona – 1P088, 2P029, 2P123

Kovács, Petra – 1P053

Kozhevnikov, Denis – 2P109

Kramer, Robin – 4P069

Krampe, Ralf – 3P083

Krasilshchikova, Natalya – 4P114

Kremlacek, Jan – 4P013, 4P019

Kriegeskorte, Nikolaus – 1P095, 11S206, 3P028

Kristensen, Stephanie – 2P005

Kristjansson, Árni – 2P020, 2P022, 33T104, 41T108

Kristjánsson, Tómas – 2P022, 41T108

Krivykh, Polina – 1P041

Kroliczak, Gregory – 1P046, 2P144

Kruttsova, Ekaterina – 4P045

Krügel, André – 2P062

Kuba, Miroslav – 4P013, 4P019

Kubiak, Agnieszka – 2P144

Kubova, Karolina – 4P013

Kubova, Zuzana – 4P013, 4P019

Kulieva, Almara – 1P128

Kulikova, Alena – 1P020

Kuling, Irene – 31T205

Kunchulia, Marina – 4P002, 4P007, 41S203

Kuniecki, Michał – 3P014, 3P054, 4P120

Kunimi, Mitsunobu – 2P134

Kuravi, Pradeep – 32T201

Kurbel, David – 3P093

Kurucz, Attila – 1P142

Kutylev, Sergey – 3P026

Kuvaldina, Maria – 1P128, 2P061, 4P022

Kvasova, Daria – 1P016

L

Lachnit, Harald – 22T103

Lacquaniti, Francesco – 3P127

Laeng, Bruno – 2P070

Lafer-Sousa, Rosa – 21S202

Lages, Martin – 3P080

Lahlaf, Safiya – 23T206

Lamminpia, Aino – 4P038

Langrova, Jana – 4P013, 4P019

Lappe, Markus – 2P043, 11T104

Lappi, Otto – 4P065

Laptev, Vladimir – 1P021

Lasne, Gabriel – 1P140

Lau, Kenji – 4P087

Laubrock, Jochen – 2P060

Lauer, Tim – 21T308

Lauffs, Marc – 3P141, 23T104

Lauritzen, Jan – 4P054

Lawrence, Samuel – 1P116, 1P117, 4P052

Lawson, Rebecca – 1P109, 4P101

Lazar, Aurel – 11S201

Le Bec, Benoit – 42T104

Le Couteur Bisson, Thomas – 3P035

Le Meur, Olivier – 2P048

Le Pelley, Mike – 41T106

Learmonth, Gemma – 4P010

Lecci, Giovanni – 1P132

Lecker, Maya – 3P094

Ledgeway, Timothy – 22T206

Lee, Barry – 21T303

Leek, Charles – 3P030, 43T106

Lehtimäki, Taina – 2P033

Lek, Jia – 4P042

Lemoine-Lardennois, Christelle – 4P056

Lengyel, Mate – 3P078

Lenoble, Quentin – 4P042

Leonards, Ute – 31T207

Lepri, Antonio – 22T204

Lepri, Martina – 22T204

Lerer, Alejandro – 2P084

Leroy, Anaïs – 2P137

Leymarie, Frederic – 1P088

Lezkan, Alexandra – 2P099

Li, Chaoyi – 1P091

Li, Dan – 22T105, 31T204

Li, Hsin-Hung – 11T101

Li, Hui – 1P091

Li, Li – 31T202

Li, Min – 2P118

Li, Qi – 4P132

Li, Yongjie – 1P091

Liaci, Emanuela – 3P002

Liberman, Alina – 21T302, 31S101

Likova, Lora – 2P139

Lillakas, Linda – 3P022

Lillo, Julio – 3P042, 3P044, 3P047

Lin, Hsiao-Yuan – 3P029

Lin, Yih-Shiuan – 1P071

Lin, Yu – 3P029, 4P122

Linares, Daniel – 22T105

Linhares, João – 3P061

Lisboa, Isabel – 4P027

Lisi, Domenico – 22T204

Lisi, Matteo – 22T101

Liu, Juan – 2P104

Liu, Jun – 3P130

Liu, Shengxi – 1P069

Liu, Xiang-Yun – 4P084

Liu-Shuang, Joan – 1P051, 1P056

Llorente, Miquel – 1P127

Loconsole, Maria – 1P137

Loffler, Gunter – 4P006

Logan, Andrew – 4P006

Logvinenko, Alexander – 3P051

Lojowska, Maria – 3P018

Loney, Ryan – 3P072

Longmore, Christopher – 12T302

Longo, Matthew – 1P146

López-Moliner, Joan – 3P076, 3P123, 3P124, 22T105

Lorenceau, Jean – 2P105

Losonczi, Anna – 1P142

Lovell, P. – 43T101

Lugtigheid, Arthur – 4P145

Lukavsky, Jiri – 1P089, 4P100

Lukes, Sophie – 2P018, 2P019

Lunghi, Claudia – 22T204

Luniakova, Elizaveta – 4P074

Lupiáñez, Juan – 1P139

Lyakhovetskii, Vsevolod – 1P075

Lygo, Freya – 3P050

M

Machizawa, Maro – 2P134

MacInnes, Joseph – 1P015, 1P017, 1P020, 1P101

Macknik, Stephen – 1P026, 12T104, 42T205

MacLean, Rory – 4P021

MacLeod, Donald – 23T201

Macrae, Neil – 2P124

Maddison, Sarah – 2P010

Madelain, Laurent – 3P081

Madill, Mark – 4P005

Madipakkam, Apoorva – 1P037

Maerker, Gesine – 4P010

Maertens, Marianne – 11S303

Magalhaes, Adsson – 4P127

Magnago, Denise – 13T104

Maguinness, Corrina – 2P113

Maho, Cristina – 3P038

Mahon, Aoife – 1P024

Maiello, Guido – 11T106

Maier, Thomas – 1P099

Makin, Alexis – 1P067, 3P129, 3P143, 42T101

Malevich, Tatiana – 1P015, 1P017

Mallick, Arijit – 2P076

Mallot, Hanspeter – 1P123

Malo, Jesús – 1P093, 4P058

Maloney, Laurence – 11S306

Maloney, Ryan – 1P098, 3P023

Maltsev, Dmitry – 4P038

Mamani, Edgardo – 1P010

Mamassian, Pascal – 2P006, 2P069, 2P110, 2P114, 3P032

Manahilov, Velitchko – 1P027

Manassi, Mauro – 21T302, 31S101

Maniglia, Marcello – 33T102

Mansour Pour, Kiana – 2P045

Marco-Pallarés, Josep – 3P123

Mareschal, Isabelle – 3P144, 13T103

Margolf-Hackl, Sabine – 4P001

Marini, Francesco – 2P014

Marković, Slobodan – 3P008

Marmolejo-Ramos, Fernando – 43T203

Marre, Olivier – 11S205

Marshev, Vasili – 2P061

Martelli, Marialuisa – 3P066

Martín, Andrés – 2P080, 3P101

Martin, Jacob – 4P078

Martin, Paul – 21S201

Martin, Sian – 22T104

Martinez-Conde, Susana – 1P026, 12T104, 42T205

Martinez-Garcia, Marina – 1P093, 4P058

Martinovic, Jasna – 41T105

Masame, Ken – 3P095

Masayuki, Sato – 2P008, 2P036

Maselli, Antonella – 3P127

Massendari, Delphine – 2P059

Masson, Guillaume – 2P045, 22T106, 21S102

Mast, Fred – 1P077

Mastropasqua, Tommaso – 1P005

Masuda, Ayako – 4P015

Masuda, Naoe – 4P089, 4P096

Mather, George – 2P012, 3P109, 43T206

Mathôt, Sebastiaan – 41T103

Matsunaga, Shinobu – 1P040

Matsuno, Takanori – 1P040

Matsushita, Soyogu – 1P087, 2P075

Mattler, Uwe – 1P081, 1P143

Matyas, Thomas – 3P125

Maus, Gerrit – 12T102

May, Keith – 12T105

Mazade, Reece – 2P065

McCamy, Michael – 12T104, 42T205

McCants, Cody – 1P002, 3P015

McGovern, David – 4P085

McGraw, Paul – 2P046, 41S202

McKeefry, Declan – 1P116

McKendrick, Allison – 4P042

McLeod, Katie – 4P024

McLoon, Linda – 2P004

McMillin, Rebecca – 23T103

McPeek, Robert – 41T107

Meermeier, Annegret – 2P043

Meese, Tim – 42T102

Meinhardt, Günter – 2P019, 3P093, 32T204, 42T103, 2P018

Meinhardt-Injac, Bozana – 3P093

Melcher, David – 2P127, 13T102

Melin, Amanda – 3P037

Melnikova, Anna – 3P042, 3P044, 3P047

Menshikova, Galina – 1P041, 1P047, 1P129, 3P092, 4P097, 4P114

Menzel, Claudia – 2P078

Meredith, Zoe – 23T103

Mermillod, Martial – 4P128

Meso, Andrew – 21S102

Mestre, Clara – 4P060, 42T206

Metzger, Anna – 2P117

Meyer, Georg – 2P106, 4P088, 4P110

Mielke, Lena – 3P098

Miellet, Sebastien – 1P009, 12T205

Mifsud, Nathan – 31T201

Miguel, Helga – 4P027

Mihaylova, Milena – 1P027, 3P043

Milella, Fernando – 4P088

Miler, Dorante – 4P128

Millela, Ferdinando – 4P110

Miller, Joe – 1P003, 1P004

Milliken, Bruce – 1P139

Minami, Tetsuto – 4P068, 4P075

Mineff, Kristyo – 2P139

Minkov, Vasiliy – 3P026

Mironova, Anastasia – 3P062

Mitov, Dimitar – 3P043

Miyashita, Tatsuya – 3P004

Miyoshi, Kiyofumi – 4P122

Mizokami, Yoko – 2P081

Mizuguchi, Ayane – 4P134

Mlynarova, Aneta – 4P019

Mogan, Gheorghe – 4P059

Mohler, Betty – 1P108

Mohr, Christine – 3P041, 4P002

Moiseenko, Galina – 1P033, 4P038

Mölbert, Simone – 1P108

Mole, Callum – 3P126

Mollon, J. – 23T205

Mon-Williams, Mark – 3P112

Mond, Jonathan – 12T106

Mongillo, Gianluigi – 22T101

Montagner, Cristina – 3P061

Montagnini, Anna – 2P044, 2P045, 22T106, 21S102

Montague-Johnson, Christine – 12T202

Moors, Pieter – 33T202

Mora, Thierry – 11S205

Moreira, Humberto – 3P042, 3P044, 3P047

Moreno-Sánchez, Manuel – 2P125, 2P128

Moretto, Enzo – 4P146

Morgan, Michael – 23T106

Mori, Shiori – 4P124

Morikawa, Kazunori – 2P075

Morita, Marie – 2P002

Morland, Antony – 1P116, 1P117, 4P051, 4P052

Morris, Thomas – 4P126

Morrone, M Concetta – 11T102, 13T106, 22T201, 22T204, 43T202

Motoyoshi, Isamu – 3P069, 4P124

Motoyoshi, Kanako – 3P055

Mouta, Sandra – 4P027

Muckli, Lars – 1P095

Mueller, Matthias – 3P013, 41T105

Mueller, Stefanie – 3P111

Mulckhuyse, Manon – 3P018

Mullen, Kathy – 13T305

Müller, Hermann – 2P147

Müller, Matthias – 41S204

Murakoshi, Takuma – 1P130

Murata, Aiko – 43T203

Murata, Kayoko – 4P089, 4P096

Muravyova, Svetlana – 1P033

Murgia, Mauro – 2P082

Murray, Janice – 4P005

Murray, Jennifer – 4P021

Murray, Richard – 11S304

Muschter, Evelyn – 13T102

Muth, Claudia – 1P076

Muto, Kazuhito – 1P115

Muukkonen, Ilkka – 2P133

N

Nabae, Yuki – 2P081

Nagata, Noriko – 1P115, 4P015

Nagy, Balazs – 4P127

Naito, Seiichiro – 3P053

Nakakoga, Satoshi – 4P068

Nakano, Yasushi – 4P089, 4P096

Nakashima, Ryoichi – 4P132

Nakauchi, Shigeki – 2P057, 2P088, 3P039, 4P068, 4P075

Näsänen, Risto – 2P033

Nascimento, Sérgio – 3P061

Naughton, Thomas – 2P033

Naumer, Marcus – 2P098

Navarro, V.M. – 4P079

Nawrot, Elizabeth – 3P086

Nawrot, Mark – 2P039, 3P086

Nelson, Elizabeth – 1P054

Nemes, Vanda – 2P029, 4P043

Neveu, Pascaline – 2P027, 2P037

Newell, Fiona – 4P085

Newman, Erik – 23T101

Nicholas, Spero – 2P139

Niemelä, Mikko – 2P033

Nihei, Yuji – 4P068, 4P075

Nikata, Kunio – 4P015

Nikitina, Elena – 4P077

Nikolaev, Andrey – 43T201

Nikolaeva, Valentina – 4P103

Nishida, Shin'ya – 2P087

Nishimoto, Shinji – 11S204

Nishina, Shigeaki – 2P041

Noceti, Nicoletta – 43T205

Noens, Ilse – 12T201

Nogueira, Joana – 1P122

Nordhjem, Barbara – 1P121

Nowik, Agnieszka – 1P046

Nozima, Adriana – 42T205

Nunez, Valerie – 3P046

Nuthmann, Antje – 4P057

O

O'Hare, Louise – 3P103

O'Keeffe, Johnathan – 3P028

O'Regan, J. – 23T204

O'Reilly, Randall – 4P116

O'Shea, Robert – 2P125

Oberfeld, Daniel – 3P063

Odone, Francesca – 43T205

Ogawa, Masaki – 4P091

Öğmen, Haluk – 23T104, 42T105

Oka, Takashi – 3P004

Okajima, Katsunori – 3P060, 4P109

Okazaki, Akane – 3P055

Okuda, Shino – 4P109

Ölander, Kaisu – 2P133

Oliver, Zoe – 3P030

Olkkonen, Maria – 2P142

Olson, Jay – 1P133

Ondategui-Parra, Juan C. – 42T206

Ono, Hiroshi – 2P097, 3P022

Ono, Takumi – 2P008

Ookubo, Noriko – 3P055

Oqruashvili, Mariam – 1P034

Or, Charles – 1P051

Or, Kazim – 3P036

Orlov, Pavel – 1P021, 3P006

Ortiz, Javier – 1P139

Ortlieb, Stefan – 3P003, 3P052

Osaka, Mariko – 2P132

Osaka, Naoyuki – 2P132

Oshchepkova, Maria – 1P047

Ostendorf, Florian – 11T105

Otazu, Xavier – 1P102, 1P106, 32T203

Otero, Carles – 2P030, 4P061, 42T206

Otero-Millan, Jorge – 12T104

Overvliet, Krista – 3P083

Ozakinci, Gozde – 3P057

Ozawa, Yuta – 2P036

P

Pachai, Matthew – 13T204

Palczewski, Krzysztof – 4P041

Palermo, Romina – 3P085

Palmieri, Laura – 4P137

Palmisano, Stephen – 4P091

Pan, Jing – 1P112

Panchagnula, Sweta – 4P054

Panchagnula, Udaya – 4P054

Panis, Sven – 42T202

Pannasch, Sebastian – 2P052

Papai, Marta – 4P106

Papathomas, Thomas – 1P063

Paramei, Galina – 3P092

Parozzi, Elena – 3P142

Parr, Jeremy – 12T202

Parraga, Alejandro – 3P132

Parsons, Todd – 43T206

Parzuchowski, Michał – 43T203

Pastilha, Ruben – 3P061

Pastukhov, Alexander – 2P123, 33T204

Pásztor, Sára – 2P029

Paulun, Vivian – 12T304

Pavan, Andrea – 2P012, 3P109

Pavlov, Yuri – 4P055

Pavlova, Daria – 3P006

Pavlova, Marina – 2P145, 4P033, 4P034

Pearson, Daniel – 41T106

Peelen, Marius – 4P123

Pegna, Alan – 43T106

Pehme, Patricia – 3P046

Peirce, Jonathan – 3P137

Peiti, Antea – 3P066

Pelli, Denis – 3P005

Penacchio, Olivier – 32T203, 43T101

Pepperell, Robert – 4P140

Pereira, Alfredo – 4P027

Perepelkina, Olga – 4P103

Pérez-Bellido, Alexis – 4P111

Perre-Dowd, Alicia – 1P035

Perrett, David – 3P057

Perrinet, Laurent – 2P044, 2P045, 22T106

Persa, Gyorgy – 1P124

Persike, Malte – 1P072, 2P018, 2P019, 3P093, 32T204, 42T103

Pertzov, Yoni – 2P146

Pesonen, Henri – 13T105

Peterzell, David – 23T201

Petilli, Marco – 2P014

Petras, Kirsten – 1P057

Petrini, Karin – 2P100

Petro, Lucy – 1P095

Peyrin, Carole – 3P077, 4P128

Philippe, Matthieu – 2P027

Phillips, David – 3P104

Phipps, Natasha – 2P013

Piazza, Manuela – 1P140

Pichet, Cedric – 3P077

Pilarczyk, Joanna – 3P014, 3P054, 4P120

Pilz, Karin – 4P002, 4P003, 4P007, 41S201, 41S203

Pinheiro, Ana – 3P121

Pinna, Baingio – 3P133

Pinto, Carlo – 4P110

Piotrowska, Barbara – 4P021

Pisanski, Katarzyna – 1P126

Pitcher, David – 1P050

Pitchford, Nicola – 22T206

Pittino, Ferdinand – 3P067

Plaisier, Myrthe – 31T205

Plantier, Justin – 2P032, 2P037

Plewan, Thorsten – 3P025

Poder, Endel – 4P141

Podvigina, Daria – 1P120

Pohlmeyer, Eric – 42T205

Poletti, Martina – 21S101

Pollack, Jordan – 3P099

Pollick, Frank – 2P100, 4P034

Pons, Carmen – 2P065

Pont, Sylvia – 2P068, 13T306, 11S305

Portelli, Benjamin – 2P086

Portron, Arthur – 2P105

Porubanova, Michaela – 4P022

Postelnicu, Cristian – 4P059

Poth, Christian – 4P016

Powell, Georgie – 4P049, 23T103

Prado-León, Lilia – 3P042

Pressigout, Alexandra – 1P039

Priamikov, Alexander – 22T202

Priot, Anne-Emmanuelle – 2P027, 2P032, 2P037

Prokopenya, Veronika – 1P044, 1P120

Pronin, Sergey – 1P029, 4P038

Pronina, Marina – 1P033

Prpic, Valter – 2P082

Ptukha, Anna – 3P032

Pu, Xuan – 1P091

Pujol, Jaume – 2P030, 4P058, 4P060, 4P061, 42T206

Pusztai, Agota – 4P043

Q

Qian, Jiehui – 1P069

Quesque, François – 1P131

Quigley, Cliodhna – 41S204

R

Raab, Marius – 3P062

Racheva, Kalina – 3P043

Radonjić, Ana – 13T303

Radó, János – 2P029

Rafegas, Ivet – 3P048

Rago, Anett – 1P142

Railo, Henry – 4P119, 13T105

Rainer, Gregor – 31S106

Rajenderkumar, Deepak – 4P049

Ramos-Gameiro, Ricardo – 22T205

Rampone, Giulia – 1P067, 42T101

Ramsey, Chris – 1P009

Rančić, Katarina – 3P008

Rashal, Einat – 3P136

Raz, Amir – 1P133

Razmi, Nilufar – 2P064

Rea, Francesco – 43T205

Read, Jenny – 2P034

Reddy, Leila – 12T303

Redfern, Annabelle – 3P090

Redies, Christoph – 2P078

Regolin, Lucia – 1P137, 4P146

Reilly, Ronan – 2P033

Renken, Remco – 1P100, 3P073, 22T203

Reuter, Magdalena – 1P046

Reuther, Josephine – 3P075

Revina, Yulia – 1P095

Revol, Patrice – 2P103

Revonsuo, Antti – 1P119

Rhodes, Darren – 13T101, 32T101

Rhodes, Gillian – 1P055

Rider, Andrew – 23T203

Ridley, Nicole – 3P085

Ridwan, Carim-Sanni – 3P046

Rieiro, Hector – 1P026

Riesenhuber, Maximilian – 4P078

Rifai, Katharina – 4P039

Riggio, Lucia – 2P007

Rima, Samy – 4P095

Ripamonti, Caterina – 23T203

Rizzo, Stanislao – 22T201

Roach, Neil – 2P010, 2P046, 2P094, 4P085, 22T206

Roberts, Mark – 43T106

Röder, Susanne – 1P126

Roelofs, Karin – 3P018

Rogers, Brian – 43T104

Roinishvili, Maya – 1P034, 4P002, 4P036

Rolke, Bettina – 1P065, 2P017

Romei, Vincenzo – 4P080

Romeo, August – 1P096

Rose, David – 3P146

Rose, Dylan – 2P083

Roseboom, Warrick – 4P133, 13T101, 32T101

Rossetti, Yves – 2P103

Rossion, Bruno – 1P051, 1P056, 12T301

Rothkirch, Marcus – 1P037

Rothkopf, Constantin – 3P118

Roudaia, Eugenie – 4P026, 4P085

Roumes, Corinne – 2P027

Roux-Sibilon, Alexia – 3P077, 4P128

Roy, Sourya – 2P076

Rozhkova, Galina – 4P045, 4P046, 4P047

Rubino, Cristina – 1P032

Rucci, Michele – 3P038, 21S101

Rucker, Frances – 3P045

Rudd, Michael – 32T206

Rugani, Rosa – 1P137

Rushton, Simon – 2P095, 4P049, 4P098, 21S105

Ruta, Nicole – 4P140

Rutar, Danaja – 3P003, 3P052

Ruzzoli, Manuela – 2P108, 4P020

Ryan, Thomas – 3P104

Rychkova, Svetlana – 4P044, 4P047

S

Saarela, Toni – 2P142

Sabary, Shahar – 1P070

Sabatini, Silvio – 11T106, 11T107

Sahraie, Arash – 2P124

Sahu, Ishan – 4P073

Sakano, Yuichi – 3P106

Sakata, Katsuaki – 4P011

Sakurai, Kenzo – 2P097

Salah, Albert – 3P021

Sale, Alessandro – 22T204

Salmela, Viljami – 2P133, 32T106

Salmi, Juha – 32T106

Salminen-Vaparanta, Niina – 1P119

Salo, Emma – 32T106

Salvador, Marta – 42T206

Sampaio, Adriana – 4P027

Sanchis-Jurado, Vicent – 4P058

Sandini, Giulio – 13T106, 43T202, 43T205

Santoro, Ilaria – 2P082

Santos, Jorge – 4P027

Sapir, Ayelet – 12T203

Sarah, Bhutto – 3P097

Sarbak, Klára – 1P142

Sasaki, Kasumi – 4P139

Sasaki, Masaharu – 4P090

Sasaki, Yasunar – 3P064

Sato, Hiromi – 3P069

Sato, Takao – 2P002, 3P108

Sato, Yusuke – 4P090

Saussard, Bertrand – 2P037

Saveleva, Olga – 1P129

Sawada, Tadamasa – 2P031, 3P026

Sawai, Ken-ichi – 4P091

Sayim, Bilge – 3P071, 13T203

Scarfe, Peter – 2P107

Schaeffner, Lukas – 1P094

Scheel, Anne – 3P080

Scheffler, Klaus – 4P033

Schelske, Yannik – 4P063

Schiffer, Alina – 32T204

Schiller, Florian – 21T307

Schloss, Karen – 21S204

Schmid, Alexandra – 2P089

Schmidt, Filipp – 12T304

Schmidt, Thomas – 2P079

Schmidtmann, Gunnar – 3P099

Schneider, Tobias – 12T305

Schneider, Werner – 2P050, 3P117, 4P016

Schoeberl, Tobias – 1P014

Scholes, Chris – 2P046

Schottdorf, Manuel – 21T303

Schröder, Melanie – 2P079

Schubert, Torsten – 1P008

Schuette, Peter – 3P046

Schulz, Johannes – 2P052

Schönhammer, Josef – 1P019

Schütz, Alexander – 2P047, 2P054

Schütz, Immo – 4P057, 31T203

Sciutti, Alessandra – 43T205

Scott, Helen – 3P010

Scott-Samuel, Nicholas – 1P114, 3P034, 3P058

Sedgwick, Harold – 3P024

Seibold, Verena – 2P017

Seitz, Rüdiger – 3P125

Seizova-Cajic, Tatjana – 2P101, 23T105

Sekizuka, Mayu – 3P059

Semenova, Maria – 1P012

Seno, Takeharu – 4P091, 4P092

Senyazar, Berhan – 3P021

Sergienko, R.A. – 4P053

Serino, Andrea – 4P105

Serrano-Pedraza, Ignacio – 2P034, 3P100

Seth, Anil – 4P133, 13T101, 32T101

Setoguchi, Emi – 4P092

Seymour, Kiley – 4P143, 4P147

Sframeli, Angela – 22T204

Shackelford, Todd – 1P126

Shapley, Robert – 3P046, 11S303

Shaqiri, Albulena – 4P002, 41S203

Sharman, Rebecca – 3P131, 43T206

Shelepin, Eugene – 1P029, 1P033

Shelepin, Yuriy – 1P029, 1P033, 4P038, 4P053

Shepard, Timothy – 23T206

Shepherd, Alex – 4P142

Shi, Bertram – 22T202

Shigemasu, Hiroaki – 3P120

Shiina, Kenpei – 1P082, 1P086

Shimakura, Hitomi – 4P011

Shinohara, Kazuko – 4P099

Shiori, Mochizuki – 3P135

Shirai, Nobu – 3P084

Shiraiwa, Aya – 4P015

Shoshina, Irina – 4P053

Shtereva, Katerina – 1P027

Sierro, Guillaume – 4P002

Simko, Juraj – 4P100

Simmons, David – 4P130

Simoes, Elisabeth – 2P145

Simpson, William – 12T302

Singh, Manish – 43T102

Sinha, Pawan – 2P083

Siniatchkin, Michael – 2P098

Siromahov, Metodi – 1P146

Skelton, Alice – 12T206

Skerswetat, Jan – 2P119

Sleiman, Daria – 3P099

Slesareva, Oksana – 4P055

Smeets, Jeroen – 3P113, 3P115, 31T205, 21S104

Smith, Andy – 4P095

Smith, Daniel – 2P051, 3P016

Sohaib, Ali – 3P056

Sokolov, Alexander – 2P145, 4P033

Sokolov, Arseny – 4P034

Sol, Jean-Christophe – 4P129

Solomon, Joshua – 22T104, 23T106, 42T203

Soltész, Péter – 2P123

Song, Miao – 1P062

Song, Yoomin – 3P046

Soo, Leili – 4P024, 13T205

Soranzo, Alessandro – 1P025

Sors, Fabrizio – 2P082

Soto-Faraco, Salvador – 1P016, 2P108, 3P119, 4P020, 4P106

Souman, Jan – 1P066

Spaas, Charlotte – 2P121

Spence, Charles – 1P135, 2P115, 4P102

Spence, Leslie – 4P130

Spencer, Lucy – 21T306

Sperandio, Irene – 2P067, 4P138

Spillmann, Lothar – 2P040

Spotorno, Sara – 2P137, 2P141

Srismith, Duangkamol – 4P070

Stadler, Cornelia – 3P125

Stakina, Yulia – 4P131

Stanciu, Oana – 3P078

Startsev, Mikhail – 4P121

Stefanov, Simeon – 2P090, 3P105

Stefanova, Miroslava – 2P090, 3P105

Stein, Maximilian – 1P081

Stein, Timo – 4P147, 33T202

Stephen, Ian – 12T106

Sterzer, Philipp – 1P037, 2P126, 4P147, 33T201

Stevanov, Jasmina – 3P011

Steven, Hillyard – 41T105

Stevenson, Richard – 12T106

Steyaert, Jean – 12T201

Stocker, Alan – 31S103

Stockman, Andrew – 23T203

Stojilovic, Ivan – 3P003, 3P052

Storrs, Katherine R. – 3P028

Strachan, James – 4P067

Strasburger, Hans – 4P018

Straub, Dominik – 3P118

Straube, Benjamin – 3P110

Streuber, Stephan – 1P108

Stroyan, Keith – 2P039

Stuit, Sjoerd – 2P121

Sturman, Daniel – 12T106

Suarez-Pinilla, Marta – 4P133

Sugihara, Kokichi – 2P026

Sui, Jie – 1P135

Sukhinin, Mikhail – 4P038

Sukigara, Sachiko – 2P025, 2P074

Sullivan, Peter – 12T202

Sulykos, Istvan – 4P135

Sumner, Petroc – 4P049

Sun, Hua-Chun – 3P128

Sunny, Meera – 3P012

Supèr, Hans – 1P096, 2P084, 3P089

Suzuki, Masahiro – 1P084

Suzuki, Takeshi – 4P063

Swalwell, Robert – 3P016

Szanyi, Jana – 4P013, 4P019

T

Tabeta, Shin – 1P104

Tagu, Jérôme – 4P056

Takahashi, Kohske – 2P021

Takahashi, Natsumi – 3P059

Takahashi, Nobuko – 1P080

Takano, Ruriko – 3P055

Takehara, Takuma – 1P061

Takeichi, Masaru – 4P009

Takemura, Akihisa – 4P109

Takeshima, Yasuhiro – 2P015

Talamas, Sean – 3P057

Talas, Laszlo – 3P058

Tamada, Yasuaki – 2P008, 2P036

Tamura, Hideki – 2P088

Tanahashi, Shigehito – 1P104

Tanaka, Hideyuki – 4P099

Tanaka, Kazuaki – 1P115

Tang, Xiaoqiao – 1P064

Tanijiri, Toyohisa – 1P061

Tardif, Carole – 1P036

Taubert, Jessica – 12T101

Taya, Shuichiro – 1P083

Taylor, Chris – 3P045

Taylor, Henry – 3P071

te Pas, Susan – 13T306

Tenore, Francesco – 42T205

Terbeck, Sylvia – 12T302

Tessera, Marica – 3P066

Thaler, Anne – 1P108

Thomassen, Sabine – 33T203

Thornton, Ian – 2P020, 2P022, 41T108

Thorpe, Simon – 4P078, 4P129

Thunell, Evelina – 4P129

Tiainen, Mikko – 4P100

Tiippana, Kaisa – 4P100

Tinelli, Francesca – 43T202

Ting, Travis – 1P023, 4P017

Tipper, Steven – 4P067

Tirado, Carlos – 43T203

Tkacz-Domb, Shira – 41T102

Todorović, Dejan – 43T103

Tognoni, Gloria – 4P012

Togoli, Irene – 31T206

Tohju, Masashi – 4P094

Tolhurst, David – 3P068

Töllner, Thomas – 2P147

Tommasi, Luca – 2P070, 3P142

Tonelli, Alessia – 4P105

Torfs, Katrien – 1P056

Torii, Shuko – 4P090

Torok, Agoston – 1P124

Török, Béla – 2P029

Torralba, Mireia – 4P020

Toscani, Matteo – 13T301, 21T307

Tošković, Oliver – 4P144

Totev, Tsvetalin – 3P043

Tresilian, James – 3P112

Triesch, Jochen – 22T202

Trkulja, Marija – 3P008

Troje, Nikolaus – 4P028

Troncoso, Xoana – 42T104

Trotter, Yves – 4P095, 33T102

Tsank, Yuliy – 11T108

Tsao, Raphaele – 1P036

Tseng, Chia-huei – 2P023

Tsuinashi, Seiichi – 2P028

Tsujita, Masaki – 3P145

Tsukuda, Maki – 2P057

Tsushima, Yoshiaki – 3P106

Tsybovsky, Yaroslav – 4P041

Tubau, Elisabet – 3P076

Tudge, Luke – 1P008

Tulenina, Nadezhda – 4P055

Tuominen, Jarno – 13T105

Turatto, Massimo – 1P005

Turi, Marco – 43T202

Turner, Jay – 3P035

Turoman, Nora – 4P102

Tuvi, Iiris – 1P018

Tyler, Christopher – 2P139, 23T202, 42T203

Tyler, Sarah – 4P081

U

Ueda, Sachiyo – 4P134

Ueda, Takashi – 1P086

Umebayashi, Chiaki – 2P074

Ungerleider, Leslie – 1P050

Ushitani, Tomokazu – 3P017, 3P135

Utochkin, Igor – 4P131

Utz, Sandra – 1P078

V

Vainio, Lari – 4P100

Vainio, Martti – 4P100

Vakhrameeva, Olga – 4P038

Valle-Inclán, Fernando – 1P026

Valsecchi, Matteo – 13T301, 41T101, 21S103

Valton, Luc – 4P129

van Assen, Jan Jaap – 12T304, 12T306

van Asten, F. – 1P030

van Boxtel, Jeroen J. – 12T103

van Dam, Loes C. – 2P111, 31T204

van den Berg, A.V. – 1P030, 4P035

van der Burg, Erik – 2P116, 32T103

van der Hallen, Ruth – 12T201

van der Vliet, Skye – 3P011

van Ee, Raymond – 2P121, 33T202

van Elst, Ludger Tebartz – 3P002

van Esch, Lotte – 12T201

van Kemenade, Bianca M. – 3P110

van Konigsbruggen, Martijn – 4P082

van Leeuwen, Tessa M. – 4P112

van Lier, Rob – 3P007, 3P098, 4P025, 4P108, 4P113

van Rooij, Marieke – 2P122

Vancleef, Kathleen – 2P034

Vanmarcke, Steven – 12T201

Vann, Seralynne – 21S105

Vanrell, Maria – 3P048

VanRullen, Rufin – 1P092

Vater, Christian – 2P042

Vaughan, Sarah – 3P068

Vengadeswaran, Abhi – 1P054

Verfaillie, Karl – 4P040

Vergilino-Perez, Dorine – 4P056

Vergne, Judith – 4P056

Vernon, Richard – 1P116, 1P117, 4P052

Vidnyánszky, Zoltán – 1P053

Vienne, Cyril – 2P032, 2P037

Vignolo, Alessia – 43T205

Vilaseca, Meritxell – 42T206

Vilidaite, Greta – 3P079

Vishwanath, Dhanraj – 2P035

Visoikomogilski, Aleksandar – 2P124

Vit, Frantisek – 4P013, 4P019

Vitkova, Viktoriya – 1P039

Vitu, Françoise – 1P043, 2P059, 11T103

Võ, Melissa – 1P136, 4P136, 21T308

Vogels, Rufin – 32T201

Volbrecht, Vicki – 23T201

Volk, Denis – 2P031

von Castell, Christoph – 3P063

von der Heydt, Rüdiger – 21T301

von Kriegstein, Katharina – 2P113

Voudouris, Dimitris – 3P115, 3P116

Vrancken, Leia – 4P040

Vrankovic, Jasmina – 2P136

Vul, Ed – 41T104

Vullings, Cécile – 3P081

Vyazovska, Olga – 4P079

W

Wada, Makoto – 1P028

Wade, Alex – 1P098, 1P116, 2P086, 2P092, 3P023, 3P050, 4P093, 21T306

Wagemans, Johan – 1P138, 2P121, 3P136, 12T201, 13T203, 33T202, 43T201

Wagner, Michael – 2P055

Wahl, Siegfried – 4P039

Wahn, Basil – 4P023

Wailes-Newson, Kirstie – 4P093

Wakebe, Toshihiro – 4P091

Walker, Robin – 1P043

Wallis, Thomas – 21T304

Wallwiener, Diethelm – 2P145

Wamain, Yannick – 3P121, 4P031

Wang, Lina – 3P130

Wang, Ling – 4P118

Wang, Ying – 2P096, 33T205

Wardle, Susan – 4P143

Wasserman, E.A. – 4P079

Watanabe, Katsumi – 2P021, 43T203

Watanabe, Osamu – 2P087

Waters, Amy – 3P057

Watson, Tamara – 11T104

Waugh, Sarah – 2P119, 4P048

Webb, Abigail – 2P072

Webb, Ben – 2P010, 41S202

Webster, Michael – 23T201

Weege, Bettina – 1P126

Weiss, David – 3P049

Weiß, Katharina – 2P050

Welbourne, Lauren – 3P050

Welchman, Andrew – 1P094

West, Peter – 23T203

Wexler, Mark – 2P101, 31S105

White, Mark – 4P088, 4P110

Whitehead, Ross – 3P057

Whitford, Thomas – 31T201, 41T106

Whitney, David – 21T302, 31S101

Whitney, Heather – 1P114

Wichmann, Felix – 21T304

Wiebel, Christiane – 11S303

Wiener, Jan – 1P009

Wijntjes, Maarten – 1P118, 3P033

Wilbertz, Gregor – 1P037, 2P126, 33T201

Wilder, John – 11S304

Wilkie, Richard – 3P112, 3P126

Wilkins, Arnold – 32T203

Willemin, Julie – 4P002

Williams, Jeremy – 2P013

Williford, Jonathan – 21T301

Willis, Alexandra – 4P021

Willis, Megan – 3P085

Wilson, Christopher – 1P025

Wimmer, Sibylle – 3P125

Wincenciak, Joanna – 4P037

Witzel, Christoph – 23T204

Wolf, Christian – 2P047

Wolff, Anika – 22T205

Wołoszyn, Kinga – 3P054, 4P120

Wolpert, Daniel – 3P078

Wong, Nicole H. L. – 1P023, 4P017

Woodall, Rachel – 4P051

Woodhouse, Maeve – 2P034

Woods, Russell – 4P117

Wright, Damien – 3P129

Wu, Qitao – 3P130

Wuerger, Sophie – 2P003, 2P106, 3P056, 41T105

Wyatt, Geddes – 4P142

X

Xia, Ye – 31S101

Xiao, Kaida – 3P056

Xie, Xin-Yu – 33T103

Xu, He – 32T205

Xu, Qian – 33T205

Y

Yaguchi, Hirohisa – 2P081

Yakimova, Elena – 1P029

Yakovlev, Volodya – 1P145

Yakushijin, Reiko – 4P134

Yamada, Koichiro – 3P145

Yamanouchi, Toshiaki – 1P084, 4P050

Yamashita, Okito – 2P134

Yamauchi, Naoto – 4P099

Yamazaki, Shun – 4P139

Yan, Hongmei – 1P064

Yanaka, Kazuhisa – 1P084, 4P050

Yanase, Tiffany – 3P045

Yanchus, Victor – 3P006

Yarrow, Kielan – 22T104, 32T104

Yasuda, Takashi – 1P086

Yates, Julian – 3P056

Yau, Jeffrey – 4P111

Yavna, Denis – 1P110

Yeatman, Jason – 11S202

Yeh, Su-Ling – 32T105

Yeshurun, Yaffa – 3P141, 41T102

Yildirim, Funda – 1P022

Yin, Jiaojiao – 4P118

Ying-Rong, Lu – 3P009

Yokosawa, Kazuhiko – 4P132

Yokota, Hiroki – 3P053

Yokoyama, Hiroki – 2P087

Yonezawa, Miki – 3P060

Yoshikawa, Megumi – 2P074

Yoshizawa, Tatsuya – 4P139

Young, Andrew – 4P069

Yu, Cong – 4P084, 33T103

Yu, Deyue – 3P072, 3P074

Yuan, Xiangyong – 3P019

Yukumatsu, Shinji – 1P080

Yuval-Greenberg, Shlomit – 2P063

Z

Zacharkin, Denis – 1P129

Zaidi, Qasim – 2P065, 31S104

Zana, Yossi – 4P076

Zanker, Johannes – 3P010, 3P011

Zaretskaya, Natalia – 2P120

Zavagno, Daniele – 2P066, 2P070

Zdravković, Sunčica – 2P071, 3P040, 3P041, 4P072

Zelinsky, Gregory – 11T103, 41T107

Zhang, Fan – 2P068

Zhang, JunYun – 4P084

Zhang, Lipeng – 4P118

Zhang, Xue – 33T205

Zhao, Huaiyong – 3P118

Zhao, Mintao – 1P052, 4P070

Zhao, Su – 3P050

Zhaoping, Li – 1P107, 12T105

Zhmailova, Ulyana – 1P021

Zhmurov, Michail – 4P044

Zhou, Yuan – 4P071

Zhu, Weina – 2P127, 13T102

Zimmermann, Eckart – 11T102

Zlatkute, Giedre – 2P035

Zlokazov, Kirill – 4P055