Effects of different visual displays on the time and precision of bare-handed or tool-mediated eye-hand coordination were investigated in a pick-and-place task with complete novices. All of them scored well above average in spatial perspective-taking ability and performed the task with their dominant hand. Two groups of novices, four men and four women in each group, had to place a small object in a precise order on the centre of five targets on a Real-world Action Field (RAF), as swiftly and as precisely as possible, with or without a tool (control). Each individual session consisted of four visual display conditions, whose order was counterbalanced between individuals and sessions. Subjects looked at what their hands were doing 1) directly in front of them ("natural" top-down view), 2) in a top-down 2D fisheye view, 3) in a top-down undistorted 2D view, or 4) in a 3D stereoscopic top-down view (head-mounted OCULUS DK 2). Object movements in all image conditions were made to match the real-world movements in time and space. One group viewed the 2D images on a monitor positioned sideways (sub-optimal); the other group viewed them on a monitor placed straight ahead (near-optimal). All image viewing conditions had significantly detrimental effects on the time (seconds) and precision (pixels) of task execution compared with "natural" direct viewing. More importantly, we found significant trade-offs between time and precision, both between and within groups, and significant interactions between viewing conditions and manipulation conditions. The results shed new light on controversial findings concerning visual display effects on eye-hand coordination, and lead to the conclusion that differences in camera systems and the adaptive strategies of novices are likely to explain them.
This paper has been published in PLOS ONE. You can reach the publication here.