How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches

Material type: Article
Publication details: Sage, 2019
In: Perception, Vol. 48, Issue 11, 2019, pp. 1079–1103

Summary: Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
Holdings: E-Journal — Library, SPAB — E-Journals — v. 48 (1–12) / Jan–Dec 2019 — Available



Library, SPA Bhopal, Neelbad Road, Bhauri, Bhopal By-pass, Bhopal - 462 030 (India)
Ph No.: +91 - 755 - 2526805 | E-mail: [email protected]
