Comparison between two 3d-sound engines of the accuracy in determining the position of a source

University essay from Luleå/Department of Computer Science, Electrical and Space Engineering

Abstract: Augmented reality (AR) has become popular on mobile devices in recent years; most AR applications are built on graphics that overlay additional information. Sound can also serve as part of an AR. For a sound-based AR to work on a mobile device, it must represent sounds well in 3d space, so that users can perceive the direction of and distance to a source and experience the virtual environment. The current three-dimensional sound engine on the iPhone 4, OpenAL, lacks both the environmental and the spatial cues needed for a convincing three-dimensional experience. A 3d-sound engine is proposed that uses a generalized or individualized head-related transfer function (HRTF) as the spatial model, an image-source method for early reflections, and feedback delay networks for late reverberation. The proposed 3d-sound engine is implemented and tested against the current engine, OpenAL, extended with a reverberation module available on Mac but not supported in the version of OpenAL on the iPhone 4. The test compares the two 3d-sound engines on the accuracy of locating a source with respect to azimuth, elevation, and distance. The results show that the two sound engines are roughly equal on all three parameters. They also highlight that mapping the listener's mental image to the sound source is important for improving precision.
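The abstract names feedback delay networks (FDNs) as the late-reverberation component of the proposed engine. As an illustration only (the essay's actual implementation, delay lengths, and feedback matrix are not given here), the following is a minimal sketch of a four-line FDN in Python, using a Householder matrix as the feedback mixing matrix, a common choice because it is orthogonal and therefore lossless before the feedback gain is applied. All parameter values (`delays_ms`, `feedback`, `wet`) are assumed for the example:

```python
import numpy as np

def fdn_reverb(x, sr=44100, delays_ms=(29.7, 37.1, 41.1, 43.7),
               feedback=0.7, wet=0.3):
    """Apply a simple 4-line feedback delay network to a mono signal.

    Each sample is mixed into four delay lines; their outputs are
    cross-mixed through a Householder matrix and fed back with a gain
    that controls the decay time of the late reverberation tail.
    """
    delays = [int(sr * d / 1000.0) for d in delays_ms]
    n = len(delays)
    # Householder feedback matrix: A = I - (2/n) * ones, orthogonal.
    A = np.eye(n) - (2.0 / n) * np.ones((n, n))
    bufs = [np.zeros(d) for d in delays]   # circular delay-line buffers
    idx = [0] * n                          # read/write positions
    y = np.zeros(len(x), dtype=float)
    for t in range(len(x)):
        # Read the current output of each delay line.
        outs = np.array([bufs[i][idx[i]] for i in range(n)])
        # Dry/wet mix of the input and the summed delay-line outputs.
        y[t] = (1.0 - wet) * x[t] + wet * outs.sum()
        # Mix the outputs through the feedback matrix and re-inject.
        fb = feedback * (A @ outs)
        for i in range(n):
            bufs[i][idx[i]] = x[t] + fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```

Feeding an impulse through `fdn_reverb` yields the dry click followed by a dense, decaying tail; mutually prime-ish delay lengths (as in the defaults above) help avoid audible periodicity in that tail.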
