
Showing papers on "Acoustic source localization published in 2005"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for detecting the acoustic waves in solid objects generated by a slight finger knock and extracting information related to the source location from a simulated time reversal experiment in the computer.
Abstract: Time reversal in acoustics is a very efficient solution to focus sound back to its source in a wide range of materials including reverberating media. It relies on the following property: a wave still has the memory of its source location. The concept presented in this letter first consists in detecting the acoustic waves in solid objects generated by a slight finger knock. In a second step, the information related to the source location is extracted from a simulated time reversal experiment in the computer. Then, an action (turn on the light or a compact disk player, for example) is associated with each location. Thus, the whole system transforms solid objects into interactive interfaces. Compared to the existing acoustic techniques, it presents the great advantage of being simple and easily applicable to inhomogeneous objects whatever their shape. The number of possible touch locations at the surface of objects is shown to be directly related to the mean wavelength of the detected acoustic wave.
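The location step can be caricatured as signature matching: the simulated time-reversal focusing amounts, at its core, to correlating the received wave against a bank of pre-recorded knock responses and picking the best match. A minimal sketch with synthetic, randomly generated signatures (all sizes and the one-sensor setup are illustrative assumptions, not the letter's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration: each touch location has a distinct impulse
# response ("acoustic signature") recorded once from a single sensor.
n_locations, sig_len = 4, 256
signatures = rng.standard_normal((n_locations, sig_len))

# A knock at location 2 produces its signature plus measurement noise.
true_loc = 2
received = signatures[true_loc] + 0.3 * rng.standard_normal(sig_len)

# Time-reversal focusing is equivalent to correlating the received signal
# with each stored signature and selecting the location that focuses best.
scores = signatures @ received        # zero-lag cross-correlations
estimated_loc = int(np.argmax(scores))
```

Each recognized location could then trigger its associated action, as the letter describes.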

170 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined, discussed, and compared the two measurement principles with particular regard to the sources of error in sound power determination, and showed that the phase calibration of intensity probes that combine different transducers is very critical below 500 Hz if the measurement surface is very close to the source under test.
Abstract: The dominating method of measuring sound intensity in air is based on the combination of two pressure microphones. However, a sound intensity probe that combines an acoustic particle velocity transducer with a pressure microphone has recently become available. This paper examines, discusses, and compares the two measurement principles with particular regard to the sources of error in sound power determination. It is shown that the phase calibration of intensity probes that combine different transducers is very critical below 500 Hz if the measurement surface is very close to the source under test. The problem is reduced if the measurement surface is moved further away from the source. The calibration can be carried out in an anechoic room.
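The p-p principle the paper examines can be sketched numerically: the difference of two closely spaced pressure signals approximates the pressure gradient in Euler's equation, its time integral gives the particle velocity, and the time-averaged product of pressure and velocity is the intensity. A plane-wave sanity check with assumed parameters (spacing, frequency, and sampling rate are illustrative, not the paper's):

```python
import numpy as np

# Plane-wave test signal sampled at two pressure microphones a small
# distance apart (the p-p measurement principle).
rho, c = 1.21, 343.0          # air density [kg/m^3], sound speed [m/s]
f, A = 1000.0, 1.0            # frequency [Hz], pressure amplitude [Pa]
dr = 0.012                    # microphone spacing [m] (assumed)
fs = 192_000
t = np.arange(0, 0.1, 1 / fs)
k = 2 * np.pi * f / c
p1 = A * np.sin(2 * np.pi * f * t)            # mic at x = 0
p2 = A * np.sin(2 * np.pi * f * t - k * dr)   # mic at x = dr

# Euler's equation: rho du/dt = -dp/dx, so u is the time integral of the
# finite-difference pressure gradient.
p_mid = 0.5 * (p1 + p2)
u = -np.cumsum(p2 - p1) / (rho * dr * fs)
I_est = np.mean(p_mid * u)                    # time-averaged intensity
I_theory = A**2 / (2 * rho * c)               # plane-wave reference
```

The small residual error comes from the finite-difference approximation of the gradient, one of the error sources the paper discusses.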

150 citations


Journal ArticleDOI
TL;DR: Results show good agreement with theoretical predictions of the sound pressure due to a point monochromatic source in a uniform, high Mach number flow and with Fast Field Program calculations of sound propagation in a stratified moving atmosphere.
Abstract: Finite-difference, time-domain (FDTD) calculations are typically performed with partial differential equations that are first order in time. Equation sets appropriate for FDTD calculations in a moving inhomogeneous medium (with an emphasis on the atmosphere) are derived and discussed in this paper. Two candidate equation sets, both derived from linearized equations of fluid dynamics, are proposed. The first, which contains three coupled equations for the sound pressure, vector acoustic velocity, and acoustic density, is obtained without any approximations. The second, which contains two coupled equations for the sound pressure and vector acoustic velocity, is derived by ignoring terms proportional to the divergence of the medium velocity and the gradient of the ambient pressure. It is shown that the second set has the same or a wider range of applicability than equations for the sound pressure that have been previously used for analytical and numerical studies of sound propagation in a moving atmosphere. Practical FDTD implementation of the second set of equations is discussed. Results show good agreement with theoretical predictions of the sound pressure due to a point monochromatic source in a uniform, high Mach number flow and with Fast Field Program calculations of sound propagation in a stratified moving atmosphere.
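A minimal 1-D staggered-grid leapfrog scheme illustrates the flavor of the second (pressure/velocity) equation set in the special case of a medium at rest with uniform density; grid sizes, the Gaussian source, and the pressure-release boundaries are arbitrary choices for illustration, not from the paper:

```python
import numpy as np

# 1-D staggered-grid leapfrog FDTD for the coupled linearized equations
# (medium at rest): dp/dt = -rho*c^2 du/dx,  du/dt = -(1/rho) dp/dx.
rho, c = 1.2, 340.0
dx = 0.01
dt = 0.5 * dx / c                     # CFL-stable time step (Courant 0.5)
nx, nt = 600, 800
p = np.zeros(nx)                      # pressure at integer grid points
u = np.zeros(nx - 1)                  # velocity at half grid points

src = 100                             # source grid index
for n in range(nt):
    u -= dt / (rho * dx) * np.diff(p)
    p[1:-1] -= dt * rho * c**2 / dx * np.diff(u)
    # Soft Gaussian-pulse source centered at time step 30
    p[src] += np.exp(-((n * dt - 30 * dt) / (10 * dt)) ** 2)

# The right-going pulse peak should sit roughly c*(nt-30)*dt from the source.
peak = src + 1 + int(np.argmax(p[src + 1:]))
expected = src + int(round(c * (nt - 30) * dt / dx))
```

Adding the advection terms for a moving medium changes the update equations but not this staggered leapfrog skeleton.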

143 citations


BookDOI
01 Jan 2005
TL;DR: In this book, a working model of binaural processing is presented, together with chapters on sound source localization in insects, fishes, nonmammalian tetrapods, and mammals.
Abstract: … to Sound Source Localization; Directional Hearing in Insects; Sound Source Localization by Fishes; Directional Hearing in Nonmammalian Tetrapods; Comparative Mammalian Sound Localization; Development of the Auditory Centers Responsible for Sound Localization; Interaural Correlation as the Basis of a Working Model of Binaural Processing: An Introduction; Models of Sound Localization.

110 citations


Proceedings ArticleDOI
04 Sep 2005
TL;DR: In this paper, a Kalman filter is employed to update the speaker's position estimate directly from the observed time delays of arrival (TDOAs); the TDOAs form the observation of an extended Kalman filter whose state corresponds to the speaker's position.
Abstract: In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
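The core idea, an extended Kalman filter whose state is the speaker position and whose observation is the TDOA vector, can be sketched as follows; the microphone geometry, noise levels, and iteration count are invented for illustration and are not the authors' setup:

```python
import numpy as np

c = 343.0                              # speed of sound [m/s]
mics = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
true_pos = np.array([2.5, 1.5])        # hypothetical speaker position
rng = np.random.default_rng(1)

def tdoas(x):
    """TDOAs of mics 1..3 relative to mic 0 for a source at x."""
    d = np.linalg.norm(mics - x, axis=1)
    return (d[1:] - d[0]) / c

def jacobian(x):
    d = np.linalg.norm(mics - x, axis=1)
    g = (x - mics) / d[:, None]        # gradient of each range w.r.t. x
    return (g[1:] - g[0]) / c

# Extended Kalman filter whose state is the (near-static) speaker position.
x = np.array([1.0, 3.0])               # rough initial guess
P = np.eye(2)
R = (1e-5) ** 2 * np.eye(3)            # TDOA measurement noise covariance
Q = 1e-2 * np.eye(2)                   # process noise keeps the filter adaptive
for _ in range(30):
    z = tdoas(true_pos) + 1e-5 * rng.standard_normal(3)  # noisy observation
    P = P + Q                          # predict (static motion model)
    H = jacobian(x)                    # linearize h(x) at the current estimate
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - tdoas(x))         # update directly from the TDOAs
    P = (np.eye(2) - K @ H) @ P
```

With repeated observations the update behaves like a damped Gauss-Newton iteration, which is why no closed-form initialization step is needed.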

92 citations


Book ChapterDOI
08 May 2005
TL;DR: This paper examines the feasibility of sound source localization (SSL) in a home environment, and explores its potential to support inference of communication activity between people, and provides a quantitative assessment of the accuracy and precision of the system.
Abstract: In this paper, we examine the feasibility of sound source localization (SSL) in a home environment, and explore its potential to support inference of communication activity between people. Motivated by recent research in pervasive computing that uses a variety of sensor modes to infer high-level activity, we are interested in exploring how the relatively simple information of SSL might contribute. Our SSL system covers a significant portion of the public space in a realistic home setting by adapting traditional SSL algorithms developed for more highly-controlled lab environments. We describe engineering tradeoffs that result in a localization system with a fairly good 3D resolution. To help make design decisions for deploying a SSL system in a domestic environment, we provide a quantitative assessment of the accuracy and precision of our system. We also demonstrate how such a sensor system can provide a visualization to help humans infer activity in that space. Finally, we show preliminary results for automatic detection of face-to-face conversations.

91 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: An algorithm is presented for computing the sound source location by combining likelihood functions, one for each microphone pair; experimental results show that accurate acoustic localization can be achieved using ILD alone.
Abstract: Interaural level difference (ILD) is an important cue for acoustic localization. Although its behavior has been studied extensively in natural systems, it remains an untapped resource for computer-based systems. We investigate the possibility of using ILD for acoustic localization, deriving constraints on the location of a sound source given the relative energy level of the signals received by two microphones. We then present an algorithm for computing the sound source location by combining likelihood functions, one for each microphone pair. Experimental results show that accurate acoustic localization can be achieved using ILD alone.
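The likelihood-combination idea can be sketched with idealized 1/r^2 energy decay: each microphone pair's observed log-energy ratio defines a likelihood over candidate positions, and the sum of log-likelihoods over pairs is maximized on a grid. The geometry and grid below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

# Microphones at known positions; spherical spreading means received
# energy ~ 1/d^2, so each pair's energy ratio constrains the source location.
mics = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
src = np.array([0.6, 0.9])             # hypothetical source

d = np.linalg.norm(mics - src, axis=1)
energy = 1.0 / d**2                    # noiseless received energies

pairs = [(0, 1), (0, 2), (0, 3)]
obs = [np.log(energy[i] / energy[j]) for i, j in pairs]

# Combined log-likelihood over a grid of candidate positions.
xs = np.linspace(-1.0, 3.0, 161)
ys = np.linspace(-1.0, 3.0, 161)
X, Y = np.meshgrid(xs, ys)
ll = np.zeros_like(X)
for (i, j), r in zip(pairs, obs):
    di = np.hypot(X - mics[i, 0], Y - mics[i, 1]) + 1e-12
    dj = np.hypot(X - mics[j, 0], Y - mics[j, 1]) + 1e-12
    pred = 2.0 * np.log(dj / di)       # predicted log-energy ratio
    ll -= (pred - r) ** 2              # Gaussian log-likelihood (up to scale)

best = np.unravel_index(np.argmax(ll), ll.shape)
est = np.array([X[best], Y[best]])
```

A single pair constrains the source to a circle of constant distance ratio; combining several pairs collapses the ambiguity to a point.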

89 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: A system is presented that gives a humanoid robot the ability to localize, separate, and recognize simultaneous sound sources; an automatic speech recognizer based on the Missing Feature Theory recognizes the separated sounds in real time by generating missing feature masks automatically from the post-filtering step.
Abstract: A humanoid robot under real-world environments usually hears mixtures of sounds, and thus three capabilities are essential for robot audition; sound source localization, separation, and recognition of separated sounds. While the first two are frequently addressed, the last one has not been studied so much. We present a system that gives a humanoid robot the ability to localize, separate and recognize simultaneous sound sources. A microphone array is used along with a real-time dedicated implementation of Geometric Source Separation (GSS) and a multi-channel post-filter that gives us a further reduction of interferences from other sources. An automatic speech recognizer (ASR) based on the Missing Feature Theory (MFT) recognizes separated sounds in real-time by generating missing feature masks automatically from the post-filtering step. The main advantage of this approach for humanoid robots resides in the fact that the ASR with a clean acoustic model can adapt the distortion of separated sound by consulting the post-filter feature masks. Recognition rates are presented for three simultaneous speakers located 2 m from the robot. Use of both the post-filter and the missing feature mask results in an average reduction in error rate of 42% (relative).

78 citations


Journal ArticleDOI
TL;DR: In this paper, a simulation study showed that there is no appreciable difference between the quality of pressure predictions based on knowledge of the pressure in the measurement plane and particle velocity predictions based on knowledge of the particle velocity in that plane.
Abstract: Near field acoustic holography is usually based on measurement of the pressure. This paper describes an investigation of an alternative technique that involves measuring the normal component of the acoustic particle velocity. A simulation study shows that there is no appreciable difference between the quality of predictions of the pressure based on knowledge of the pressure in the measurement plane and predictions of the particle velocity based on knowledge of the particle velocity in the measurement plane. However, when the particle velocity is predicted close to the source on the basis of the pressure measured in a plane further away, high spatial frequency components corresponding to evanescent modes are not only amplified by the distance but also by the wave number ratio (kz∕k). By contrast, when the pressure is predicted close to the source on the basis of the particle velocity measured in a plane further away, high spatial frequency components are reduced by the reciprocal wave number ratio (k∕kz). ...
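The wave number ratio effect is easy to quantify for a single evanescent plane-wave component: back-propagating it a distance d multiplies it by e^{|kz|d}, and predicting velocity from pressure adds the kz/k factor (the reciprocal factor applies when predicting pressure from velocity). A small numeric check, with the frequency, spatial frequency, and distance chosen purely for illustration:

```python
import numpy as np

# Amplification of one evanescent component when back-propagating a
# distance d in planar near-field acoustic holography.
c = 343.0
f = 1000.0
k = 2 * np.pi * f / c                 # acoustic wavenumber
kx = 3 * k                            # spatial frequency above the radiation circle
kz = np.emath.sqrt(k**2 - kx**2)      # imaginary for evanescent components
d = 0.05                              # back-propagation distance [m]

distance_gain = np.exp(abs(kz) * d)   # e^{|kz| d}: growth from distance alone
ratio_gain = abs(kz) / k              # extra kz/k factor: velocity from pressure
ratio_loss = k / abs(kz)              # reciprocal k/kz factor: pressure from velocity
```

Even at this modest distance the component grows by an order of magnitude, which is why the extra kz/k factor matters for measurement noise.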

74 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: The novel multiple-input multiple-output (MIMO)-based approach is evaluated and compared with the known SIMO-based method in a reverberant acoustic environment using reference data of the positions obtained from infrared sensors and the results show that the new approach is very robust against reverberation and background noise.
Abstract: Blind adaptive filtering for time delay of arrival (TDOA) estimation is a very powerful method for acoustic source localization in reverberant environments with broadband signals like speech. Based on a recently presented generic framework for blind signal processing for convolutive mixtures, called TRINICON, we present a TDOA estimation method for simultaneous multidimensional localization of multiple sources. Moreover, an interesting link to the known single-input multiple-output (SIMO)-based adaptive eigenvalue decomposition (AED) method is shown. We evaluate the novel multiple-input multiple-output (MIMO)-based approach and compare it with the known SIMO-based method in a reverberant acoustic environment using reference data of the positions obtained from infrared sensors. The results show that the new approach is very robust against reverberation and background noise.
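TRINICON itself is beyond a short sketch, but the underlying task, estimating a TDOA from two broadband microphone signals, is commonly done with the baseline GCC-PHAT estimator, shown here on a synthetic signal with an idealized circular delay (this is the standard baseline, not the paper's MIMO method):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 16000, 4096
true_delay = 23                        # samples; mic 2 receives later

s = rng.standard_normal(n)             # broadband source (speech stand-in)
x1 = s + 0.05 * rng.standard_normal(n)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(n)

# GCC-PHAT: whiten the cross-spectrum so only the phase (i.e. the delay)
# remains, then pick the peak of the generalized cross-correlation.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = np.conj(X1) * X2
gcc = np.fft.irfft(cross / (np.abs(cross) + 1e-12))

lag = int(np.argmax(gcc))
if lag > n // 2:                       # interpret circularly around zero lag
    lag -= n
tdoa = lag / fs                        # seconds
```

Blind adaptive methods such as AED and TRINICON aim to keep this estimate reliable when reverberation smears the correlation peak.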

66 citations


Proceedings ArticleDOI
24 Apr 2005
TL;DR: The paper describes the hardware and software platform developed for this application and summarizes the lessons learned during the development of the system.
Abstract: Experiences developing a sensor network-based acoustic shooter localization system are presented. The system is able to localize the position of a shooter and the trajectory of the projectile using observed acoustic events, such as the muzzle blast and the ballistic shockwave. The network consists of a large number of cheap sensors communicating through an ad-hoc wireless network, which enables the system to resolve multiple simultaneous acoustic sources, eliminate multipath effects, and tolerate multiple sensor failures while providing good coverage and high accuracy, even in such a challenging environment as urban terrain. The paper describes the hardware and software platform developed for this application and summarizes the lessons learned during the development of the system.

Proceedings ArticleDOI
10 Oct 2005
TL;DR: A newly built sound source localization system that can detect the direction of a sound source arbitrarily located in front of it is reported.
Abstract: We have thus far developed two types of sound source localization system, one of which can localize the horizontal direction and the other the vertical direction. These systems can acquire the localization ability by self-organization through repetition of movement and perception. In this paper, we report a newly built sound source localization system that can detect the direction of a sound source arbitrarily located in front of it. This system is composed of a robot that has two microphones with reflectors corresponding to human pinnae. To acquire the horizontal direction, the interaural time difference is used as the auditory cue. To acquire the vertical direction, the features of the audio spectrum induced by the reflectors are used as the auditory cue. The robot can establish the relationship between the cues and the sound direction through learning.
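In the far field, the horizontal cue reduces to the textbook relation tau = d sin(theta)/c between interaural time difference and azimuth. A tiny sketch (the microphone spacing is an assumed value, not the robot's):

```python
import numpy as np

# Far-field ITD model for two microphones a distance d apart:
# tau = d * sin(theta) / c, inverted to get the azimuth.
c = 343.0                              # speed of sound [m/s]
d = 0.18                               # microphone spacing [m] (assumed)

def azimuth_from_itd(tau):
    # Clip to handle measurement noise pushing the argument past +/-1.
    return np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))

# A source 30 degrees off-center produces this ITD:
tau = d * np.sin(np.radians(30.0)) / c
theta = azimuth_from_itd(tau)
```

The vertical cue has no such closed form, which is why the robot learns the mapping from spectral features instead.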

Journal ArticleDOI
TL;DR: Using close-range acoustic recording, this work describes both the directional radiation pattern and the detailed frequency composition of the sound produced by a tethered flying (Lucilia sericata) using a series of harmonics.
Abstract: Many insects produce sounds during flight. These acoustic emissions result from the oscillation of the wings in air. To date, most studies have measured the frequency characteristics of flight sounds, leaving other acoustic characteristics—and their possible biological functions—unexplored. Here, using close-range acoustic recording, we describe both the directional radiation pattern and the detailed frequency composition of the sound produced by a tethered flying (Lucilia sericata). The flapping wings produce a sound wave consisting of a series of harmonics, the first harmonic occurring around 190Hz. In the horizontal plane of the fly, the first harmonic shows a dipolelike amplitude distribution whereas the second harmonic shows a monopolelike radiation pattern. The first frequency component is dominant in front of the fly while the second harmonic is dominant at the sides. Sound with a broad frequency content, typical of that produced by wind, is also recorded at the back of the fly. This sound qualifie...

Book ChapterDOI
17 Oct 2005

Patent
Akihito Uetake1, Kinya Matsuzawa1
20 Jun 2005
TL;DR: In this paper, a super-directional acoustic system for reproducing a sound signal supplied from a real sound source by using a superdirectional speaker and producing a virtual sound source in a vicinity of a sound wave reflection surface is presented.
Abstract: A superdirectional acoustic system for reproducing a sound signal supplied from a real sound source by using a superdirectional speaker and producing a virtual sound source in a vicinity of a sound wave reflection surface. The system includes an ultrasonic speaker, which includes an ultrasonic transducer for oscillating a sound wave in an ultrasonic frequency band, for reproducing an audio signal in a relatively medium to high frequency sound range, which is included in the sound signal supplied from the real sound source; and a low frequency sound reproducing speaker for reproducing an audio signal in a relatively low frequency sound range, which is included in the sound signal supplied from the real sound source. Sound in the medium-high frequency range is reproduced in a manner such that the sound is produced from a virtual sound source which is formed in the vicinity of the sound signal reflection surface such as a screen.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A three-ring microphone array is described that estimates the horizontal/vertical direction and distance of sound sources and separates multiple sound sources for mobile robot audition; it can separate three speech sources of different pressures without one drowning out the others.
Abstract: This paper describes a three-ring microphone array that estimates the horizontal/vertical direction and distance of sound sources and separates multiple sound sources for mobile robot audition. The arrangement of microphones is simulated, and an optimized pattern with three rings is implemented with 32 microphones. Sound localization and separation are achieved by delay-and-sum beamforming (DSBF) and frequency band selection (FBS). From the results of on-line horizontal and vertical localization experiments, we confirmed that one or two sound sources could be localized with an error of about 5 degrees and 200 to 300 mm at a distance of about 1 m. The off-line sound separation experiments were evaluated by the power spectra of the separated sounds in each frequency band, and we confirmed that an appropriate frequency band could be selected by DSBF and FBS. The system can separate three speech sources of different pressures without one drowning out the others.
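Delay-and-sum beamforming in its narrowband (phase-shift) form can be sketched in a few lines: scan candidate azimuths, phase-align the microphones for each, and pick the angle of maximum output power. The single-ring geometry and tone frequency below are illustrative, not the paper's 32-microphone three-ring design:

```python
import numpy as np

# Narrowband delay-and-sum beamforming on an 8-microphone circular array.
c, f = 343.0, 1500.0
n_mics, radius = 8, 0.1
ang = 2 * np.pi * np.arange(n_mics) / n_mics
mics = radius * np.column_stack([np.cos(ang), np.sin(ang)])

def steering(az_deg):
    """Far-field phase of a tone of frequency f arriving from azimuth az."""
    a = np.radians(az_deg)
    d = np.array([np.cos(a), np.sin(a)])
    return np.exp(2j * np.pi * f * (mics @ d) / c)

true_az = 65.0
x = steering(true_az)                  # noiseless complex mic amplitudes

# Scan azimuth in 1-degree steps; aligned phases add coherently.
scan = np.arange(0.0, 360.0, 1.0)
power = np.array([np.abs(np.conj(steering(a)) @ x) for a in scan])
est_az = float(scan[np.argmax(power)])
```

Broadband DSBF repeats this per frequency band, which is what makes the paper's frequency band selection (FBS) step possible.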

Proceedings ArticleDOI
18 Mar 2005
TL;DR: It is shown that acoustic source localization based on wave field decomposition has the potential to unambiguously localize multiple simultaneously active wideband sources in the array's full 360 degrees field-of-view.
Abstract: This paper is concerned with the problem of localizing multiple wideband acoustic sources. In contrast to existing techniques, this method takes the physics of wave propagation into account. 2D wave fields are decomposed using cylindrical harmonics as basis functions by a circular array mounted into a rigid cylindrical baffle. The obtained wave field representation is then used to serve as a basis for high-resolution subspace beamforming methods, most notably ESPRIT. It is shown that acoustic source localization based on wave field decomposition has the potential to unambiguously localize multiple simultaneously active wideband sources in the array's full 360 degrees field-of-view.
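The decomposition step can be illustrated for an open circular array (the paper's rigid baffle changes the radial terms but not the angular transform): sampling a plane wave around the circle and taking an FFT over microphone angle yields the circular-harmonic coefficients of the Jacobi-Anger expansion, and the phase of the first-order coefficient encodes the azimuth. Array size and frequency are assumed values:

```python
import numpy as np

# Circular-harmonic decomposition of a plane wave on a 16-element
# open circular array via an FFT over microphone angle.
c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
M, r = 16, 0.05                        # mic count and array radius (assumed)
phi_m = 2 * np.pi * np.arange(M) / M   # microphone angles

az = np.radians(130.0)                 # source azimuth
p = np.exp(1j * k * r * np.cos(phi_m - az))   # plane-wave pressure at the mics

C = np.fft.fft(p) / M                  # circular harmonic coefficients C_n
# Jacobi-Anger: C_1 = i * J_1(kr) * exp(-i*az), and J_1(kr) > 0 here,
# so az = pi/2 - angle(C_1).
est_az = np.degrees((np.pi / 2 - np.angle(C[1])) % (2 * np.pi))
```

Subspace methods such as ESPRIT then operate on these mode-domain coefficients rather than on raw microphone signals.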

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A new algorithm for sound source localization developed specifically for directional microphones is presented and results obtained from real meeting room setups show a typical error of less than 3 degrees.
Abstract: Previous research in sound source localization has helped increase the robustness of estimates to noise and reverberation. Circular arrays are of particular interest for a number of scenarios, particularly because they can be placed at the center of the sources. First, this improves the sound capture due to the reduced distance. Second, it helps the direction estimation, not only because of the reduced distance, but also because it increases the angle differences. Nevertheless, most research on circular arrays has focused on the case of omni-directional microphones. In this paper, we present a new algorithm for sound source localization developed specifically for directional microphones. Results obtained from real meeting room setups show a typical error of less than 3 degrees.

Journal ArticleDOI
TL;DR: In this paper, an acoustic needle, mechanically driven by an ultrasonic transducer in air, can rotate sound-trapped small particles around its tip in water, and the rotation is very stable when the sound field around the tip is appropriate.
Abstract: We show that an acoustic needle, mechanically driven by an ultrasonic transducer in air, can rotate sound-trapped small particles around its tip in water. The rotation is very stable when the sound field around the tip is appropriate. For an acoustic needle at a given location in a sound field, the revolution speed of trapped particles can be controlled by the acoustic pressure near the tip or by the driving voltage and frequency of the ultrasonic transducer. For Flying Color seeds, a revolution speed larger than 300 rpm can be obtained.

PatentDOI
TL;DR: In this paper, a system for locating an acoustic source from an acoustic event of the acoustic source is proposed, which includes a sensor network having a plurality of spatially separated sensor nodes, each located in a predetermined position, encountering acoustic waves generated by an acoustic event passing proximate to the plurality of sensor nodes.
Abstract: A system for locating an acoustic source from an acoustic event of the acoustic source. In one embodiment, the system includes a sensor network having a plurality of spatially separated sensor nodes each located in a predetermined position encountering acoustic waves generated by an acoustic event passing proximate to the plurality of spatially separated sensor nodes, where the plurality of spatially separated sensor nodes are synchronized to a common time base such that when the acoustic event is detected, information of the acoustic waves from each of the plurality of spatially separated sensor nodes is obtained and broadcasted through the sensor network. The system further includes a base station for receiving information of the acoustic waves broadcasted from the sensor network and processing the received information of the acoustic waves so as to locate the acoustic source of the acoustic event.

Journal ArticleDOI
TL;DR: Passive acoustic techniques are presented to solve the localization problem of a sound source in three-dimensional space using off-the-shelf hardware and are compared with a state-of-the-art method that requires centralized digitization of the signals from the microphones of all the arrays.
Abstract: Passive acoustic techniques are presented to solve the localization problem of a sound source in three-dimensional space using off-the-shelf hardware. Multiple microphone arrays are employed, which operate independently, in estimating the direction of arrival of sound, or, equivalently, a direction vector from the array’s geometric center towards the source. Direction vectors and array centers are communicated to a central processor, where the source is localized by finding the intersection of the direction lines defined by the direction vectors and the associated array centers. The performance of the method in the air is demonstrated experimentally and compared with a state-of-the-art method that requires centralized digitization of the signals from the microphones of all the arrays.
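Intersecting the direction lines has a small closed-form least-squares solution: minimize the summed squared distance from a point to each line using orthogonal projectors. A noiseless 3-D sketch with invented array centers (not the paper's experimental layout):

```python
import numpy as np

def intersect_lines(centers, dirs):
    """Least-squares intersection point of lines (c_i, unit direction u_i).

    Minimizes sum_i || (I - u_i u_i^T) (p - c_i) ||^2 over p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c_i, u_i in zip(centers, dirs):
        P = np.eye(3) - np.outer(u_i, u_i)   # projector orthogonal to the line
        A += P
        b += P @ c_i
    return np.linalg.solve(A, b)

# Hypothetical source and three array centers reporting exact bearings.
src = np.array([1.0, 2.0, 1.5])
centers = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 1.0], [0.0, 4.0, 2.0]])
dirs = np.array([(src - c) / np.linalg.norm(src - c) for c in centers])
est = intersect_lines(centers, dirs)
```

With noisy bearings the same normal equations give the point closest to all the (now skew) lines, which is the paper's central-processor step.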

Patent
28 Oct 2005
TL;DR: In this article, a system and method for recording and reproducing three-dimensional sound events using a discretized, integrated macro-micro sound volume for reproducing a 3D acoustical matrix that reproduces sound including natural propagation and reverberation.
Abstract: A system and method for recording and reproducing three-dimensional sound events using a discretized, integrated macro-micro sound volume for reproducing a 3D acoustical matrix that reproduces sound including natural propagation and reverberation. The system and method may include sound modeling and synthesis that may enable sound to be reproduced as a volumetric matrix. The volumetric matrix may be captured, transferred, reproduced, or otherwise processed, as a spatial spectra of discretely reproduced sound events with controllable macro-micro relationships.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The improved Julius is used as an MFT-based automatic speech recognizer (ASR) for humanoid robots, and the improvement in system performance is shown through recognition of three simultaneous speakers on the humanoid SIG2.
Abstract: A humanoid robot under real-world environments usually hears mixtures of sounds, and thus three capabilities are essential for robot audition; sound source localization, separation, and recognition of separated sounds. We have adopted the missing feature theory (MFT) for automatic recognition of separated speech, and developed the robot audition system. A microphone array is used along with a real-time dedicated implementation of geometric source separation (GSS) and a multi-channel post-filter that gives us a further reduction of interferences from other sources. The automatic speech recognition based on MFT recognizes separated sounds by generating missing feature masks automatically from the post-filtering step. The main advantage of this approach for humanoid robots resides in the fact that the ASR with a clean acoustic model can adapt the distortion of separated sound by consulting the post-filter feature masks. In this paper, we used the improved Julius as an MFT-based automatic speech recognizer (ASR). Julius is a real-time large vocabulary continuous speech recognition (LVCSR) system. We performed an experiment to evaluate our robot audition system; in this experiment, the system recognizes a sentence, not an isolated word. We show the improvement in system performance through recognition of three simultaneous speakers on the humanoid SIG2.

Book ChapterDOI
01 Jan 2005

Proceedings ArticleDOI
18 Mar 2005
TL;DR: Some new sound field measurement methods by using a laser Doppler vibrometer (LDV) are described and a 3D sound field reconstruction from some 2D laser projections based on computed tomography (CT) techniques is made.
Abstract: In this paper, we describe some new sound field measurement methods using a laser Doppler vibrometer (LDV). By irradiating a reflecting wall with a laser, we can observe the change in light velocity caused by the refractive-index change that accompanies the change in air density. This means that it is possible to observe the change in sound pressure. We measured a sound field projection on a 2D plane using a scanning laser Doppler vibrometer (SVM), which can visualize a sound field. We then made a 3D sound field reconstruction from several 2D laser projections based on computed tomography (CT) techniques. We reconstructed images of the sound field near a loudspeaker and in a room.

PatentDOI
Yong Rui1, Dinei Florencio1
TL;DR: In this paper, a system and process for finding the location of a sound source using direct approaches having weighting factors that mitigate the effect of both correlated and reverberation noise is presented.
Abstract: A system and process for finding the location of a sound source using direct approaches having weighting factors that mitigate the effect of both correlated and reverberation noise is presented. When more than two microphones are used, the traditional time-delay-of-arrival (TDOA) based sound source localization (SSL) approach involves two steps. The first step computes the TDOA for each microphone pair, and the second step combines these estimates. This two-step process discards relevant information in the first step, thus degrading the SSL accuracy and robustness. In the present invention, direct, one-step approaches are employed: a one-step TDOA SSL approach and a steered beam (SB) SSL approach. Each of these approaches provides an accuracy and robustness not available with the traditional two-step approaches.

Journal ArticleDOI
TL;DR: In this article, a numerical experiment is undertaken to investigate the issue of whether a similar surface shear stress term constitutes a true source of dipole sound, and it is shown that the correction to the sound in the Fraunhofer zone is proportional to δbl∕λ (the ratio of oscillatory boundary-layer thickness to acoustic wavelength).
Abstract: The sound generated due to a localized flow over a large (compared to the acoustic wavelength) plane no-slip wall is considered. It has been known since 1960 that for inviscid flow the pressure, while appearing to be a source of dipole sound in a formal solution to the Lighthill equation, is, in fact, not a true dipole source, but rather represents the surface reflection of volume quadrupoles. The subject of the present work—namely, whether a similar surface shear stress term constitutes a true source of dipole sound—has been controversial. Some have boldly assumed it to be a true source and have used it to calculate the noise in boundary-layer flows. Others have argued that, like the surface pressure, the surface shear stress is not a valid source of sound but rather represents a propagation effect. Here, a numerical experiment is undertaken to investigate the issue. A portion of an otherwise static wall is oscillated tangentially in an acoustically compact region to create shear stress fluctuations. The resulting sound field, computed directly from the compressible Navier-Stokes equations, is nearly dipolar and its amplitude agrees with an acoustic analogy prediction that regards the surface shear as acoustically compact and as a true source of sound. However, there is a correction that becomes noticeable for observers near wall-grazing angles as the computational domain size Ld along the wall is increased. An estimate, validated by the simulations, shows that as Ld→∞ the correction to the sound in the Fraunhofer zone is proportional to δbl∕λ (the ratio of oscillatory boundary-layer thickness to acoustic wavelength) times a directivity factor that becomes large at angles close to grazing. For observers at such angles, Lighthill’s acoustic analogy does not apply.

Patent
Kaoru Suzuki1, Toshiyuki Koga
27 Sep 2005
TL;DR: In this article, a frequency decomposer analyzes two amplitude data input from microphones to an acoustic signal input unit, and a two-dimensional data forming unit obtains a phase difference between the amplitude data for each frequency.
Abstract: A frequency decomposer analyzes two amplitude data input from microphones to an acoustic signal input unit, and a two-dimensional data forming unit obtains a phase difference between the two amplitude data for each frequency. This phase difference for each frequency is given two-dimensional coordinate values to form two-dimensional data. A figure detector analyzes the generated two-dimensional data on an X-Y plane to detect a figure. A sound source information generator processes information of the detected figure to generate sound source information containing the number of sound sources as generation sources of acoustic signals, the spatial existing range of each sound source, the temporal existing period of a sound generated by each sound source, the components of each source sound, a separated sound of each sound source, and the symbolic contents of each source sound.
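For a pure delay between the two microphones, the per-frequency phase difference that such a detector analyzes is a straight line through the origin whose slope encodes the delay. A minimal sketch with an idealized circular delay (sampling rate and delay are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 16000, 1024
delay = 5                              # samples (idealized pure delay)
x1 = rng.standard_normal(n)
x2 = np.roll(x1, delay)                # mic 2 hears a circularly delayed copy

# Phase difference between the two spectra at each frequency.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
phase = np.unwrap(np.angle(X1 * np.conj(X2)))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# The phase-vs-frequency points fall on the line 2*pi*f*delay/fs — the
# straight "figure" a detector would find; its slope recovers the delay.
slope = np.polyfit(freqs, phase, 1)[0]
est_delay = slope * fs / (2 * np.pi)
```

Plotted as coordinate pairs (frequency, phase difference), real sources trace such lines, and a figure detector can count and separate them.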

01 Jan 2005
TL;DR: In this article, the performance of a product is checked by measuring the radiated sound (noise) from the vibrating structure; this test often has to be done in an environment with background noise, which makes the measurement difficult.
Abstract: The performance (or quality) of a product is often checked by measuring the radiated sound (noise) from the vibrating structure. Often this test has to be done in an environment with background noise, which makes the measurement difficult. When using a (pressure) microphone the background noise can be such that it dominates the radiated sound from the vibrating structure. However, when using a particle velocity sensor, the Microflown [1,2], near the vibrating structure, the background noise has almost no influence (it is almost cancelled) and the sound from the structure is measured with a good S/N ratio. The experimental results are explained in terms of the different boundary conditions at the surface of the vibrating structure for the pressure and the particle velocity.

Journal ArticleDOI
TL;DR: In this paper, a three-dimensional finite difference time domain (FDTD) analysis was carried out in order to obtain the sound field focused by a biconcave acoustic lens specialized to measure the normal incidence of the spherical wave.
Abstract: Recently, the finite difference time domain (FDTD) method has been frequently used for the analysis of underwater sound propagation. There are demonstrated advantages of this FDTD method in terms of obtaining data regarding snapshots of sound pressure distribution and a series of waveforms at any point. In addition, the method facilitates the modeling of factors, such as the sound source and media into the analysis domain. In this study, a three-dimensional FDTD analysis was carried out in order to obtain the sound field focused by a biconcave acoustic lens specialized to measure the normal incidence of the spherical wave. Additionally, the results of the analysis were compared with experimental results obtained in a water tank. When the frequency of the sound source was 500 kHz, the range between the acoustic lens and the sound source was 1.78 m, and the attenuation constant was 0.5–1.0 dB/λ, the experimental results regarding the position of the focal point, the on-axis characteristics and the beam pattern were all found to agree well with the simulation results obtained by the FDTD method.