Abstract: In a sonar system using a large multielement sonar detector array, the raw phase and intensity data are reduced to less than three bits per channel per slice for each detector in the multielement array before the raw data are transmitted to a beamformer for transforming the data into information about the spatial positions of objects reflecting the sonar signals.
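The abstract above does not specify the encoding, but the kind of per-channel reduction it describes can be illustrated by quantizing a raw phase value into a 3-bit code (8 phase sectors) before transmission. The function names and the sector-center reconstruction rule below are hypothetical, chosen only to show that 3 bits bound the phase error at one half sector (pi/8 radians):

```python
import math

def quantize_phase(phase_rad, bits=3):
    """Reduce a raw phase in radians to a small integer code.

    Hypothetical sketch of per-channel data reduction: 3 bits give
    2**3 = 8 equal phase sectors covering [0, 2*pi).
    """
    levels = 2 ** bits
    phase_rad = phase_rad % (2 * math.pi)   # wrap into [0, 2*pi)
    return int(phase_rad / (2 * math.pi) * levels) % levels

def dequantize_phase(code, bits=3):
    """Reconstruct the sector-center phase from a quantized code."""
    levels = 2 ** bits
    return (code + 0.5) * 2 * math.pi / levels
```

Under this scheme the beamformer receives only the code, and the worst-case phase error after reconstruction is half a sector width, i.e. pi/8 radians for 3 bits.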
Abstract: An object is measured to record its relative surface coordinates. Then, a portion of the object (the “front side”) immersed in a fluid is imaged by directing a sonar pulse at the object and recording the sonar signals reflected from the object with a sonar imaging array. The recorded relative surface coordinates are then iteratively fit to coordinates calculated from the sonar image. Thereafter, the surface coordinates of the “back side” of the object, which is not observable in the sonar image, are known, and a computer-generated image of the back side is stitched to the sonar image so that the object can be viewed from a plurality of viewpoints separated from the sonar imaging array.
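The iterative fit of recorded surface coordinates to sonar-derived coordinates can be sketched, under strong simplifying assumptions, as nearest-neighbour point-set registration. The sketch below is translation-only in 2-D (the patented fit is not limited this way); `fit_translation` and its arguments are names invented for this illustration:

```python
def fit_translation(model_pts, observed_pts, iterations=20):
    """Iteratively fit recorded model coordinates to observed sonar
    coordinates by estimating a translation (tx, ty).

    Hypothetical sketch: pair each observed point with its nearest
    translated model point, then shift by the mean residual, and repeat.
    Points are (x, y) tuples.
    """
    tx, ty = 0.0, 0.0
    for _ in range(iterations):
        dxs, dys = [], []
        for ox, oy in observed_pts:
            # Nearest translated model point for this observation.
            mx, my = min(model_pts,
                         key=lambda p: (p[0] + tx - ox) ** 2 + (p[1] + ty - oy) ** 2)
            dxs.append(ox - (mx + tx))
            dys.append(oy - (my + ty))
        # Move the model by the mean residual toward the observations.
        tx += sum(dxs) / len(dxs)
        ty += sum(dys) / len(dys)
    return tx, ty
```

Once the fit converges, the same transform maps every recorded coordinate, including the unobserved back side, into the sonar image frame, which is what makes the stitching step possible.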
Abstract: Sonar imaging data obtained by sending multiple sonar pings towards an object is reduced by assigning values measured by each sonar ping to bins, where each bin is fixed in World Space, and calculating the opacity of each bin to produce an image of the object. A normal vector associated with each bin is used to calculate the light reflected from a model constructed from the data. The most preferred normal vector is calculated from the vector sum of normals calculated from each ping.
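The normal-combining rule stated in the abstract, taking the vector sum of the per-ping normals, can be sketched directly. The function names below are hypothetical, and the per-ping normals are assumed to be unit vectors; the Lambertian shading helper is one common way to "calculate the light reflected" from a surface with a known normal, not necessarily the patented method:

```python
import math

def combined_normal(ping_normals):
    """Preferred normal for one World Space bin: the normalized vector
    sum of the (unit) normals estimated from each ping."""
    sx = sum(n[0] for n in ping_normals)
    sy = sum(n[1] for n in ping_normals)
    sz = sum(n[2] for n in ping_normals)
    mag = math.sqrt(sx * sx + sy * sy + sz * sz)
    if mag == 0.0:
        return (0.0, 0.0, 0.0)   # normals cancelled; no stable estimate
    return (sx / mag, sy / mag, sz / mag)

def lambert_shade(normal, light_dir):
    """Diffuse brightness of a bin surface: dot product of its combined
    normal with a unit vector toward the light, clamped at zero."""
    dot = sum(a * b for a, b in zip(normal, light_dir))
    return max(0.0, dot)
```

Averaging by vector sum has the useful property that noisy per-ping normals tend to cancel while the consistent component survives, giving a smoother shaded model than any single ping would.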
Abstract: Sonar three dimensional data are represented by a two dimensional image. Pixels of the two dimensional image are emphasized if the three dimensional data associated with the pixel differ by more than a criterion from the three dimensional data associated with neighboring pixels.
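The criterion test described above can be sketched as a neighbour-difference mask over a range (depth) image. The function name, the choice of 4-neighbours, and the absolute-difference criterion are assumptions for illustration; the abstract only requires that the associated three dimensional data differ from a neighbour's by more than a criterion:

```python
def emphasize_edges(depth, threshold):
    """Return a boolean mask marking pixels whose range value differs
    from any 4-neighbour by more than `threshold`.

    `depth` is a list of rows of range values (one per image pixel).
    Marked pixels are the ones the display would emphasize.
    """
    rows, cols = len(depth), len(depth[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    if abs(depth[r][c] - depth[nr][nc]) > threshold:
                        mask[r][c] = True
    return mask
```

On a flat background with one raised pixel, only the raised pixel and its four neighbours exceed the criterion, so the emphasis traces the range discontinuity rather than the whole surface.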