Patents by Inventor Joshua D. Atkins
Joshua D. Atkins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220394406
Abstract: A method performed by a programmed processor of an audio system, the method includes: receiving a sound track that has a track length; producing a binaural audio version of the sound track, the binaural audio version having an extended track length; performing a fading operation upon the binaural audio version to gradually reduce a signal level of the binaural audio version to below a signal threshold level at a time along the extended track length that corresponds to an end time of the track length of the sound track; and storing the binaural audio version having the track length of the sound track in memory for later transmission to an audio playback device for driving one or more speakers.
Type: Application
Filed: May 5, 2022
Publication date: December 8, 2022
Inventors: Juha O. Merimaa, Abdullah Fahim, Andrey D. Del Pozo, Joshua D. Atkins
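The claimed fading operation can be sketched in a few lines: ramp the level down so it reaches the threshold at the original end time, then truncate the extended tail. This is a minimal illustration only; the ramp shape, fade length, and the function name `fade_out` are assumptions, not taken from the patent.

```python
import numpy as np

def fade_out(signal, sample_rate, end_time_s, fade_len_s=0.5):
    """Linearly fade the signal to zero at end_time_s, then truncate
    the extended tail so only the original track length remains.
    Hypothetical sketch of the claimed fading operation."""
    end = int(end_time_s * sample_rate)
    start = max(0, end - int(fade_len_s * sample_rate))
    out = signal.astype(float).copy()
    ramp = np.linspace(1.0, 0.0, end - start)  # gain falls from 1 to 0
    out[start:end] *= ramp
    return out[:end]  # keep only the original track length

sr = 16000
x = np.ones(sr * 3)                    # 3 s "extended" binaural render
y = fade_out(x, sr, end_time_s=2.0)    # original track was 2 s long
```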
-
Patent number: 11514928
Abstract: A device implementing a system for processing speech in an audio signal includes at least one processor configured to receive an audio signal corresponding to at least one microphone of a device, and to determine, using a first model, a first probability that a speech source is present in the audio signal. The at least one processor is further configured to determine, using a second model, a second probability that an estimated location of a source of the audio signal corresponds to an expected position of a user of the device, and to determine a likelihood that the audio signal corresponds to the user of the device based on the first and second probabilities.
Type: Grant
Filed: December 9, 2019
Date of Patent: November 29, 2022
Assignee: Apple Inc.
Inventors: Mehrez Souden, Ante Jukic, Jason Wung, Ashrith Deshpande, Joshua D. Atkins
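The abstract only says the final likelihood is "based on" the two model probabilities; one plausible combination rule is a weighted geometric mean. The function name and the choice of fusion rule below are assumptions for illustration.

```python
def user_speech_likelihood(p_speech, p_location, w=0.5):
    """Fuse a speech-presence probability and a location-match
    probability into one likelihood. A weighted geometric mean is one
    plausible rule; the patent does not specify the combination."""
    return (p_speech ** w) * (p_location ** (1.0 - w))

# high speech probability, high location match -> high likelihood
score = user_speech_likelihood(0.9, 0.8)
```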
-
Patent number: 11508388
Abstract: A device for processing audio signals in a time-domain includes a processor configured to receive multiple audio signals corresponding to respective microphones of at least two or more microphones of the device, at least one of the multiple audio signals comprising speech of a user of the device. The processor is configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on an expected position of the user of the device and expected positions of the respective microphones on the device. The processor is configured to provide an audio signal that is enhanced with respect to the speech of the user relative to the multiple audio signals, wherein the audio signal is a waveform output from the machine learning model.
Type: Grant
Filed: November 20, 2020
Date of Patent: November 22, 2022
Assignee: Apple Inc.
Inventors: Mehrez Souden, Symeon Delikaris Manias, Joshua D. Atkins, Ante Jukic, Ramin Pishehvar
-
Publication number: 20220366927
Abstract: Disclosed is a multi-task machine learning model such as a time-domain deep neural network (DNN) that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal to jointly estimate the target signal and the target audio parameters, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals such as video signals of the target speaker to improve the robustness of the enhanced target speech signal.
Type: Application
Filed: May 15, 2021
Publication date: November 17, 2022
Inventors: Ramin Pishehvar, Ante Jukic, Mehrez Souden, Jason Wung, Feipeng Li, Joshua D. Atkins
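The masking step at the heart of this abstract can be sketched with arrays: one mask gates the encoded mixture toward the speech decoder, while a second head is pooled into a per-frame VAD flag. All shapes, thresholds, and names here are illustrative assumptions; the real model's encoder and decoder are omitted.

```python
import numpy as np

def apply_masks(encoded, speech_mask, vad_mask):
    """Multi-task masking sketch: the speech mask selects target-speech
    features from the encoded mixture; the VAD mask is pooled across
    latent channels into a per-frame voice-activity decision."""
    speech_feats = encoded * speech_mask        # masked features -> decoder
    vad = vad_mask.mean(axis=0) > 0.5           # per-frame VAD flag
    return speech_feats, vad

enc = np.ones((8, 10))                          # 8 latent channels x 10 frames
m_s = np.full((8, 10), 0.25)                    # soft speech mask
m_v = np.concatenate([np.ones((8, 6)), np.zeros((8, 4))], axis=1)
feats, vad = apply_masks(enc, m_s, m_v)         # speech active in frames 0-5
```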
-
Publication number: 20220369030
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
Type: Application
Filed: May 17, 2021
Publication date: November 17, 2022
Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
-
Patent number: 11503409
Abstract: Digital audio signal processing techniques used to provide an acoustic transparency function in a pair of headphones. A number of transparency filters can be computed at once, using optimization techniques or using a closed form solution, that are based on multiple re-seatings of the headphones and that are as a result robust for a population of wearers. In another embodiment, a transparency hearing filter of a headphone is computed by an adaptive system that takes into consideration the changing acoustic to electrical path between an earpiece speaker and an interior microphone of that headphone while worn by a user. Other embodiments are also described and claimed.
Type: Grant
Filed: March 12, 2021
Date of Patent: November 15, 2022
Assignee: Apple Inc.
Inventors: Ismael H. Nawfal, Joshua D. Atkins, Stephen J. Nimick, Guy C. Nicholson, Jason M. Harlow
-
Patent number: 11341988
Abstract: A hybrid machine learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech by frames. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate the soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
Type: Grant
Filed: September 23, 2019
Date of Patent: May 24, 2022
Assignee: Apple Inc.
Inventors: Ramin Pishehvar, Feipeng Li, Ante Jukic, Mehrez Souden, Joshua D. Atkins
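The soft-to-hard decision chain described above (smooth the per-frame DNN probabilities, then threshold with hysteresis to suppress rapid speech/non-speech transitions) can be sketched as follows. The exponential smoothing, the hysteresis thresholds, and all parameter values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def smooth_and_decide(frame_probs, alpha=0.5, on_thresh=0.7, off_thresh=0.3):
    """Exponentially smooth per-frame DNN speech probabilities, then
    apply a hysteresis threshold so the speech/non-speech state does
    not flicker on brief dips or spikes."""
    smoothed, decisions = [], []
    s, speaking = 0.0, False
    for p in frame_probs:
        s = alpha * s + (1 - alpha) * p      # soft decision
        if speaking and s < off_thresh:      # hard decision w/ hysteresis
            speaking = False
        elif not speaking and s > on_thresh:
            speaking = True
        smoothed.append(s)
        decisions.append(speaking)
    return np.array(smoothed), decisions

probs = [0.1, 0.2, 0.9, 0.95, 0.9, 0.2, 0.1, 0.1]
s, d = smooth_and_decide(probs)   # brief trailing dip is bridged, then ends
```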
-
Publication number: 20220059123
Abstract: Processing of ambience and speech can include extracting ambience and speech signals from audio signals. One or more spatial parameters can be generated that define spatial characteristics of ambience sound in the one or more ambience audio signals. The primary speech signal, the one or more ambience audio signals, and the spatial parameters can be encoded into one or more encoded data streams. Other aspects are described and claimed.
Type: Application
Filed: October 29, 2021
Publication date: February 24, 2022
Inventors: Jonathan D. Sheaffer, Joshua D. Atkins, Mehrez Souden, Symeon Delikaris Manias, Sean A. Ramprashad
-
Patent number: 11222652
Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
Type: Grant
Filed: July 19, 2019
Date of Patent: January 11, 2022
Assignee: Apple Inc.
Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
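The intuition behind using direct and reverberant components for distance can be shown with the classical direct-to-reverberant ratio (DRR) heuristic: the direct path decays with the inverse square of distance while late reverberation is roughly constant, so each -6 dB of DRR roughly doubles the distance. The patent learns this mapping with a DNN; the closed-form rule and reference values below are illustrative assumptions only.

```python
import numpy as np

def distance_from_drr(direct, reverberant, ref_distance_m=1.0, ref_drr_db=10.0):
    """Map a direct-to-reverberant energy ratio to a source distance
    using inverse-square falloff of the direct path. A classical
    heuristic stand-in for the learned DNN mapping in the patent."""
    drr_db = 10 * np.log10(np.mean(direct ** 2) / np.mean(reverberant ** 2))
    # -6 dB of DRR ~ doubling of distance (20*log10 scaling)
    return ref_distance_m * 10 ** ((ref_drr_db - drr_db) / 20)

# equal direct and reverberant energy -> DRR of 0 dB
d = distance_from_drr(np.full(100, 0.5), np.full(100, 0.5))
```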
-
Publication number: 20210329381
Abstract: An audio device can sense sound in a physical environment using a plurality of microphones to generate a plurality of microphone signals. Clean speech can be extracted from microphone signals. Ambience can be extracted from the microphone signals. The clean speech can be encoded at a first compression level. The ambience can be encoded at a second compression level that is higher than the first compression level. Other aspects are also described and claimed.
Type: Application
Filed: June 28, 2021
Publication date: October 21, 2021
Inventors: Tomlinson Holman, Christopher T. Eubank, Joshua D. Atkins, Soenke Pelzer, Dirk Schroeder
-
Publication number: 20210329405
Abstract: Processing sound in an enhanced reality environment can include generating, based on an image of a physical environment, an acoustic model of the physical environment. Audio signals captured by a microphone array can capture a sound in the physical environment. Based on these audio signals, one or more measured acoustic parameters of the physical environment can be generated. A target audio signal can be processed using the model of the physical environment and the measured acoustic parameters, resulting in a plurality of output audio channels having a virtual sound source with a virtual location. The output audio channels can be used to drive a plurality of speakers. Other aspects are also described and claimed.
Type: Application
Filed: June 28, 2021
Publication date: October 21, 2021
Inventors: Christopher T. Eubank, Joshua D. Atkins, Soenke Pelzer, Dirk Schroeder
-
Patent number: 11012774
Abstract: A method for producing a target directivity function that includes a set of spatially biased HRTFs. A set of left ear and right ear head related transfer functions (HRTFs) are selected. The left ear and right ear HRTFs are multiplied with an on-camera emphasis function (OCE), to produce the spatially biased HRTFs. The OCE may be designed to shape the sound profile of the HRTFs to provide emphasis in a desired location or direction that is a function of the specific orientation of the device as it is being used to make a video recording. Other aspects are also described and claimed.
Type: Grant
Filed: September 18, 2019
Date of Patent: May 18, 2021
Assignee: Apple Inc.
Inventors: Jonathan D. Sheaffer, Joshua D. Atkins, Peter A. Raffensperger, Symeon Delikaris Manias
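The multiplication of HRTFs by an on-camera emphasis window can be sketched directly: weight each direction's HRTF magnitude by a window centred on the camera axis. The Gaussian shape, its width, and the function name are assumptions; the patent leaves the OCE design open.

```python
import numpy as np

def spatially_biased_hrtfs(hrtf_l, hrtf_r, azimuths_deg,
                           camera_az_deg=0.0, width_deg=60.0):
    """Multiply per-direction left/right HRTF magnitudes by a Gaussian
    on-camera emphasis (OCE) window centred on the camera direction.
    The Gaussian shape and width are illustrative assumptions."""
    az = np.asarray(azimuths_deg, float)
    oce = np.exp(-0.5 * ((az - camera_az_deg) / width_deg) ** 2)
    # broadcast the per-direction gain across frequency bins
    return hrtf_l * oce[:, None], hrtf_r * oce[:, None]

# 5 directions x 4 frequency bins of unit-magnitude HRTFs
H = np.ones((5, 4))
bl, br = spatially_biased_hrtfs(H, H, azimuths_deg=[-90, -45, 0, 45, 90])
```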
-
Patent number: 10978086
Abstract: An echo canceller is disclosed in which audio signals of the playback content received by one or more of the microphones from a loudspeaker of the device may be used as the playback reference signals to estimate the echo signals of the playback content received by a target microphone for echo cancellation. The echo canceller may estimate the transfer function between a reference microphone and the target microphone based on the playback reference signal of the reference microphone and the signal of the target microphone. To mitigate near-end speech cancellation at the target microphone, the echo canceller may compute a mask to distinguish between target microphone audio signals that are echo-signal dominant and near-end speech dominant. The echo canceller may use the mask to adaptively update the transfer function or to modify the playback reference signal used by the transfer function to estimate the echo signals of the playback content.
Type: Grant
Filed: July 19, 2019
Date of Patent: April 13, 2021
Assignee: Apple Inc.
Inventors: Jason Wung, Sarmad Aziz Malik, Ashrith Deshpande, Ante Jukic, Joshua D. Atkins
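The reference-mic-to-target-mic transfer-function estimate described above can be illustrated with a textbook NLMS adaptive filter: predict the echo at the target microphone from the reference microphone and subtract it. This is a generic sketch of the idea only; the patent's masking logic and all names/parameters below are omitted or assumed.

```python
import numpy as np

def nlms_echo_cancel(ref, target, taps=16, mu=0.5, eps=1e-8):
    """Estimate the reference-mic -> target-mic transfer function with
    NLMS and subtract the predicted echo, returning the residual
    (near-end) signal. A textbook adaptive-filter sketch."""
    w = np.zeros(taps)
    out = np.zeros_like(target, dtype=float)
    for n in range(len(target)):
        x = ref[max(0, n - taps + 1): n + 1][::-1]   # newest sample first
        x = np.pad(x, (0, taps - len(x)))
        y = w @ x                                    # predicted echo
        e = target[n] - y                            # residual
        w += mu * e * x / (x @ x + eps)              # NLMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
target = 0.7 * ref            # echo-only target: zero-delay, 0.7 gain path
residual = nlms_echo_cancel(ref, target)   # converges toward zero residual
```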
-
Patent number: 10951990
Abstract: Digital audio signal processing techniques used to provide an acoustic transparency function in a pair of headphones. A number of transparency filters can be computed at once, using optimization techniques or using a closed form solution, that are based on multiple re-seatings of the headphones and that are as a result robust for a population of wearers. In another embodiment, a transparency hearing filter of a headphone is computed by an adaptive system that takes into consideration the changing acoustic to electrical path between an earpiece speaker and an interior microphone of that headphone while worn by a user. Other embodiments are also described and claimed.
Type: Grant
Filed: July 6, 2018
Date of Patent: March 16, 2021
Assignee: Apple Inc.
Inventors: Ismael H. Nawfal, Joshua D. Atkins, Stephen J. Nimick, Guy C. Nicholson, Jason M. Harlow
-
Publication number: 20210020189
Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
Type: Application
Filed: July 19, 2019
Publication date: January 21, 2021
Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
-
Publication number: 20210020188
Abstract: An echo canceller is disclosed in which audio signals of the playback content received by one or more of the microphones from a loudspeaker of the device may be used as the playback reference signals to estimate the echo signals of the playback content received by a target microphone for echo cancellation. The echo canceller may estimate the transfer function between a reference microphone and the target microphone based on the playback reference signal of the reference microphone and the signal of the target microphone. To mitigate near-end speech cancellation at the target microphone, the echo canceller may compute a mask to distinguish between target microphone audio signals that are echo-signal dominant and near-end speech dominant. The echo canceller may use the mask to adaptively update the transfer function or to modify the playback reference signal used by the transfer function to estimate the echo signals of the playback content.
Type: Application
Filed: July 19, 2019
Publication date: January 21, 2021
Inventors: Jason Wung, Sarmad Aziz Malik, Ashrith Deshpande, Ante Jukic, Joshua D. Atkins
-
Publication number: 20200409995
Abstract: A device with microphones can generate microphone signals during an audio recording. The device can store, in an electronic audio data file, the microphone signals, and metadata that includes impulse responses of the microphones. Other aspects are described and claimed.
Type: Application
Filed: June 11, 2020
Publication date: December 31, 2020
Inventors: Jonathan D. Sheaffer, Symeon Delikaris Manias, Gaetan R. Lorho, Peter A. Raffensperger, Eric A. Allamanche, Frank Baumgarte, Dipanjan Sen, Joshua D. Atkins, Juha O. Merimaa
-
Patent number: 10848889
Abstract: Image analysis of a video signal is performed to produce first metadata, and audio analysis of a multi-channel sound track associated with the video signal is performed to produce second metadata. A number of time segments of the sound track are processed, wherein each time segment is processed by either (i) spatial filtering of the audio signals or (ii) spatial rendering of the audio signals, not both, wherein for each time segment a decision is made to select between the spatial filtering or the spatial rendering, in accordance with the first and second metadata. A mix of the processed sound track and the video signal is generated. Other embodiments are also described and claimed.
Type: Grant
Filed: January 4, 2019
Date of Patent: November 24, 2020
Assignee: Apple Inc.
Inventors: Jonathan D. Sheaffer, Joshua D. Atkins, Martin E. Johnson, Stuart J. Wood
-
Patent number: 10798511
Abstract: Processing input audio channels for generating spatial audio can include receiving a plurality of microphone signals that capture a sound field. Each microphone signal can be transformed into a frequency domain signal. From each frequency domain signal, a direct component and a diffuse component can be extracted. The direct component can be processed with a parametric renderer. The diffuse component can be processed with a linear renderer. The components can be combined, resulting in a spatial audio output. The levels of the components can be adjusted to match a direct to diffuse ratio (DDR) of the output with the DDR of the captured sound field. Other aspects are also described and claimed.
Type: Grant
Filed: April 8, 2019
Date of Patent: October 6, 2020
Assignee: Apple Inc.
Inventors: Jonathan D. Sheaffer, Juha O. Merimaa, Jason Wung, Martin E. Johnson, Peter A. Raffensperger, Joshua D. Atkins, Symeon Delikaris Manias, Mehrez Souden
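The level-matching step in this abstract can be sketched as solving for a single diffuse-stream gain so the output's direct-to-diffuse energy ratio equals the captured field's DDR. The renderers themselves are omitted, and all names and the dB convention below are illustrative assumptions.

```python
import numpy as np

def match_ddr(direct, diffuse, target_ddr_db):
    """Scale the diffuse stream so the rendered direct-to-diffuse
    energy ratio (DDR) matches the DDR measured in the captured
    sound field. Returns the two streams, ready to be summed."""
    e_dir = np.mean(direct ** 2)
    e_dif = np.mean(diffuse ** 2)
    target = 10 ** (target_ddr_db / 10)        # dB -> linear power ratio
    gain = np.sqrt(e_dir / (target * e_dif))   # diffuse-stream gain
    return direct, gain * diffuse

rng = np.random.default_rng(1)
d_out, f_out = match_ddr(rng.standard_normal(1000),
                         rng.standard_normal(1000), target_ddr_db=6.0)
```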
-
Publication number: 20200312315
Abstract: An acoustic environment aware method for selecting a high quality audio stream during multi-stream speech recognition. A number of input audio streams are processed to determine if a voice trigger is detected, and if so, a voice trigger score is calculated for each stream. An acoustic environment measurement is also calculated for each audio stream. The trigger score and acoustic environment measurement are combined for each audio stream, to select as a preferred audio stream the audio stream with the highest combined score. The preferred audio stream is output to an automatic speech recognizer. Other aspects are also described and claimed.
Type: Application
Filed: March 28, 2019
Publication date: October 1, 2020
Inventors: Feipeng Li, Mehrez Souden, Joshua D. Atkins, John Bridle, Charles P. Clark, Stephen H. Shum, Sachin S. Kajarekar, Haiying Xia, Erik Marchi
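The stream-selection step reduces to an argmax over per-stream combined scores. The abstract only says the trigger score and environment measurement are "combined"; the linear weighting below is one plausible assumption, and the function name is hypothetical.

```python
def select_stream(trigger_scores, env_scores, w=0.5):
    """Pick the index of the stream with the best combined
    voice-trigger and acoustic-environment score. The linear
    weighting is an illustrative assumption."""
    combined = [w * t + (1 - w) * e for t, e in zip(trigger_scores, env_scores)]
    return max(range(len(combined)), key=combined.__getitem__)

# stream 1 has a weaker trigger score but a much better environment
best = select_stream([0.9, 0.6, 0.8], [0.4, 0.95, 0.7])
```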