Patents by Inventor Ante Jukic
Ante Jukic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250029618
Abstract: Disclosed are apparatuses, systems, and techniques that may use machine learning for implementing speaker recognition, verification, and/or diarization. The techniques include receiving a first set of audio data channels (ADCs) jointly capturing speech produced by one or more speakers and obtaining, using the first set of ADCs, a second set of one or more ADCs. Individual ADCs of the second set represent one or more channels of the first set, and at least one channel of the second set represents a cluster of two or more ADCs of the first set, the two or more ADCs being selected based on the similarity of their audio data. The techniques further include processing, using an audio processing neural network model, the second set of ADCs to obtain an association of the speech with the one or more speakers.
Type: Application
Filed: January 11, 2024
Publication date: January 23, 2025
Inventors: Taejin Park, Ante Jukic, He Huang, Venkata Naga Krishna Chaitanya Puvvada, Kunal Dhawan, Nithin Rao Koluguri, Nikolay Karpov, Aleksandr Laptev, Jagadeesh Balam
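As a loose illustration of the channel-clustering step this abstract describes, similar channels can be grouped and each cluster reduced to one representative channel. The similarity measure (normalized correlation at zero lag), the threshold, and the choice of the cluster mean as representative are all assumptions for the sketch; the abstract only requires clustering by similarity of audio data.

```python
import numpy as np

def cluster_channels(channels, threshold=0.9):
    """Greedy single-pass grouping of audio channels by normalized
    correlation at zero lag (an assumed similarity measure). Each
    cluster is represented by the mean of its member channels."""
    clusters = []
    for ch in channels:
        ch = np.asarray(ch, dtype=float)
        placed = False
        for members in clusters:
            rep = np.mean(members, axis=0)
            num = float(ch @ rep)
            den = np.linalg.norm(ch) * np.linalg.norm(rep) + 1e-12
            if num / den >= threshold:  # similar enough: join cluster
                members.append(ch)
                placed = True
                break
        if not placed:
            clusters.append([ch])
    # one representative channel per cluster
    return [np.mean(members, axis=0) for members in clusters]
```

With two scaled copies of the same sinusoid and one orthogonal signal, the sketch yields two representative channels: the correlated pair collapses into one.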
-
Publication number: 20250029632
Abstract: Disclosed are apparatuses, systems, and techniques that may use machine learning for implementing speaker recognition, verification, and/or diarization. The techniques include processing audio data channels (ADCs) using a voice detection model to determine voice activity likelihoods (VALs) that individual ADCs include speech, obtaining, using the VALs, a second set of ADC(s), and processing, using an audio processing neural network (NN) model, the second set of ADCs to obtain an association of the speech with the one or more speakers. The techniques also include generating a plurality of embeddings associated with the ADCs, processing the plurality of embeddings to obtain aggregated embedding(s) that represent audio data of multiple ADCs, and processing the aggregated embedding(s), using the audio processing NN model, to obtain an association of the speech with the one or more speakers.
Type: Application
Filed: January 11, 2024
Publication date: January 23, 2025
Inventors: Taejin Park, Ante Jukic, He Huang, Venkata Naga Krishna Chaitanya Puvvada, Kunal Dhawan, Nithin Rao Koluguri, Nikolay Karpov, Aleksandr Laptev, Jagadeesh Balam
-
Publication number: 20250016517
Abstract: Approaches presented herein provide for identification of sound from a sound source relative to an array of microphones of a potentially unknown configuration using, in part, differences in the audio signals received by the microphones. In at least one embodiment, audio signals are captured using an array of microphones and audio features are extracted from those signals. The audio features can be processed using a first neural network to generate a feature vector representing a spatial location of an audio source with respect to the plurality of microphones, where the spatial location is inferred based on audio differences and independent of an availability of information indicating a physical configuration of the plurality of microphones. The feature vector can be provided to a task-specific model to perform at least one audio-related task based in part on the spatial location.
Type: Application
Filed: July 5, 2023
Publication date: January 9, 2025
Inventors: Ante Jukic, Jagadeesh Balam, Boris Ginsburg
-
Patent number: 12141347
Abstract: An audio processing device may generate a plurality of microphone signals from a plurality of microphones of the audio processing device. The audio processing device may determine a gaze of a user who is wearing a playback device that is separate from the audio processing device, the gaze of the user being determined relative to the audio processing device. The audio processing device may extract speech that correlates to the gaze of the user, from the plurality of microphone signals of the audio processing device, by applying the plurality of microphone signals of the audio processing device and the gaze of the user to a machine learning model. The extracted speech may be played to the user through the playback device.
Type: Grant
Filed: November 15, 2022
Date of Patent: November 12, 2024
Assignee: Apple Inc.
Inventors: Mehrez Souden, Symeon Delikaris Manias, Ante Jukic, John Woodruff, Joshua D. Atkins
-
Publication number: 20240214734
Abstract: Aspects of the subject technology relate to determining a location of a device using sound that is output from the device. For example, an audio output from one or more speakers of an electronic device may be received at one or more microphones of another electronic device, and used by the other electronic device to determine the location of the electronic device.
Type: Application
Filed: January 9, 2023
Publication date: June 27, 2024
Inventors: Ping Wen ONG, Ante JUKIC, Troy D. SCHULTZ
-
Patent number: 12010490
Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
Type: Grant
Filed: January 3, 2023
Date of Patent: June 11, 2024
Assignee: Apple Inc.
Inventors: Symeon Delikaris Manias, Mehrez Souden, Ante Jukic, Matthew S. Connolly, Sabine Webel, Ronald J. Guglielmone, Jr.
-
Patent number: 11996114
Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
Type: Grant
Filed: May 15, 2021
Date of Patent: May 28, 2024
Assignee: Apple Inc.
Inventors: Ramin Pishehvar, Ante Jukic, Mehrez Souden, Jason Wung, Feipeng Li, Joshua D. Atkins
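The encode/mask/decode pattern this abstract describes can be sketched in a simplified form. The sketch below substitutes an STFT for the learned encoder, a caller-supplied function for the DNN mask estimator, and overlap-add for the decoder, so it illustrates only the masking pipeline, not the patented time-domain model.

```python
import numpy as np

def apply_mask_stft(mixture, mask_fn, frame=256, hop=128):
    """Illustrative encode -> mask -> decode pipeline. An STFT stands
    in for the learned encoder, `mask_fn` for the DNN that predicts a
    per-bin mask, and windowed overlap-add for the decoder. Assumes
    len(mixture) is frame-aligned (a multiple of `hop`)."""
    window = np.hanning(frame)
    n_frames = (len(mixture) - frame) // hop + 1
    out = np.zeros(len(mixture))
    norm = np.zeros(len(mixture))
    for t in range(n_frames):
        seg = mixture[t * hop : t * hop + frame] * window
        spec = np.fft.rfft(seg)              # "encode"
        masked = mask_fn(spec) * spec        # separate the target
        rec = np.fft.irfft(masked) * window  # "decode"
        out[t * hop : t * hop + frame] += rec
        norm[t * hop : t * hop + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)      # normalize overlap-add
```

With an all-ones mask the pipeline reconstructs the input away from the edges, which is a quick sanity check that the encode and decode stages are inverses; a real mask would attenuate interference-dominated bins.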
-
Publication number: 20230410828
Abstract: Disclosed is a reference-less echo mitigation or cancellation technique. The technique enables suppression of echoes from an interference signal when a reference version of the interference signal conventionally used for echo mitigation may not be available. A first stage of the technique may use a machine learning model to model a target audio area surrounding a device so that a target audio signal estimated as originating from within the target audio area may be accepted. In contrast, audio signals such as playback of media content on a TV or other interfering signals estimated as originating from outside the target audio area may be suppressed. A second stage of the technique may be a level-based suppressor that further attenuates the residual echo from the output of the first stage based on an audio level threshold. Side information may be provided to adjust the target audio area or the audio level threshold.
Type: Application
Filed: June 21, 2022
Publication date: December 21, 2023
Inventors: Ramin Pishehvar, Mehrez Souden, Sean A. Ramprashad, Jason Wung, Ante Jukic, Joshua D. Atkins
-
Patent number: 11849291
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of a device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
Type: Grant
Filed: May 17, 2021
Date of Patent: December 19, 2023
Assignee: Apple Inc.
Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
-
Patent number: 11546692
Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
Type: Grant
Filed: July 8, 2021
Date of Patent: January 3, 2023
Assignee: APPLE INC.
Inventors: Symeon Delikaris Manias, Mehrez Souden, Ante Jukic, Matthew S. Connolly, Sabine Webel, Ronald J. Guglielmone, Jr.
-
Patent number: 11514928
Abstract: A device implementing a system for processing speech in an audio signal includes at least one processor configured to receive an audio signal corresponding to at least one microphone of a device, and to determine, using a first model, a first probability that a speech source is present in the audio signal. The at least one processor is further configured to determine, using a second model, a second probability that an estimated location of a source of the audio signal corresponds to an expected position of a user of the device, and to determine a likelihood that the audio signal corresponds to the user of the device based on the first and second probabilities.
Type: Grant
Filed: December 9, 2019
Date of Patent: November 29, 2022
Assignee: Apple Inc.
Inventors: Mehrez Souden, Ante Jukic, Jason Wung, Ashrith Deshpande, Joshua D. Atkins
-
Patent number: 11508388
Abstract: A device for processing audio signals in a time-domain includes a processor configured to receive multiple audio signals corresponding to respective microphones of at least two or more microphones of the device, at least one of the multiple audio signals comprising speech of a user of the device. The processor is configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on an expected position of the user of the device and expected positions of the respective microphones on the device. The processor is configured to provide an audio signal that is enhanced with respect to the speech of the user relative to the multiple audio signals, wherein the audio signal is a waveform output from the machine learning model.
Type: Grant
Filed: November 20, 2020
Date of Patent: November 22, 2022
Assignee: Apple Inc.
Inventors: Mehrez Souden, Symeon Delikaris Manias, Joshua D. Atkins, Ante Jukic, Ramin Pishehvar
-
Publication number: 20220369030
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of a device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
Type: Application
Filed: May 17, 2021
Publication date: November 17, 2022
Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
-
Publication number: 20220366927
Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
Type: Application
Filed: May 15, 2021
Publication date: November 17, 2022
Inventors: Ramin Pishehvar, Ante Jukic, Mehrez Souden, Jason Wung, Feipeng Li, Joshua D. Atkins
-
Patent number: 11341988
Abstract: A hybrid machine learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech frame by frame. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate a soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
Type: Grant
Filed: September 23, 2019
Date of Patent: May 24, 2022
Assignee: APPLE INC.
Inventors: Ramin Pishehvar, Feiping Li, Ante Jukic, Mehrez Souden, Joshua D. Atkins
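The post-processing stage described here (smoothing frame probabilities, thresholding to a hard decision, and bridging brief pauses) can be sketched as follows. The exponential smoothing, the 0.5 threshold, and the hangover length are illustrative assumptions, not values from the patent, and the DNN is replaced by an array of pre-computed frame probabilities.

```python
import numpy as np

def smooth_probabilities(probs, alpha=0.9):
    """Exponentially smooth per-frame speech probabilities to reduce
    rapid transitions between speech and non-speech states."""
    smoothed = np.empty_like(probs, dtype=float)
    state = 0.0
    for i, p in enumerate(probs):
        state = alpha * state + (1.0 - alpha) * p
        smoothed[i] = state
    return smoothed

def detect_utterance(probs, threshold=0.5, hangover=3):
    """Hard speech/non-speech decision on smoothed probabilities,
    with a hangover that bridges brief pauses. Returns the (start,
    end) frame indices of the first utterance, or None."""
    active = smooth_probabilities(probs) >= threshold
    start = end = None
    silence = 0
    for i, is_speech in enumerate(active):
        if is_speech:
            if start is None:
                start = i
            end = i
            silence = 0
        elif start is not None:
            silence += 1
            if silence > hangover:  # pause too long: utterance ended
                break
    return (start, end) if start is not None else None
```

Note that the smoothing deliberately delays onset detection by a few frames; the trade-off between responsiveness and stability is controlled by `alpha` and `hangover`.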
-
Patent number: 11222652
Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
Type: Grant
Filed: July 19, 2019
Date of Patent: January 11, 2022
Assignee: APPLE INC.
Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
-
Patent number: 10978086
Abstract: An echo canceller is disclosed in which audio signals of the playback content received by one or more of the microphones from a loudspeaker of the device may be used as the playback reference signals to estimate the echo signals of the playback content received by a target microphone for echo cancellation. The echo canceller may estimate the transfer function between a reference microphone and the target microphone based on the playback reference signal of the reference microphone and the signal of the target microphone. To mitigate near-end speech cancellation at the target microphone, the echo canceller may compute a mask to distinguish between target microphone audio signals that are echo-signal dominant and near-end speech dominant. The echo canceller may use the mask to adaptively update the transfer function or to modify the playback reference signal used by the transfer function to estimate the echo signals of the playback content.
Type: Grant
Filed: July 19, 2019
Date of Patent: April 13, 2021
Assignee: Apple Inc.
Inventors: Jason Wung, Sarmad Aziz Malik, Ashrith Deshpande, Ante Jukic, Joshua D. Atkins
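The core operation in this abstract, estimating the reference-to-target transfer function and subtracting the predicted echo, is conventionally done with an adaptive filter. The NLMS sketch below is only that textbook baseline (filter length and step size are assumptions); the patent's contributions, using a microphone signal as the playback reference and gating adaptation with an echo/near-end mask, are not modeled here.

```python
import numpy as np

def nlms_echo_cancel(reference, target, taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: learn the transfer function
    from `reference` to `target`, predict the echo, and return the
    residual signal after subtracting the prediction."""
    w = np.zeros(taps)                          # transfer function estimate
    out = np.zeros(len(target))
    for n in range(taps - 1, len(target)):
        # most recent `taps` reference samples, newest first
        x = reference[n - taps + 1 : n + 1][::-1]
        echo_hat = w @ x                        # predicted echo sample
        e = target[n] - echo_hat                # residual after cancellation
        w += mu * e * x / (x @ x + eps)         # normalized LMS update
        out[n] = e
    return out
```

When the target is the reference passed through a short FIR filter, the residual energy drops by orders of magnitude once the filter converges, which is the behavior the transfer-function estimation step relies on.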
-
Publication number: 20210074316
Abstract: A device implementing a system for processing speech in an audio signal includes at least one processor configured to receive an audio signal corresponding to at least one microphone of a device, and to determine, using a first model, a first probability that a speech source is present in the audio signal. The at least one processor is further configured to determine, using a second model, a second probability that an estimated location of a source of the audio signal corresponds to an expected position of a user of the device, and to determine a likelihood that the audio signal corresponds to the user of the device based on the first and second probabilities.
Type: Application
Filed: December 9, 2019
Publication date: March 11, 2021
Inventors: Mehrez SOUDEN, Ante JUKIC, Jason WUNG, Ashrith DESHPANDE, Joshua D. ATKINS
-
Publication number: 20210020188
Abstract: An echo canceller is disclosed in which audio signals of the playback content received by one or more of the microphones from a loudspeaker of the device may be used as the playback reference signals to estimate the echo signals of the playback content received by a target microphone for echo cancellation. The echo canceller may estimate the transfer function between a reference microphone and the target microphone based on the playback reference signal of the reference microphone and the signal of the target microphone. To mitigate near-end speech cancellation at the target microphone, the echo canceller may compute a mask to distinguish between target microphone audio signals that are echo-signal dominant and near-end speech dominant. The echo canceller may use the mask to adaptively update the transfer function or to modify the playback reference signal used by the transfer function to estimate the echo signals of the playback content.
Type: Application
Filed: July 19, 2019
Publication date: January 21, 2021
Inventors: Jason Wung, Sarmad Aziz Malik, Ashrith Deshpande, Ante Jukic, Joshua D. Atkins
-
Publication number: 20210020189
Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
Type: Application
Filed: July 19, 2019
Publication date: January 21, 2021
Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins