Patents by Inventor Mehrez Souden

Mehrez Souden has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches for several of the listed inventions appear after the listing.

  • Publication number: 20240029754
    Abstract: Implementations of the subject technology provide systems and methods for providing audio source separation for audio input, such as for audio devices having limited power and/or computing resources. The subject technology may allow an audio device to leverage processing and/or power resources of a companion device that is communicatively coupled to the audio device. The companion device may identify a noise condition of the audio device, select a source separation model based on the noise condition, and provide the source separation model to the audio device. In this way, the audio device can provide audio source separation functionality using a relatively small footprint source separation model that is specific to the noise condition in which the audio device is operated.
    Type: Application
    Filed: October 3, 2023
    Publication date: January 25, 2024
    Inventors: Carlos M. Avendano, John Woodruff, Jonathan Huang, Mehrez Souden, Andreas Koutrouvelis
  • Publication number: 20230410828
    Abstract: Disclosed is a reference-less echo mitigation or cancellation technique. The technique enables suppression of echoes from an interference signal when a reference version of the interference signal conventionally used for echo mitigation may not be available. A first stage of the technique may use a machine learning model to model a target audio area surrounding a device so that a target audio signal estimated as originating from within the target audio area may be accepted. In contrast, audio signals such as playback of media content on a TV or other interfering signals estimated as originating from outside the target audio area may be suppressed. A second stage of the technique may be a level-based suppressor that further attenuates the residual echo from the output of the first stage based on an audio level threshold. Side information may be provided to adjust the target audio area or the audio level threshold.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Inventors: Ramin Pishehvar, Mehrez Souden, Sean A. Ramprashad, Jason Wung, Ante Jukic, Joshua D. Atkins
  • Patent number: 11849291
    Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: December 19, 2023
    Assignee: Apple Inc.
    Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
  • Patent number: 11810588
    Abstract: Implementations of the subject technology provide systems and methods for providing audio source separation for audio input, such as for audio devices having limited power and/or computing resources. The subject technology may allow an audio device to leverage processing and/or power resources of a companion device that is communicatively coupled to the audio device. The companion device may identify a noise condition of the audio device, select a source separation model based on the noise condition, and provide the source separation model to the audio device. In this way, the audio device can provide audio source separation functionality using a relatively small footprint source separation model that is specific to the noise condition in which the audio device is operated.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: November 7, 2023
    Assignee: Apple Inc.
    Inventors: Carlos M. Avendano, John Woodruff, Jonathan Huang, Mehrez Souden, Andreas Koutrouvelis
  • Publication number: 20230111509
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In accordance with one example, a method includes, at an electronic device with one or more processors, memory, and a plurality of microphones, sampling, at each of the plurality of microphones of the electronic device, an audio signal to obtain a plurality of audio signals; processing the plurality of audio signals to obtain a plurality of audio streams; and determining, based on the plurality of audio streams, whether any of the plurality of audio signals corresponds to a spoken trigger. The method further includes, in accordance with a determination that the plurality of audio signals corresponds to the spoken trigger, initiating a session of the digital assistant; and in accordance with a determination that the plurality of audio signals does not correspond to the spoken trigger, foregoing initiating a session of the digital assistant.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 13, 2023
    Inventors: Yoon Kim, John Bridle, Joshua D. Atkins, Feipeng Li, Mehrez Souden
  • Patent number: 11546692
    Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: January 3, 2023
    Assignee: Apple Inc.
    Inventors: Symeon Delikaris Manias, Mehrez Souden, Ante Jukic, Matthew S. Connolly, Sabine Webel, Ronald J. Guglielmone, Jr.
  • Patent number: 11532306
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In accordance with one example, a method includes, at an electronic device with one or more processors, memory, and a plurality of microphones, sampling, at each of the plurality of microphones of the electronic device, an audio signal to obtain a plurality of audio signals; processing the plurality of audio signals to obtain a plurality of audio streams; and determining, based on the plurality of audio streams, whether any of the plurality of audio signals corresponds to a spoken trigger. The method further includes, in accordance with a determination that the plurality of audio signals corresponds to the spoken trigger, initiating a session of the digital assistant; and in accordance with a determination that the plurality of audio signals does not correspond to the spoken trigger, foregoing initiating a session of the digital assistant.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 20, 2022
    Assignee: Apple Inc.
    Inventors: Yoon Kim, John Bridle, Joshua D. Atkins, Feipeng Li, Mehrez Souden
  • Patent number: 11514928
    Abstract: A device implementing a system for processing speech in an audio signal includes at least one processor configured to receive an audio signal corresponding to at least one microphone of a device, and to determine, using a first model, a first probability that a speech source is present in the audio signal. The at least one processor is further configured to determine, using a second model, a second probability that an estimated location of a source of the audio signal corresponds to an expected position of a user of the device, and to determine a likelihood that the audio signal corresponds to the user of the device based on the first and second probabilities.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: November 29, 2022
    Assignee: Apple Inc.
    Inventors: Mehrez Souden, Ante Jukic, Jason Wung, Ashrith Deshpande, Joshua D. Atkins
  • Patent number: 11508388
    Abstract: A device for processing audio signals in the time domain includes a processor configured to receive multiple audio signals corresponding to respective microphones of two or more microphones of the device, at least one of the multiple audio signals comprising speech of a user of the device. The processor is configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on an expected position of the user of the device and expected positions of the respective microphones on the device. The processor is configured to provide an audio signal that is enhanced with respect to the speech of the user relative to the multiple audio signals, wherein the audio signal is a waveform output from the machine learning model.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: November 22, 2022
    Assignee: Apple Inc.
    Inventors: Mehrez Souden, Symeon Delikaris Manias, Joshua D. Atkins, Ante Jukic, Ramin Pishehvar
  • Publication number: 20220369030
    Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 17, 2022
    Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
  • Publication number: 20220366927
    Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and interference. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal to jointly estimate the target signal and the target audio parameters, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
    Type: Application
    Filed: May 15, 2021
    Publication date: November 17, 2022
    Inventors: Ramin Pishehvar, Ante Jukic, Mehrez Souden, Jason Wung, Feipeng Li, Joshua D. Atkins
  • Patent number: 11490218
    Abstract: A device for reproducing spatial audio using a machine learning model may include at least one processor configured to receive multiple audio signals corresponding to a sound scene captured by respective microphones of a device. The at least one processor may be further configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on a target rendering configuration. The at least one processor may be further configured to provide, responsive to providing the multiple audio signals to the machine learning model, multichannel audio signals that comprise a spatial reproduction of the sound scene in accordance with the target rendering configuration.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: November 1, 2022
    Assignee: Apple Inc.
    Inventors: Symeon Delikaris Manias, Mehrez Souden
  • Publication number: 20220270629
    Abstract: Implementations of the subject technology provide systems and methods for providing audio source separation for audio input, such as for audio devices having limited power and/or computing resources. The subject technology may allow an audio device to leverage processing and/or power resources of a companion device that is communicatively coupled to the audio device. The companion device may identify a noise condition of the audio device, select a source separation model based on the noise condition, and provide the source separation model to the audio device. In this way, the audio device can provide audio source separation functionality using a relatively small footprint source separation model that is specific to the noise condition in which the audio device is operated.
    Type: Application
    Filed: January 31, 2022
    Publication date: August 25, 2022
    Inventors: Carlos M. Avendano, John Woodruff, Jonathan Huang, Mehrez Souden, Andreas Koutrouvelis
  • Patent number: 11341988
    Abstract: A hybrid machine learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech frame by frame. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate the soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: May 24, 2022
    Assignee: Apple Inc.
    Inventors: Ramin Pishehvar, Feipeng Li, Ante Jukic, Mehrez Souden, Joshua D. Atkins
  • Publication number: 20220059123
    Abstract: Processing of ambience and speech can include extracting ambience and speech signals from audio signals. One or more spatial parameters can be generated that define spatial characteristics of ambience sound in the one or more ambience audio signals. The primary speech signal, the one or more ambience audio signals, and the spatial parameters can be encoded into one or more encoded data streams. Other aspects are described and claimed.
    Type: Application
    Filed: October 29, 2021
    Publication date: February 24, 2022
    Inventors: Jonathan D. Sheaffer, Joshua D. Atkins, Mehrez Souden, Symeon Delikaris Manias, Sean A. Ramprashad
  • Patent number: 11222652
    Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: January 11, 2022
    Assignee: Apple Inc.
    Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
  • Publication number: 20210097998
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In accordance with one example, a method includes, at an electronic device with one or more processors, memory, and a plurality of microphones, sampling, at each of the plurality of microphones of the electronic device, an audio signal to obtain a plurality of audio signals; processing the plurality of audio signals to obtain a plurality of audio streams; and determining, based on the plurality of audio streams, whether any of the plurality of audio signals corresponds to a spoken trigger. The method further includes, in accordance with a determination that the plurality of audio signals corresponds to the spoken trigger, initiating a session of the digital assistant; and in accordance with a determination that the plurality of audio signals does not correspond to the spoken trigger, foregoing initiating a session of the digital assistant.
    Type: Application
    Filed: December 3, 2020
    Publication date: April 1, 2021
    Inventors: Yoon Kim, John Bridle, Joshua D. Atkins, Feipeng Li, Mehrez Souden
  • Publication number: 20210074316
    Abstract: A device implementing a system for processing speech in an audio signal includes at least one processor configured to receive an audio signal corresponding to at least one microphone of a device, and to determine, using a first model, a first probability that a speech source is present in the audio signal. The at least one processor is further configured to determine, using a second model, a second probability that an estimated location of a source of the audio signal corresponds to an expected position of a user of the device, and to determine a likelihood that the audio signal corresponds to the user of the device based on the first and second probabilities.
    Type: Application
    Filed: December 9, 2019
    Publication date: March 11, 2021
    Inventors: Mehrez Souden, Ante Jukic, Jason Wung, Ashrith Deshpande, Joshua D. Atkins
  • Publication number: 20210020189
    Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
    Type: Application
    Filed: July 19, 2019
    Publication date: January 21, 2021
    Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
  • Patent number: 10798511
    Abstract: Processing input audio channels for generating spatial audio can include receiving a plurality of microphone signals that capture a sound field. Each microphone signal can be transformed into a frequency domain signal. From each frequency domain signal, a direct component and a diffuse component can be extracted. The direct component can be processed with a parametric renderer. The diffuse component can be processed with a linear renderer. The components can be combined, resulting in a spatial audio output. The levels of the components can be adjusted to match the direct-to-diffuse ratio (DDR) of the output with the DDR of the captured sound field. Other aspects are also described and claimed.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: October 6, 2020
    Assignee: Apple Inc.
    Inventors: Jonathan D. Sheaffer, Juha O. Merimaa, Jason Wung, Martin E. Johnson, Peter A. Raffensperger, Joshua D. Atkins, Symeon Delikaris Manias, Mehrez Souden
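
Publication 20240029754 (granted as patent 11810588) describes a companion device that identifies the audio device's noise condition and sends back a compact, condition-specific source separation model. The minimal sketch below illustrates only that control flow, assuming a hypothetical model registry and a toy level/spectral-centroid noise classifier in place of the learned components; none of the names come from the patent.

```python
import numpy as np

# Hypothetical registry of compact, condition-specific separation models
# (placeholder identifiers, not real model files).
MODEL_REGISTRY = {
    "quiet": "sep_model_generic_small.bin",
    "wind": "sep_model_wind_small.bin",
    "traffic": "sep_model_traffic_small.bin",
    "babble": "sep_model_babble_small.bin",
}

def classify_noise_condition(noise_frame: np.ndarray, sample_rate: int = 16000) -> str:
    """Toy stand-in for 'identify a noise condition of the audio device':
    classify by overall level and spectral centroid."""
    level_db = 10.0 * np.log10(np.mean(noise_frame ** 2) + 1e-12)
    spectrum = np.abs(np.fft.rfft(noise_frame))
    freqs = np.fft.rfftfreq(noise_frame.size, d=1.0 / sample_rate)
    centroid_hz = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    if level_db < -50.0:
        return "quiet"
    if centroid_hz < 400.0:
        return "wind"
    if centroid_hz < 1500.0:
        return "traffic"
    return "babble"

def select_separation_model(noise_frame: np.ndarray) -> str:
    """Companion-device step: pick the model to provide to the audio device."""
    return MODEL_REGISTRY[classify_noise_condition(noise_frame)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = 0.05 * rng.standard_normal(16000)  # one second of synthetic noise
    print(select_separation_model(frame))
```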
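
Publication 20230410828 describes a two-stage reference-less echo mitigation technique. The sketch below illustrates only the second, level-based suppressor stage under assumed parameters (a -45 dBFS threshold and a fixed attenuation); the first-stage machine-learning spatial filter is assumed to run upstream and is not shown.

```python
import numpy as np

def level_based_suppressor(frames: np.ndarray,
                           threshold_db: float = -45.0,
                           attenuation_db: float = 20.0) -> np.ndarray:
    """Attenuate frames whose RMS level (dBFS) falls below `threshold_db`.
    `frames` is assumed to have shape (num_frames, frame_length) with float samples."""
    out = frames.astype(float).copy()
    gain = 10.0 ** (-attenuation_db / 20.0)
    for i, frame in enumerate(out):
        level_db = 20.0 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        if level_db < threshold_db:
            out[i] = frame * gain  # treat quiet residue as leftover echo
    return out
```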
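
Patent 11849291 (also published as 20220369030) removes loudspeaker echo using an echo-dominant pick-up beam as the reference. The sketch below is a simplified stand-in: a standard normalized-LMS adaptive filter subtracts whatever part of the microphone signal is predictable from the reference, and the echo-dominant beam output is simulated rather than computed from a real array.

```python
import numpy as np

def nlms_echo_cancel(mic: np.ndarray, echo_ref: np.ndarray,
                     taps: int = 64, mu: float = 0.5) -> np.ndarray:
    """Subtract the component of `mic` predictable from `echo_ref` (NLMS filter);
    the error signal is the echo-reduced output."""
    w = np.zeros(taps)
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = echo_ref[n - taps:n][::-1]        # most recent reference samples
        e = mic[n] - w @ x                    # remove current echo estimate
        w += mu * e * x / (x @ x + 1e-8)      # NLMS weight update
        out[n] = e
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 16000
    near_speech = 0.1 * rng.standard_normal(n)
    loudspeaker_echo = np.convolve(rng.standard_normal(n), [0.6, 0.3, 0.1])[:n]
    mic = near_speech + loudspeaker_echo
    # Stand-in for the echo-dominant beam aimed at the loudspeaker:
    # mostly echo with a little near-end leakage.
    echo_ref = loudspeaker_echo + 0.05 * near_speech
    clean = nlms_echo_cancel(mic, echo_ref)
```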
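
Patent 11546692 describes a machine learning model that jointly processes the audio and video of a recording to spatially map its sounds. Only the final mapping step is sketched here: given a source direction that such a joint model might infer (for example, from where a talker appears in the frame), the source is panned into two output channels with a constant-power law. The panning rule is an illustrative assumption, not the patented renderer.

```python
import math
import numpy as np

def pan_to_stereo(mono_source: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Constant-power pan: -90 degrees is full left, +90 degrees is full right."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)  # map to [0, pi/2]
    left_gain, right_gain = math.cos(theta), math.sin(theta)
    return np.stack([left_gain * mono_source, right_gain * mono_source])

# e.g. a talker that the joint audio/visual model localizes 30 degrees to the right:
stereo = pan_to_stereo(np.random.default_rng(2).standard_normal(16000), azimuth_deg=30.0)
```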
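
Patent 11532306 (with related publications 20210097998 and 20230111509) checks several processed audio streams against a spoken trigger and starts an assistant session only if some stream matches. A minimal sketch of that control flow is below; the stream scorer and session callback are placeholders, not Siri's implementation.

```python
from typing import Callable, Sequence
import numpy as np

def any_stream_triggers(streams: Sequence[np.ndarray],
                        trigger_score: Callable[[np.ndarray], float],
                        threshold: float = 0.8) -> bool:
    """True if any processed audio stream scores at or above the threshold."""
    return any(trigger_score(stream) >= threshold for stream in streams)

def handle_audio(streams: Sequence[np.ndarray],
                 trigger_score: Callable[[np.ndarray], float],
                 start_session: Callable[[], None],
                 threshold: float = 0.8) -> None:
    if any_stream_triggers(streams, trigger_score, threshold):
        start_session()  # initiate a session of the digital assistant
    # otherwise: forgo initiating a session

if __name__ == "__main__":
    streams = [np.zeros(1600), np.ones(1600)]          # stand-ins for beamformed streams
    dummy_score = lambda s: float(np.mean(np.abs(s)))  # placeholder trigger scorer
    handle_audio(streams, dummy_score, start_session=lambda: print("session started"))
```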
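
Patent 11514928 (published as 20210074316) fuses two probabilities: one that speech is present and one that the estimated source location matches the expected user position. The sketch below uses a Gaussian angle-match score and a simple product rule as illustrative assumptions; the abstract does not commit to these particular forms.

```python
import math

def position_match_probability(estimated_angle_deg: float,
                               expected_angle_deg: float,
                               sigma_deg: float = 20.0) -> float:
    """Toy 'second model': Gaussian score for how well the estimated source
    direction matches the expected user position (sigma_deg is assumed)."""
    d = estimated_angle_deg - expected_angle_deg
    return math.exp(-0.5 * (d / sigma_deg) ** 2)

def user_speech_likelihood(p_speech: float,
                           estimated_angle_deg: float,
                           expected_angle_deg: float = 0.0) -> float:
    """Fuse the speech-presence probability (from the 'first model') with the
    position-match probability; the product rule is an illustrative choice."""
    return p_speech * position_match_probability(estimated_angle_deg, expected_angle_deg)

print(user_speech_likelihood(0.92, 5.0))   # near the expected position: high likelihood
print(user_speech_likelihood(0.92, 70.0))  # far off-axis: low likelihood
```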
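
Patent 11341988 combines a DNN voice activity detector with DSP statistical post-processing that smooths the frame-wise speech probabilities, limits speech/non-speech flips, and finds the utterance end point. The sketch below shows one plausible post-processing stage (recursive smoothing, hysteresis thresholding, and pause-based endpointing); the smoothing constant and thresholds are assumptions, and the DNN and directional estimator are not shown.

```python
from typing import Optional
import numpy as np

def smooth_probabilities(p: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """First-order recursive smoothing of the frame-wise DNN speech probabilities."""
    out = np.empty(len(p), dtype=float)
    acc = float(p[0])
    for i, v in enumerate(p):
        acc = alpha * acc + (1.0 - alpha) * float(v)
        out[i] = acc
    return out

def hard_decisions(p_smooth: np.ndarray, on: float = 0.6, off: float = 0.4) -> np.ndarray:
    """Hysteresis thresholding to reduce speech/non-speech transitions:
    enter the speech state above `on`, leave it only below `off`."""
    in_speech = False
    out = np.zeros(len(p_smooth), dtype=bool)
    for i, v in enumerate(p_smooth):
        in_speech = (v > on) if not in_speech else (v > off)
        out[i] = in_speech
    return out

def utterance_end(decisions: np.ndarray, max_pause_frames: int = 30) -> Optional[int]:
    """Index of the last speech frame once it has been followed by a pause longer
    than `max_pause_frames` (briefer pauses are tolerated), else None."""
    speech_frames = np.flatnonzero(decisions)
    if speech_frames.size == 0:
        return None
    last = int(speech_frames[-1])
    return last if len(decisions) - 1 - last >= max_pause_frames else None
```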
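
Publication 20220059123 encodes a primary speech signal, ambience signals, and spatial parameters describing the ambience into data streams. In the rough sketch below, the speech/ambience separation is assumed to have happened upstream, a per-channel ambience level stands in for the spatial parameters, and the "encoded" output is a plain dictionary rather than a compressed bitstream.

```python
import numpy as np

def spatial_parameters(ambience: np.ndarray) -> dict:
    """One plausible spatial parameter: per-channel ambience level in dB.
    `ambience` is assumed to have shape (channels, samples)."""
    levels_db = 10.0 * np.log10(np.mean(ambience ** 2, axis=1) + 1e-12)
    return {"channel_levels_db": levels_db.tolist()}

def encode_streams(primary_speech: np.ndarray, ambience: np.ndarray) -> dict:
    """Bundle the primary speech signal, the ambience signals, and the spatial
    parameters; a real encoder would compress these into bitstreams."""
    return {
        "speech": primary_speech,
        "ambience": ambience,
        "spatial_parameters": spatial_parameters(ambience),
    }
```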
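
Patent 11222652 (published as 20210020189) estimates the distance to a talker from characteristics of the estimated direct and reverberant speech components. The sketch below computes a direct-to-reverberant ratio (DRR) feature and applies a free-field, 6 dB-per-doubling mapping as a stand-in for the learned DNN mapping; the reference DRR at 1 m is an assumed calibration value.

```python
import numpy as np

def direct_to_reverberant_ratio_db(direct: np.ndarray, reverberant: np.ndarray) -> float:
    """DRR-style feature from the estimated direct and reverberant components."""
    e_direct = float(np.mean(direct ** 2))
    e_reverberant = float(np.mean(reverberant ** 2))
    return 10.0 * np.log10((e_direct + 1e-12) / (e_reverberant + 1e-12))

def estimate_distance_m(drr_db: float, drr_at_1m_db: float = 10.0) -> float:
    """Stand-in mapping: in free field the direct energy drops about 6 dB per
    doubling of distance, so distance doubles for each 6 dB drop in DRR."""
    return 2.0 ** ((drr_at_1m_db - drr_db) / 6.0)
```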
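
Patent 10798511 renders the direct component of a captured sound field parametrically and the diffuse component linearly, then adjusts levels so the output's direct-to-diffuse ratio (DDR) matches that of the capture. Only the level-matching step is sketched below; the decomposition and the two renderers are assumed to exist upstream.

```python
import numpy as np

def match_ddr(direct_render: np.ndarray, diffuse_render: np.ndarray,
              captured_ddr_db: float) -> np.ndarray:
    """Scale the diffuse render so the combined output's direct/diffuse energy
    ratio matches the DDR measured in the captured sound field. Both renders
    are assumed to share the same (channels, samples) shape."""
    e_direct = float(np.mean(direct_render ** 2)) + 1e-12
    e_diffuse = float(np.mean(diffuse_render ** 2)) + 1e-12
    target_ratio = 10.0 ** (captured_ddr_db / 10.0)
    diffuse_gain = np.sqrt(e_direct / (target_ratio * e_diffuse))
    return direct_render + diffuse_gain * diffuse_render
```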