Patents by Inventor Ramin Pishehvar
Ramin Pishehvar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230410828
Abstract: Disclosed is a reference-less echo mitigation or cancellation technique. The technique enables suppression of echoes from an interference signal when a reference version of the interference signal, conventionally used for echo mitigation, may not be available. A first stage of the technique may use a machine learning model to model a target audio area surrounding a device, so that a target audio signal estimated as originating from within the target audio area may be accepted. In contrast, audio signals such as playback of media content on a TV, or other interfering signals estimated as originating from outside the target audio area, may be suppressed. A second stage of the technique may be a level-based suppressor that further attenuates the residual echo from the output of the first stage based on an audio level threshold. Side information may be provided to adjust the target audio area or the audio level threshold.
Type: Application
Filed: June 21, 2022
Publication date: December 21, 2023
Inventors: Ramin Pishehvar, Mehrez Souden, Sean A. Ramprashad, Jason Wung, Ante Jukic, Joshua D. Atkins
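The second stage described in this abstract gates frames by level. As a minimal sketch (the frame size, threshold, and gain floor below are illustrative assumptions, not values from the patent), a level-based residual echo suppressor might look like:

```python
def level_based_suppressor(frames, level_threshold=0.05, gain_floor=0.1):
    """Attenuate low-level frames presumed to contain only residual echo."""
    out = []
    for frame in frames:
        # short-term RMS level of the frame
        rms = (sum(x * x for x in frame) / len(frame)) ** 0.5
        # pass target speech through; duck frames below the level threshold
        gain = 1.0 if rms >= level_threshold else gain_floor
        out.append([gain * x for x in frame])
    return out

loud = [0.5, -0.4, 0.6, -0.5]     # target-speech-like frame (high level)
quiet = [0.01, -0.02, 0.01, 0.0]  # residual-echo-like frame (low level)
enhanced = level_based_suppressor([loud, quiet])
```

The side information mentioned in the abstract would correspond to adjusting `level_threshold` at run time.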
-
Patent number: 11849291
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo-dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo-dominant audio signals as a reference, resulting in clean audio.
Type: Grant
Filed: May 17, 2021
Date of Patent: December 19, 2023
Assignee: Apple Inc.
Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
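The idea of beaming at the loudspeaker to get an echo-dominant reference, then cancelling it, can be sketched as follows. This is an illustration under simplifying assumptions (integer-sample delays, a one-tap adaptive gain), not the patented implementation:

```python
def delay_and_sum(mics, delays):
    """Align and average microphone channels to form an echo-dominant beam."""
    n = len(mics[0])
    beam = []
    for t in range(n):
        acc = 0.0
        for ch, d in zip(mics, delays):
            acc += ch[t - d] if 0 <= t - d < n else 0.0
        beam.append(acc / len(mics))
    return beam

def nlms_cancel(mic, ref, mu=0.5, eps=1e-8):
    """Remove the reference from the mic signal with a one-tap NLMS gain."""
    w, out = 0.0, []
    for x, r in zip(mic, ref):
        e = x - w * r                     # error = mic minus estimated echo
        out.append(e)
        w += mu * e * r / (r * r + eps)   # normalized LMS weight update
    return out
```

In the patent's terms, `delay_and_sum` stands in for the pick-up beam directed at the playback speakers, and its output serves as the reference for cancellation.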
-
Patent number: 11508388
Abstract: A device for processing audio signals in the time domain includes a processor configured to receive multiple audio signals corresponding to respective microphones of at least two or more microphones of the device, at least one of the multiple audio signals comprising speech of a user of the device. The processor is configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on an expected position of the user of the device and expected positions of the respective microphones on the device. The processor is configured to provide an audio signal that is enhanced with respect to the speech of the user relative to the multiple audio signals, wherein the audio signal is a waveform output from the machine learning model.
Type: Grant
Filed: November 20, 2020
Date of Patent: November 22, 2022
Assignee: Apple Inc.
Inventors: Mehrez Souden, Symeon Delikaris Manias, Joshua D. Atkins, Ante Jukic, Ramin Pishehvar
-
Publication number: 20220366927
Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video of the target speaker, to improve the robustness of the enhanced target speech signal.
Type: Application
Filed: May 15, 2021
Publication date: November 17, 2022
Inventors: Ramin Pishehvar, Ante Jukic, Mehrez Souden, Jason Wung, Feipeng Li, Joshua D. Atkins
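The joint mask-plus-VAD idea can be illustrated in a few lines. Here the network's mask head is stood in for by an oracle ideal-ratio mask computed from known target and interference signals, purely to show how one mask can drive both outputs; the real model would estimate the mask from the mixture:

```python
def ideal_ratio_mask(target, interference, eps=1e-8):
    """Oracle stand-in for a learned mask: |s| / (|s| + |n|) per sample."""
    return [abs(s) / (abs(s) + abs(n) + eps) for s, n in zip(target, interference)]

def apply_mask_and_vad(mixture, mask, vad_threshold=0.5):
    """Jointly produce the enhanced signal and a frame-level VAD flag."""
    enhanced = [m * x for m, x in zip(mask, mixture)]
    vad_flag = (sum(mask) / len(mask)) > vad_threshold  # mask energy as VAD
    return enhanced, vad_flag
```

The `vad_threshold` value is an assumption for illustration; the point is that the same masked representation yields both the enhanced waveform and the VAD parameter.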
-
Publication number: 20220369030
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo-dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo-dominant audio signals as a reference, resulting in clean audio.
Type: Application
Filed: May 17, 2021
Publication date: November 17, 2022
Inventors: Mehrez Souden, Jason Wung, Ante Jukic, Ramin Pishehvar, Joshua D. Atkins
-
Patent number: 11341988
Abstract: A hybrid machine-learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech frame by frame. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate a soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
Type: Grant
Filed: September 23, 2019
Date of Patent: May 24, 2022
Assignee: Apple Inc.
Inventors: Ramin Pishehvar, Feiping Li, Ante Jukic, Mehrez Souden, Joshua D. Atkins
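A minimal sketch of the post-processing stage described here: exponential smoothing of per-frame DNN probabilities, a threshold for the hard decision, and a hangover that bridges brief pauses before declaring the utterance end point. The smoothing factor, threshold, and hangover length are illustrative guesses, not disclosed values:

```python
def postprocess_vad(probs, alpha=0.7, threshold=0.5, hangover=2):
    """Return (smoothed_probs, hard_decisions, endpoint_frame_or_None)."""
    smoothed, decisions = [], []
    state, pause = 0.0, 0
    endpoint = None
    for i, p in enumerate(probs):
        state = alpha * state + (1 - alpha) * p   # smooth the DNN probability
        smoothed.append(state)
        active = state > threshold                # hard decision by threshold
        if active:
            pause = 0
        elif decisions and decisions[-1]:
            pause += 1
            active = pause <= hangover            # bridge brief pauses
            if not active and endpoint is None:
                endpoint = i - hangover           # estimated utterance end
        decisions.append(active)
    return smoothed, decisions, endpoint
```

The target directional probability mentioned in the abstract could be folded in by multiplying it into `probs` before smoothing; that step is omitted here.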
-
Patent number: 10546593
Abstract: A number of features are extracted from a current frame of a multi-channel speech pickup and from side information that is a linear echo estimate, a diffuse signal component, or a noise estimate of the multi-channel speech pickup. A DNN-based speech presence probability (SPP) value is produced for the current frame, in response to the extracted features being input to the DNN. The DNN-based SPP value is applied to configure a multi-channel filter whose input is the multi-channel speech pickup and whose output is a single audio signal. In one aspect, the system is designed to run online, at low enough latency for real-time applications such as voice trigger detection. Other aspects are also described and claimed.
Type: Grant
Filed: December 4, 2017
Date of Patent: January 28, 2020
Assignee: Apple Inc.
Inventors: Jason Wung, Mehrez Souden, Ramin Pishehvar, Joshua D. Atkins
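One plausible way an SPP value can configure a multi-channel filter, sketched with heavy simplifications (per-channel power spectra, a Wiener-style gain, channel averaging; the claimed filter is not specified here and the constants are assumptions):

```python
def spp_filter(frames, spp, smoothing=0.9, eps=1e-8):
    """frames: per-frame lists of channel powers; spp: per-frame speech
    presence probability. Returns one filtered power value per frame."""
    noise = frames[0][:]                  # initialize noise estimate per channel
    out = []
    for frame, p in zip(frames, spp):
        # update the noise estimate mostly when speech is absent (low SPP)
        noise = [smoothing * n
                 + (1 - smoothing) * ((1 - p) * x + p * n)
                 for n, x in zip(noise, frame)]
        # Wiener-like gain per channel, then average channels to one output
        gains = [max(0.0, 1.0 - n / (x + eps)) for n, x in zip(noise, frame)]
        out.append(sum(g * x for g, x in zip(gains, frame)) / len(frame))
    return out
```

Noise-only frames (SPP near 0) keep refreshing the noise estimate and are suppressed; speech frames (SPP near 1) freeze it and pass through with a high gain.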
-
Patent number: 10403299
Abstract: A digital speech enhancement system performs a specific chain of digital signal processing operations upon a multi-channel sound pickup to produce a single, enhanced speech signal. The operations are designed to be computationally less complex yet, as a whole, yield an enhanced speech signal that produces accurate voice trigger detection and low word error rates by an automatic speech recognizer. The constituent operations or components of the system have been chosen so that the overall system is robust to changing acoustic conditions and can deliver the enhanced speech signal with low enough latency that the system can be used online (enabling real-time voice trigger detection and streaming ASR). Other embodiments are also described and claimed.
Type: Grant
Filed: June 2, 2017
Date of Patent: September 3, 2019
Assignee: Apple Inc.
Inventors: Jason Wung, Joshua D. Atkins, Ramin Pishehvar, Mehrez Souden
-
Publication number: 20190172476
Abstract: A number of features are extracted from a current frame of a multi-channel speech pickup and from side information that is a linear echo estimate, a diffuse signal component, or a noise estimate of the multi-channel speech pickup. A DNN-based speech presence probability (SPP) value is produced for the current frame, in response to the extracted features being input to the DNN. The DNN-based SPP value is applied to configure a multi-channel filter whose input is the multi-channel speech pickup and whose output is a single audio signal. In one aspect, the system is designed to run online, at low enough latency for real-time applications such as voice trigger detection. Other aspects are also described and claimed.
Type: Application
Filed: December 4, 2017
Publication date: June 6, 2019
Inventors: Jason Wung, Mehrez Souden, Ramin Pishehvar, Joshua D. Atkins
-
Publication number: 20180350379
Abstract: A digital speech enhancement system performs a specific chain of digital signal processing operations upon a multi-channel sound pickup to produce a single, enhanced speech signal. The operations are designed to be computationally less complex yet, as a whole, yield an enhanced speech signal that produces accurate voice trigger detection and low word error rates by an automatic speech recognizer. The constituent operations or components of the system have been chosen so that the overall system is robust to changing acoustic conditions and can deliver the enhanced speech signal with low enough latency that the system can be used online (enabling real-time voice trigger detection and streaming ASR). Other embodiments are also described and claimed.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventors: Jason Wung, Joshua D. Atkins, Ramin Pishehvar, Mehrez Souden
-
Patent number: 10074380
Abstract: A method for performing speech enhancement using a Deep Neural Network (DNN)-based signal starts with training the DNN offline by exciting a microphone using a target training signal that includes a signal approximation of clean speech. A loudspeaker is driven with a reference signal and outputs a loudspeaker signal. The microphone then generates a microphone signal based on at least one of: a near-end speaker signal, an ambient noise signal, or the loudspeaker signal. An acoustic echo canceller (AEC) generates an AEC echo-cancelled signal based on the reference signal and the microphone signal. A loudspeaker signal estimator generates an estimated loudspeaker signal based on the microphone signal and the AEC echo-cancelled signal. The DNN receives the microphone signal, the reference signal, the AEC echo-cancelled signal, and the estimated loudspeaker signal and generates a speech reference signal that includes signal statistics for residual echo or for noise.
Type: Grant
Filed: August 3, 2016
Date of Patent: September 11, 2018
Assignee: Apple Inc.
Inventors: Jason Wung, Ramin Pishehvar, Daniele Giacobello, Joshua D. Atkins
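The four signals that the abstract says feed the DNN can be assembled as below. The AEC here is stubbed with a fixed flat gain purely for illustration (a real AEC adapts a filter), and the loudspeaker signal estimator is taken to be the echo component the AEC removed, i.e. microphone minus AEC output:

```python
def build_dnn_inputs(mic, reference, aec_gain=0.6):
    """Return the (mic, reference, aec_out, est_loudspeaker) tuple of signals."""
    # stub AEC: assume the echo path is a flat gain applied to the reference
    aec_out = [m - aec_gain * r for m, r in zip(mic, reference)]
    # estimated loudspeaker signal = component the AEC cancelled from the mic
    est_loudspeaker = [m - e for m, e in zip(mic, aec_out)]
    return mic, reference, aec_out, est_loudspeaker
```

In the patented pipeline these four signals would be windowed into features and passed to the trained DNN, which emits the speech reference statistics.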
-
Publication number: 20180040333
Abstract: A method for performing speech enhancement using a Deep Neural Network (DNN)-based signal starts with training the DNN offline by exciting a microphone using a target training signal that includes a signal approximation of clean speech. A loudspeaker is driven with a reference signal and outputs a loudspeaker signal. The microphone then generates a microphone signal based on at least one of: a near-end speaker signal, an ambient noise signal, or the loudspeaker signal. An acoustic echo canceller (AEC) generates an AEC echo-cancelled signal based on the reference signal and the microphone signal. A loudspeaker signal estimator generates an estimated loudspeaker signal based on the microphone signal and the AEC echo-cancelled signal. The DNN receives the microphone signal, the reference signal, the AEC echo-cancelled signal, and the estimated loudspeaker signal and generates a speech reference signal that includes signal statistics for residual echo or for noise.
Type: Application
Filed: August 3, 2016
Publication date: February 8, 2018
Inventors: Jason Wung, Ramin Pishehvar, Daniele Giacobello, Joshua D. Atkins
-
Patent number: 8711015
Abstract: The invention relates to the compression of sparse data sets containing sequences of data values and position information therefor. The position information may be in the form of position indices defining active positions of the data values in a sparse vector of length N. The position information is encoded into the data values by adjusting one or more of the data values within a pre-defined tolerance range, so that a pre-defined mapping function of the data values and their positions is close to a target value. In one embodiment, the mapping function is defined using a sub-set of N filler values whose elements are used to fill empty positions in the input sparse data vector. At the decoder, the correct data positions are identified by searching through possible sub-sets of filler values.
Type: Grant
Filed: August 24, 2011
Date of Patent: April 29, 2014
Assignee: Her Majesty the Queen in Right of Canada as represented by the Minister of Industry, through the Communications Research Centre Canada
Inventors: Frederic Mustiere, Hossein Najaf-Zadeh, Ramin Pishehvar, Hassan Lahdili, Louis Thibault, Martin Bouchard
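A toy illustration of the filler-value idea only: empty positions of the sparse vector are filled with values from a known filler set, and the decoder recovers the active positions by recognizing which entries are not fillers. The patented in-tolerance adjustment via a mapping function is not implemented here; this sketch simply asserts that no data value collides with a filler:

```python
FILLERS = {-999.0}  # made-up sentinel filler set, not from the patent

def encode_sparse(values, positions, n):
    """Place data values at their positions; fill the rest with fillers."""
    vec = [next(iter(FILLERS))] * n
    for v, p in zip(values, positions):
        assert v not in FILLERS, "collision: would need in-tolerance adjustment"
        vec[p] = v
    return vec

def decode_sparse(vec):
    """Recover (values, positions) by recognizing non-filler entries."""
    pairs = [(v, p) for p, v in enumerate(vec) if v not in FILLERS]
    return [v for v, _ in pairs], [p for _, p in pairs]
```

The actual invention avoids reserving sentinel values by nudging real data within a tolerance so a mapping function of values and positions hits a target; the sentinel here only conveys the encode/decode round trip.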
-
Publication number: 20120316886
Abstract: The invention relates to a method and apparatus for efficient encoding of media signals, including audio. A 2D sparse representation, or spikegram, of one frame of a digitized audio signal is generated using an overcomplete set of kernels. The spikegram is then mapped to a non-negative matrix, which is decomposed into a 3D component matrix containing hidden components and a 3D weight matrix using a two-dimensional non-negative matrix factorization. Elements of the 3D component and weight matrices are then adaptively quantized using integer programming to determine an optimal quantization scheme, and the quantized values are then optionally encoded using an arithmetic coder.
Type: Application
Filed: June 8, 2012
Publication date: December 13, 2012
Inventors: Ramin Pishehvar, Hossein Najaf-Zadeh, Frederic Mustiere, Christopher Srinivasa, Hassan Lahdili, Louis Thibault
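To illustrate the factorization step, here is plain non-negative matrix factorization with multiplicative updates, decomposing V into W @ H. The patent uses a two-dimensional NMF on the spikegram with 3D factor tensors; this standard NMF only conveys the underlying decomposition idea:

```python
import numpy as np

def nmf(V, rank, iters=1000, eps=1e-9):
    """Factor non-negative V (n x m) into W (n x rank) @ H (rank x m)."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update components
    return W, H

V = np.array([[1.0, 2.0], [2.0, 4.0]])        # exactly rank-1 test matrix
W, H = nmf(V, rank=1)
```

Quantizing the entries of `W` and `H` (with integer programming, per the abstract) and entropy coding them would complete the encoder sketch.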
-
Publication number: 20120053948
Abstract: The invention relates to the compression of sparse data sets containing sequences of data values and position information therefor. The position information may be in the form of position indices defining active positions of the data values in a sparse vector of length N. The position information is encoded into the data values by adjusting one or more of the data values within a pre-defined tolerance range, so that a pre-defined mapping function of the data values and their positions is close to a target value. In one embodiment, the mapping function is defined using a sub-set of N filler values whose elements are used to fill empty positions in the input sparse data vector. At the decoder, the correct data positions are identified by searching through possible sub-sets of filler values.
Type: Application
Filed: August 24, 2011
Publication date: March 1, 2012
Inventors: Frederic Mustiere, Hossein Najaf-Zadeh, Ramin Pishehvar, Hassan Lahdili, Louis Thibault, Martin Bouchard
-
Publication number: 20120023051
Abstract: The invention relates to sparse parallel signal coding using a neural network whose parameters are adaptively determined in dependence on a pre-determined signal shaping characteristic. A signal is provided to a neural network encoder implementing a locally competitive algorithm for sparsely representing the signal. A plurality of interconnected nodes receive projections of the input signal, and each node generates an output once an internal potential thereof exceeds a node-dependent threshold value. The node-dependent threshold value for each of the nodes is set based upon the pre-determined shaping characteristic. In one embodiment, the invention enables perceptual auditory masking to be incorporated in the sparse parallel coding of audio signals.
Type: Application
Filed: July 22, 2011
Publication date: January 26, 2012
Inventors: Ramin Pishehvar, Christopher Srinivasa, Hossein Najaf-Zadeh, Frederic Mustiere, Hassan Lahdili, Louis Thibault
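A compact sketch of a locally competitive algorithm (LCA) with node-dependent thresholds: each node integrates its input drive, is inhibited by other active nodes through the kernel Gram matrix, and emits an output only once its potential exceeds its own threshold. Step size, iteration count, and the soft-threshold form are standard LCA choices assumed for illustration:

```python
def lca(dictionary, x, thresholds, steps=200, dt=0.1):
    """dictionary: list of kernel vectors; returns sparse coefficients."""
    n, dim = len(dictionary), len(x)
    drive = [sum(k[i] * x[i] for i in range(dim)) for k in dictionary]  # D^T x
    gram = [[sum(a[i] * b[i] for i in range(dim)) for b in dictionary]
            for a in dictionary]
    u = [0.0] * n   # internal potentials
    a = [0.0] * n   # node outputs
    for _ in range(steps):
        for j in range(n):
            # lateral inhibition from other nodes that are currently active
            inhibition = sum(gram[j][k] * a[k] for k in range(n) if k != j)
            u[j] += dt * (drive[j] - u[j] - inhibition)
        # node-dependent soft threshold: only potentials above threshold fire
        a = [u[j] - thresholds[j] if u[j] > thresholds[j] else 0.0
             for j in range(n)]
    return a
```

Per the abstract, a perceptual masking model would set each entry of `thresholds`, so perceptually masked components never fire.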
-
Publication number: 20080219466
Abstract: A biologically-inspired process for universal audio coding based on neural spikes is presented. The process is based on the generation of sparse two-dimensional time-frequency representations of audio signals, called spikegrams. The spikegrams are generated by projecting the audio signal onto a set of over-complete adaptive gamma-chirp kernels. A masking model is applied to the spikegrams to remove inaudible spikes and to increase the coding efficiency. In one aspect of the invention, the masked spikegram is then quantized using a genetic-algorithm-based quantizer (or its simplified linear version). The values are then differentially coded using graph-based optimization and entropy coded afterwards.
Type: Application
Filed: March 7, 2008
Publication date: September 11, 2008
Applicant: Communications Research Centre Canada
Inventors: Ramin Pishehvar, Hossein Najaf-Zadeh, Louis Thibault
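Spikegram generation by projection onto a kernel set is commonly realized with matching pursuit: repeatedly find the kernel and time shift with the largest correlation to the residual, record a spike (amplitude, kernel index, time), and subtract its contribution. The tiny unit-norm kernels below are placeholders for the adaptive gamma-chirp kernels the abstract describes:

```python
def matching_pursuit(signal, kernels, n_spikes):
    """Greedily decompose signal into spikes over time-shifted kernels."""
    residual = signal[:]
    spikes = []
    for _ in range(n_spikes):
        best = (0.0, 0, 0)  # (amplitude, kernel index, time shift)
        for ki, k in enumerate(kernels):
            for t in range(len(signal) - len(k) + 1):
                # correlation of this kernel at this shift with the residual
                amp = sum(k[i] * residual[t + i] for i in range(len(k)))
                if abs(amp) > abs(best[0]):
                    best = (amp, ki, t)
        amp, ki, t = best
        spikes.append(best)
        for i in range(len(kernels[ki])):
            residual[t + i] -= amp * kernels[ki][i]   # subtract the spike
    return spikes, residual
```

The resulting list of (amplitude, kernel, time) triples is the sparse two-dimensional representation; masking, quantization, and entropy coding would operate on it downstream.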