Patents by Inventor Alireza Kenarsari Anhari

Alireza Kenarsari Anhari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9959886
    Abstract: The various implementations described enable voice activity detection and/or pitch estimation for speech signal processing in, for example and without limitation, hearing aids, speech recognition and interpretation software, telephony, and various applications for smartphones and/or wearable devices. In particular, some implementations include systems, methods and/or devices operable to detect voice activity in an audible signal by determining a voice activity indicator value that is a normalized function of signal amplitudes associated with at least two sets of spectral locations associated with a candidate pitch. In some implementations, voice activity is considered detected when the voice activity indicator value breaches a threshold value. Additionally and/or alternatively, in some implementations, analysis of the audible signal provides a pitch estimate of detectable voice activity.
    Type: Grant
    Filed: December 6, 2013
    Date of Patent: May 1, 2018
    Assignee: Malaspina Labs (Barbados), Inc.
    Inventors: Alireza Kenarsari Anhari, Alexander Escott, Pierre Zakarauskas
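
The voice activity indicator described in this abstract can be illustrated with a short sketch. The code below is a hypothetical interpretation, not the claimed method: it compares spectrum amplitudes at a candidate pitch's harmonic locations against amplitudes at inter-harmonic locations, normalizes the result to [0, 1], and thresholds it. The function name, the choice of the two spectral-location sets, and the default threshold are all assumptions made for the example.

```python
import numpy as np

def voice_activity_indicator(frame, fs, f0, n_harmonics=8, threshold=0.6):
    """Hypothetical illustration of a normalized voice activity indicator
    built from spectrum amplitudes at two sets of spectral locations tied to
    a candidate pitch f0 (Hz).  Not the patented formulation."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    def amp_at(f):
        # Amplitude at the spectral bin nearest to frequency f.
        return spectrum[np.argmin(np.abs(freqs - f))]

    # Keep only harmonics below the Nyquist frequency.
    ks = [k for k in range(1, n_harmonics + 1) if k * f0 < fs / 2]
    harmonic = sum(amp_at(k * f0) for k in ks)          # set 1: harmonic locations
    between = sum(amp_at((k + 0.5) * f0) for k in ks)   # set 2: inter-harmonic locations

    indicator = harmonic / (harmonic + between + 1e-12)  # normalized to [0, 1]
    return indicator, indicator > threshold
```

In practice the candidate pitch would be swept over a plausible range, and the pitch that maximizes the indicator could double as the pitch estimate the abstract mentions.
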
  • Patent number: 9953633
    Abstract: Various implementations disclosed herein include a training module configured to produce a set of segment templates from a concurrent segmentation of a plurality of vocalization instances of a voiced sound pattern (VSP) vocalized by a particular speaker, who is identifiable by a corresponding set of vocal characteristics. Each segment template provides a stochastic characterization of how a respective portion of the VSP is vocalized by the particular speaker in accordance with the corresponding set of vocal characteristics. Additionally, in various implementations, the training module includes systems, methods and/or devices configured to produce a set of VSP segment maps that each provide a quantitative characterization of how respective segments of the plurality of vocalization instances vary in relation to the corresponding segment template.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: April 24, 2018
    Assignee: Malaspina Labs (Barbados), Inc.
    Inventors: Clarence Chu, Alireza Kenarsari Anhari
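
To make the idea of segment templates and segment maps concrete, here is a minimal sketch under strong simplifying assumptions: the vocalization instances are already concurrently segmented into aligned segments, a template is just a per-feature mean and standard deviation, and a segment map is a normalized distance of each instance's segment from its template. None of these specific choices come from the patent itself.

```python
import numpy as np

def build_segment_templates(segmented_instances):
    """Illustrative sketch, not the patented training procedure.

    segmented_instances: list over vocalization instances; each instance is a
    list of segments, each segment an (n_frames, n_features) array.  Segments
    are assumed to be aligned across instances by a prior concurrent
    segmentation."""
    n_segments = len(segmented_instances[0])
    templates, segment_maps = [], []
    for s in range(n_segments):
        # Pool the frames of segment s across all instances.
        pooled = np.vstack([inst[s] for inst in segmented_instances])
        mean, std = pooled.mean(axis=0), pooled.std(axis=0) + 1e-8
        templates.append((mean, std))  # simple stochastic characterization
        # Segment map: how far each instance's segment sits from the template.
        segment_maps.append([
            float(np.mean(((inst[s] - mean) / std) ** 2))
            for inst in segmented_instances
        ])
    return templates, segment_maps
```
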
  • Patent number: 9792898
    Abstract: Various implementations disclosed herein include a training module configured to concurrently segment a plurality of vocalization instances of a voiced sound pattern (VSP) as vocalized by a particular speaker, who is identifiable by a corresponding set of vocal characteristics. Aspects of various implementations are used to determine a concurrent segmentation of multiple similar instances of a VSP using a modified hierarchical agglomerative clustering (HAC) process adapted to jointly and simultaneously segment multiple similar instances of the VSP. Information produced from multiple instances of a VSP vocalized by a particular speaker characterizes how the particular speaker vocalizes the VSP and how those vocalizations may vary between instances. In turn, in some implementations, the information produced using the modified HAC process is sufficient to determine more reliable detection (and/or matching) threshold metrics for detecting and matching the VSP as vocalized by the particular speaker.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: October 17, 2017
    Assignee: Malaspina Labs (Barbados), Inc.
    Inventors: Clarence Chu, Alireza Kenarsari Anhari
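
The joint segmentation step can be sketched with a toy agglomerative loop. The example below assumes the instances are time-aligned and of equal length, starts with one segment per frame, and repeatedly merges the adjacent pair of segments whose merge increases within-segment variance the least, summed across all instances, until a target segment count remains. It illustrates joint agglomerative segmentation only; it does not reproduce the modified HAC process claimed in the patent.

```python
import numpy as np

def joint_hac_segmentation(instances, n_segments):
    """Simplified sketch of jointly segmenting time-aligned instances with an
    agglomerative merge loop; the patented modified HAC process differs.

    instances: (n_instances, n_frames, n_features) array of aligned features."""
    instances = np.asarray(instances)
    n_frames = instances.shape[1]
    boundaries = list(range(n_frames + 1))  # every frame starts as its own segment

    def cost(lo, hi):
        # Within-segment variance of frames lo..hi-1, summed over all instances.
        seg = instances[:, lo:hi, :]
        return float(np.sum(seg.var(axis=1)))

    while len(boundaries) - 1 > n_segments:
        # Merge the adjacent segment pair whose joint cost increase is smallest.
        best_i, best_delta = None, None
        for i in range(1, len(boundaries) - 1):
            lo, mid, hi = boundaries[i - 1], boundaries[i], boundaries[i + 1]
            delta = cost(lo, hi) - cost(lo, mid) - cost(mid, hi)
            if best_delta is None or delta < best_delta:
                best_i, best_delta = i, delta
        del boundaries[best_i]
    return boundaries  # segment boundaries shared by all instances
```
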
  • Publication number: 20160027438
    Abstract: Various implementations disclosed herein include a training module configured to concurrently segment a plurality of vocalization instances of a voiced sound pattern (VSP) as vocalized by a particular speaker, who is identifiable by a corresponding set of vocal characteristics. Aspects of various implementations are used to determine a concurrent segmentation of multiple similar instances of a VSP using a modified hierarchical agglomerative clustering (HAC) process adapted to jointly and simultaneously segment multiple similar instances of the VSP. Information produced from multiple instances of a VSP vocalized by a particular speaker characterizes how the particular speaker vocalizes the VSP and how those vocalizations may vary between instances. In turn, in some implementations, the information produced using the modified HAC process is sufficient to determine more reliable detection (and/or matching) threshold metrics for detecting and matching the VSP as vocalized by the particular speaker.
    Type: Application
    Filed: July 23, 2015
    Publication date: January 28, 2016
    Inventors: Clarence Chu, Alireza Kenarsari Anhari
  • Publication number: 20160027432
    Abstract: Various implementations disclosed herein include a training module configured to produce a set of segment templates from a concurrent segmentation of a plurality of vocalization instances of a voiced sound pattern (VSP) vocalized by a particular speaker, who is identifiable by a corresponding set of vocal characteristics. Each segment template provides a stochastic characterization of how a respective portion of the VSP is vocalized by the particular speaker in accordance with the corresponding set of vocal characteristics. Additionally, in various implementations, the training module includes systems, methods and/or devices configured to produce a set of VSP segment maps that each provide a quantitative characterization of how respective segments of the plurality of vocalization instances vary in relation to the corresponding segment template.
    Type: Application
    Filed: July 23, 2015
    Publication date: January 28, 2016
    Inventors: Clarence Chu, Alireza Kenarsari Anhari
  • Patent number: 9241223
    Abstract: Various implementations described herein include directional filtering of audible signals to enable acoustic isolation and localization of a target voice source. Without limitation, various implementations are suitable for speech signal processing applications in hearing aids, speech recognition software, voice-command responsive software and devices, telephony, and various other applications associated with mobile and non-mobile systems and devices. In particular, some implementations include systems, methods and/or devices operable to emphasize at least some of the time-frequency components of an audible signal that originate from a target direction and source, and/or deemphasize at least some of the time-frequency components that originate from one or more other directions or sources. In some implementations, directional filtering includes applying a gain function to audible signal data received from multiple audio sensors.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: January 19, 2016
    Assignee: Malaspina Labs (Barbados), Inc.
    Inventors: Clarence S. H. Chu, Alireza Kenarsari Anhari, Alexander Escott, Shawn E. Stevenson, Pierre Zakarauskas
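
A two-microphone toy version of the directional-gain idea is sketched below. It estimates the inter-microphone phase difference per frequency bin, compares it with the phase difference expected for a source at the target direction, and applies a soft gain that keeps bins consistent with that direction while attenuating the rest. The microphone geometry, gain shape, and parameter names are assumptions made for the example, not the claimed gain function.

```python
import numpy as np

def directional_gain(frame_left, frame_right, fs, mic_spacing, target_angle_deg,
                     sharpness=8.0, c=343.0):
    """Minimal two-microphone sketch: attenuate time-frequency bins whose
    inter-microphone phase difference is inconsistent with the target
    direction.  Illustrative only."""
    window = np.hanning(len(frame_left))
    L = np.fft.rfft(frame_left * window)
    R = np.fft.rfft(frame_right * window)
    freqs = np.fft.rfftfreq(len(frame_left), d=1.0 / fs)

    # Expected inter-microphone delay for a source at the target angle.
    tau = mic_spacing * np.sin(np.radians(target_angle_deg)) / c
    expected_phase = 2.0 * np.pi * freqs * tau
    observed_phase = np.angle(L * np.conj(R))

    # Soft gain near 1 where the phases match, near 0 otherwise.
    mismatch = np.angle(np.exp(1j * (observed_phase - expected_phase)))
    gain = np.exp(-sharpness * mismatch ** 2)

    return np.fft.irfft(L * gain, n=len(frame_left))
```

In a streaming system, a per-frame gain like this would typically be applied inside an overlap-add STFT loop rather than to isolated frames.
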
  • Publication number: 20150228277
    Abstract: The various implementations described enable systems, devices and methods for detecting voiced sound patterns in noisy real-valued audible signal data. In some implementations, detecting voiced sound patterns in noisy real-valued audible signal data includes imposing a respective region of interest (ROI) on at least a portion of each of one or more temporal frames of audible signal data, wherein the respective ROI is characterized by one or more relatively distinguishable features of a corresponding voiced sound pattern (VSP), determining a feature characterization set within at least the ROI imposed on the at least a portion of each of one or more temporal frames of audible signal data, and detecting whether or not the corresponding VSP is present in the one or more frames of audible signal data by determining an output of a VSP-specific RNN, trained to provide a detection output, based at least on the feature characterization set.
    Type: Application
    Filed: February 5, 2015
    Publication date: August 13, 2015
    Inventor: Alireza Kenarsari Anhari
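
The detection pipeline in this abstract (region of interest, feature characterization set, VSP-specific RNN) can be mocked up as follows. The sketch restricts each frame's spectrum to an assumed frequency band, summarizes it with a few log-energy features, and feeds the sequence to a tiny Elman-style recurrent network with placeholder weights; a real VSP-specific RNN would be trained on labelled data, and its architecture, features, and ROI would differ.

```python
import numpy as np

def detect_vsp(frames, fs, roi_hz=(300.0, 3400.0), weights=None, threshold=0.5):
    """Hedged sketch: restrict each frame's spectrum to a frequency region of
    interest, build a simple feature characterization set, and run it through
    a tiny recurrent network.  The band, features, and network are
    illustrative assumptions."""
    rng = np.random.default_rng(0)
    feats = []
    for frame in frames:
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        band = spec[(freqs >= roi_hz[0]) & (freqs <= roi_hz[1])]
        # Feature characterization set: log-energy summaries within the ROI.
        feats.append(np.log1p([band.sum(), band.max(), band.mean()]))
    feats = np.array(feats)

    n_in, n_hidden = feats.shape[1], 8
    if weights is None:  # placeholder weights; a trained model would be loaded here
        weights = {"Wx": rng.normal(size=(n_hidden, n_in)) * 0.1,
                   "Wh": rng.normal(size=(n_hidden, n_hidden)) * 0.1,
                   "wo": rng.normal(size=n_hidden) * 0.1}

    h = np.zeros(n_hidden)
    for x in feats:  # simple Elman-style recurrence over the frame sequence
        h = np.tanh(weights["Wx"] @ x + weights["Wh"] @ h)
    score = 1.0 / (1.0 + np.exp(-weights["wo"] @ h))  # detection output
    return score, score > threshold
```
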
  • Publication number: 20150222996
    Abstract: Various implementations described herein include directional filtering of audible signals to enable acoustic isolation and localization of a target voice source. Without limitation, various implementations are suitable for speech signal processing applications in hearing aids, speech recognition software, voice-command responsive software and devices, telephony, and various other applications associated with mobile and non-mobile systems and devices. In particular, some implementations include systems, methods and/or devices operable to emphasize at least some of the time-frequency components of an audible signal that originate from a target direction and source, and/or deemphasize at least some of the time-frequency components that originate from one or more other directions or sources. In some implementations, directional filtering includes applying a gain function to audible signal data received from multiple audio sensors.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: Malaspina Labs (Barbados), Inc.
    Inventors: Clarence S. H. Chu, Alireza Kenarsari Anhari, Alexander Escott, Shawn E. Stevenson, Pierre Zakarauskas
  • Publication number: 20150162021
    Abstract: The various implementations described enable voice activity detection and/or pitch estimation for speech signal processing in, for example and without limitation, hearing aids, speech recognition and interpretation software, telephony, and various applications for smartphones and/or wearable devices. In particular, some implementations include systems, methods and/or devices operable to detect voice activity in an audible signal by determining a voice activity indicator value that is a normalized function of signal amplitudes associated with at least two sets of spectral locations associated with a candidate pitch. In some implementations, voice activity is considered detected when the voice activity indicator value breaches a threshold value. Additionally and/or alternatively, in some implementations, analysis of the audible signal provides a pitch estimate of detectable voice activity.
    Type: Application
    Filed: December 6, 2013
    Publication date: June 11, 2015
    Applicant: Malaspina Labs (Barbados), Inc.
    Inventors: Alireza Kenarsari Anhari, Alexander Escott, Pierre Zakarauskas