Patents by Inventor Nima Mesgarani

Nima Mesgarani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961533
    Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing) to the combined sound signal from the multiple sound sources to derive a plurality of separated signals, each of which contains signals corresponding to a different group of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (e.g., amplified or attenuated).
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 16, 2024
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
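    The selection step this patent family describes — separate the mixture into streams, then pick the stream that best matches the listener's neural signals and amplify it — can be sketched roughly as follows. This is not code from the patent: the separation network (e.g., Conv-TasNet) is assumed to run upstream and already produce the separated streams, and neural_envelope, standing in for a stimulus envelope decoded from the listener's neural recordings, is a hypothetical input.

    ```python
    import numpy as np

    def select_attended(separated, neural_envelope, gain=2.0):
        """Pick the separated source whose amplitude envelope best matches
        the envelope decoded from the listener's neural signals, then
        amplify it relative to the other sources.

        separated       : list of 1-D arrays, one per separated source
        neural_envelope : 1-D array, stimulus envelope reconstructed from
                          neural recordings (hypothetical upstream decoder)
        """
        def envelope(x):
            # Crude amplitude envelope: rectify, then smooth with a moving average.
            kernel = np.ones(64) / 64.0
            return np.convolve(np.abs(x), kernel, mode="same")

        # Correlate each source's envelope with the neurally decoded envelope.
        scores = [np.corrcoef(envelope(s), neural_envelope)[0, 1] for s in separated]
        attended = int(np.argmax(scores))

        # Amplify the attended source and attenuate the rest
        # (the abstract's "amplified, attenuated" processing step).
        out = sum((gain if i == attended else 1.0 / gain) * s
                  for i, s in enumerate(separated))
        return attended, out
    ```

    Envelope correlation is only one possible match criterion; the patent covers neural signals "indicative of" the attended source generally, not this specific decoder.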
  • Patent number: 11875813
    Abstract: Disclosed are methods, systems, devices, and other implementations, including a method (performed by, for example, a hearing aid device) that includes obtaining a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and obtaining neural signals for the person, with the neural signals being indicative of one or more target sound sources, among the multiple sound sources, that the person is attentive to. The method further includes determining a separation filter based, at least in part, on the neural signals obtained for the person, and applying the separation filter to a representation of the combined sound signal to derive a resultant separated signal representation associated with sound from the one or more target sound sources the person is attentive to.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: January 16, 2024
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Nima Mesgarani, Enea Ceolini, Cong Han
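    A minimal sketch of the "separation filter applied to a representation of the combined sound signal" idea, assuming the filter takes the common form of a time-frequency mask and that an upstream stage (not shown, and hypothetical here) decodes a target-spectrum estimate from the neural signals:

    ```python
    import numpy as np

    def neural_informed_mask(mixture_spec, neural_target_spec, eps=1e-8):
        """Derive a Wiener-style ratio mask from a neurally informed
        estimate of the target spectrum and apply it to the mixture's
        spectral representation.

        mixture_spec       : (freq, time) magnitude spectrogram of the mixture
        neural_target_spec : (freq, time) target-spectrum estimate decoded
                             from neural signals (hypothetical upstream stage)
        """
        # Ratio mask, clipped to [0, 1] so it only attenuates, never boosts.
        mask = np.clip(neural_target_spec / (mixture_spec + eps), 0.0, 1.0)
        return mask * mixture_spec
    ```

    The patent's separation filter is determined from the neural signals themselves; the ratio mask here is just a conventional stand-in for whatever filter that determination produces.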
  • Publication number: 20240013800
    Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing) to the combined sound signal from the multiple sound sources to derive a plurality of separated signals, each of which contains signals corresponding to a different group of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (e.g., amplified or attenuated).
    Type: Application
    Filed: July 31, 2023
    Publication date: January 11, 2024
    Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
  • Publication number: 20230377595
    Abstract: Disclosed are methods, systems, devices, and other implementations, including a method (performed by, for example, a hearing aid device) that includes obtaining a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and obtaining neural signals for the person, with the neural signals being indicative of one or more target sound sources, among the multiple sound sources, that the person is attentive to. The method further includes determining a separation filter based, at least in part, on the neural signals obtained for the person, and applying the separation filter to a representation of the combined sound signal to derive a resultant separated signal representation associated with sound from the one or more target sound sources the person is attentive to.
    Type: Application
    Filed: March 31, 2023
    Publication date: November 23, 2023
    Applicant: The Trustees of Columbia University in the City of New York
    Inventors: Nima Mesgarani, Enea Ceolini, Cong Han
  • Patent number: 11630513
    Abstract: In one aspect of the present disclosure, a method includes: receiving neural data responsive to a listener's auditory attention; receiving an acoustic signal responsive to a plurality of acoustic sources; for each of the plurality of acoustic sources: generating, from the received acoustic signal, audio data comprising one or more features of the acoustic source, forming combined data representative of the neural data and the audio data, and providing the combined data to a classification network configured to calculate a similarity score between the neural data and the acoustic source using one or more similarity metrics; and using the similarity scores calculated for each of the acoustic sources to identify, from the plurality of acoustic sources, an acoustic source associated with the listener's auditory attention.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: April 18, 2023
    Assignee: Massachusetts Institute of Technology
    Inventors: Gregory Ciccarelli, Christopher Smalt, Thomas Quatieri, Michael Brandstein, Paul Calamia, Stephanie Haro, Michael Nolan, Joseph Perricone, Nima Mesgarani, James O'Sullivan
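    The per-source scoring loop in this abstract — pair each source's audio features with the neural features, score their similarity, and take the highest-scoring source as the attended one — can be sketched as below. Cosine similarity stands in for the trained classification network and its learned similarity metrics; the feature vectors are hypothetical inputs, not the patent's actual representations.

    ```python
    import numpy as np

    def decode_attention(neural_feats, source_feats):
        """Score each acoustic source's features against the neural
        features and return the index of the best-matching source.

        neural_feats : 1-D feature vector from the listener's neural data
        source_feats : list of 1-D feature vectors, one per acoustic source
        """
        def cosine(a, b):
            # Cosine similarity, with a small constant to avoid division by zero.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        # One similarity score per source; argmax identifies the attended source.
        scores = [cosine(neural_feats, f) for f in source_feats]
        return int(np.argmax(scores)), scores
    ```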
  • Publication number: 20220392482
    Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing) to the combined sound signal from the multiple sound sources to derive a plurality of separated signals, each of which contains signals corresponding to a different group of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (e.g., amplified or attenuated).
    Type: Application
    Filed: May 27, 2022
    Publication date: December 8, 2022
    Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
  • Patent number: 11373672
    Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing) to the combined sound signal from the multiple sound sources to derive a plurality of separated signals, each of which contains signals corresponding to a different group of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (e.g., amplified or attenuated).
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: June 28, 2022
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
  • Publication number: 20200201435
    Abstract: In one aspect of the present disclosure, a method includes: receiving neural data responsive to a listener's auditory attention; receiving an acoustic signal responsive to a plurality of acoustic sources; for each of the plurality of acoustic sources: generating, from the received acoustic signal, audio data comprising one or more features of the acoustic source, forming combined data representative of the neural data and the audio data, and providing the combined data to a classification network configured to calculate a similarity score between the neural data and the acoustic source using one or more similarity metrics; and using the similarity scores calculated for each of the acoustic sources to identify, from the plurality of acoustic sources, an acoustic source associated with the listener's auditory attention.
    Type: Application
    Filed: December 19, 2019
    Publication date: June 25, 2020
    Inventors: Gregory Ciccarelli, Christopher Smalt, Thomas Quatieri, Michael Brandstein, Paul Calamia, Stephanie Haro, Michael Nolan, Joseph Perricone, Nima Mesgarani, James O'Sullivan
  • Publication number: 20190066713
    Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing) to the combined sound signal from the multiple sound sources to derive a plurality of separated signals, each of which contains signals corresponding to a different group of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (e.g., amplified or attenuated).
    Type: Application
    Filed: October 24, 2018
    Publication date: February 28, 2019
    Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
  • Patent number: 7505902
    Abstract: An audio signal (172) representative of an acoustic signal is provided to an auditory model (105). The auditory model (105) produces a high-dimensional feature set based on physiological responses, as simulated by the auditory model (105), to the acoustic signal. A multidimensional analyzer (106) orthogonalizes and truncates the feature set based on contributions by components of the orthogonal set to a cortical representation of the acoustic signal. The truncated feature set is then provided to classifier (108), where a predetermined sound is discriminated from the acoustic signal.
    Type: Grant
    Filed: July 28, 2005
    Date of Patent: March 17, 2009
    Assignee: University of Maryland
    Inventors: Nima Mesgarani, Shihab A. Shamma
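    The multidimensional-analyzer step in this abstract — orthogonalize the auditory model's high-dimensional feature set and truncate it, keeping the components that contribute most to the cortical representation — resembles a PCA-style reduction, which can be sketched as follows. This is an illustrative stand-in, not the patent's specific analyzer.

    ```python
    import numpy as np

    def orthogonalize_and_truncate(features, k):
        """Project a high-dimensional feature set onto its top-k
        orthogonal components via SVD, discarding the rest.

        features : (n_samples, n_dims) feature matrix from the auditory model
        k        : number of orthogonal components to keep
        """
        # Center the features so the SVD basis reflects variance, not the mean.
        centered = features - features.mean(axis=0)
        # SVD yields an orthogonal basis ordered by explained variance.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        # Keep only the top-k directions: a truncated, decorrelated feature set.
        return centered @ vt[:k].T
    ```

    The truncated features would then go to the classifier stage to discriminate the predetermined sound from the acoustic signal.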
  • Publication number: 20060025989
    Abstract: An audio signal (172) representative of an acoustic signal is provided to an auditory model (105). The auditory model (105) produces a high-dimensional feature set based on physiological responses, as simulated by the auditory model (105), to the acoustic signal. A multidimensional analyzer (106) orthogonalizes and truncates the feature set based on contributions by components of the orthogonal set to a cortical representation of the acoustic signal. The truncated feature set is then provided to classifier (108), where a predetermined sound is discriminated from the acoustic signal.
    Type: Application
    Filed: July 28, 2005
    Publication date: February 2, 2006
    Inventors: Nima Mesgarani, Shihab Shamma