Patents by Inventor Nima Mesgarani
Nima Mesgarani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12165670
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Grant
Filed: July 31, 2023
Date of Patent: December 10, 2024
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
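The method above separates the acoustic mixture into candidate sources and then uses the listener's neural signals to decide which separated source to boost. The sketch below is illustrative only and is not the patented implementation: it assumes a separation front end (such as a TasNet-style network) has already produced candidate waveforms and that an EEG decoder has already estimated the attended speech envelope; the names `envelope` and `select_attended` and the gain values are hypothetical.

```python
# Hypothetical sketch of neurally steered source selection; not the patented method.
import numpy as np

def envelope(x: np.ndarray, frame: int = 160) -> np.ndarray:
    """Crude amplitude envelope: RMS over non-overlapping frames."""
    n = len(x) // frame
    return np.sqrt(np.mean(x[: n * frame].reshape(n, frame) ** 2, axis=1))

def select_attended(separated: list, neural_envelope: np.ndarray,
                    gain_up: float = 2.0, gain_down: float = 0.25) -> np.ndarray:
    """Pick the separated source whose envelope best matches the neurally decoded
    envelope, then remix with that source amplified and the others attenuated."""
    scores = []
    for src in separated:
        env = envelope(src)
        m = min(len(env), len(neural_envelope))
        scores.append(np.corrcoef(env[:m], neural_envelope[:m])[0, 1])
    attended = int(np.argmax(scores))
    out = np.zeros_like(separated[0])
    for i, src in enumerate(separated):
        out = out + (gain_up if i == attended else gain_down) * src
    return out
```

Envelope correlation is used here only as a simple, transparent proxy for attention decoding; any decoder that ranks the candidate sources by match to the neural signals would fit the same selection-and-remix structure.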
-
Publication number: 20240203440
Abstract: Disclosed are methods, systems, devices, and other implementations, including a method (performed by, for example, a hearing aid device) that includes obtaining a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and obtaining neural signals for the person, with the neural signals being indicative of one or more target sound sources, from among the multiple sound sources, that the person is attentive to. The method further includes determining a separation filter based, at least in part, on the neural signals obtained for the person, and applying the separation filter to a representation of the combined sound signal to derive a resultant separated signal representation associated with sound from the one or more target sound sources the person is attentive to.
Type: Application
Filed: December 6, 2023
Publication date: June 20, 2024
Applicant: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Enea Ceolini, Cong Han
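Unlike the selection-based approach above, this application determines a neurally informed separation filter and applies it to a representation of the mixed signal. The sketch below shows one simplified way such a filter could be realized, as a time-frequency soft mask gated by a neurally decoded envelope of the attended talker; this is an assumed illustration rather than the claimed method, and `neural_gate_mask`, `apply_separation_filter`, and the STFT settings are hypothetical.

```python
# Hypothetical sketch of a neurally informed time-frequency filter; not the claimed method.
import numpy as np
from scipy.signal import stft, istft

def neural_gate_mask(mix_spec: np.ndarray, attended_env: np.ndarray) -> np.ndarray:
    """Frame-level soft gate built from the neurally decoded envelope of the
    attended talker: frames where that talker is estimated to be active pass,
    the rest are attenuated. Returns a (freq_bins x frames) mask."""
    n_frames = min(mix_spec.shape[1], len(attended_env))
    gate = attended_env[:n_frames] / (np.max(attended_env[:n_frames]) + 1e-8)
    gate = np.clip(gate, 0.0, 1.0)
    return np.tile(gate, (mix_spec.shape[0], 1))

def apply_separation_filter(mixture: np.ndarray, attended_env: np.ndarray,
                            fs: int = 16000, nperseg: int = 512) -> np.ndarray:
    """Apply the mask to an STFT representation of the mixture and resynthesize."""
    _, _, spec = stft(mixture, fs=fs, nperseg=nperseg)
    mask = neural_gate_mask(spec, attended_env)
    n = mask.shape[1]
    _, filtered = istft(spec[:, :n] * mask, fs=fs, nperseg=nperseg)
    return filtered
```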
-
Patent number: 11961533
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Grant
Filed: May 27, 2022
Date of Patent: April 16, 2024
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
-
Patent number: 11875813
Abstract: Disclosed are methods, systems, devices, and other implementations, including a method (performed by, for example, a hearing aid device) that includes obtaining a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and obtaining neural signals for the person, with the neural signals being indicative of one or more target sound sources, from among the multiple sound sources, that the person is attentive to. The method further includes determining a separation filter based, at least in part, on the neural signals obtained for the person, and applying the separation filter to a representation of the combined sound signal to derive a resultant separated signal representation associated with sound from the one or more target sound sources the person is attentive to.
Type: Grant
Filed: March 31, 2023
Date of Patent: January 16, 2024
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Enea Ceolini, Cong Han
-
Publication number: 20240013800
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Application
Filed: July 31, 2023
Publication date: January 11, 2024
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
-
Publication number: 20230377595
Abstract: Disclosed are methods, systems, devices, and other implementations, including a method (performed by, for example, a hearing aid device) that includes obtaining a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and obtaining neural signals for the person, with the neural signals being indicative of one or more target sound sources, from among the multiple sound sources, that the person is attentive to. The method further includes determining a separation filter based, at least in part, on the neural signals obtained for the person, and applying the separation filter to a representation of the combined sound signal to derive a resultant separated signal representation associated with sound from the one or more target sound sources the person is attentive to.
Type: Application
Filed: March 31, 2023
Publication date: November 23, 2023
Applicant: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Enea Ceolini, Cong Han
-
Patent number: 11630513
Abstract: In one aspect of the present disclosure, a method includes: receiving neural data responsive to a listener's auditory attention; receiving an acoustic signal responsive to a plurality of acoustic sources; for each of the plurality of acoustic sources: generating, from the received acoustic signal, audio data comprising one or more features of the acoustic source, forming combined data representative of the neural data and the audio data, and providing the combined data to a classification network configured to calculate a similarity score between the neural data and the acoustic source using one or more similarity metrics; and using the similarity scores calculated for each of the acoustic sources to identify, from the plurality of acoustic sources, an acoustic source associated with the listener's auditory attention.
Type: Grant
Filed: December 19, 2019
Date of Patent: April 18, 2023
Assignee: Massachusetts Institute of Technology
Inventors: Gregory Ciccarelli, Christopher Smalt, Thomas Quatieri, Michael Brandstein, Paul Calamia, Stephanie Haro, Michael Nolan, Joseph Perricone, Nima Mesgarani, James O'Sullivan
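This patent scores each candidate acoustic source against the listener's neural data and identifies the attended source as the best match. The sketch below reproduces only that per-source scoring and argmax structure; a plain correlation metric stands in for the trained classification network described in the abstract, and `audio_features`, `similarity_score`, and `decode_attended_source` are hypothetical names.

```python
# Illustrative-only sketch of attention decoding by per-source similarity scoring.
import numpy as np

def audio_features(source: np.ndarray, frame: int = 160) -> np.ndarray:
    """Per-frame log energy as a stand-in for the per-source audio features."""
    n = len(source) // frame
    frames = source[: n * frame].reshape(n, frame)
    return np.log(np.mean(frames ** 2, axis=1) + 1e-8)

def similarity_score(neural_feat: np.ndarray, audio_feat: np.ndarray) -> float:
    """Placeholder similarity metric (Pearson correlation); the patent describes
    a trained classification network consuming the combined neural + audio data."""
    m = min(len(neural_feat), len(audio_feat))
    return float(np.corrcoef(neural_feat[:m], audio_feat[:m])[0, 1])

def decode_attended_source(neural_feat: np.ndarray, sources: list) -> int:
    """Score every candidate source and return the index of the best match,
    i.e. the source the listener is most likely attending to."""
    scores = [similarity_score(neural_feat, audio_features(s)) for s in sources]
    return int(np.argmax(scores))
```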
-
Publication number: 20220392482
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Application
Filed: May 27, 2022
Publication date: December 8, 2022
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
-
Patent number: 11373672
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Grant
Filed: October 24, 2018
Date of Patent: June 28, 2022
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
-
Publication number: 20200201435
Abstract: In one aspect of the present disclosure, a method includes: receiving neural data responsive to a listener's auditory attention; receiving an acoustic signal responsive to a plurality of acoustic sources; for each of the plurality of acoustic sources: generating, from the received acoustic signal, audio data comprising one or more features of the acoustic source, forming combined data representative of the neural data and the audio data, and providing the combined data to a classification network configured to calculate a similarity score between the neural data and the acoustic source using one or more similarity metrics; and using the similarity scores calculated for each of the acoustic sources to identify, from the plurality of acoustic sources, an acoustic source associated with the listener's auditory attention.
Type: Application
Filed: December 19, 2019
Publication date: June 25, 2020
Inventors: Gregory Ciccarelli, Christopher Smalt, Thomas Quatieri, Michael Brandstein, Paul Calamia, Stephanie Haro, Michael Nolan, Joseph Perricone, Nima Mesgarani, James O'Sullivan
-
Publication number: 20190066713
Abstract: Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method comprising obtaining, by a device, a combined sound signal for signals combined from multiple sound sources in an area in which a person is located, and applying, by the device, speech-separation processing (e.g., deep attractor network (DAN) processing, online DAN processing, LSTM-TasNet processing, Conv-TasNet processing), to the combined sound signal from the multiple sound sources to derive a plurality of separated signals that each contains signals corresponding to different groups of the multiple sound sources. The method further includes obtaining, by the device, neural signals for the person, the neural signals being indicative of one or more of the multiple sound sources the person is attentive to, and selecting one of the plurality of separated signals based on the obtained neural signals. The selected signal may then be processed (amplified, attenuated).
Type: Application
Filed: October 24, 2018
Publication date: February 28, 2019
Inventors: Nima Mesgarani, Yi Luo, James O'Sullivan, Zhuo Chen
-
Patent number: 7505902
Abstract: An audio signal (172) representative of an acoustic signal is provided to an auditory model (105). The auditory model (105) produces a high-dimensional feature set based on physiological responses, as simulated by the auditory model (105), to the acoustic signal. A multidimensional analyzer (106) orthogonalizes and truncates the feature set based on contributions by components of the orthogonal set to a cortical representation of the acoustic signal. The truncated feature set is then provided to a classifier (108), where a predetermined sound is discriminated from the acoustic signal.
Type: Grant
Filed: July 28, 2005
Date of Patent: March 17, 2009
Assignee: University of Maryland
Inventors: Nima Mesgarani, Shihab A. Shamma
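Here a high-dimensional auditory-model feature set is orthogonalized and truncated before classification. The sketch below approximates that reduction step with a PCA-style projection computed via SVD; it is a rough stand-in under that assumption, not the patented multidimensional analyzer or auditory model, and the function names and the choice of 64 components are hypothetical.

```python
# Rough sketch of an orthogonalize-and-truncate step (PCA via SVD); illustrative only.
import numpy as np

def orthogonalize_and_truncate(features: np.ndarray, n_components: int = 64) -> np.ndarray:
    """Project a high-dimensional feature set (samples x features) onto its leading
    orthogonal components, discarding directions that contribute little variance."""
    centered = features - features.mean(axis=0, keepdims=True)
    # SVD yields orthogonal directions ordered by explained variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def linear_classifier_score(truncated: np.ndarray, weights: np.ndarray,
                            bias: float = 0.0) -> np.ndarray:
    """Toy linear discriminant over the truncated features, standing in for a
    classifier that detects a predetermined sound class (e.g., speech vs. non-speech)."""
    return truncated @ weights + bias
```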
-
Publication number: 20060025989
Abstract: An audio signal (172) representative of an acoustic signal is provided to an auditory model (105). The auditory model (105) produces a high-dimensional feature set based on physiological responses, as simulated by the auditory model (105), to the acoustic signal. A multidimensional analyzer (106) orthogonalizes and truncates the feature set based on contributions by components of the orthogonal set to a cortical representation of the acoustic signal. The truncated feature set is then provided to a classifier (108), where a predetermined sound is discriminated from the acoustic signal.
Type: Application
Filed: July 28, 2005
Publication date: February 2, 2006
Inventors: Nima Mesgarani, Shihab Shamma