Patents by Inventor Daniel P. W. Ellis

Daniel P. W. Ellis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230342108
    Abstract: An example method includes receiving, by one or more processors of a computing device, audio data recorded by one or more microphones of the computing device; and generating, based on the audio data and by the one or more processors, one or more structured sound records, a first structured sound record of the one or more structured sound records including: a description of a first sound, the description including a descriptive label of the first sound, the descriptive label different than a text transcription of the first sound, and a time stamp indicating a time at which the first sound occurred; and outputting a graphical user interface including a timeline representation of the one or more structured sound records.
    Type: Application
    Filed: August 31, 2021
    Publication date: October 26, 2023
    Inventors: Dimitri Kanevsky, Sagar Savla, Ausmus Chang, Chiawei Liu, Daniel P W Ellis, Jinho Kim, Justin Stuart Paul, Sharlene Yuan, Alex Huang, Yun Che Chung, Chelsey Fleming
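
    The structured sound records described in the abstract above pair a descriptive label (not a transcription) with a timestamp, and the records are then laid out on a timeline. A minimal Python sketch of that data shape follows; the `SoundRecord` class, its field names, and the plain-text timeline rendering are illustrative assumptions, not taken from the application.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SoundRecord:
    """One structured sound record: a descriptive label (not a text
    transcription of the sound) plus the time at which the sound occurred."""
    label: str           # e.g. "dog barking" -- a description, not transcribed speech
    timestamp: datetime  # when the sound was detected

def timeline(records: list[SoundRecord]) -> list[str]:
    """Render records as simple timeline rows, oldest first."""
    return [f"{r.timestamp:%H:%M:%S}  {r.label}"
            for r in sorted(records, key=lambda r: r.timestamp)]

if __name__ == "__main__":
    recs = [
        SoundRecord("doorbell", datetime(2021, 8, 31, 9, 15, 2)),
        SoundRecord("dog barking", datetime(2021, 8, 31, 9, 14, 40)),
    ]
    print("\n".join(timeline(recs)))
```
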
  • Patent number: 9384272
    Abstract: Methods, systems, and media for identifying similar songs using jumpcodes are provided. In some embodiments, methods for identifying a cover song from a query song are provided, the methods comprising: identifying a query song jumpcode for the query song, wherein the query song jumpcode is indicative of changes in prominent pitch over a portion of the query song; identifying a plurality of reference song jumpcodes for a reference song, wherein each of the reference song jumpcodes is indicative of changes in prominent pitch over a portion of the reference song; determining if the query song jumpcode matches any of the plurality of reference song jumpcodes; and upon determining that the query song jumpcode matches at least one of the plurality of reference song jumpcodes, generating an indication that the reference song is a cover song of the query song.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: July 5, 2016
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Thierry Bertin-Mahieux, Daniel P. W. Ellis
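
    A jumpcode, as the abstract above describes it, encodes changes in prominent pitch over a portion of a song. The sketch below is one assumed reading of that idea: difference a sequence of prominent pitch classes, pack short windows of the resulting jumps into integer codes, and flag a cover when a query code matches a reference code. The window length, the base-12 packing, and the landmark selection are assumptions, not the patented scheme.

```python
import numpy as np

def jumpcodes(pitches, window=4):
    """Hash windows of successive pitch-class changes ("jumps") into integers.

    `pitches` is a sequence of prominent pitch classes (0-11) over time; the
    exact landmark selection used in the patent is not reproduced here.
    """
    jumps = np.diff(np.asarray(pitches)) % 12           # change in prominent pitch
    codes = set()
    for i in range(len(jumps) - window + 1):
        code = 0
        for j in jumps[i:i + window]:                    # pack the window base-12
            code = code * 12 + int(j)
        codes.add(code)
    return codes

def is_cover(query_pitches, reference_pitches):
    """Flag the reference as a cover if any query jumpcode matches a reference jumpcode."""
    return bool(jumpcodes(query_pitches) & jumpcodes(reference_pitches))
```
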
  • Patent number: 8788277
    Abstract: Apparatus and methods for processing compression encoded signals are provided. In some embodiments, a signal processing method is provided that includes receiving a subband of a compression encoded signal at a subband processor, generating envelope information regarding the subband of the compression encoded signal to provide changes in the dynamic range of the compression encoded signal for fixed-point digital signal processing, processing the compression encoded signal with a fixed-point companding digital signal processor using the envelope information, and producing a processed compression encoded signal at the output of the subband processor.
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: July 22, 2014
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Christos Vezyrtzis, Aaron Klein, Yannis Tsividis, Daniel P. W. Ellis
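
    The method in the abstract above tracks a per-subband envelope and uses it to compand the signal so a fixed-point processor sees a roughly constant dynamic range. A rough numpy sketch of that flow follows; the one-pole envelope tracker, the quantisation step, and the placeholder `process` callable are assumptions rather than the patented design.

```python
import numpy as np

def envelope(x, alpha=0.99):
    """One-pole smoothed magnitude envelope of a subband signal."""
    x = np.asarray(x, dtype=float)
    env = np.empty_like(x)
    e = 1e-6
    for i, v in enumerate(x):
        e = max(abs(v), alpha * e)
        env[i] = e
    return env

def compand_process(subband, process, q_scale=2**15):
    """Compand a subband for fixed-point processing: normalise by its envelope,
    quantise, apply `process`, then restore the original dynamic range."""
    subband = np.asarray(subband, dtype=float)
    env = envelope(subband)
    normalised = subband / env                        # roughly unit dynamic range
    fixed = np.round(normalised * q_scale) / q_scale  # emulate fixed-point quantisation
    return process(fixed) * env                       # expand back after processing

# Example: an identity "processing" stage, just to exercise the companding path.
print(compand_process(np.sin(np.linspace(0, 10, 100)) * np.linspace(1, 0.01, 100),
                      lambda x: x)[:5])
```
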
  • Patent number: 8706276
    Abstract: System, methods, and media that: receive a first piece of audio content; identify a first plurality of atoms that describe at least a portion of the first piece of audio content using a Matching Pursuit algorithm; form a first group of atoms from at least a portion of the first plurality of atoms, the first group of atoms having first group parameters; form at least one first hash value for the first group of atoms based on the first group parameters; compare the at least one first hash value with at least one second hash value, wherein the at least one second hash value is based on second group parameters of a second group of atoms associated with a second piece of audio content; and identify a match between the first piece of audio content and the second piece of audio content based on the comparing.
    Type: Grant
    Filed: October 12, 2010
    Date of Patent: April 22, 2014
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Daniel P. W. Ellis, Courtenay V. Cotton
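
    The abstract above groups Matching Pursuit atoms and hashes each group's parameters so two pieces of audio content can be compared by their shared hashes. The sketch below assumes the atoms have already been extracted as (time, frequency) pairs, groups nearby atoms in pairs, and packs quantised parameters into an integer hash; the pairing rule and the quantisation steps are illustrative assumptions.

```python
def atom_pair_hashes(atoms, max_dt=2.0):
    """Hash pairs of matching-pursuit atoms; each atom is (time_sec, freq_hz).

    The hash packs the quantised frequencies of both atoms and their time
    offset. Grouping by pairs and these quantisation steps are assumptions.
    """
    atoms = sorted(atoms)
    hashes = set()
    for i, (t1, f1) in enumerate(atoms):
        for t2, f2 in atoms[i + 1:]:
            dt = t2 - t1
            if dt > max_dt:
                break
            h = (int(f1 // 50) << 20) | (int(f2 // 50) << 8) | int(dt * 100)
            hashes.add(h)
    return hashes

def match_score(query_atoms, reference_atoms):
    """Count shared hashes between two pieces of audio content."""
    return len(atom_pair_hashes(query_atoms) & atom_pair_hashes(reference_atoms))
```
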
  • Publication number: 20130159756
    Abstract: Methods and systems for analyzing gross resource consumption data to determine resource consumption of individual devices are disclosed. In some embodiments, the methods and systems include the following: obtaining a first data that includes a time series of a gross resource consumption for a location; using blind signal separation techniques, identifying power-on and power-off events within the first data, the events being caused by turning particular devices at the location on and off during the time series and each of the events reflecting a power consumption signature for each of the particular devices; associating each of the events with a known device, the known device being substantially similar to one of the particular devices at the location; and determining a portion of the gross resource consumption consumed by each of the particular devices.
    Type: Application
    Filed: March 17, 2011
    Publication date: June 20, 2013
    Inventor: Daniel P.W. Ellis
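
    The application above separates gross resource consumption into per-device consumption by finding power-on and power-off events and matching them to known device signatures. The sketch below substitutes a much simpler step-change detector for the blind signal separation the application describes, purely to illustrate the event-matching idea; the threshold and the signature library are assumptions.

```python
import numpy as np

def step_events(power, threshold=50.0):
    """Find on/off events as large step changes in a gross power time series (watts)."""
    deltas = np.diff(np.asarray(power, dtype=float))
    return [(i + 1, d) for i, d in enumerate(deltas) if abs(d) >= threshold]

def attribute_events(events, signatures):
    """Associate each event with the known device whose power signature is closest.

    `signatures` maps device name -> typical power draw in watts (assumed library).
    """
    labelled = []
    for idx, delta in events:
        device = min(signatures, key=lambda d: abs(signatures[d] - abs(delta)))
        labelled.append((idx, device, "on" if delta > 0 else "off"))
    return labelled

# Example: a kettle (about 2000 W) switching on, then off again.
events = step_events([120, 130, 2130, 2140, 140])
print(attribute_events(events, {"kettle": 2000, "fridge": 150}))
```
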
  • Publication number: 20130070928
    Abstract: Methods, systems, and media for mobile audio event recognition are provided. In some embodiments, a method for recognizing audio events is provided, the method comprising: receiving an application that includes a plurality of classification models from a server, wherein each of the plurality of classification models is trained to identify one of a plurality of classes of non-speech audio events; receiving an audio signal; storing at least a portion of the audio signal; extracting a plurality of audio features from the portion of the audio signal based on one or more criteria; comparing each of the plurality of extracted audio features with the plurality of classification models; identifying at least one class of non-speech audio events present in the portion of the audio signal based on the comparison; and providing an alert corresponding to the at least one class of identified non-speech audio events.
    Type: Application
    Filed: September 21, 2012
    Publication date: March 21, 2013
    Inventors: Daniel P. W. Ellis, Courtenay V. Cotton, Tom Friedland, Kris Esterson
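
    The application above runs on-device classification models over features extracted from a stored audio segment and raises an alert for any recognised class of non-speech event. The sketch below stands in toy log-energy and spectral-centroid features, and nearest-centroid "models", for whatever the application actually uses; those choices and the alert threshold are assumptions.

```python
import numpy as np

def frame_features(audio, sr=16000, frame=1024, hop=512):
    """Toy per-frame features: log energy and spectral centroid (stand-ins for
    the on-device features, which the application does not fix here)."""
    audio = np.asarray(audio, dtype=float)
    feats = []
    for start in range(0, len(audio) - frame, hop):
        x = audio[start:start + frame]
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(frame, 1.0 / sr)
        energy = np.log(np.sum(x ** 2) + 1e-10)
        centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-10)
        feats.append([energy, centroid])
    return np.array(feats)

def detect_events(audio, class_centroids, threshold=2.0):
    """Compare the segment's mean feature vector with each class model (here a
    centroid) and return the classes close enough to trigger an alert."""
    f = frame_features(audio).mean(axis=0)
    return [name for name, c in class_centroids.items()
            if np.linalg.norm(f - c) < threshold]
```
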
  • Publication number: 20110116551
    Abstract: Apparatus and methods for processing compression encoded signals are provided. In some embodiments, a signal processing method is provided that includes receiving a subband of a compression encoded signal at a subband processor, generating envelope information regarding the subband of the compression encoded signal to provide changes in the dynamic range of the compression encoded signal for fixed-point digital signal processing, processing the compression encoded signal with a fixed-point companding digital signal processor using the envelope information, and producing a processed compression encoded signal at the output of the subband processor.
    Type: Application
    Filed: September 13, 2010
    Publication date: May 19, 2011
    Applicant: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
    Inventors: Christos VEZYRTZIS, Aaron KLEIN, Yannis TSIVIDIS, Daniel P.W. ELLIS
  • Publication number: 20110087349
    Abstract: System, methods, and media that: receive a first piece of audio content; identify a first plurality of atoms that describe at least a portion of the first piece of audio content using a Matching Pursuit algorithm; form a first group of atoms from at least a portion of the first plurality of atoms, the first group of atoms having first group parameters; form at least one first hash value for the first group of atoms based on the first group parameters; compare the at least one first hash value with at least one second hash value, wherein the at least one second hash value is based on second group parameters of a second group of atoms associated with a second piece of audio content; and identify a match between the first piece of audio content and the second piece of audio content based on the comparing.
    Type: Application
    Filed: October 12, 2010
    Publication date: April 14, 2011
    Applicant: The Trustees of Columbia University in the City of New York
    Inventors: Daniel P.W. Ellis, Courtenay V. Cotton
  • Patent number: 7812241
    Abstract: Methods and systems for identifying similar songs are provided. In accordance with some embodiments, methods for identifying similar songs are provided, the methods comprising: identifying beats in at least a portion of a song; generating beat-level descriptors of the at least a portion of the song corresponding to the beats; and comparing the beat-level descriptors to other beat-level descriptors corresponding to a plurality of songs. In accordance with some embodiments, systems for identifying similar songs are provided, the systems comprising: a digital processing device that: identifies beats in at least a portion of a song; generates beat-level descriptors of the at least a portion of the song corresponding to the beats; and compares the beat-level descriptors to other beat-level descriptors corresponding to a plurality of songs.
    Type: Grant
    Filed: September 27, 2007
    Date of Patent: October 12, 2010
    Assignee: The Trustees of Columbia University in the City of New York
    Inventor: Daniel P. W. Ellis
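
    The patent above compares songs through beat-level descriptors: track beats, summarise the audio over each beat, and compare the resulting sequences. The sketch below assumes the descriptor is beat-synchronous chroma and scores similarity by the best average cosine similarity over beat offsets; both choices are assumptions made for illustration.

```python
import numpy as np

def beat_descriptors(chroma, beat_frames):
    """Average a 12 x T chroma matrix over each inter-beat interval, giving one
    12-dimensional beat-level descriptor per beat (an assumed descriptor choice)."""
    segments = zip(beat_frames[:-1], beat_frames[1:])
    return np.array([chroma[:, a:b].mean(axis=1) for a, b in segments])

def similarity(desc_a, desc_b):
    """Best average cosine similarity over beat offsets between two songs."""
    a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-10)
    b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-10)
    n = min(len(a), len(b))
    scores = [np.mean(np.sum(a[:n - k] * b[k:n], axis=1))
              for k in range(max(1, n // 2))]
    return max(scores)
```
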
  • Patent number: 7672916
    Abstract: Methods, systems, and media are provided for classifying digital music. In some embodiments, methods of classifying a song are provided that include: receiving a selection of at least one seed song; receiving a label selection for at least one unlabeled song; training a support vector machine based on the at least one seed song and the label selection; and classifying a song using the support vector machine. In some embodiments, systems for classifying a song are provided that include: memory for storing at least one seed song, at least one unlabeled song, and a song; and a processor that: receives a selection of the at least one seed song; receives a label selection for the at least one unlabeled song; trains a support vector machine based on the at least one seed song and the label selection; and classifies the song using the support vector machine.
    Type: Grant
    Filed: August 16, 2006
    Date of Patent: March 2, 2010
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Graham E. Poliner, Michael I. Mandel, Daniel P. W. Ellis
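
    The patent above trains a support vector machine from seed songs plus user labels on initially unlabeled songs, then uses it to classify further songs. A compressed sketch using scikit-learn's `SVC` follows; treating the seed songs as positives, the RBF kernel, and the random stand-in features are assumptions, and the relevance-feedback loop of the full method is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(seed_features, labeled_features, labels):
    """Train an SVM from seed songs (assumed positive examples) plus user
    labels on a few initially unlabeled songs, and return the fitted model."""
    X = np.vstack([seed_features, labeled_features])
    y = np.concatenate([np.ones(len(seed_features)), labels])  # 1 = relevant, 0 = not
    return SVC(kernel="rbf").fit(X, y)

# Example with random stand-in features (real features would be audio-derived).
rng = np.random.default_rng(0)
model = train_classifier(rng.normal(size=(5, 20)),             # seed songs
                         rng.normal(size=(8, 20)),             # songs the user labeled
                         np.array([1, 0, 1, 0, 0, 1, 1, 0]))   # the user's labels
print(model.predict(rng.normal(size=(1, 20))))                 # classify a new song
```
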
  • Patent number: 7672838
    Abstract: In accordance with the present invention, computer implemented methods and systems are provided for representing and modeling the temporal structure of audio signals. In response to receiving a signal, a time-to-frequency domain transformation is performed on at least a portion of the received signal to generate a frequency domain representation. The time-to-frequency domain transformation converts the signal from a time domain representation to the frequency domain representation. A frequency domain linear prediction (FDLP) is performed on the frequency domain representation to estimate a temporal envelope of the frequency domain representation. Based on the temporal envelope, one or more speech features are generated.
    Type: Grant
    Filed: December 1, 2004
    Date of Patent: March 2, 2010
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Marios Athineos, Hynek Hermansky, Daniel P. W. Ellis
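
    Frequency domain linear prediction (FDLP), as in the abstract above, fits an all-pole model to a frequency-domain representation of the signal so that the model describes the temporal envelope, by duality with the way time-domain LPC describes the spectral envelope. The sketch below uses a DCT plus autocorrelation-method LPC; the model order, the DCT choice, and the envelope read-out are assumptions rather than the patented procedure.

```python
import numpy as np
from scipy.fft import dct

def lpc(x, order):
    """Autocorrelation-method linear prediction coefficients [1, -a1, ..., -ap]."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))

def fdlp_envelope(signal, order=20, n_points=512):
    """Estimate the temporal envelope of `signal` via frequency domain linear
    prediction: take the DCT of the signal, fit LPC to it, and read the
    envelope off the resulting all-pole model. A minimal sketch of the idea."""
    coeffs = lpc(dct(np.asarray(signal, dtype=float), norm="ortho"), order)
    spectrum = np.fft.rfft(coeffs, 2 * n_points)
    return 1.0 / (np.abs(spectrum[:n_points]) ** 2 + 1e-12)
```
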
  • Patent number: 7636659
    Abstract: In accordance with the present invention, computer implemented methods and systems are provided for representing and modeling the temporal structure of audio signals. In response to receiving a signal, a time-to-frequency domain transformation is performed on at least a portion of the received signal to generate a frequency domain representation. The time-to-frequency domain transformation converts the signal from a time domain representation to the frequency domain representation. A frequency domain linear prediction (FDLP) is performed on the frequency domain representation to estimate a temporal envelope of the frequency domain representation. Based on the temporal envelope, one or more speech features are generated.
    Type: Grant
    Filed: March 25, 2005
    Date of Patent: December 22, 2009
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Marios Athineos, Daniel P. W. Ellis
  • Publication number: 20090271182
    Abstract: In accordance with the present invention, computer implemented methods and systems are provided for representing and modeling the temporal structure of audio signals. In response to receiving a signal, a time-to-frequency domain transformation is performed on at least a portion of the received signal to generate a frequency domain representation. The time-to-frequency domain transformation converts the signal from a time domain representation to the frequency domain representation. A frequency domain linear prediction (FDLP) is performed on the frequency domain representation to estimate a temporal envelope of the frequency domain representation. Based on the temporal envelope, one or more speech features are generated.
    Type: Application
    Filed: February 12, 2009
    Publication date: October 29, 2009
    Applicant: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
    Inventors: Marios Athineos, Daniel P.W. Ellis
  • Publication number: 20080022844
    Abstract: Methods, systems, and media are provided for classifying digital music. In some embodiments, methods of classifying a song are provided that include: receiving a selection of at least one seed song; receiving a label selection for at least one unlabeled song; training a support vector machine based on the at least one seed song and the label selection; and classifying a song using the support vector machine. In some embodiments, systems for classifying a song are provided that include: memory for storing at least one seed song, at least one unlabeled song, and a song; and a processor that: receives a selection of the at least one seed song; receives a label selection for the at least one unlabeled song; trains a support vector machine based on the at least one seed song and the label selection; and classifies the song using the support vector machine.
    Type: Application
    Filed: August 16, 2006
    Publication date: January 31, 2008
    Inventors: Graham E. Poliner, Michael I. Mandel, Daniel P.W. Ellis