Patents by Inventor Paris Smaragdis

Paris Smaragdis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180010443
    Abstract: Described herein are tools, systems, and methods for detecting, classifying, and/or quantifying underground fluid flows based on acoustic signals emanating therefrom, using a plurality of acoustic sensors disposed in the wellbore in conjunction with array signal processing and systematic feature-based classification and estimation methods.
    Type: Application
    Filed: January 11, 2016
    Publication date: January 11, 2018
    Inventors: Yinghui Lu, Paris Smaragdis, Avinash Vinayak Taware, Daniel Viassolo, Clifford Lloyd Macklin
  • Publication number: 20170321540
    Abstract: Disclosed are tools, systems, and methods for detecting one or more underground acoustic sources and localizing them in depth and radial distance from a wellbore, for example, for the purpose of finding underground fluid flows, such as may result from leaks in the well barriers. In various embodiments, acoustic-source detection and localization are accomplished with an array of at least three acoustic sensors disposed in the wellbore, in conjunction with array signal processing.
    Type: Application
    Filed: January 11, 2016
    Publication date: November 9, 2017
    Inventors: Yinghui Lu, Avinash Vinayak Taware, Paris Smaragdis, Nam Nguyen, David Alan Welsh, Clifford Lloyd Macklin, Daniel Viassolo
  • Patent number: 9734844
    Abstract: Embodiments of the present invention relate to detecting irregularities in audio, such as music. An input signal corresponding to an audio stream is received. The input signal is transformed from a time domain into a frequency domain to generate a plurality of frames that each comprises frequency information for a portion of the input signal. An irregular event in a portion of the input signal corresponding to a set of frames in the plurality of frames is identified based on a comparison of frequency information of the set of frames to the frequency information of other sets of frames of the plurality of frames. This allows an indication of the irregular event to be provided, or for the input signal to be automatically synchronized to a multimedia event.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: August 15, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Gautham Mysore, Peter Merrill, Paris Smaragdis
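The frame-comparison idea in the abstract above can be illustrated with a short sketch. This is a hypothetical simplification, not the patented method: it compares each frame's magnitude spectrum to the recording's median spectrum and flags robust-z-score outliers, neither of which the abstract specifies.

```python
import numpy as np

def detect_irregular_frames(signal, frame_len=256, hop=128, threshold=3.0):
    """Flag frames whose spectrum deviates strongly from the recording's
    typical spectra. Parameter names and the median/MAD scheme are
    illustrative assumptions, not taken from the patent."""
    # Short-time Fourier transform: overlapping windowed frames.
    window = np.hanning(frame_len)
    n = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i*hop : i*hop+frame_len] * window
                       for i in range(n)])
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    # Compare each frame's spectrum to the median spectrum of all frames.
    median_spec = np.median(spectra, axis=0)
    dist = np.linalg.norm(spectra - median_spec, axis=1)
    # Flag frames whose distance is an outlier (robust z-score via MAD).
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    return np.where((dist - np.median(dist)) / mad > threshold)[0]

# A steady tone with a short burst of noise in the middle.
rng = np.random.default_rng(0)
t = np.arange(8192) / 8000.0
sig = np.sin(2 * np.pi * 440 * t)
sig[4000:4300] += rng.normal(0, 2.0, 300)  # the "irregular event"
events = detect_irregular_frames(sig)
```

The frames covering the noise burst (around frame index 31) deviate sharply from the tone's spectrum and are flagged.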
  • Publication number: 20170162213
    Abstract: Embodiments of the present invention relate to enhancing sound through reverberation matching. In some implementations, a first sound recording recorded in a first environment is received. The first sound recording is decomposed to a first clean signal and a first reverb kernel. A second reverb kernel corresponding with a second sound recording recorded in a second environment is accessed, for example, based on a user indication to enhance the first sound recording to sound as though recorded in the second environment. An enhanced sound recording is generated based on the first clean signal and the second reverb kernel. The enhanced sound recording is a modification of the first sound recording to sound as though recorded in the second environment.
    Type: Application
    Filed: December 8, 2015
    Publication date: June 8, 2017
    Inventors: Ramin Anushiravani, Paris Smaragdis, Gautham Mysore
  • Publication number: 20170148468
    Abstract: Embodiments of the present invention relate to detecting irregularities in audio, such as music. An input signal corresponding to an audio stream is received. The input signal is transformed from a time domain into a frequency domain to generate a plurality of frames that each comprises frequency information for a portion of the input signal. An irregular event in a portion of the input signal corresponding to a set of frames in the plurality of frames is identified based on a comparison of frequency information of the set of frames to the frequency information of other sets of frames of the plurality of frames. This allows an indication of the irregular event to be provided, or for the input signal to be automatically synchronized to a multimedia event.
    Type: Application
    Filed: November 23, 2015
    Publication date: May 25, 2017
    Inventors: Minje Kim, Gautham Mysore, Peter Merrill, Paris Smaragdis
  • Patent number: 9514722
    Abstract: Techniques are disclosed for automatic detection of dense ornamentation in music. Input data representing a piece of digitally encoded music in a time domain is converted into a spectrogram representing time-frequency coefficients in a frequency domain. The spectrogram includes column vectors of the time-frequency coefficients that correspond to time periods spanning different portions of the piece of music. A one-dimensional onset detection array is calculated based on a subset of the column vectors. Using the spectrogram and the onset detection array, a two-dimensional self-similarity matrix (SSM) is calculated based on pair-wise comparisons of elements in the onset detection array. As a result, an irregular pattern score representing the presence of dense ornamentation in the piece of music can be calculated based on a magnitude difference between a beat pattern in the music and each column of the SSM.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: December 6, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Gautham J. Mysore, Paris Smaragdis, Peter Merrill
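The onset-array-to-SSM pipeline above can be sketched in a few lines. This is a toy stand-in under stated assumptions: spectral flux as the onset detection function and cosine similarity between fixed-length segments of the onset array; the patent does not pin down either choice.

```python
import numpy as np

def onset_array(spec):
    """One-dimensional onset detection function: positive spectral flux
    summed over frequency, one value per spectrogram column."""
    flux = np.maximum(np.diff(spec, axis=1), 0.0).sum(axis=0)
    return np.concatenate([[0.0], flux])

def self_similarity_matrix(onsets, seg=8):
    """Pairwise comparison of fixed-length segments of the onset array,
    a rough stand-in for the SSM the abstract describes."""
    n = len(onsets) - seg + 1
    segs = np.stack([onsets[i:i+seg] for i in range(n)])
    # Cosine similarity between every pair of segments.
    norms = np.linalg.norm(segs, axis=1, keepdims=True) + 1e-12
    unit = segs / norms
    return unit @ unit.T

# Toy spectrogram: an impulse every 4 columns (a perfectly regular beat).
spec = np.zeros((4, 32))
spec[:, ::4] = 1.0
ssm = self_similarity_matrix(onset_array(spec))
```

For this regular beat, segments one beat period apart are identical (similarity 1), while off-period shifts score low; dense ornamentation would blur this structure, which is what the irregular pattern score measures.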
  • Patent number: 9451304
    Abstract: Sound feature priority alignment techniques are described. In one or more implementations, features of sound data are identified from a plurality of recordings. Values are calculated for frames of the sound data from the plurality of recordings. The values are based on similarity of the frames of the sound data from the plurality of recordings to each other, the similarity based on the identified features and a priority that is assigned based on the identified features of respective frames. The sound data from the plurality of recordings is then aligned based at least in part on the calculated values.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: September 20, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Brian John King, Gautham J. Mysore, Paris Smaragdis
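One way to turn the per-frame similarity values described above into an actual alignment is dynamic programming over a frame-to-frame cost matrix. The sketch below uses classic dynamic time warping as an illustrative assumption; the patent does not mandate this particular algorithm.

```python
import numpy as np

def align(cost):
    """Dynamic-programming alignment over a frame-to-frame cost matrix
    (DTW-style). Returns the minimum-cost path of frame pairs."""
    n, m = cost.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(acc[i-1, j] if i > 0 else np.inf,
                       acc[i, j-1] if j > 0 else np.inf,
                       acc[i-1, j-1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + best
    # Backtrack from the end to recover the path.
    path, i, j = [(n-1, m-1)], n-1, m-1
    while (i, j) != (0, 0):
        steps = [(i-1, j-1), (i-1, j), (i, j-1)]
        i, j = min((s for s in steps if s[0] >= 0 and s[1] >= 0),
                   key=lambda s: acc[s])
        path.append((i, j))
    return path[::-1]

# Two feature sequences, the second a delayed copy of the first.
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 4.0])
cost = np.abs(a[:, None] - b[None, :])   # dissimilarity of frame pairs
path = align(cost)
```

The recovered path pairs each frame of the first recording with its matching, time-shifted frame in the second.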
  • Patent number: 9449085
    Abstract: Pattern matching of sound data using hashing is described. In one or more implementations, a query formed from one or more spectrograms of sound data is hashed and used to locate one or more labels in a database of sound signals. Each of the labels is located using a hash of an entry in the database. At least one of the located one or more labels is chosen as corresponding to the query.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: September 20, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Paris Smaragdis, Gautham J. Mysore
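The hash-lookup idea in the abstract above can be sketched with a toy scheme. The binarize-against-the-column-median hash below is an illustrative assumption, not the patented hashing method; labels and data are made up.

```python
import numpy as np

def spectrogram_hash(spec):
    """Hash a small spectrogram by binarizing each bin against its column
    median and hashing the packed bits. Scale-invariant by construction,
    since scaling a column scales its median equally."""
    bits = (spec > np.median(spec, axis=0, keepdims=True)).astype(np.uint8)
    return hash(bits.tobytes())

# Build a tiny "database" mapping hashes of sound snippets to labels.
rng = np.random.default_rng(1)
snippets = {"door_knock": rng.random((8, 4)), "dog_bark": rng.random((8, 4))}
database = {spectrogram_hash(spec): label for label, spec in snippets.items()}

# A query matching one entry up to overall gain hashes to the same bucket.
query = 3.0 * snippets["dog_bark"]
match = database.get(spectrogram_hash(query))
```

A single dictionary lookup replaces a scan over all stored sound signals, which is the point of hashing the query.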
  • Patent number: 9355649
    Abstract: Sound alignment techniques that employ timing information are described. In one or more implementations, features and timing information of sound data generated from a first sound signal are identified and used to identify features of sound data generated from a second sound signal. The identified features may then be utilized to align portions of the sound data from the first and second sound signals to each other.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: May 31, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Brian John King, Gautham J. Mysore, Paris Smaragdis
  • Patent number: 9351093
    Abstract: Multichannel sound source identification and location techniques are described. In one or more implementations, source separation is performed using a collaborative technique for a plurality of sound data that was captured by respective ones of a plurality of sound capture devices of an audio scene. The source separation is performed by recognizing spectral and temporal aspects from the plurality of sound data and sharing the recognized spectral and temporal aspects, one with another, to identify one or more sound sources in the audio scene. A relative position of the identified one or more sound sources to the plurality of sound capture devices is determined based on the source separation.
    Type: Grant
    Filed: December 24, 2013
    Date of Patent: May 24, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Gautham J. Mysore, Paris Smaragdis
  • Patent number: 9215539
    Abstract: Sound data identification techniques are described. In one or more implementations, common sound data and uncommon sound data are identified from a plurality of sound data from a plurality of recordings of an audio source using a collaborative technique. The identification may include recognition of spectral and temporal aspects of the plurality of the sound data from the plurality of the recordings and sharing of the recognized spectral and temporal aspects to identify the common sound data as common to the plurality of recordings and the uncommon sound data as not common to the plurality of recordings.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: December 15, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Paris Smaragdis
  • Patent number: 9201580
    Abstract: Sound alignment user interface techniques are described. In one or more implementations, a user interface is output having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal. One or more inputs are received, via interaction with the user interface, that indicate that a first point in time in the first representation corresponds to a second point in time in the second representation. Aligned sound data is generated from the sound data from the first and second sound signals based at least in part on correspondence of the first point in time in the sound data generated from the first sound signal to the second point in time in the sound data generated from the second sound signal.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: December 1, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Brian John King, Gautham J. Mysore, Paris Smaragdis
  • Publication number: 20150312663
    Abstract: An approach to separating multiple sources exploits the observation that each source is associated with a linear-circular phase characteristic in which the relative phase between pairs of microphones follows a linear (modulo) pattern. In some examples, a modified RANSAC (Random Sample Consensus) approach is used to identify the frequency/phase samples that are attributed to each source. In some examples, either in combination with the modified RANSAC approach or using other approaches, a wrapped variable representation is used to represent a probability density of phase, thereby avoiding a need to “unwrap” phase in applying probabilistic techniques to estimating delay between sources.
    Type: Application
    Filed: September 17, 2013
    Publication date: October 29, 2015
    Applicant: ANALOG DEVICES, INC.
    Inventors: Johannes Traa, Paris Smaragdis
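The linear-circular phase model above (relative phase between microphones growing linearly with frequency, modulo 2π) can be demonstrated with a consensus-style delay search. The grid-search scoring below is a simplified variant of the modified RANSAC idea; tolerances and names are illustrative, not taken from the patent.

```python
import numpy as np

def ransac_delay(freqs, phases, taus, tol=0.5):
    """Score candidate delays by how many (frequency, phase) samples agree
    with the wrapped linear model phase = -2*pi*f*tau (mod 2*pi). Working
    with the wrapped residual avoids phase unwrapping entirely."""
    best_tau, best_inliers = None, -1
    for tau in taus:
        # Wrapped residual between observed and predicted phase.
        resid = np.angle(np.exp(1j * (phases + 2 * np.pi * freqs * tau)))
        inliers = int(np.sum(np.abs(resid) < tol))
        if inliers > best_inliers:
            best_tau, best_inliers = tau, inliers
    return best_tau, best_inliers

# Synthetic data: true inter-microphone delay 3 ms, with outlier samples
# (as would come from a second, interfering source).
freqs = np.linspace(50, 500, 40)
true_tau = 0.003
phases = np.angle(np.exp(-2j * np.pi * freqs * true_tau))
phases[::10] += np.pi / 2            # outliers from another source
candidates = np.linspace(0.0, 0.01, 101)
tau_hat, inliers = ransac_delay(freqs, phases, candidates)
```

The consensus count peaks near the true delay even with outliers present, which is how samples get attributed to one source rather than another.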
  • Patent number: 9165565
    Abstract: A sound mixture may be received that includes a plurality of sources. A model may be received that includes a dictionary of spectral basis vectors for the plurality of sources. A weight may be estimated for each of the plurality of sources in the sound mixture based on the model. In some examples, such weight estimation may be performed using a source separation technique without actually separating the sources.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: October 20, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Gautham J. Mysore, Paris Smaragdis, Juhan Nam
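The weight-estimation idea above can be sketched as fitting the mixture spectrum with a fixed dictionary of per-source basis vectors and summing each source's activations, without ever separating the sources. The multiplicative-update fit below is a simplified assumption (NMF-style with fixed bases); the patent's exact procedure may differ.

```python
import numpy as np

def estimate_weights(mixture, dictionaries, iters=200):
    """Estimate per-source weights by fitting the mixture spectrum as a
    nonnegative combination of each source's basis vectors, then summing
    the activation mass in each source's block of the dictionary."""
    W = np.concatenate(dictionaries, axis=1)      # bins x total_bases
    h = np.ones(W.shape[1])
    for _ in range(iters):
        # Multiplicative update for nonnegative least squares.
        h *= (W.T @ mixture) / (W.T @ (W @ h) + 1e-12)
    sizes = [d.shape[1] for d in dictionaries]
    splits = np.cumsum(sizes)[:-1]
    return np.array([part.sum() for part in np.split(h, splits)])

# Two toy sources with disjoint spectral bases.
d1 = np.array([[1.0], [0.0], [0.0]])   # source 1 lives in bin 0
d2 = np.array([[0.0], [1.0], [1.0]])   # source 2 lives in bins 1-2
mixture = 2.0 * d1[:, 0] + 0.5 * d2[:, 0]
weights = estimate_weights(mixture, [d1, d2])
```

Here the recovered weights match the gains the mixture was built with, even though no separated source signals are ever produced.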
  • Publication number: 20150181359
    Abstract: Multichannel sound source identification and location techniques are described. In one or more implementations, source separation is performed using a collaborative technique for a plurality of sound data that was captured by respective ones of a plurality of sound capture devices of an audio scene. The source separation is performed by recognizing spectral and temporal aspects from the plurality of sound data and sharing the recognized spectral and temporal aspects, one with another, to identify one or more sound sources in the audio scene. A relative position of the identified one or more sound sources to the plurality of sound capture devices is determined based on the source separation.
    Type: Application
    Filed: December 24, 2013
    Publication date: June 25, 2015
    Inventors: Minje Kim, Gautham J. Mysore, Paris Smaragdis
  • Patent number: 9047867
    Abstract: Methods and systems for recognition of concurrent, superimposed, or otherwise overlapping signals are described. A Markov Selection Model is introduced that, together with probabilistic decomposition methods, enable recognition of simultaneously emitted signals from various sources. For example, a signal mixture may include overlapping speech from different persons. In some instances, recognition may be performed without the need to separate signals or sources. As such, some of the techniques described herein may be useful in automatic transcription, noise reduction, teaching, electronic games, audio search and retrieval, medical and scientific applications, etc.
    Type: Grant
    Filed: February 21, 2011
    Date of Patent: June 2, 2015
    Assignee: Adobe Systems Incorporated
    Inventor: Paris Smaragdis
  • Publication number: 20150142433
    Abstract: Pattern identification using convolution is described. In one or more implementations, a representation of a pattern is obtained that is described using data points that include frequency coordinates, time coordinates, and energy values. An identification is made as to whether sound data described using irregularly positioned data points includes the pattern, the identifying including use of a convolution of the frequency or time coordinates to determine correspondence with the representation of the pattern.
    Type: Application
    Filed: November 20, 2013
    Publication date: May 21, 2015
    Applicant: ADOBE SYSTEMS INCORPORATED
    Inventors: Minje Kim, Paris Smaragdis, Gautham J. Mysore
  • Publication number: 20150134691
    Abstract: Pattern matching of sound data using hashing is described. In one or more implementations, a query formed from one or more spectrograms of sound data is hashed and used to locate one or more labels in a database of sound signals. Each of the labels is located using a hash of an entry in the database. At least one of the located one or more labels is chosen as corresponding to the query.
    Type: Application
    Filed: November 14, 2013
    Publication date: May 14, 2015
    Applicant: ADOBE SYSTEMS INCORPORATED
    Inventors: Minje Kim, Paris Smaragdis, Gautham J. Mysore
  • Patent number: 8965832
    Abstract: A sound mixture may be received that includes a plurality of sources. A model may be received for one of the sources that includes a dictionary of spectral basis vectors corresponding to that one source. At least one feature of the one source in the sound mixture may be estimated based on the model. In some examples, the estimation may be constrained according to temporal data.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: February 24, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Paris Smaragdis, Gautham J. Mysore
  • Patent number: 8954175
    Abstract: A system and method are described for selecting a target sound object from a sound mixture. In embodiments, a sound mixture comprises a plurality of sound objects superimposed in time. A user can select one of these sound objects by providing reference audio data corresponding to a reference sound object. The system analyzes the audio data and the reference audio data to identify a portion of the audio data corresponding to a target sound object in the mixture that is most similar to the reference sound object. The analysis may include decomposing the reference audio data into a plurality of reference components and the sound mixture into a plurality of components guided by the reference components. The target sound object can be re-synthesized from the target components.
    Type: Grant
    Filed: August 26, 2009
    Date of Patent: February 10, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Paris Smaragdis, Gautham J. Mysore
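The reference-guided decomposition described in the last entry can be sketched as a factorization in which the reference components stay fixed while extra free components absorb the rest of the mixture. This is a simplified stand-in for the method the abstract outlines; basis counts, the update rule, and the toy data are all assumptions.

```python
import numpy as np

def extract_guided(mix_spec, ref_bases, n_free=2, iters=200, seed=0):
    """Model the mixture spectrogram with the reference source's (fixed)
    spectral bases plus a few free bases for everything else, then return
    the part of the mixture explained by the reference bases."""
    rng = np.random.default_rng(seed)
    bins, frames = mix_spec.shape
    W_free = rng.random((bins, n_free)) + 0.1
    W = np.concatenate([ref_bases, W_free], axis=1)
    H = rng.random((W.shape[1], frames)) + 0.1
    k = ref_bases.shape[1]
    for _ in range(iters):
        H *= (W.T @ mix_spec) / (W.T @ (W @ H) + 1e-12)
        # Only the free bases are updated; the reference bases stay fixed.
        W[:, k:] *= (mix_spec @ H[k:].T) / ((W @ H) @ H[k:].T + 1e-12)
    return W[:, :k] @ H[:k]           # spectrogram of the target object

# Toy mixture: a "target" living in bin 0 plus interference in bin 2.
target = np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 1.0, 0.0])
noise = np.outer([0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 1.0])
ref_bases = np.array([[1.0], [0.0], [0.0]])   # reference says: bin 0
est = extract_guided(target + noise, ref_bases)
```

The returned spectrogram keeps only energy in the reference's spectral support, which is the "target components" the abstract says can then be re-synthesized into audio.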