Patents by Inventor Miquel Espi Marques

Miquel Espi Marques has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941968
    Abstract: An electronic device includes a processor, and a memory containing instructions that, when executed by the processor, cause the electronic device to learn a sound emitted by a legacy device and to issue an output when the electronic device subsequently hears the sound. For example, the electronic device can receive a training input and extract a compact representation of a sound in the training input, which the device stores. The device can receive an audio signal corresponding to an observed acoustic scene and extract a representation of the observed acoustic scene from the audio signal. The electronic device can determine whether the sound is present in the observed acoustic scene at least in part from a comparison of the representation of the observed acoustic scene with the representation of the sound. The electronic device emits a selected output responsive to determining that the sound is present in the acoustic scene. (See the first illustrative sketch after this listing.)
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Hyung-Suk Kim, Daniel C. Klingler, Miquel Espi Marques, Carlos M. Avendano
  • Publication number: 20230360641
    Abstract: The subject disclosure provides systems and methods for generating and storing learned embeddings of audio inputs to an electronic device. The electronic device may generate and store encoded versions of audio inputs along with learned embeddings of those inputs. When a new audio input is obtained, the electronic device can generate an encoded version of it and compare that encoded version to the stored encoded versions of prior audio inputs; if it matches one of them, the electronic device can provide the corresponding stored learned embedding to a detection model at the electronic device. The cached embeddings can be provided to locally trained models for detecting individual sounds using electronic devices. (See the second sketch after this listing.)
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Inventors: Daniel C. Klingler, Carlos M. Avendano, Jonathan Huang, Miquel Espi Marques
  • Patent number: 11721355
    Abstract: A first device obtains several audio signals from a microphone array and processes them to produce a speech signal and one or more ambient signals. The first device processes the ambient signals to produce a sound-object sonic descriptor that contains metadata describing a sound object within an acoustic environment. The first device transmits, over a communication data link, the speech signal and the descriptor to a second electronic device that is configured to use the descriptor to spatially reproduce the sound object, mixed with the speech signal, producing several mixed signals that drive several speakers. (See the third sketch after this listing.)
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Christopher T. Eubank, Lance Jabr, Matthew S. Connolly, Robert D. Silfvast, Sean A. Ramprashad, Carlos Avendano, Miquel Espi Marques
  • Publication number: 20230186904
    Abstract: An electronic device has one or more microphones that pick up a sound. At least one feature extractor processes the audio signals from the microphones, which contain the picked-up sound, to determine several features of the sound. The electronic device also includes a classifier with a machine learning model that is configured to determine a sound classification for the sound, such as artificial versus natural, based upon at least one of the determined features. Other aspects are also described and claimed. (See the fourth sketch after this listing.)
    Type: Application
    Filed: November 22, 2022
    Publication date: June 15, 2023
    Inventors: Daniel C. Klingler, Carlos M. Avendano, Hyung-Suk Kim, Miquel Espi Marques
  • Publication number: 20230177942
    Abstract: An electronic device includes a processor, and a memory containing instructions that, when executed by the processor, cause the electronic device to learn a sound emitted by a legacy device and to issue an output when the electronic device subsequently hears the sound. For example, the electronic device can receive a training input and extract a compact representation of a sound in the training input, which the device stores. The device can receive an audio signal corresponding to an observed acoustic scene and extract a representation of the observed acoustic scene from the audio signal. The electronic device can determine whether the sound is present in the observed acoustic scene at least in part from a comparison of the representation of the observed acoustic scene with the representation of the sound. The electronic device emits a selected output responsive to determining that the sound is present in the acoustic scene.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Hyung-Suk Kim, Daniel C. Klingler, Miquel Espi Marques, Carlos M. Avendano
  • Patent number: 11568731
    Abstract: An electronic device includes a processor, and a memory containing instructions that, when executed by the processor, cause the electronic device to learn a sound emitted by a legacy device and to issue an output when the electronic device subsequently hears the sound. For example, the electronic device can receive a training input and extract a compact representation of a sound in the training input, which the device stores. The device can receive an audio signal corresponding to an observed acoustic scene and extract a representation of the observed acoustic scene from the audio signal. The electronic device can determine whether the sound is present in the observed acoustic scene at least in part from a comparison of the representation of the observed acoustic scene with the representation of the sound. The electronic device emits a selected output responsive to determining that the sound is present in the acoustic scene.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventors: Hyung-Suk Kim, Daniel C. Klingler, Miquel Espi Marques, Carlos M. Avendano
  • Publication number: 20220391758
    Abstract: The subject disclosure provides systems and methods for providing locally trained models for detecting individual sounds using electronic devices. Local detection of individual sounds with a detection model at an electronic device can be provided by obtaining training samples for the detection model with the electronic device and generating additional negative and positive training samples based on the obtained samples. A two-stage detection process may be provided, in which a trigger model at a device compares an audio input to a reference sound to trigger a detection model at the device. The detection of individual sounds with a detection model at an electronic device can also leverage the audio capture capabilities of multiple devices in an acoustic scene to capture multiple concurrent training samples. (See the fifth sketch after this listing.)
    Type: Application
    Filed: May 4, 2022
    Publication date: December 8, 2022
    Inventors: Jonathan Huang, Miquel Espi Marques, Carlos M. Avendano, Kevin M. Durand, David Findlay, Vasudha Kowtha, Daniel C. Klingler, Yichi Zhang
  • Patent number: 11521598
    Abstract: An electronic device has one or more microphones that pick up a sound. At least one feature extractor processes the audio signals from the microphones, which contain the picked-up sound, to determine several features of the sound. The electronic device also includes a classifier with a machine learning model that is configured to determine a sound classification for the sound, such as artificial versus natural, based upon at least one of the determined features. Other aspects are also described and claimed.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Daniel C. Klingler, Carlos M. Avendano, Hyung-Suk Kim, Miquel Espi Marques
  • Publication number: 20220180889
    Abstract: A first device obtains several audio signals from a microphone array and processes them to produce a speech signal and one or more ambient signals. The first device processes the ambient signals to produce a sound-object sonic descriptor that contains metadata describing a sound object within an acoustic environment. The first device transmits, over a communication data link, the speech signal and the descriptor to a second electronic device that is configured to use the descriptor to spatially reproduce the sound object, mixed with the speech signal, producing several mixed signals that drive several speakers.
    Type: Application
    Filed: February 22, 2022
    Publication date: June 9, 2022
    Inventors: Christopher T. Eubank, Lance Jabr, Matthew S. Connolly, Robert D. Silfvast, Sean A. Ramprashad, Carlos Avendano, Miquel Espi Marques
  • Patent number: 11295754
    Abstract: A first device obtains several audio signals from a microphone array and processes them to produce a speech signal and one or more ambient signals. The first device processes the ambient signals to produce a sound-object sonic descriptor that contains metadata describing a sound object within an acoustic environment. The first device transmits, over a communication data link, the speech signal and the descriptor to a second electronic device that is configured to use the descriptor to spatially reproduce the sound object, mixed with the speech signal, producing several mixed signals that drive several speakers.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: April 5, 2022
    Assignee: Apple Inc.
    Inventors: Christopher T. Eubank, Lance Jabr, Matthew S. Connolly, Robert D. Silfvast, Sean A. Ramprashad, Carlos Avendano, Miquel Espi Marques
  • Publication number: 20210035597
    Abstract: A first device obtains several audio signals from a microphone array and processes them to produce a speech signal and one or more ambient signals. The first device processes the ambient signals to produce a sound-object sonic descriptor that contains metadata describing a sound object within an acoustic environment. The first device transmits, over a communication data link, the speech signal and the descriptor to a second electronic device that is configured to use the descriptor to spatially reproduce the sound object, mixed with the speech signal, producing several mixed signals that drive several speakers.
    Type: Application
    Filed: July 28, 2020
    Publication date: February 4, 2021
    Inventors: Christopher T. Eubank, Lance Jabr, Matthew S. Connolly, Robert D. Silfvast, Sean A. Ramprashad, Carlos Avendano, Miquel Espi Marques
  • Publication number: 20210020018
    Abstract: An electronic device includes a processor, and a memory containing instructions that, when executed by the processor, cause the electronic device to learn a sound emitted by a legacy device and to issue an output when the electronic device subsequently hears the sound. For example, the electronic device can receive a training input and extract a compact representation of a sound in the training input, which the device stores. The device can receive an audio signal corresponding to an observed acoustic scene and extract a representation of the observed acoustic scene from the audio signal. The electronic device can determine whether the sound is present in the observed acoustic scene at least in part from a comparison of the representation of the observed acoustic scene with the representation of the sound. The electronic device emits a selected output responsive to determining that the sound is present in the acoustic scene.
    Type: Application
    Filed: May 11, 2020
    Publication date: January 21, 2021
    Inventors: Hyung-Suk Kim, Daniel C. Klingler, Miquel Espi Marques, Carlos M. Avendano
  • Publication number: 20200090644
    Abstract: An electronic device has one or more microphones that pick up a sound. At least one feature extractor processes the audio signals from the microphones, which contain the picked-up sound, to determine several features of the sound. The electronic device also includes a classifier with a machine learning model that is configured to determine a sound classification for the sound, such as artificial versus natural, based upon at least one of the determined features. Other aspects are also described and claimed.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 19, 2020
    Inventors: Daniel C. Klingler, Carlos M. Avendano, Hyung-Suk Kim, Miquel Espi Marques
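
The sketches below are illustrative only: they are not taken from the patent filings, and every function name, feature choice, and threshold in them is an assumption. This first sketch outlines the representation-matching flow described in patent numbers 11941968 and 11568731 (publication numbers 20230177942 and 20210020018): extract a compact representation of a training sound, store it, and later compare it against a representation of the observed acoustic scene. A log band-energy vector and cosine similarity stand in for whatever learned representation and comparison the patents actually claim.

```python
# Illustrative only: a toy version of the "learn a sound, then listen for it" flow.
# The compact representation and the similarity threshold are assumptions, not the
# patented implementation.

import numpy as np


def compact_representation(audio: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Reduce a mono waveform to a small, L2-normalized band-energy vector."""
    power = np.abs(np.fft.rfft(audio)) ** 2
    bands = np.array_split(power, n_bands)                      # coarse frequency bands
    features = np.log1p(np.array([b.mean() for b in bands]))    # compress dynamic range
    return features / (np.linalg.norm(features) + 1e-12)


def sound_present(scene: np.ndarray, learned: np.ndarray, threshold: float = 0.7) -> bool:
    """Compare the observed scene's representation to the stored one via cosine similarity."""
    return float(np.dot(compact_representation(scene), learned)) >= threshold


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    chime = np.sin(2 * np.pi * 880 * t)               # training input: the legacy device's sound
    learned = compact_representation(chime)           # stored compact representation

    scene_with_chime = chime + 0.01 * np.random.randn(sr)   # observed scene containing the sound
    scene_without = 0.01 * np.random.randn(sr)               # observed scene without it

    # The device would emit its selected output (e.g., a notification) on a positive match.
    print("scene with chime ->", sound_present(scene_with_chime, learned))
    print("scene without it ->", sound_present(scene_without, learned))
```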
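
A minimal sketch of the embedding-cache idea in publication number 20230360641, assuming a hash of the quantized samples as the "encoded version" of an audio input and a simple spectral vector as the "learned embedding"; `encode_audio` and `embedding_model` are hypothetical stand-ins, not the disclosed encoder or network.

```python
import hashlib

import numpy as np


def encode_audio(audio: np.ndarray) -> str:
    """Cheap, deterministic encoding of an audio input: a SHA-256 hash of the quantized samples."""
    quantized = np.round(audio * 32767).astype(np.int16)
    return hashlib.sha256(quantized.tobytes()).hexdigest()


def embedding_model(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a learned embedding network; returns a small spectral feature vector."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([band.mean() for band in np.array_split(spectrum, 16)])


class EmbeddingCache:
    """Stores encoded audio inputs alongside the learned embeddings computed for them."""

    def __init__(self):
        self._store = {}  # encoded audio input -> stored learned embedding

    def get_embedding(self, audio: np.ndarray) -> np.ndarray:
        key = encode_audio(audio)
        if key in self._store:               # new input matches a stored encoded input:
            return self._store[key]          # reuse the cached embedding instead of recomputing
        embedding = embedding_model(audio)   # otherwise run the (expensive) embedding model
        self._store[key] = embedding
        return embedding


if __name__ == "__main__":
    cache = EmbeddingCache()
    clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    first = cache.get_embedding(clip)    # computed and cached
    second = cache.get_embedding(clip)   # served from the cache, then fed to the detection model
    print("cache hit returned the stored embedding:", np.array_equal(first, second))
```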
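
A heavily simplified sketch of the sound-object workflow in patent numbers 11721355 and 11295754 (publication numbers 20220180889 and 20210035597). The sound-object sonic descriptor is modeled as a small JSON payload, and spatial reproduction is reduced to a constant-power stereo pan over two speakers; field names such as `azimuth_deg` are hypothetical.

```python
import json

import numpy as np


def make_descriptor(label: str, azimuth_deg: float, level_db: float) -> str:
    """Sound-object sonic descriptor: metadata about an ambient sound object, serialized as JSON."""
    return json.dumps({"label": label, "azimuth_deg": azimuth_deg, "level_db": level_db})


def render_sound_object(descriptor: str, object_audio: np.ndarray) -> np.ndarray:
    """Receiver side: pan the sound object left/right according to its descriptor (two speakers)."""
    meta = json.loads(descriptor)
    gain = 10.0 ** (meta["level_db"] / 20.0)
    pan = (meta["azimuth_deg"] + 90.0) / 180.0             # 0 = full left, 1 = full right
    left = object_audio * gain * np.cos(pan * np.pi / 2)   # constant-power pan law
    right = object_audio * gain * np.sin(pan * np.pi / 2)
    return np.stack([left, right])                         # shape (2, n_samples)


def mix_with_speech(speech: np.ndarray, rendered_object: np.ndarray) -> np.ndarray:
    """Produce the mixed speaker-driving signals: centered speech plus the spatialized object."""
    return rendered_object + 0.5 * np.stack([speech, speech])


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    speech = np.sin(2 * np.pi * 200 * t)           # placeholder for the extracted speech signal
    dog_bark = 0.3 * np.sin(2 * np.pi * 600 * t)   # placeholder ambient sound object

    descriptor = make_descriptor("dog_bark", azimuth_deg=45.0, level_db=-6.0)  # sent with the speech
    mixed = mix_with_speech(speech, render_sound_object(descriptor, dog_bark))
    print("mixed speaker signals shape:", mixed.shape)   # (2, 16000): two speaker feeds
```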
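
A minimal sketch of the artificial-versus-natural classification idea in patent number 11521598 (publication numbers 20230186904 and 20200090644). The two spectral features and the linear decision rule below are placeholders for the feature extractors and machine learning model the patent actually claims.

```python
import numpy as np


def extract_features(audio: np.ndarray) -> np.ndarray:
    """Two illustrative features: spectral flatness and spectral peakiness."""
    spectrum = np.abs(np.fft.rfft(audio)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)   # ~1 for noise, ~0 for tones
    peakiness = spectrum.max() / spectrum.sum()
    return np.array([flatness, peakiness])


def classify(features: np.ndarray) -> str:
    """Stand-in for a trained machine-learning classifier (here a simple linear rule)."""
    flatness, peakiness = features
    score = 5.0 * peakiness - flatness            # tonal, spectrally sparse sounds score high
    return "artificial" if score > 0.0 else "natural"


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    alarm = np.sin(2 * np.pi * 1000 * t)          # steady tone -> expected "artificial"
    wind = np.random.randn(sr)                    # broadband noise -> expected "natural"
    print("alarm:", classify(extract_features(alarm)))
    print("wind: ", classify(extract_features(wind)))
```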
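
A minimal sketch of two ideas from publication number 20220391758: generating additional positive and negative training samples from a few captured examples, and a cheap trigger model that gates a heavier, locally trained detection model. All thresholds and both model stand-ins are assumptions.

```python
import numpy as np


def augment(samples, noise_level=0.02):
    """Generate extra positives (noisy / time-shifted copies) and negatives (background only)."""
    positives, negatives = [], []
    for s in samples:
        positives.append(s + noise_level * np.random.randn(s.size))   # noisy positive
        positives.append(np.roll(s, s.size // 10))                    # time-shifted positive
        negatives.append(noise_level * np.random.randn(s.size))       # background-only negative
    return positives, negatives


def trigger_model(audio, reference, threshold=0.5):
    """Stage 1: cheap normalized-correlation check against the reference sound."""
    a = audio / (np.linalg.norm(audio) + 1e-12)
    r = reference / (np.linalg.norm(reference) + 1e-12)
    return abs(float(np.dot(a, r))) > threshold


def detection_model(audio):
    """Stage 2: placeholder for the locally trained detector (only run when triggered)."""
    return float(np.std(audio)) > 0.1


def detect(audio, reference):
    # The trigger model gates the heavier detection model.
    return trigger_model(audio, reference) and detection_model(audio)


if __name__ == "__main__":
    sr = 16000
    doorbell = np.sin(2 * np.pi * 700 * np.arange(sr) / sr)   # one user-captured training sample
    positives, negatives = augment([doorbell])
    print(len(positives), "positive and", len(negatives), "negative samples generated")
    print("doorbell scene detected:", detect(doorbell + 0.02 * np.random.randn(sr), doorbell))
    print("background detected:    ", detect(0.02 * np.random.randn(sr), doorbell))
```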