Patents by Inventor Raymond W.M. NG

Raymond W.M. NG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
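The abstracts below describe two distinct techniques: audio-based sound-source localization for controlling a robotic system, and teacher-student speaker recognition. Illustrative code sketches of both appear after the listing.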

  • Patent number: 11378977
    Abstract: A robotic system is controlled. Audiovisual data representing an environment in which at least part of the robotic system is located is received via at least one camera and at least one microphone. The audiovisual data comprises a visual data component representing a visible part of the environment and an audio data component representing an audible part of the environment. A location of a sound source that emits sound that is represented in the audio data component of the audiovisual data is identified based on the audio data component of the audiovisual data. The sound source is outside the visible part of the environment and is not represented in the visual data component of the audiovisual data. Operation of a controllable element located in the environment is controlled based on the identified location of the sound source.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: July 5, 2022
    Assignee: Emotech Ltd
    Inventors: Ondrej Miksik, Pawel Swietojanski, Srikanth Reddy Bethi, Raymond W. M. Ng
  • Patent number: 11170788
    Abstract: A speaker recognition system comprises (i) at least one microphone operable to output data representing speech of a speaker and (ii) a controller. The controller is operable to: (a) receive the data output from the at least one microphone; (b) process the received data using a first artificial neural network to obtain first output data, the first artificial neural network having been trained based on outputs of a second artificial neural network, the second artificial neural network having been trained to perform speaker recognition; and (c) identify the speaker using the first output data. The first artificial neural network comprises fewer layers and/or fewer parameters than the second artificial neural network. The first artificial neural network is configured to emulate a result derivable using an output of the second artificial neural network.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: November 9, 2021
    Assignee: Emotech Ltd.
    Inventors: Raymond W. M. Ng, Xuechen Liu, Pawel Swietojanski
  • Publication number: 20200019184
    Abstract: A robotic system is controlled. Audiovisual data representing an environment in which at least part of the robotic system is located is received via at least one camera and at least one microphone. The audiovisual data comprises a visual data component representing a visible part of the environment and an audio data component representing an audible part of the environment. A location of a sound source that emits sound that is represented in the audio data component of the audiovisual data is identified based on the audio data component of the audiovisual data. The sound source is outside the visible part of the environment and is not represented in the visual data component of the audiovisual data. Operation of a controllable element located in the environment is controlled based on the identified location of the sound source.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Ondrej MIKSIK, Pawel SWIETOJANSKI, Srikanth Reddy BETHI, Raymond W.M. NG
  • Publication number: 20190355366
    Abstract: A speaker recognition system comprises (i) at least one microphone operable to output data representing speech of a speaker and (ii) a controller. The controller is operable to: (a) receive the data output from the at least one microphone; (b) process the received data using a first artificial neural network to obtain first output data, the first artificial neural network having been trained based on outputs of a second artificial neural network, the second artificial neural network having been trained to perform speaker recognition; and (c) identify the speaker using the first output data. The first artificial neural network comprises fewer layers and/or fewer parameters than the second artificial neural network. The first artificial neural network is configured to emulate a result derivable using an output of the second artificial neural network.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 21, 2019
    Inventors: Raymond W.M. NG, Xuechen LIU, Pawel SWIETOJANSKI
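
Granted patent 11378977 and its pre-grant publication 20200019184 describe locating a sound source from the audio component of audiovisual data when the source lies outside the camera's visible field, and then steering a controllable element toward it. The sketch below is a minimal illustration of that idea, not the patented method: it assumes a two-microphone array, a GCC-PHAT time-difference-of-arrival estimate, and made-up values for microphone spacing, speed of sound, and camera field of view.

```python
"""Minimal sketch of audio-only sound-source localization driving a
controllable element, in the spirit of patent 11378977.  All names,
constants, and the GCC-PHAT approach are illustrative assumptions,
not the patented method."""

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature (assumption)
MIC_SPACING = 0.10       # metres between the two microphones (assumption)
CAMERA_FOV_DEG = 60.0    # horizontal camera field of view (assumption)


def gcc_phat_delay(sig, ref, sample_rate):
    """Estimate the time difference of arrival of `sig` relative to `ref`
    using the GCC-PHAT weighted cross-correlation."""
    n = len(sig) + len(ref)
    spec = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    spec /= np.abs(spec) + 1e-12                     # PHAT weighting
    cc = np.fft.irfft(spec, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / float(sample_rate)


def azimuth_from_delay(delay_s):
    """Convert an inter-microphone delay into a bearing in degrees."""
    sin_theta = np.clip(delay_s * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))


def control_command(azimuth_deg):
    """If the source falls outside the visible part of the environment
    (the camera field of view), steer the controllable element toward it."""
    if abs(azimuth_deg) <= CAMERA_FOV_DEG / 2:
        return {"action": "hold"}                    # source already visible
    return {"action": "rotate", "degrees": azimuth_deg}


if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    source = rng.standard_normal(fs)                 # synthetic broadband source
    mic_a = source
    mic_b = np.roll(source, 3)                       # second mic hears it 3 samples later
    delay = gcc_phat_delay(mic_b, mic_a, fs)
    print(control_command(azimuth_from_delay(delay)))
```

Running the demo prints a "rotate" command: the synthetic source arrives from roughly 40 degrees off-axis, outside the assumed 60-degree field of view, so the controllable element is turned toward the estimated location.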
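
Granted patent 11170788 and its pre-grant publication 20190355366 describe a speaker recognition system in which a small first neural network is trained on the outputs of a larger second network that was itself trained for speaker recognition. The PyTorch sketch below is an illustrative teacher-student (knowledge distillation) set-up under assumed details: the network sizes, the MSE loss on embeddings, and the cosine-similarity identification step are choices made for the example, not taken from the patent.

```python
"""Minimal sketch of a teacher-student (knowledge distillation) set-up for
speaker recognition, in the spirit of patent 11170788.  Network sizes, the
MSE distillation loss, and the cosine scoring are illustrative assumptions,
not the patented system."""

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 40    # e.g. 40-dim filterbank features per frame (assumption)
EMB_DIM = 128    # speaker-embedding dimension (assumption)


class Teacher(nn.Module):
    """Larger network already trained to perform speaker recognition."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, EMB_DIM),
        )

    def forward(self, x):                      # x: (batch, frames, FEAT_DIM)
        return self.net(x).mean(dim=1)         # average over frames -> embedding


class Student(nn.Module):
    """Smaller network (fewer layers/parameters) trained to emulate the teacher."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMB_DIM),
        )

    def forward(self, x):
        return self.net(x).mean(dim=1)


def distillation_step(student, teacher, batch, optimiser):
    """One training step: fit the student to the teacher's outputs."""
    with torch.no_grad():
        target = teacher(batch)                # teacher embedding, kept fixed
    loss = F.mse_loss(student(batch), target)  # match the teacher's output
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()


def identify(student, utterance, enrolled):
    """Identify the speaker by cosine similarity between the student's
    embedding of the utterance and each enrolled speaker embedding."""
    emb = student(utterance.unsqueeze(0))
    scores = {name: F.cosine_similarity(emb, ref.unsqueeze(0)).item()
              for name, ref in enrolled.items()}
    return max(scores, key=scores.get), scores


if __name__ == "__main__":
    teacher, student = Teacher(), Student()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    batch = torch.randn(8, 200, FEAT_DIM)      # 8 utterances of 200 frames
    print("distillation loss:", distillation_step(student, teacher, batch, opt))

    enrolled = {"alice": torch.randn(EMB_DIM), "bob": torch.randn(EMB_DIM)}
    speaker, _ = identify(student, torch.randn(200, FEAT_DIM), enrolled)
    print("identified:", speaker)
```

The MSE-on-embeddings loss used here is just one common way to make a smaller network emulate a result derivable from the output of a larger, already-trained one; the practical appeal of such a set-up is that the distilled student is cheap enough to run on-device while retaining much of the teacher's recognition accuracy.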