Patents Assigned to Emotech Ltd.
  • Patent number: 11806862
    Abstract: A robot obtains image data representative of an environment comprising a first region and a second region. A microphone receives sound from the environment. The robot determines, using the image data and audio data derived from the received sound, whether the sound is received from the first region, and outputs a control signal for controlling the robot based on the audio data. Sounds received from the first region are processed as voice commands on the basis that one of the first region and the second region comprises a predetermined type of inanimate object. Sounds received from the second region are processed in a different manner. (See the sketch at the end of this entry.)
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: November 7, 2023
    Assignee: Emotech Ltd.
    Inventors: Pawel Swietojanski, Tomasz Franciszek Wierzchowski, Pedro Antonio Martinez Mediano
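    Sketch: The abstract describes gating voice commands by combining per-region visual object detection with the direction from which a sound arrives. Below is a minimal Python sketch of that gating logic, not the patented implementation; the Detection type, the fixed angular boundary between regions, and the choice of "television" as the predetermined inanimate object are all assumptions made for illustration.

      from dataclasses import dataclass
      from enum import Enum, auto

      class Region(Enum):
          FIRST = auto()
          SECOND = auto()

      @dataclass
      class Detection:
          label: str      # object class from an image model (assumed)
          region: Region  # which region the object occupies

      # Hypothetical object class treated as a known non-human sound source.
      PREDETERMINED_OBJECT = "television"

      def classify_sound_region(direction_deg: float, boundary_deg: float = 90.0) -> Region:
          """Map a sound's estimated direction of arrival onto one of two regions."""
          return Region.FIRST if direction_deg < boundary_deg else Region.SECOND

      def handle_sound(direction_deg: float, detections: list[Detection]) -> str:
          """Treat a sound as a voice command unless it originates from the
          region containing the predetermined inanimate object."""
          region = classify_sound_region(direction_deg)
          object_regions = {d.region for d in detections if d.label == PREDETERMINED_OBJECT}
          if region in object_regions:
              return "ignored"        # e.g. TV audio, handled differently
          return "voice_command"      # forwarded to the command pipeline

    One plausible use of such gating is ignoring speech that plays from a television while still accepting commands spoken from elsewhere in the room.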
  • Patent number: 11396102
    Abstract: A robot comprises a first part and a second part moveable relative to the first part. The robot operates in an audio output mode in which the first and second parts are in a first configuration relative to each other and in which the robot outputs audio from an array of speakers using an audio output technique. The robot operates in a user interaction mode in which the first and second parts are in a second, different configuration relative to each other and in which the robot interacts with a user. The robot is configured to change from the audio output mode to the user interaction mode in response to the robot detecting a trigger event. Changing from the audio output mode to the user interaction mode comprises causing the first part to lift up relative to the second part. (See the sketch at the end of this entry.)
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: July 26, 2022
    Assignee: Emotech Ltd.
    Inventors: Hongbin Zhuang, Szu-Hung Lee, Martin Riddiford, Patrick Hunt, Graham Brett, Robert Butterworth, Douglas Morton, Ben Mahon
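    Sketch: This patent amounts to a two-state machine: an audio output mode with the parts in one configuration, and a user interaction mode entered on a trigger event by lifting the first part. A minimal Python sketch follows; the method names and the wake-word trigger are assumptions, and the lift/lower calls stand in for whatever actuator control the robot actually uses.

      from enum import Enum, auto

      class Mode(Enum):
          AUDIO_OUTPUT = auto()
          USER_INTERACTION = auto()

      class TwoPartRobot:
          """Toy model of the two modes described in the abstract."""

          def __init__(self) -> None:
              self.mode = Mode.AUDIO_OUTPUT
              self.first_part_lifted = False

          def _lift_first_part(self) -> None:
              self.first_part_lifted = True    # placeholder for motor commands

          def _lower_first_part(self) -> None:
              self.first_part_lifted = False

          def on_trigger_event(self) -> None:
              """A detected trigger (e.g. a wake word, assumed here) switches
              modes: the first part lifts up relative to the second."""
              if self.mode is Mode.AUDIO_OUTPUT:
                  self._lift_first_part()
                  self.mode = Mode.USER_INTERACTION

          def on_interaction_finished(self) -> None:
              if self.mode is Mode.USER_INTERACTION:
                  self._lower_first_part()
                  self.mode = Mode.AUDIO_OUTPUT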
  • Patent number: 11378977
    Abstract: A robotic system is controlled. Audiovisual data representing an environment in which at least part of the robotic system is located is received via at least one camera and at least one microphone. The audiovisual data comprises a visual data component representing a visible part of the environment and an audio data component representing an audible part of the environment. A location of a sound source that emits sound that is represented in the audio data component of the audiovisual data is identified based on the audio data component of the audiovisual data. The sound source is outside the visible part of the environment and is not represented in the visual data component of the audiovisual data. Operation of a controllable element located in the environment is controlled based on the identified location of the sound source. (See the sketch at the end of this entry.)
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: July 5, 2022
    Assignee: Emotech Ltd.
    Inventors: Ondrej Miksik, Pawel Swietojanski, Srikanth Reddy Bethi, Raymond W. M. Ng
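    Sketch: Locating a sound source from audio alone, so it can be acted on even when outside the camera's view, is classically done by estimating direction of arrival from the time delay between microphones. The Python sketch below uses plain cross-correlation on a two-microphone array; the mic spacing, sample rate, and camera field of view are assumed values, and the patent does not disclose which localization method it uses.

      import numpy as np

      SPEED_OF_SOUND = 343.0   # m/s
      MIC_SPACING = 0.1        # metres between the two microphones (assumed)
      SAMPLE_RATE = 16_000     # Hz (assumed)
      CAMERA_FOV_DEG = 60.0    # horizontal field of view (assumed)

      def estimate_doa(left: np.ndarray, right: np.ndarray) -> float:
          """Estimate direction of arrival in degrees (0 = straight ahead)
          from the inter-microphone delay found by cross-correlation."""
          corr = np.correlate(left, right, mode="full")
          lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
          tdoa = lag / SAMPLE_RATE
          sin_theta = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
          return float(np.degrees(np.arcsin(sin_theta)))

      def steer_towards_sound(left: np.ndarray, right: np.ndarray) -> dict:
          """Report where a controllable element should point, flagging
          whether the source falls inside the camera's visible region."""
          angle = estimate_doa(left, right)
          return {
              "target_angle_deg": angle,
              "in_camera_view": abs(angle) <= CAMERA_FOV_DEG / 2,
          }

    When in_camera_view is False, the source is audible but not represented in the visual data, which is exactly the case the claims address.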
  • Patent number: 11170788
    Abstract: A speaker recognition system comprises (i) at least one microphone operable to output data representing speech of a speaker and (ii) a controller. The controller is operable to: (a) receive the data output from the at least one microphone; (b) process the received data using a first artificial neural network to obtain first output data, the first artificial neural network having been trained based on outputs of a second artificial neural network, the second artificial neural network having been trained to perform speaker recognition; and (c) identify the speaker using the first output data. The first artificial neural network comprises fewer layers and/or fewer parameters than the second artificial neural network. The first artificial neural network is configured to emulate a result derivable using an output of the second artificial neural network. (See the sketch at the end of this entry.)
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: November 9, 2021
    Assignee: Emotech Ltd.
    Inventors: Raymond W. M. Ng, Xuechen Liu, Pawel Swietojanski
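    Sketch: The arrangement in this patent matches knowledge distillation: a small "first" network is trained on the outputs of a larger "second" network that already performs speaker recognition, so the compact model can run on-device. Below is a minimal PyTorch sketch of one distillation step; the layer sizes, speaker count, and temperature are illustrative, and the patent does not commit to this exact loss.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      N_SPEAKERS = 100   # assumed number of training speakers
      FEAT_DIM = 40      # assumed acoustic feature dimension

      # "Second" network: a larger teacher already trained for speaker recognition.
      teacher = nn.Sequential(
          nn.Linear(FEAT_DIM, 512), nn.ReLU(),
          nn.Linear(512, 512), nn.ReLU(),
          nn.Linear(512, N_SPEAKERS),
      )

      # "First" network: fewer layers and parameters than the teacher.
      student = nn.Sequential(
          nn.Linear(FEAT_DIM, 128), nn.ReLU(),
          nn.Linear(128, N_SPEAKERS),
      )

      def distillation_step(feats: torch.Tensor, opt: torch.optim.Optimizer,
                            temperature: float = 2.0) -> float:
          """One training step: the student mimics the teacher's softened
          speaker posteriors (standard knowledge distillation)."""
          with torch.no_grad():
              soft_targets = F.softmax(teacher(feats) / temperature, dim=-1)
          log_probs = F.log_softmax(student(feats) / temperature, dim=-1)
          loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
          opt.zero_grad()
          loss.backward()
          opt.step()
          return loss.item()

    At inference time only the student runs, and the speaker is identified from its output, as in step (c) of the abstract.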