Patents by Inventor Arnoldas Jasonas

Arnoldas Jasonas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11250877
    Abstract: A method for generating a health indicator for at least one person of a group of people, the method comprising: receiving, at a processor, captured sound, where the captured sound is sound captured from the group of people; comparing the captured sound to a plurality of sound models to detect at least one non-speech sound event in the captured sound, each of the plurality of sound models associated with a respective health-related sound type; determining metadata associated with the at least one non-speech sound event; assigning the at least one non-speech sound event and the metadata to at least one person of the group of people; and outputting a message identifying the at least one non-speech event and the metadata to a health indicator generator module to generate a health indicator for the at least one person to whom the at least one non-speech sound event is assigned.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: February 15, 2022
    Assignee: AUDIO ANALYTIC LTD
    Inventors: Christopher Mitchell, Joe Patrick Lynas, Sacha Krstulovic, Arnoldas Jasonas, Julian Harris
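    Illustrative sketch: the abstract above describes a pipeline that scores captured audio against health-related sound models, attaches metadata to any detected non-speech sound events, assigns each event to a person, and forwards a message to a health indicator generator. The Python below is a minimal, hypothetical sketch of that flow; every class, function, and threshold in it is an assumption made for illustration, not an API taken from the patent.

      # Hypothetical sketch of the flow in the abstract of US 11,250,877:
      # detect health-related non-speech sound events, attach metadata,
      # assign each event to a person, and forward a message to a
      # health-indicator generator. Names and scoring logic are placeholders.
      from dataclasses import dataclass
      from typing import Callable, Dict, List

      @dataclass
      class SoundEvent:
          sound_type: str     # e.g. "cough", "sneeze", "snore"
          start_s: float      # event start time within the captured audio
          confidence: float   # model score in [0, 1]

      @dataclass
      class Message:
          person_id: str
          event: SoundEvent
          metadata: Dict[str, float]

      def detect_events(audio: List[float],
                        sound_models: Dict[str, Callable[[List[float]], float]],
                        threshold: float = 0.5) -> List[SoundEvent]:
          """Compare the captured sound against each health-related sound model."""
          events = []
          for sound_type, model in sound_models.items():
              score = model(audio)   # placeholder: one score per model and clip
              if score >= threshold:
                  events.append(SoundEvent(sound_type, start_s=0.0, confidence=score))
          return events

      def assign_and_report(events: List[SoundEvent],
                            assign_person: Callable[[SoundEvent], str],
                            health_indicator_generator: Callable[[Message], None]) -> None:
          """Attach metadata, assign each event to a person, and emit a message."""
          for event in events:
              metadata = {"confidence": event.confidence, "start_s": event.start_s}
              health_indicator_generator(Message(assign_person(event), event, metadata))

      if __name__ == "__main__":
          # Toy stand-ins: a "cough model" that scores loud clips highly, and an
          # assignment rule that attributes every event to the same person.
          audio = [0.8, -0.7, 0.9, -0.6]
          models = {"cough": lambda a: min(1.0, sum(abs(x) for x in a) / len(a) + 0.2)}
          assign_and_report(detect_events(audio, models),
                            assign_person=lambda e: "person_1",
                            health_indicator_generator=print)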
  • Patent number: 11133020
    Abstract: A device or system is provided which is configured to detect one or more sound events and/or scenes associated with a predetermined context, and to provide an assistive output on fulfilment of that context.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: September 28, 2021
    Assignee: AUDIO ANALYTIC LTD
    Inventors: Christopher James Mitchell, Sacha Krstulovic, Cagdas Bilen, Neil Cooper, Julian Harris, Arnoldas Jasonas, Joe Patrick Lynas
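    Illustrative sketch: the abstract above covers a device that watches for sound events or scenes tied to a predetermined context and produces an assistive output when that context is fulfilled. The Python below is a minimal, hypothetical sketch of one such rule (a context fulfilled once every required event has been heard); the context definition and all names are assumptions, not the patented mechanism.

      # Hypothetical sketch: fire an assistive output once every sound event in
      # a predetermined context has been observed in the incoming event stream.
      from typing import Iterable, Set

      def assistive_monitor(detected_events: Iterable[str],
                            context: Set[str],
                            assistive_output=print) -> None:
          """Trigger the assistive output when the whole context has been seen."""
          seen: Set[str] = set()
          for event in detected_events:
              if event in context:
                  seen.add(event)
              if seen == context:
                  assistive_output(f"Context fulfilled: {sorted(context)}")
                  seen.clear()   # re-arm for the next occurrence of the context

      if __name__ == "__main__":
          # Example context: a doorbell and a dog bark heard in any order.
          stream = ["speech", "doorbell", "dog_bark", "speech"]
          assistive_monitor(stream, context={"doorbell", "dog_bark"})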
  • Publication number: 20210104230
    Abstract: A method for recognising at least one of a non-verbal sound event and a scene in an audio signal comprising a sequence of frames of audio data, the method comprising: for each frame of the sequence: processing the frame of audio data to extract multiple acoustic features for the frame of audio data; and classifying the acoustic features to classify the frame by determining, for each of a set of sound classes, a score that the frame represents the sound class; processing the sound class scores for multiple frames of the sequence of frames to generate, for each frame, a sound class decision for each frame; and processing the sound class decisions for the sequence of frames to recognise the at least one of a non-verbal sound event and a scene.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 8, 2021
    Inventors: Christopher James Mitchell, Sacha Krstulovic, Cagdas Bilen, Juan Azcarreta Ortiz, Giacomo Ferroni, Arnoldas Jasonas, Francesco Tuveri
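    Illustrative sketch: the abstract above lays out a frame-wise pipeline: extract acoustic features per frame, score each frame against a set of sound classes, turn the scores into per-frame class decisions, and aggregate those decisions over the sequence to recognise a non-verbal sound event or scene. The Python below sketches that structure with toy features, a toy classifier, and simple smoothing; none of these specific choices come from the application itself.

      # Hypothetical sketch of the frame-wise pipeline in the abstract of
      # US 2021/0104230: per-frame features -> per-frame class scores ->
      # smoothed per-frame decisions -> sequence-level recognition.
      # Features, classifier, and smoothing below are toy placeholders.
      from typing import Dict, List

      CLASSES = ["glass_break", "dog_bark", "background"]

      def extract_features(frame: List[float]) -> List[float]:
          """Toy acoustic features: mean absolute amplitude and peak amplitude."""
          return [sum(abs(x) for x in frame) / len(frame), max(abs(x) for x in frame)]

      def classify(features: List[float]) -> Dict[str, float]:
          """Toy classifier mapping loudness to class scores (stand-in for a DNN)."""
          energy, peak = features
          return {"glass_break": peak, "dog_bark": energy, "background": 1.0 - energy}

      def frame_decisions(scores: List[Dict[str, float]]) -> List[str]:
          """Average scores over a 3-frame window, then pick one class per frame."""
          decisions = []
          for i in range(len(scores)):
              window = scores[max(0, i - 1): i + 2]
              averaged = {c: sum(s[c] for s in window) / len(window) for c in CLASSES}
              decisions.append(max(averaged, key=averaged.get))
          return decisions

      def recognise(decisions: List[str], min_frames: int = 2) -> List[str]:
          """Report any non-background class decided in at least min_frames frames."""
          return [c for c in CLASSES
                  if c != "background" and decisions.count(c) >= min_frames]

      if __name__ == "__main__":
          frames = [[0.05, -0.04], [0.9, -0.8], [0.85, -0.9], [0.1, -0.1]]
          scores = [classify(extract_features(f)) for f in frames]
          print(recognise(frame_decisions(scores)))   # -> ['glass_break']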