Patents by Inventor Anthony Knittel

Anthony Knittel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240116544
    Abstract: A method of predicting actions of one or more actor agents in a scenario is implemented by an ego agent in the scenario. A plurality of agent models are used to generate a set of candidate futures, each candidate future providing an expected action of the actor agent. A weighting function is applied to each candidate future to indicate its relevance in the scenario. A group of candidate futures is selected for each actor agent based on the indicated relevance, wherein the plurality of agent models comprises a first model representing a rational goal-directed behaviour inferable from the vehicular scene, and at least one second model representing an alternate behaviour not inferable from the vehicular scene.
    Type: Application
    Filed: February 25, 2022
    Publication date: April 11, 2024
    Applicant: Five AI Limited
    Inventor: Anthony Knittel
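
A minimal Python sketch of the candidate-future weighting described in the entry above (publication 20240116544). The model behaviours, the 1/(1 + distance) relevance weighting and the top-k selection rule are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateFuture:
    source_model: str        # which agent model produced this future
    trajectory: List[float]  # toy 1-D positions over future time steps
    likelihood: float        # the producing model's confidence in this future

def goal_directed_model(position: float, goal: float, steps: int = 5) -> List[CandidateFuture]:
    """First model: rational, goal-directed behaviour inferable from the scene."""
    step = (goal - position) / steps
    traj = [position + step * (t + 1) for t in range(steps)]
    return [CandidateFuture("goal_directed", traj, likelihood=0.8)]

def alternate_model(position: float, steps: int = 5) -> List[CandidateFuture]:
    """Second model: alternate behaviours not inferable from the scene."""
    return [
        CandidateFuture("alternate_stop", [position] * steps, likelihood=0.15),
        CandidateFuture("alternate_drift", [position + 0.1 * (t + 1) for t in range(steps)], likelihood=0.05),
    ]

def relevance(candidate: CandidateFuture, ego_position: float) -> float:
    """Assumed weighting function: futures ending nearer the ego agent matter more."""
    distance = abs(candidate.trajectory[-1] - ego_position)
    return candidate.likelihood / (1.0 + distance)

def select_candidates(candidates: List[CandidateFuture], ego_position: float, k: int = 2) -> List[CandidateFuture]:
    """Keep the k candidate futures with the highest relevance weights."""
    return sorted(candidates, key=lambda c: relevance(c, ego_position), reverse=True)[:k]

if __name__ == "__main__":
    actor_position, actor_goal, ego_position = 0.0, 10.0, 8.0
    candidates = goal_directed_model(actor_position, actor_goal) + alternate_model(actor_position)
    for c in select_candidates(candidates, ego_position):
        print(c.source_model, round(relevance(c, ego_position), 3))
```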
  • Patent number: 11468296
    Abstract: The present disclosure provides a method of recognising a first action. The method comprises determining a weighted combination of a first plurality of feature responses associated with a sequence of second actions over a period of time using a set of weights at a particular time instance in the period of time. The method then recognises the first action by processing, using a neural network, the weighted combination of the first plurality of feature responses and temporal position values of each of the first plurality of feature responses associated with the sequence of second actions.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: October 11, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Anthony Knittel
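
A minimal numpy sketch of the weighted-combination idea in patent 11468296's abstract: pool a sequence of feature responses with weights tied to a particular time instance, pass the pooled features plus their temporal position values to a small network, and take the arg-max class. The shapes, the softmax-style weighting and the untrained two-layer classifier are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H, C = 8, 16, 32, 5            # time steps, feature dim, hidden units, action classes
features = rng.normal(size=(T, D))   # feature responses for the sequence of "second actions"
positions = np.arange(T) / T         # temporal position value of each feature response

def weights_at(t: int) -> np.ndarray:
    """Assumed set of weights for time instance t: emphasise responses near t."""
    scores = -np.abs(np.arange(T) - t).astype(float)
    e = np.exp(scores - scores.max())
    return e / e.sum()

w = weights_at(t=4)
pooled = w @ features                            # weighted combination of the feature responses
net_input = np.concatenate([pooled, positions])  # combination plus the temporal position values

# Tiny untrained two-layer network standing in for "a neural network".
W1 = rng.normal(scale=0.1, size=(net_input.size, H))
W2 = rng.normal(scale=0.1, size=(H, C))
logits = np.maximum(net_input @ W1, 0.0) @ W2
print("recognised action class:", int(np.argmax(logits)))
```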
  • Patent number: 11270209
    Abstract: A system and method of training an artificial neural network. The method comprises determining an activation value for each node in a set of nodes of the artificial neural network, the activation values being determined by applying training data to the artificial neural network, and scaling the determined activation values for each of a plurality of the nodes in a portion of the artificial neural network. Each scaled activation value is determined using a scaling factor associated with a corresponding one of the plurality of nodes. Each scaling factor is determined based on a rank of the corresponding node. The method further comprises updating weights associated with each of the plurality of nodes in the portion of the artificial neural network using the determined scaled activation values to train the neural network.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: March 8, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Anthony Knittel
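
A minimal numpy sketch of rank-based activation scaling during training, as described in patent 11270209's abstract. The tiny network, the 1/rank scaling rule and the squared-error update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)                    # one training example
target = np.array([1.0])
W1 = rng.normal(scale=0.5, size=(4, 6))   # weights into the hidden "portion" of the network
W2 = rng.normal(scale=0.5, size=(6, 1))
lr = 0.1

# 1. Activation value for each node in the hidden portion of the network.
hidden = np.maximum(x @ W1, 0.0)

# 2. Scaling factor per node based on the node's rank (largest activation = rank 1).
ranks = np.empty_like(hidden)
ranks[np.argsort(-hidden)] = np.arange(1, hidden.size + 1)
scaled = hidden / ranks                   # assumed rule: scale each activation by 1/rank

# 3. Update weights using the scaled activations (plain squared-error gradient step).
output = scaled @ W2
error = output - target
grad_hidden = (error @ W2.T) * (hidden > 0) / ranks
W2 -= lr * np.outer(scaled, error)
W1 -= lr * np.outer(x, grad_hidden)
print("output before the update:", output.item())
```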
  • Patent number: 11106947
    Abstract: A method of classifying an action or event using an artificial neural network. The method comprises obtaining a first and a second plurality of feature responses, corresponding to point data in a first channel and a second channel respectively. Each of the first and second plurality of feature responses has associated temporal and spatial position values, the first and second plurality of feature responses relating to a plurality of objects. The method also comprises generating a third plurality of feature responses based on one of the first plurality of feature responses and one of the second plurality of feature responses, and a weighted combination of associated temporal and spatial position values of the corresponding one of the first and second plurality of feature responses; and classifying an action or event relating to the objects using the artificial neural network based on the third plurality of feature responses.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: August 31, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Anthony Knittel
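
A minimal numpy sketch of the two-channel fusion in patent 11106947's abstract: pair feature responses from two channels, combine their temporal and spatial position values with a fixed weight, and classify the fused responses. All shapes, the pairing rule and the untrained classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, C = 6, 8, 3
feat_a = rng.normal(size=(N, D))     # first plurality of feature responses (channel 1)
feat_b = rng.normal(size=(N, D))     # second plurality of feature responses (channel 2)
pos_a = rng.uniform(size=(N, 3))     # (t, x, y) position of each channel-1 response
pos_b = rng.uniform(size=(N, 3))     # (t, x, y) position of each channel-2 response
alpha = 0.7                          # assumed weight for combining the position values

# Third plurality of feature responses: a response from each channel plus a
# weighted combination of their temporal and spatial position values.
fused = np.concatenate([feat_a, feat_b, alpha * pos_a + (1.0 - alpha) * pos_b], axis=1)

# Tiny untrained classifier standing in for "the artificial neural network".
W1 = rng.normal(scale=0.1, size=(fused.shape[1], 16))
W2 = rng.normal(scale=0.1, size=(16, C))
logits = np.maximum(fused @ W1, 0.0) @ W2
print("action/event class:", int(np.argmax(logits.mean(axis=0))))
```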
  • Patent number: 11048944
    Abstract: A method of determining a spatio-temporal feature value for frames of a sequence of video. A first frame and second frame from the sequence of video are received. Spatial feature values in each of the first and second frames are determined according to a plurality of spatial feature functions. For each of the spatial feature functions, a change in the spatial feature values between the first and second frames is determined. The spatio-temporal feature value is determined by combining the determined change in spatial feature values for each of the spatial feature functions.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: June 29, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Anthony Knittel
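
The spatio-temporal feature in patent 11048944's abstract can be sketched directly: apply several spatial feature functions to two frames, take each function's change between the frames, and combine the changes into one value. The three feature functions and the sum-of-absolute-changes combination below are assumptions, not the patented choices.

```python
import numpy as np

rng = np.random.default_rng(3)
frame1 = rng.uniform(size=(32, 32))                 # first frame of the video sequence
frame2 = frame1 + 0.05 * rng.normal(size=(32, 32))  # second frame

spatial_feature_functions = [
    lambda f: f.mean(),                           # overall brightness
    lambda f: np.abs(np.diff(f, axis=0)).mean(),  # vertical edge energy
    lambda f: np.abs(np.diff(f, axis=1)).mean(),  # horizontal edge energy
]

# Change in each spatial feature value between the two frames.
changes = [fn(frame2) - fn(frame1) for fn in spatial_feature_functions]

# Combine the per-function changes into a single spatio-temporal feature value.
spatio_temporal_value = float(np.sum(np.abs(changes)))
print("spatio-temporal feature value:", spatio_temporal_value)
```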
  • Patent number: 10776698
    Abstract: A method of training an artificial neural network, the artificial neural network comprising a plurality of connections connecting nodes arranged in at least an initial and a subsequent layer. The method comprises receiving a training example, the training example having input values, a target, and an associated training salience value indicating an importance of the input values to determining the target. The method further comprises determining salience values for nodes of the initial layer from the training salience value; determining an activation value for at least one node of the subsequent layer by propagating the input values using a subset of the connections selected based on the determined salience values for the nodes of the initial layer; and training the artificial neural network using the activation value, the trained artificial neural network configured to determine a relationship between the input values and the target.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: September 15, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Anthony Knittel, Tuan Hue Thi
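
A minimal numpy sketch of salience-guided training as described in patent 10776698's abstract: per-input salience values select a subset of connections to propagate through, and the weights are updated from the resulting activation. The 0.5 threshold rule and the squared-error update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
inputs = np.array([0.5, -1.2, 0.3, 2.0])    # input values of the training example
target = np.array([1.0])                    # target of the training example
salience = np.array([0.9, 0.1, 0.2, 0.8])   # salience value for each initial-layer node
W1 = rng.normal(scale=0.5, size=(4, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))
lr = 0.1

# Select the subset of connections whose initial-layer node is salient enough.
mask = (salience >= 0.5).astype(float)[:, None]   # 4x1 mask broadcast over W1

# Propagate the input values using only the selected connections.
hidden = np.maximum(inputs @ (W1 * mask), 0.0)
output = hidden @ W2

# Train using the resulting activation value (plain squared-error gradient step).
error = output - target
grad_hidden = (error @ W2.T) * (hidden > 0)
W2 -= lr * np.outer(hidden, error)
W1 -= lr * np.outer(inputs, grad_hidden) * mask
print("output before the update:", output.item())
```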
  • Publication number: 20190188482
    Abstract: A method of determining a spatio-temporal feature value for frames of a sequence of video. A first frame and second frame from the sequence of video are received. Spatial feature values in each of the first and second frames are determined according to a plurality of spatial feature functions. For each of the spatial feature functions, a change in the spatial feature values between the first and second frames is determined. The spatio-temporal feature value is determined by combining the determined change in spatial feature values for each of the spatial feature functions.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 20, 2019
    Inventor: Anthony Knittel
  • Publication number: 20190180149
    Abstract: A method of classifying an action or event using an artificial neural network. The method comprises obtaining a first and a second plurality of feature responses, corresponding to point data in a first channel and a second channel respectively. Each of the first and second plurality of feature responses has associated temporal and spatial position values, the first and second plurality of feature responses relating to a plurality of objects. The method also comprises generating a third plurality of feature responses based on one of the first plurality of feature responses and one of the second plurality of feature responses, and a weighted combination of associated temporal and spatial position values of the corresponding one of the first and second plurality of feature responses; and classifying an action or event relating to the objects using the artificial neural network based on the third plurality of feature responses.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 13, 2019
    Inventor: Anthony Knittel
  • Publication number: 20190034787
    Abstract: The present disclosure provides a method of recognising a first action. The method comprises determining a weighted combination of a first plurality of feature responses associated with a sequence of second actions over a period of time using a set of weights at a particular time instance in the period of time. The method then recognises the first action by processing, using a neural network, the weighted combination of the first plurality of feature responses and temporal position values of each of the first plurality of feature responses associated with the sequence of second actions.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 31, 2019
    Inventor: Anthony Knittel
  • Publication number: 20180174051
    Abstract: A system and method of training an artificial neural network. The method comprises determining an activation value for each node in a set of nodes of the artificial neural network, the activation values being determined by applying training data to the artificial neural network, and scaling the determined activation values for each of a plurality of the nodes in a portion of the artificial neural network. Each scaled activation value is determined using a scaling factor associated with a corresponding one of the plurality of nodes. Each scaling factor is determined based on a rank of the corresponding node. The method further comprises updating weights associated with each of the plurality of nodes in the portion of the artificial neural network using the determined scaled activation values to train the neural network.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 21, 2018
    Inventor: Anthony Knittel
  • Publication number: 20170032246
    Abstract: A method of training an artificial neural network. The method comprises: accessing the artificial neural network, the artificial neural network comprising a plurality of connections connecting nodes arranged in at least an initial and a subsequent layer, and receiving a training example for the artificial neural network, the training example having input values, a target, and an associated training salience value indicating an importance of the input values to determining the target of the artificial neural network.
    Type: Application
    Filed: July 27, 2016
    Publication date: February 2, 2017
    Inventors: Anthony Knittel, Tuan Hue Thi
  • Patent number: 3972689
    Abstract: A method and a device for vapor-growing crystals. The crystals are grown in an evacuated ampoule from a liquefied sample source material, such that the source material is separated from the growing crystal by one or more capillaries or the like, which provide the only pathway between the sample source and the growing crystal. There is a temperature gradient between the sample source and the growing crystal such that the growing crystal is at a lower temperature than the sample source.
    Type: Grant
    Filed: November 25, 1974
    Date of Patent: August 3, 1976
    Assignee: Unisearch Limited
    Inventor: Anthony Knittel