Patents by Inventor Nagita Mehrseresht

Nagita Mehrseresht has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11062128
    Abstract: A method of classifying an interaction captured in a video sequence. A plurality of people in the video sequence is identified. An action of a first one of the people at a first time is determined. An action of a second one of the people at a second time is determined, the action of the second person being after the action of the first person. A role for the second person at the second time is determined, the role being independent of the determined actions of the first and second person. An interaction between the first person and the second person is classified based on the determined role of the second person and the determined actions of the first and second person.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: July 13, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Nagita Mehrseresht
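A minimal sketch of the classification step patent 11062128 describes: two time-ordered actions are combined with a role that was determined independently of those actions. The labels, the rule table, and the retail scenario below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PersonObservation:
    person_id: int
    action: str    # e.g. "hands_object" (hypothetical action labels)
    role: str      # e.g. "staff"; determined independently of the actions
    time: float

def classify_interaction(first: PersonObservation, second: PersonObservation) -> str:
    """Classify the interaction from both actions and the second person's role."""
    assert second.time > first.time, "the second action must follow the first"
    # Hypothetical decision rules over (first action, second action, role) triples.
    rules = {
        ("hands_object", "receives_object", "staff"): "item_returned",
        ("hands_object", "receives_object", "customer"): "item_purchased",
    }
    return rules.get((first.action, second.action, second.role), "unknown")

a = PersonObservation(1, "hands_object", "customer", time=2.0)
b = PersonObservation(2, "receives_object", "staff", time=3.5)
print(classify_interaction(a, b))   # -> "item_returned"
```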
  • Patent number: 10565454
    Abstract: A method and associated imaging system for classifying at least one concept type in a video segment is disclosed. The method associates an object concept type in the video segment with a spatio-temporal segment of the video segment. The method then associates a plurality of action concept types with the spatio-temporal segment, where each action concept type of the plurality of action concept types is associated with a subset of the spatio-temporal segment associated with the object concept type. The method then classifies the action concept types and the object concept types associated with the video segment using a conditional random field (CRF) model where the CRF model is structured with the plurality of action concept types being independent and indirectly linked via a global concept type assigned to the video segment, and the object concept type is linked to the global concept type.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: February 18, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Nagita Mehrseresht, Barry James Drake
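The CRF in patent 10565454 has a star structure: each action node and the object node attach only to a global node assigned to the whole video segment, so exact MAP inference reduces to enumerating global labels and picking the best leaf label independently for each one. A minimal sketch, with made-up labels, scores, and a hypothetical compatibility function:

```python
def map_inference(global_labels, action_unaries, object_unary, action_pair, object_pair):
    """Exact MAP over a star CRF: leaves decouple once the global label is fixed."""
    best = None
    for g in global_labels:
        total, assigned = 0.0, []
        for unary in action_unaries:
            a, s = max(((a, u + action_pair(g, a)) for a, u in unary.items()),
                       key=lambda t: t[1])
            assigned.append(a)
            total += s
        o, s = max(((o, u + object_pair(g, o)) for o, u in object_unary.items()),
                   key=lambda t: t[1])
        total += s
        if best is None or total > best[0]:
            best = (total, g, assigned, o)
    return best

# Toy scores: two action concepts, one object concept, two global labels.
action_unaries = [{"pick_up": 1.0, "put_down": 0.2},
                  {"put_down": 0.8, "pick_up": 0.1}]
object_unary = {"box": 0.9, "bag": 0.4}
def compat(g, c):   # assumed pairwise potential linking leaves to the global label
    return 0.5 if g == "packing" and c in {"pick_up", "put_down", "box"} else 0.0
print(map_inference(["packing", "other"], action_unaries, object_unary, compat, compat))
```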
  • Patent number: 10529077
    Abstract: A system and method of detecting an interaction between a plurality of objects. The method comprises receiving tracking information for the plurality of objects in a scene; generating a plurality of frames, each of the plurality of frames comprising an activation for each of the plurality of objects and representing a relative spatial relationship between the plurality of objects in the scene determined from the received tracking information, the frames encoding properties of the objects using properties of the corresponding activations; determining, using a trained neural network, features associated with the plurality of objects from the generated plurality of frames using the activations and the relative spatial relationship between the objects, the features representing changes in the relative spatial relationship between the objects over time relating to the interaction; and detecting time localization of the interaction in the plurality of frames using the determined features.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: January 7, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Nagita Mehrseresht
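A minimal sketch of the frame-generation step of patent 10529077, assuming a fixed grid and Gaussian "activation" blobs whose intensity encodes a hypothetical object property; the abstract leaves the encoding and the network architecture open, so the per-step change magnitude below is only a crude stand-in for features a trained network would learn.

```python
import numpy as np

def render_frame(objects, size=64, sigma=3.0):
    """objects: list of (x, y, prop) with x, y in [0, 1] from tracking, prop in [0, 1]."""
    frame = np.zeros((size, size), dtype=np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    for x, y, prop in objects:
        cx, cy = x * (size - 1), y * (size - 1)
        # One activation per object; its intensity encodes the object property.
        frame += prop * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return frame

# Two objects are static, move toward each other mid-sequence, then stop.
def x1(t): return 0.2 + 0.1 * max(0, min(t, 5) - 2)
def x2(t): return 0.8 - 0.1 * max(0, min(t, 5) - 2)
stack = np.stack([render_frame([(x1(t), 0.5, 1.0), (x2(t), 0.5, 0.5)])
                  for t in range(8)])                       # (time, height, width)
change = np.abs(np.diff(stack, axis=0)).sum(axis=(1, 2))    # stand-in temporal feature
print(np.round(change, 2))   # change concentrates in the frames with motion
```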
  • Patent number: 10445582
    Abstract: A method of determining a composite action from a video clip using a conditional random field (CRF). The method includes determining a plurality of features from the video clip, each of the features having a corresponding temporal segment from the video clip. The method may continue by determining, for each of the temporal segments corresponding to one of the features, an initial estimate of an action unit label from a corresponding unary potential function, the corresponding unary potential function having as ordered input the plurality of features from a current temporal segment and at least one other of the temporal segments. The method may further include determining the composite action by jointly optimizing the initial estimate of the action unit labels.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: October 15, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Nagita Mehrseresht
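A minimal sketch of patent 10445582's pipeline under stated assumptions: the unary potential is a linear scorer whose ordered input is the current segment's features followed by the next segment's, and the joint optimization is instantiated as Viterbi decoding with an assumed transition matrix. The random weights stand in for a trained model.

```python
import numpy as np

def unary_scores(features, weights):
    """Initial per-segment action-unit scores from an ordered two-segment input."""
    T = len(features)
    scores = np.zeros((T, weights.shape[0]))
    for t in range(T):
        ctx = np.concatenate([features[t], features[min(t + 1, T - 1)]])
        scores[t] = weights @ ctx
    return scores

def viterbi(scores, transition):
    """Jointly optimize the label sequence given unary scores and transitions."""
    T, L = scores.shape
    dp, back = scores[0].copy(), np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + transition          # cand[previous, current]
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + scores[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))          # 6 temporal segments, 4-dim features
weights = rng.normal(size=(3, 8))           # 3 action-unit labels
transition = np.full((3, 3), -1.0) + 2.0 * np.eye(3)   # favour staying in a unit
print(viterbi(unary_scores(features, weights), transition))  # composite action units
```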
  • Patent number: 10380173
    Abstract: A method of jointly classifying a plurality of objects in an image using a feature type selected from a plurality of feature types determines classification information for each of the plurality of objects in the image by applying a predetermined joint classifier to at least one feature of a first type. The feature is generated from the image using a first feature extractor, the classification information being based on a probability of each of a plurality of possible classifications. The method estimates, for each of the feature types, an improvement in an accuracy of classification for each of the plurality of objects. The method selects features of a further type, from the plurality of feature types, according to the estimated improvement in the accuracy of the classification of each of the objects, and classifies the plurality of objects in the image using the selected features of the further type.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: August 13, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Nagita Mehrseresht
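A minimal sketch of the selection loop in patent 10380173. The gain estimator here is an assumption (a hypothetical per-type strength scaled by each object's current uncertainty); the patent's estimator and the feature type names are not specified in this listing.

```python
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical per-type strengths standing in for a learned gain model.
STRENGTH = {"colour_hist": 0.3, "texture": 0.5, "cnn_embedding": 0.8}

def estimate_gains(posteriors):
    """Estimated accuracy improvement per feature type and per object."""
    return {ft: [s * entropy(p) for p in posteriors] for ft, s in STRENGTH.items()}

def select_further_type(posteriors):
    """Pick the further feature type with the largest total estimated improvement."""
    gains = estimate_gains(posteriors)
    return max(gains, key=lambda ft: sum(gains[ft]))

# Class posteriors for three objects from the joint classifier on first-type features.
posteriors = [[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]]
print(select_further_type(posteriors))   # -> "cnn_embedding"
```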
  • Publication number: 20190188866
    Abstract: A system and method of detecting an interaction between a plurality of objects. The method comprises receiving tracking information for the plurality of objects in a scene; generating a plurality of frames, each of the plurality of frames comprising an activation for each of the plurality of objects and representing a relative spatial relationship between the plurality of objects in the scene determined from the received tracking information, the frames encoding properties of the objects using properties of the corresponding activations; determining, using a trained neural network, features associated with the plurality of objects from the generated plurality of frames using the activations and the relative spatial relationship between the objects, the features representing changes in the relative spatial relationship between the objects over time relating to the interaction; and detecting time localization of the interaction in the plurality of frames using the determined features.
    Type: Application
    Filed: December 19, 2017
    Publication date: June 20, 2019
    Inventor: Nagita Mehrseresht
  • Patent number: 10152637
    Abstract: A method of segmenting a video sequence. A segment score is determined for each of a plurality of fixed length segments of the video sequence. Each of the segment scores provides a score for a plurality of actions associated with a corresponding fixed length segment. A current segment is selected from the segments of the video sequence. The segment score is selected for a further one of the segments, the further segment being disjoint with the current segment and being used to provide information about actions that were classified outside the current segment. A further segment score is determined for the current segment according to the selected segment score. The video is segmented based on the determined further segment score.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: December 11, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Nagita Mehrseresht
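A minimal sketch of patent 10152637's refinement idea under one illustrative update rule: the score of the current fixed-length segment is adjusted by down-weighting actions that a disjoint further segment already accounts for. The choice of further segment and the subtraction rule are assumptions.

```python
import numpy as np

def refine_score(scores, current, further, alpha=0.5):
    """Further segment score for the current segment, using a disjoint segment."""
    return scores[current] - alpha * scores[further]

def segment_video(scores, alpha=0.5):
    refined = np.array([refine_score(scores, t, (t + 2) % len(scores), alpha)
                        for t in range(len(scores))])
    labels = refined.argmax(axis=1)
    boundaries = [0] + [t for t in range(1, len(labels)) if labels[t] != labels[t - 1]]
    return labels.tolist(), boundaries

# Scores over 3 actions for 6 fixed-length segments (one row per segment).
scores = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.2, 0.7, 0.3],
                   [0.1, 0.8, 0.2], [0.1, 0.2, 0.9], [0.0, 0.3, 0.8]])
print(segment_video(scores))   # -> ([0, 0, 1, 1, 2, 2], [0, 2, 4])
```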
  • Publication number: 20180349704
    Abstract: A method of classifying an interaction captured in a video sequence. A plurality of people in the video sequence is identified. An action of a first one of the people at a first time is determined. An action of a second one of the people at a second time is determined, the action of the second person being after the action of the first person. A role for the second person at the second time is determined, the role being independent of the determined actions of the first and second person. An interaction between the first person and the second person is classified based on the determined role of the second person and the determined actions of the first and second person.
    Type: Application
    Filed: June 2, 2017
    Publication date: December 6, 2018
    Inventor: Nagita Mehrseresht
  • Publication number: 20180173955
    Abstract: A method of determining a composite action from a video clip using a conditional random field (CRF). The method includes determining a plurality of features from the video clip, each of the features having a corresponding temporal segment from the video clip. The method may continue by determining, for each of the temporal segments corresponding to one of the features, an initial estimate of an action unit label from a corresponding unary potential function, the corresponding unary potential function having as ordered input the plurality of features from a current temporal segment and at least one other of the temporal segments. The method may further include determining the composite action by jointly optimizing the initial estimate of the action unit labels.
    Type: Application
    Filed: December 20, 2016
    Publication date: June 21, 2018
    Inventor: Nagita Mehrseresht
  • Publication number: 20180075306
    Abstract: A method of segmenting a video sequence. A segment score is determined for each of a plurality of fixed length segments of the video sequence. Each of the segment scores provides a score for a plurality of actions associated with a corresponding fixed length segment. A current segment is selected from the segments of the video sequence. The segment score is selected for a further one of the segments, the further segment being disjoint with the current segment and being used to provide information about actions that were classified outside the current segment. A further segment score is determined for the current segment according to the selected segment score. The video is segmented based on the determined further segment score.
    Type: Application
    Filed: September 14, 2016
    Publication date: March 15, 2018
    Inventor: Nagita Mehrseresht
  • Publication number: 20170177943
    Abstract: A method and associated imaging system for classifying at least one concept type in a video segment is disclosed. The method associates an object concept type in the video segment with a spatio-temporal segment of the video segment. The method then associates a plurality of action concept types with the spatio-temporal segment, where each action concept type of the plurality of action concept types is associated with a subset of the spatio-temporal segment associated with the object concept type. The method then classifies the action concept types and the object concept types associated with the video segment using a conditional random field (CRF) model where the CRF model is structured with the plurality of action concept types being independent and indirectly linked via a global concept type assigned to the video segment, and the object concept type is linked to the global concept type.
    Type: Application
    Filed: December 19, 2016
    Publication date: June 22, 2017
    Inventors: Nagita Mehrseresht, Barry James Drake
  • Publication number: 20160063358
    Abstract: A method of jointly classifying a plurality of objects in an image using a feature type selected from a plurality of feature types determines classification information for each of the plurality of objects in the image by applying a predetermined joint classifier to at least one feature of a first type. The feature is generated from the image using a first feature extractor, the classification information being based on a probability of each of a plurality of possible classifications. The method estimates, for each of the feature types, an improvement in an accuracy of classification for each of the plurality of objects. The method selects features of a further type, from the plurality of feature types, according to the estimated improvement in the accuracy of the classification of each of the objects, and classifies the plurality of objects in the image using the selected features of the further type.
    Type: Application
    Filed: August 24, 2015
    Publication date: March 3, 2016
    Inventor: Nagita Mehrseresht
  • Patent number: 8606835
    Abstract: A method of determining interpolation coefficients (607, 609, 610, 611) of a symmetric interpolation kernel (608) is disclosed. The method comprises determining a first interpolation coefficient (611) from the symmetric interpolation kernel (608) and storing the first interpolation coefficient in a memory (506). The method then determines the value of an intermediate function (310) from symmetrically opposed segments (201, 204) of the kernel, and determines a subsequent interpolation coefficient dependent upon the first interpolation coefficient and the value of the intermediate function.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: December 10, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Nagita Mehrseresht, Alan Valev Tonisson
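Patent 8606835 concerns deriving a subsequent interpolation coefficient from a stored first coefficient rather than evaluating the kernel anew. The sketch below shows one concrete saving that kernel symmetry permits, offered as an illustration rather than the patented intermediate-function scheme: because h(-x) = h(x), the coefficient set for phase 1 - d is the reverse of the stored set for phase d, so a lookup table needs direct evaluation for only half of the phases.

```python
def catmull_rom(x):
    """A common symmetric interpolation kernel (Catmull-Rom cubic)."""
    x = abs(x)
    if x < 1: return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2: return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

def coefficients(d):
    """Coefficients for source taps at offsets -1, 0, 1, 2 around phase d in [0, 1)."""
    return [catmull_rom(d + 1), catmull_rom(d), catmull_rom(1 - d), catmull_rom(2 - d)]

d = 0.25
direct = coefficients(1 - d)                  # evaluated from scratch
derived = list(reversed(coefficients(d)))     # reused from the stored phase-d set
assert all(abs(a - b) < 1e-12 for a, b in zip(direct, derived))
print(direct)
```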
  • Patent number: 8260089
    Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: September 4, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
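A minimal sketch of the two measurements named in patent 8260089, using standard estimators as stand-ins: the isophote orientation is taken perpendicular to the intensity gradient, and the period of intensity variation along that direction is read off a 1-D FFT of image samples. Both estimators, and the synthetic test image, are assumptions.

```python
import numpy as np

def isophote_orientation(img, y, x):
    gy, gx = np.gradient(img.astype(float))
    # Isophotes (curves of constant intensity) run perpendicular to the gradient.
    return np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2

def period_along(img, y, x, theta, n=32):
    ts = np.arange(-n // 2, n // 2)
    ys = np.clip(np.round(y + ts * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + ts * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    line = img[ys, xs] - img[ys, xs].mean()
    k = int(np.abs(np.fft.rfft(line))[1:].argmax()) + 1    # dominant non-DC bin
    return n / k                                           # period in pixels

# Vertical stripes plus a slower modulation running along the (vertical) isophotes.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * xx / 8.0) + 0.2 * np.sin(2 * np.pi * yy / 32.0)
theta = isophote_orientation(img, 24, 1)
print(theta, period_along(img, 24, 1, theta))   # ~pi/2 and a period of ~32 pixels
```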
  • Publication number: 20090161990
    Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
    Type: Application
    Filed: November 18, 2008
    Publication date: June 25, 2009
    Applicant: Canon Kabushiki Kaisha
    Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
  • Publication number: 20090087119
    Abstract: Disclosed is a method for re-sampling an input image comprising input samples to produce an output image comprising output samples. The method includes the following steps. A set of kernel values (130) is determined (110) based on a position (120) of an input sample (100), each kernel value in the set corresponding to a distinct output sample position. Each kernel value in the set is multiplied (105) by the value of the input sample to form a contribution. Each contribution corresponds to a distinct output sample and is added (145) to a value in a corresponding storage location in an output accumulator (140), the result of this first addition replacing the contents of the storage location in the output accumulator. Each kernel value is also added (135) to a storage location in a sliding kernel accumulator (150), the result of this second addition replacing the contents of the storage location in the sliding kernel accumulator.
    Type: Application
    Filed: September 24, 2008
    Publication date: April 2, 2009
    Applicant: Canon Kabushiki Kaisha
    Inventors: Andrew James Dorrell, Nagita Mehrseresht, Alan Valev Tonisson
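A minimal 1-D sketch of the accumulation scheme in publication 20090087119, with two simplifications that are assumptions: a Gaussian kernel stands in for whatever kernel the method uses, and the sliding kernel accumulator is modelled as a plain array covering all output samples. Each input sample scatters kernel-weighted contributions into the output accumulator and the raw kernel values into the kernel accumulator; dividing the two normalises the result.

```python
import numpy as np

def resample(values, in_positions, out_positions, sigma=0.08):
    output_acc = np.zeros(len(out_positions))   # accumulated contributions
    kernel_acc = np.zeros(len(out_positions))   # accumulated kernel values
    for v, pos in zip(values, in_positions):
        # Kernel values determined by the input sample's position,
        # one per output sample position.
        k = np.exp(-((out_positions - pos) ** 2) / (2 * sigma ** 2))
        output_acc += k * v    # first addition: contribution into the output accumulator
        kernel_acc += k        # second addition: kernel value into the kernel accumulator
    return output_acc / np.maximum(kernel_acc, 1e-12)

x_in = np.linspace(0.0, 1.0, 11)
x_out = np.linspace(0.0, 1.0, 17)
print(np.round(resample(np.sin(2 * np.pi * x_in), x_in, x_out), 3))
```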
  • Publication number: 20080155000
    Abstract: A method of determining interpolation coefficients (607, 609, 610, 611) of a symmetric interpolation kernel (608) is disclosed. The method comprises determining a first interpolation coefficient (611) from the symmetric interpolation kernel (608) and storing the first interpolation coefficient in a memory (506). The method then determines the value of an intermediate function (310) from symmetrically opposed segments (201, 204) of the kernel, and determines a subsequent interpolation coefficient dependent upon the first interpolation coefficient and the value of the intermediate function.
    Type: Application
    Filed: December 20, 2007
    Publication date: June 26, 2008
    Applicant: Canon Kabushiki Kaisha
    Inventors: Nagita Mehrseresht, Alan Valev Tonisson