Patents by Inventor Nagita Mehrseresht
Nagita Mehrseresht has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11062128
Abstract: A method of classifying an interaction captured in a video sequence. A plurality of people in the video sequence is identified. An action of a first one of the people at a first time is determined. An action of a second one of the people at a second time is determined, the action of the second person being after the action of the first person. A role for the second person at the second time is determined, the role being independent of the determined actions of the first and second person. An interaction between the first person and the second person is classified based on the determined role of the second person and the determined actions of the first and second person.
Type: Grant
Filed: June 2, 2017
Date of Patent: July 13, 2021
Assignee: Canon Kabushiki Kaisha
Inventor: Nagita Mehrseresht
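As a toy illustration of the idea in this abstract (not the patented method), an interaction label can be looked up from the two determined actions plus the second person's role; all actions, roles, and labels below are hypothetical:

```python
# Minimal sketch: map (first person's action, second person's action,
# second person's role) to an interaction label. Labels are invented.

def classify_interaction(action_first, action_second, role_second, rules):
    """Return an interaction label for the triple, or None if unknown."""
    return rules.get((action_first, action_second, role_second))

RULES = {
    ("kick", "catch", "goalkeeper"): "shot_saved",
    ("pass", "receive", "teammate"): "completed_pass",
}
```

Note how the role disambiguates: the same pair of actions performed toward a person in a different role would map to a different (or no) interaction label.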
-
Patent number: 10565454
Abstract: A method and associated imaging system for classifying at least one concept type in a video segment is disclosed. The method associates an object concept type in the video segment with a spatio-temporal segment of the video segment. The method then associates a plurality of action concept types with the spatio-temporal segment, where each action concept type of the plurality of action concept types is associated with a subset of the spatio-temporal segment associated with the object concept type. The method then classifies the action concept types and the object concept types associated with the video segment using a conditional Markov random field (CRF) model where the CRF model is structured with the plurality of action concept types being independent and indirectly linked via a global concept type assigned to the video segment, and the object concept type is linked to the global concept type.
Type: Grant
Filed: December 19, 2016
Date of Patent: February 18, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Nagita Mehrseresht, Barry James Drake
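The star structure described in this abstract (action and object concepts linked only through a global concept) is what keeps exact MAP inference cheap: for each candidate global label, every action label and the object label can be maximised independently. A minimal sketch, with invented labels and potentials:

```python
# Sketch of MAP inference in a star-structured CRF: action and object labels
# are conditionally independent given the global label, so each is maximised
# separately per candidate global label. All potentials here are made up.

def map_global(global_labels, action_unaries, object_unary, compat):
    """Return the best global label; compat[(g, x)] is a pairwise potential."""
    def best_given(g, unary):
        return max(unary[x] + compat.get((g, x), 0.0) for x in unary)
    scores = {g: sum(best_given(g, u) for u in action_unaries)
                 + best_given(g, object_unary)
              for g in global_labels}
    return max(scores, key=scores.get)

# Hypothetical unary scores and compatibilities:
ACTIONS = [{"chop": 1.0, "run": 0.0}]
OBJECT = {"knife": 1.0, "ball": 0.0}
COMPAT = {("cooking", "chop"): 1.0, ("cooking", "knife"): 1.0,
          ("sports", "run"): 1.0, ("sports", "ball"): 1.0}
```

With these numbers, "cooking" wins because both the best action ("chop") and the best object ("knife") are compatible with it.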
-
Patent number: 10529077
Abstract: A system and method of detecting an interaction between a plurality of objects. The method comprises receiving tracking information for the plurality of objects in a scene; generating a plurality of frames, each of the plurality of frames comprising an activation for each of the plurality of objects and representing a relative spatial relationship between the plurality of objects in the scene determined from the received tracking information, the frames encoding properties of the objects using properties of the corresponding activations; determining, using a trained neural network, features associated with the plurality of objects from the generated plurality of frames using the activations and the relative spatial relationship between the objects, the features representing changes in the relative spatial relationship between the objects over time relating to the interaction; and detecting time localization of the interaction in the plurality of frames using the determined features.
Type: Grant
Filed: December 19, 2017
Date of Patent: January 7, 2020
Assignee: Canon Kabushiki Kaisha
Inventor: Nagita Mehrseresht
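The first step above, rasterising tracking information into frames of activations, can be sketched as follows (a simplification, not the patented encoding): each tracked object becomes one cell activation on a coarse grid, with an object property encoded as the activation's intensity.

```python
# Sketch: rasterise tracked object positions into a coarse activation frame.
# Each object's property (e.g. a team id mapped to an intensity) becomes the
# activation value; cell size and encoding are illustrative assumptions.

def render_frame(tracks, width, height, cell=10):
    """tracks: list of (x, y, intensity) in pixels. Returns a 2-D grid."""
    gw, gh = width // cell, height // cell
    grid = [[0.0] * gw for _ in range(gh)]
    for x, y, value in tracks:
        gx = min(int(x) // cell, gw - 1)
        gy = min(int(y) // cell, gh - 1)
        grid[gy][gx] = max(grid[gy][gx], value)  # strongest activation wins
    return grid

frame = render_frame([(5, 5, 1.0), (25, 5, 0.5)], width=40, height=20)
```

A sequence of such frames preserves the objects' relative spatial layout over time, which is what a downstream network would consume to localize the interaction.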
-
Patent number: 10445582
Abstract: A method of determining a composite action from a video clip using a conditional random field (CRF). The method includes determining a plurality of features from the video clip, each of the features having a corresponding temporal segment from the video clip. The method may continue by determining, for each of the temporal segments corresponding to one of the features, an initial estimate of an action unit label from a corresponding unary potential function, the corresponding unary potential function having as ordered input the plurality of features from a current temporal segment and at least one other of the temporal segments. The method may further include determining the composite action by jointly optimizing the initial estimate of the action unit labels.
Type: Grant
Filed: December 20, 2016
Date of Patent: October 15, 2019
Assignee: Canon Kabushiki Kaisha
Inventor: Nagita Mehrseresht
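The joint-optimization step can be illustrated with a generic Viterbi pass over per-segment unary scores, a stand-in for the CRF inference the abstract describes; the labels and potentials below are invented:

```python
# Sketch: jointly assign per-segment action-unit labels by a Viterbi pass,
# combining unary scores with a pairwise transition potential. This is a
# generic chain-CRF decoder, not the patented formulation.

def viterbi(unaries, transition):
    """unaries: list of {label: score}; transition[(a, b)] -> score."""
    prev = dict(unaries[0])
    back = []
    for u in unaries[1:]:
        cur, ptr = {}, {}
        for label, score in u.items():
            best = max(prev, key=lambda p: prev[p] + transition.get((p, label), 0.0))
            cur[label] = prev[best] + transition.get((best, label), 0.0) + score
            ptr[label] = best
        prev = cur
        back.append(ptr)
    label = max(prev, key=prev.get)        # best final label
    path = [label]
    for ptr in reversed(back):             # backtrack to the start
        label = ptr[label]
        path.append(label)
    return list(reversed(path))
```

With a transition bonus for staying on the same label, a weak second-segment score can be overridden by temporal consistency, which is the point of optimizing the labels jointly rather than per segment.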
-
Patent number: 10380173
Abstract: A method of jointly classifying a plurality of objects in an image using a feature type selected from a plurality of feature types determines classification information for each of the plurality of objects in the image by applying a predetermined joint classifier to at least one feature of a first type. The feature is generated from the image using a first feature extractor, the classification information being based on a probability of each of a plurality of possible classifications. The method estimates, for each of the feature types, an improvement in an accuracy of classification for each of the plurality of objects. The method selects features of a further type, from the plurality of feature types, according to the estimated improvement in the accuracy of the classification of each of the objects, and classifies the plurality of objects in the image using the selected features of the further type.
Type: Grant
Filed: August 24, 2015
Date of Patent: August 13, 2019
Assignee: Canon Kabushiki Kaisha
Inventor: Nagita Mehrseresht
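The selection step can be sketched as picking the feature type with the largest estimated accuracy gain; summing per-object gains is one plausible aggregation (an assumption here, not stated in the abstract), and the feature-type names are hypothetical:

```python
# Sketch: choose the next feature type to extract by the total estimated
# classification-accuracy improvement across objects. Aggregation by sum
# is an assumption; the feature-type names are invented.

def select_feature_type(estimated_gain):
    """estimated_gain: {feature_type: {object_id: expected accuracy gain}}."""
    totals = {ft: sum(gains.values()) for ft, gains in estimated_gain.items()}
    return max(totals, key=totals.get)
```

The motivation is cost: rather than extracting every feature type up front, only the type expected to help most is computed and fed back into the joint classifier.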
-
Publication number: 20190188866
Abstract: A system and method of detecting an interaction between a plurality of objects. The method comprises receiving tracking information for the plurality of objects in a scene; generating a plurality of frames, each of the plurality of frames comprising an activation for each of the plurality of objects and representing a relative spatial relationship between the plurality of objects in the scene determined from the received tracking information, the frames encoding properties of the objects using properties of the corresponding activations; determining, using a trained neural network, features associated with the plurality of objects from the generated plurality of frames using the activations and the relative spatial relationship between the objects, the features representing changes in the relative spatial relationship between the objects over time relating to the interaction; and detecting time localization of the interaction in the plurality of frames using the determined features.
Type: Application
Filed: December 19, 2017
Publication date: June 20, 2019
Inventor: Nagita Mehrseresht
-
Patent number: 10152637
Abstract: A method of segmenting a video sequence. A segment score is determined for each of a plurality of fixed length segments of the video sequence. Each of the segment scores provides a score for a plurality of actions associated with a corresponding fixed length segment. A current segment is selected from the segments of the video sequence. The segment score is selected for a further one of the segments, the further segment being disjoint with the current segment and being used to provide information about actions that were classified outside the current segment. A further segment score is determined for the current segment according to the selected segment score. The video is segmented based on the determined further segment score.
Type: Grant
Filed: September 14, 2016
Date of Patent: December 11, 2018
Assignee: Canon Kabushiki Kaisha
Inventor: Nagita Mehrseresht
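One simple way to read the refinement step, sketched below under assumptions the abstract does not spell out: a disjoint segment's action scores serve as context, down-weighting actions in the current segment that already score highly elsewhere. The weighting scheme and action names are illustrative only.

```python
# Sketch: refine one fixed-length segment's per-action scores using a
# disjoint segment's scores as context. The subtraction with a fixed
# weight is an illustrative assumption, not the patented scoring rule.

def refine_scores(current, context, weight=0.5):
    """current, context: {action: score}. Returns refined current scores."""
    return {a: s - weight * context.get(a, 0.0) for a, s in current.items()}
```

Segment boundaries would then be placed using the refined scores rather than the raw per-segment ones.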
-
Publication number: 20180349704
Abstract: A method of classifying an interaction captured in a video sequence. A plurality of people in the video sequence is identified. An action of a first one of the people at a first time is determined. An action of a second one of the people at a second time is determined, the action of the second person being after the action of the first person. A role for the second person at the second time is determined, the role being independent of the determined actions of the first and second person. An interaction between the first person and the second person is classified based on the determined role of the second person and the determined actions of the first and second person.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventor: Nagita Mehrseresht
-
Publication number: 20180173955
Abstract: A method of determining a composite action from a video clip using a conditional random field (CRF). The method includes determining a plurality of features from the video clip, each of the features having a corresponding temporal segment from the video clip. The method may continue by determining, for each of the temporal segments corresponding to one of the features, an initial estimate of an action unit label from a corresponding unary potential function, the corresponding unary potential function having as ordered input the plurality of features from a current temporal segment and at least one other of the temporal segments. The method may further include determining the composite action by jointly optimising the initial estimate of the action unit labels.
Type: Application
Filed: December 20, 2016
Publication date: June 21, 2018
Inventor: Nagita Mehrseresht
-
Publication number: 20180075306
Abstract: A method of segmenting a video sequence. A segment score is determined for each of a plurality of fixed length segments of the video sequence. Each of the segment scores provides a score for a plurality of actions associated with a corresponding fixed length segment. A current segment is selected from the segments of the video sequence. The segment score is selected for a further one of the segments, the further segment being disjoint with the current segment and being used to provide information about actions that were classified outside the current segment. A further segment score is determined for the current segment according to the selected segment score. The video is segmented based on the determined further segment score.
Type: Application
Filed: September 14, 2016
Publication date: March 15, 2018
Inventor: Nagita Mehrseresht
-
Publication number: 20170177943
Abstract: A method and associated imaging system for classifying at least one concept type in a video segment is disclosed. The method associates an object concept type in the video segment with a spatio-temporal segment of the video segment. The method then associates a plurality of action concept types with the spatio-temporal segment, where each action concept type of the plurality of action concept types is associated with a subset of the spatio-temporal segment associated with the object concept type. The method then classifies the action concept types and the object concept types associated with the video segment using a conditional Markov random field (CRF) model where the CRF model is structured with the plurality of action concept types being independent and indirectly linked via a global concept type assigned to the video segment, and the object concept type is linked to the global concept type.
Type: Application
Filed: December 19, 2016
Publication date: June 22, 2017
Inventors: Nagita Mehrseresht, Barry James Drake
-
Publication number: 20160063358
Abstract: A method of jointly classifying a plurality of objects in an image using a feature type selected from a plurality of feature types determines classification information for each of the plurality of objects in the image by applying a predetermined joint classifier to at least one feature of a first type. The feature is generated from the image using a first feature extractor, the classification information being based on a probability of each of a plurality of possible classifications. The method estimates, for each of the feature types, an improvement in an accuracy of classification for each of the plurality of objects. The method selects features of a further type, from the plurality of feature types, according to the estimated improvement in the accuracy of the classification of each of the objects, and classifies the plurality of objects in the image using the selected features of the further type.
Type: Application
Filed: August 24, 2015
Publication date: March 3, 2016
Inventor: Nagita Mehrseresht
-
Patent number: 8606835
Abstract: A method of determining interpolation coefficients (607, 609, 610, 611) of a symmetric interpolation kernel (608) is disclosed. The method comprises determining a first interpolation coefficient (611) from the symmetric interpolation kernel (608) and storing the first interpolation coefficient in a memory (506). The method then determines the value of an intermediate function (310) from symmetrically opposed segments (201, 204) of the kernel, and determines a subsequent interpolation coefficient dependent upon the first interpolation coefficient and the value of the intermediate function.
Type: Grant
Filed: December 20, 2007
Date of Patent: December 10, 2013
Assignee: Canon Kabushiki Kaisha
Inventors: Nagita Mehrseresht, Alan Valev Tonisson
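For the simplest symmetric case, a two-tap triangle (tent) kernel, the idea reads as follows: evaluate and store one coefficient, form an intermediate sum from the two symmetrically opposed kernel segments, and derive the other coefficient from those two values instead of evaluating the kernel again. This sketch shows that special case only, not the general patented procedure:

```python
# Sketch for a symmetric two-tap tent kernel: the second coefficient is
# derived from the first plus an intermediate sum over the symmetrically
# opposed kernel segments, avoiding a second independent evaluation.

def tent(t):
    """Symmetric triangle (linear interpolation) kernel, support [-1, 1]."""
    return max(0.0, 1.0 - abs(t))

def coefficients(phase):
    c0 = tent(phase)                    # first coefficient, stored
    s = tent(phase) + tent(phase - 1)   # intermediate: opposed segments
    c1 = s - c0                         # subsequent coefficient, derived
    return c0, c1
```

For the tent kernel the intermediate sum is always 1 (partition of unity), so `c1` falls out as `1 - c0`.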
-
Patent number: 8260089
Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
Type: Grant
Filed: November 18, 2008
Date of Patent: September 4, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
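An isophote is a curve of constant intensity, so locally it runs perpendicular to the intensity gradient. The orientation step in the abstract can be sketched from estimated gradient components (the period-of-variation step is not modelled here):

```python
import math

# Sketch: the local isophote direction is the unit vector perpendicular to
# the intensity gradient (gx, gy). Gradient estimation itself is assumed.

def isophote_direction(gx, gy):
    """Return a unit vector along the isophote through the sample position."""
    n = math.hypot(gx, gy) or 1.0   # guard against a zero gradient
    return -gy / n, gx / n
```

Interpolating along this direction (rather than axis-aligned) is what preserves edges; the patented method further adapts the interpolation to the period of intensity variation measured along the isophote.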
-
Publication number: 20090161990
Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
Type: Application
Filed: November 18, 2008
Publication date: June 25, 2009
Applicant: Canon Kabushiki Kaisha
Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
-
Publication number: 20090087119
Abstract: Disclosed is a method for re-sampling an input image comprising input samples to produce an output image comprising output samples. The method includes the following steps. A set of kernel values (130) is determined (110) based on a position (120) of an input sample (100), each kernel value in the set corresponding to a distinct output sample position. Each kernel value in the set is multiplied (105) by the value of the input sample to form a contribution. Each contribution corresponds to a distinct output sample and is first added (145) to a value in a corresponding storage location in an output accumulator (140), the result of the first addition replacing the contents of the storage location in the output accumulator. Each kernel value is also second added (135) to a storage location in a sliding kernel accumulator (150), the result of the second addition replacing the contents of the storage location in the sliding kernel accumulator.
Type: Application
Filed: September 24, 2008
Publication date: April 2, 2009
Applicant: Canon Kabushiki Kaisha
Inventors: Andrew James Dorrell, Nagita Mehrseresht, Alan Valev Tonisson
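This describes a scatter-style resampler: each input sample pushes weighted contributions into an output accumulator while the kernel weights themselves accumulate separately, so the output can be normalised afterwards. A 1-D sketch under that reading (the final normalisation step is an assumption; the publication only describes the two accumulations):

```python
# Sketch: 1-D scatter resampling. Each input sample adds value*kernel to an
# output accumulator and the kernel value to a kernel accumulator; dividing
# the two at the end normalises for uneven kernel coverage.

def tent(t):
    """Symmetric triangle kernel, support [-1, 1]."""
    return max(0.0, 1.0 - abs(t))

def resample(samples, n_out, kernel, support):
    """samples: list of (position in output coords, value)."""
    out = [0.0] * n_out     # output accumulator
    ksum = [0.0] * n_out    # kernel accumulator
    for pos, value in samples:
        lo = max(0, int(pos) - support)
        hi = min(n_out - 1, int(pos) + support)
        for j in range(lo, hi + 1):
            k = kernel(j - pos)
            out[j] += value * k     # first addition: the contribution
            ksum[j] += k            # second addition: the kernel value
    return [o / k if k else 0.0 for o, k in zip(out, ksum)]
```

Scattering from inputs (rather than gathering per output) means each input sample is read once, which suits streaming hardware where input arrives sequentially.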
-
Publication number: 20080155000
Abstract: A method of determining interpolation coefficients (607, 609, 610, 611) of a symmetric interpolation kernel (608) is disclosed. The method comprises determining a first interpolation coefficient (611) from the symmetric interpolation kernel (608) and storing the first interpolation coefficient in a memory (506). The method then determines the value of an intermediate function (310) from symmetrically opposed segments (201, 204) of the kernel, and determines a subsequent interpolation coefficient dependent upon the first interpolation coefficient and the value of the intermediate function.
Type: Application
Filed: December 20, 2007
Publication date: June 26, 2008
Applicant: Canon Kabushiki Kaisha
Inventors: Nagita Mehrseresht, Alan Valev Tonisson