Patents by Inventor Ashley John Partis

Ashley John Partis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10096117
    Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterized using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: October 9, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
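
A minimal, illustrative Python sketch of the classify-and-update step described in the abstract above. The patent does not define the boundary characteristic, the threshold, or the update rule, so the scalar statistic, `fg_threshold`, and blending `learning_rate` below are assumptions, not the patented method.

```python
# Illustrative sketch only: the boundary characteristic is modelled as a
# single scalar per mode, and the scene-model update is a simple blend.
from dataclasses import dataclass

@dataclass
class Mode:
    appearance: float               # value the mode stores for matching (assumed)
    boundary_characteristic: float  # running boundary statistic for this mode
    is_foreground: bool = False

def classify_and_update(blob_boundary_char, matched_mode,
                        fg_threshold=0.5, learning_rate=0.1):
    """Classify a visual element via its matched mode, then update the model."""
    # Classify the element and its matched mode as foreground when the mode's
    # stored boundary characteristic exceeds the (assumed) threshold.
    matched_mode.is_foreground = matched_mode.boundary_characteristic > fg_threshold

    # Update the scene model: blend the observed blob boundary characteristic
    # into the matched mode's stored characteristic.
    matched_mode.boundary_characteristic = (
        (1 - learning_rate) * matched_mode.boundary_characteristic
        + learning_rate * blob_boundary_char
    )
    return matched_mode.is_foreground
```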
  • Patent number: 9922425
    Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
    Type: Grant
    Filed: December 1, 2015
    Date of Patent: March 20, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
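
A brief sketch of the two-pass idea in the abstract above, assuming visual elements are grid cells scored by a foreground likelihood and that both "classifiers" are plain thresholds, with the second threshold lower (more sensitive). The thresholds, expansion radius, and use of morphological dilation are illustrative assumptions.

```python
# Two-pass foreground classification with spatial expansion (illustrative).
import numpy as np
from scipy.ndimage import binary_dilation  # used here for the spatial expansion

def two_pass_classification(likelihood, t_first=0.7, t_second=0.4, expand_radius=1):
    # Pass 1: conservative classifier over every visual element.
    fg_first = likelihood > t_first

    # Spatially expand the elements classified as foreground.
    structure = np.ones((2 * expand_radius + 1,) * 2, dtype=bool)
    expanded = binary_dilation(fg_first, structure=structure)

    # Pass 2: a more sensitive classifier, applied only within the expanded area.
    fg_second = expanded & (likelihood > t_second)

    # Elements found by either pass are reported as foreground.
    return fg_first | fg_second
```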
  • Patent number: 9846810
    Abstract: A method of tracking objects of a scene is disclosed. The method determines two or more tracks which have merged. Each track is associated with at least one of the objects and has a corresponding graph structure. Each graph structure comprises at least one node representing the corresponding track. A new node representing the merged tracks is created. The graph structures are added as children nodes of the new node to create a merged graph structure. A split between the objects associated with one of the tracks represented by the nodes of the merged graph structure is determined. Similarity between one or more of the nodes in the merged graph structure and foreground areas corresponding to split objects is determined. One of the nodes in the merged graph structure is selected based on the determined similarity. A new graph structure for tracking the objects is created, the new graph structure having the selected node at the root of the new graph structure.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: December 19, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Ashley John Partis
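
A short Python sketch of the merge/split bookkeeping described in the abstract above. The similarity measure is left as a caller-supplied function, since the abstract does not specify it; the class and function names are hypothetical.

```python
# Track-graph merge and split resolution (illustrative sketch).
class TrackNode:
    def __init__(self, track_id, children=None):
        self.track_id = track_id
        self.children = children or []

    def all_nodes(self):
        """Yield this node and every descendant in the graph structure."""
        yield self
        for child in self.children:
            yield from child.all_nodes()

def merge_tracks(track_graphs, merged_track_id):
    """Create a new node for the merged track, with the old graphs as children."""
    return TrackNode(merged_track_id, children=list(track_graphs))

def resolve_split(merged_graph, split_foreground_area, similarity):
    """On a split, select the node most similar to the split-off foreground
    area and promote it to the root of a new graph for that object."""
    best = max(merged_graph.all_nodes(),
               key=lambda node: similarity(node, split_foreground_area))
    return TrackNode(best.track_id, children=list(best.children))
```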
  • Patent number: 9609233
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410,430,450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: March 28, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
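
A minimal sketch of the scenario-selection idea in the abstract above, assuming three candidate luminance compensation offsets and a metric that simply counts brightness/darkness mismatches against a per-element background estimate. The offsets, tolerance, and metric are illustrative assumptions, not the values used in the patent.

```python
# Luminance compensation via per-scenario brightness/darkness counts (sketch).
import numpy as np

def select_compensated_luminance(frame_luma, background_luma,
                                 offsets=(-10.0, 0.0, 10.0), tolerance=5.0):
    best_metric, best_adjusted = None, None
    for offset in offsets:                      # one candidate scenario per offset
        adjusted = frame_luma + offset          # adjusted luminance for this scenario
        diff = adjusted - background_luma
        brighter = np.count_nonzero(diff > tolerance)   # brightness counts
        darker = np.count_nonzero(diff < -tolerance)    # darkness counts
        metric = brighter + darker              # fewer mismatches = better compensation
        if best_metric is None or metric < best_metric:
            best_metric, best_adjusted = metric, adjusted
    return best_adjusted                        # adjusted set for the selected scenario
```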
  • Publication number: 20160189388
    Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterized using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
    Type: Application
    Filed: December 18, 2015
    Publication date: June 30, 2016
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
  • Publication number: 20160155024
    Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 2, 2016
    Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
  • Publication number: 20140321704
    Abstract: A method of tracking objects of a scene is disclosed. The method determines two or more tracks which have merged. Each track is associated with at least one of the objects and has a corresponding graph structure. Each graph structure comprises at least one node representing the corresponding track. A new node representing the merged tracks is created. The graph structures are added as children nodes of the new node to create a merged graph structure. A split between the objects associated with one of the tracks represented by the nodes of the merged graph structure is determined. Similarity between one or more of the nodes in the merged graph structure and foreground areas corresponding to split objects is determined. One of the nodes in the merged graph structure is selected based on the determined similarity. A new graph structure for tracking the objects is created, the new graph structure having the selected node at the root of the new graph structure.
    Type: Application
    Filed: April 29, 2014
    Publication date: October 30, 2014
    Applicant: Canon Kabushiki Kaisha
    Inventor: Ashley John Partis
  • Publication number: 20140301604
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410,430,450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Application
    Filed: October 19, 2012
    Publication date: October 9, 2014
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski