Patents by Inventor David Kenji See

David Kenji See has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11159732
    Abstract: A method of capturing an image of a scene. A current location of a plurality of objects in a frame of a video capturing the scene having one or more events of interest is determined. For at least one of the events of interest, a time and a location for each of the plurality of objects associated with the event of interest is predicted based on the current location of the plurality of objects. A frame subset score is determined for each of a plurality of frame subsets in the frame, each of the plurality of frame subsets including one or more of the plurality of objects based on the predicted time and the predicted location for the event of interest. One of the determined plurality of frame subsets is selected based on the determined frame subset score. An image of the event of interest is captured using a camera, based on a camera orientation setting for the selected frame subset, where the captured image comprises the selected frame subset.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: October 26, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, David Kenji See, Jeroen Vendrig
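    Example sketch: the abstract above, in rough Python. The linear motion model, the rectangle-containment score, and all names (Obj, predict_location, subset_score, best_frame_subset) are illustrative assumptions, not taken from the patent.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Obj:
        x: float
        y: float
        vx: float  # velocity estimate from recent frames
        vy: float

    def predict_location(obj, dt):
        """Linearly predict where an object will be dt seconds ahead."""
        return obj.x + obj.vx * dt, obj.y + obj.vy * dt

    def subset_score(rect, predictions):
        """Score a candidate frame subset (x0, y0, x1, y1) by how many
        predicted object locations it contains."""
        x0, y0, x1, y1 = rect
        return sum(x0 <= px <= x1 and y0 <= py <= y1 for px, py in predictions)

    def best_frame_subset(objs, candidate_rects, event_dt):
        """Pick the highest-scoring frame subset for the predicted event
        time; a camera orientation setting would then be derived from
        the chosen rectangle."""
        preds = [predict_location(o, event_dt) for o in objs]
        return max(candidate_rects, key=lambda r: subset_score(r, preds))

    # Two objects converging on an event two seconds away: the middle
    # crop is the only one that frames both predicted locations.
    objs = [Obj(10, 20, 5, 0), Obj(60, 20, -5, 0)]
    crops = [(0, 0, 40, 40), (20, 0, 60, 40), (40, 0, 80, 40)]
    print(best_frame_subset(objs, crops, event_dt=2.0))  # (20, 0, 60, 40)
    ```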
  • Publication number: 20200267321
    Abstract: A method of capturing an image of a scene. A current location of a plurality of objects in a frame of a video capturing the scene having one or more events of interest is determined. For at least one of the events of interest, a time and a location for each of the plurality of objects associated with the event of interest is predicted based on the current location of the plurality of objects. A frame subset score is determined for each of a plurality of frame subsets in the frame, each of the plurality of frame subsets including one or more of the plurality of objects based on the predicted time and the predicted location for the event of interest. One of the determined plurality of frame subsets is selected based on the determined frame subset score. An image of the event of interest is captured using a camera, based on a camera orientation setting for the selected frame subset, where the captured image comprises the selected frame subset.
    Type: Application
    Filed: February 14, 2020
    Publication date: August 20, 2020
    Inventors: Amit Kumar Gupta, David Kenji See, Jeroen Vendrig
  • Patent number: 10096117
    Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterized using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: October 9, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
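    Example sketch: one plausible reading of the classification loop in Python, assuming a scene model that keeps per-element modes carrying a boundary-strength statistic; the matching rule, thresholds, and update rate are invented for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Mode:
        mean: float               # appearance summary for this mode
        boundary_strength: float  # running boundary characteristic
        foreground: bool = False

    def match_mode(modes, value, threshold=10.0):
        """Return the stored mode closest in appearance, or None."""
        best = min(modes, key=lambda m: abs(m.mean - value), default=None)
        if best is not None and abs(best.mean - value) <= threshold:
            return best
        return None

    def classify_element(modes, value, blob_boundary,
                         fg_cutoff=0.5, learn_rate=0.1):
        mode = match_mode(modes, value)
        if mode is None:  # unseen appearance: start a new foreground mode
            modes.append(Mode(value, blob_boundary, foreground=True))
            return True
        # Classify the element and the mode from the matched mode's
        # boundary characteristic ...
        mode.foreground = mode.boundary_strength > fg_cutoff
        # ... then update the scene model from the blob's boundary
        # evidence in this frame.
        mode.boundary_strength += learn_rate * (blob_boundary - mode.boundary_strength)
        return mode.foreground

    modes = [Mode(mean=100.0, boundary_strength=0.8)]
    print(classify_element(modes, value=102.0, blob_boundary=0.9))  # True
    ```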
  • Patent number: 9922425
    Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
    Type: Grant
    Filed: December 1, 2015
    Date of Patent: March 20, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
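    Example sketch: the two-pass scheme with NumPy/SciPy, assuming per-element background differences, an 8-neighbour expansion, and a lower second threshold as the "more sensitive" classifier; all three are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def classify_two_pass(diff, strict_thresh=40, sensitive_thresh=20):
        """diff: per-element absolute difference from a background model."""
        fg1 = diff > strict_thresh  # first, stricter classifier
        # Spatially expand pass-1 foreground to its 8-neighbourhood.
        expanded = binary_dilation(fg1, structure=np.ones((3, 3), bool))
        # Second classifier, more sensitive to foreground, applied only
        # inside the expanded area.
        fg2 = expanded & (diff > sensitive_thresh)
        return fg1 | fg2

    diff = np.array([[50, 30,  5],
                     [30, 10,  5],
                     [ 5,  5,  5]])
    # The 30s are missed by the strict pass but recovered by the
    # sensitive pass because they neighbour confident foreground.
    print(classify_two_pass(diff).astype(int))
    ```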
  • Patent number: 9609233
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: March 28, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
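    Example sketch: the scenario-selection loop in Python, assuming three candidate compensation offsets for the scenarios (S0, S1, S2), a fixed brightness/darkness tolerance, and a metric that prefers small, balanced counts; all three choices are assumptions.

    ```python
    import numpy as np

    def select_compensation(frame, model, offsets=(-16, 0, 16), tol=8):
        best = None
        for offset in offsets:  # one luminance compensation per scenario
            adjusted = frame.astype(int) + offset
            bright = np.count_nonzero(adjusted - model > tol)
            dark = np.count_nonzero(model - adjusted > tol)
            metric = bright + dark + abs(bright - dark)  # assumed metric
            if best is None or metric < best[0]:
                best = (metric, offset, adjusted)
        # Keep the adjusted luminance values of the winning scenario.
        return best[1], best[2]

    frame = np.full((4, 4), 140)  # frame uniformly brighter than model
    model = np.full((4, 4), 120)
    offset, adjusted = select_compensation(frame, model)
    print(offset)  # -16: cancels the global lighting change
    ```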
  • Publication number: 20160189388
    Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterised using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
    Type: Application
    Filed: December 18, 2015
    Publication date: June 30, 2016
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
  • Publication number: 20160155024
    Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 2, 2016
    Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
  • Publication number: 20140301604
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Application
    Filed: October 19, 2012
    Publication date: October 9, 2014
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
  • Patent number: 8837781
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: September 16, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
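    Example sketch: one way to read the abstract in code. The detected box is extended to cover the expected box, compared with it by intersection-over-union, and kept or discarded on a 0.5 cutoff; the extension rule, similarity measure, and cutoff are assumptions.

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def extend_toward(detected, expected):
        """Grow the detected box just enough to cover the expected box."""
        return (min(detected[0], expected[0]), min(detected[1], expected[1]),
                max(detected[2], expected[2]), max(detected[3], expected[3]))

    def current_representation(detected, expected, accept=0.5):
        extended = extend_toward(detected, expected)
        # Keep the extension only if it stays similar to the expectation,
        # i.e. the detection plausibly is the tracked object, partially seen.
        return extended if iou(extended, expected) >= accept else detected

    detected = (12, 10, 20, 30)  # e.g. a partially occluded detection
    expected = (10, 10, 30, 30)  # expectation from previous frames
    print(current_representation(detected, expected))  # (10, 10, 30, 30)
    ```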
  • Publication number: 20140056477
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: October 29, 2013
    Publication date: February 27, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Patent number: 8611590
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: December 17, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Publication number: 20100157089
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: December 23, 2009
    Publication date: June 24, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Patent number: 7430204
    Abstract: A method and apparatus for processing IP packets is disclosed. The method comprises defining sets of packet fields referred to as templates (in method steps 1101-1102), storing the templates in a memory, determining (in a step 1104) if a current IP packet is intended to be processed, identifying (in the step 1104) the process to be applied to the current IP packet, selecting, depending on an attribute of the identified process, at least one of the stored templates, and operating (in a step 1107) upon the current IP packet, using the templates, to form a processed IP packet.
    Type: Grant
    Filed: March 24, 2005
    Date of Patent: September 30, 2008
    Assignee: Canon Kabushiki Kaisha
    Inventor: David Kenji See
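    Example sketch: template-driven packet processing in the spirit of the abstract. Templates are stored field sets; the process identified for a packet selects templates via one of its attributes, and applying them forms the processed packet. The field names, processes, and selection rule are invented for illustration.

    ```python
    # Stored templates: named sets of packet fields to (re)write.
    TEMPLATES = {
        "nat": {"src_ip": "203.0.113.1"},
        "dscp_remark": {"dscp": 46},
        "ttl_refresh": {"ttl": 64},
    }

    # Attribute of each process: which templates it needs, in order.
    PROCESS_TEMPLATES = {
        "outbound_nat": ["nat", "ttl_refresh"],
        "voip_priority": ["dscp_remark"],
    }

    def identify_process(packet):
        """Stand-in for deciding whether and how a packet is processed."""
        if packet.get("dst_port") == 5060:
            return "voip_priority"
        if not packet["src_ip"].startswith("203."):
            return "outbound_nat"
        return None

    def process_packet(packet):
        process = identify_process(packet)
        if process is None:
            return packet  # packet not intended to be processed
        out = dict(packet)
        for name in PROCESS_TEMPLATES[process]:
            out.update(TEMPLATES[name])  # operate using each template
        return out

    pkt = {"src_ip": "192.168.1.5", "dst_port": 80, "ttl": 3}
    print(process_packet(pkt))  # src_ip rewritten, ttl refreshed
    ```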