Patents by Inventor David Kenji See
David Kenji See has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11159732
Abstract: A method of capturing an image of a scene. A current location of a plurality of objects in a frame of a video capturing the scene having one or more events of interest is determined. For at least one of the events of interest, a time and a location for each of the plurality of objects associated with the event of interest is predicted based on the current location of the plurality of objects. A frame subset score is determined for each of a plurality of frame subsets in the frame, each of the plurality of frame subsets including one or more of the plurality of objects based on the predicted time and the predicted location for the event of interest. One of the determined plurality of frame subsets is selected based on the determined frame subset score. An image of the event of interest is captured using a camera, based on a camera orientation setting for the selected frame subset, where the captured image comprises the selected frame subset.
Type: Grant
Filed: February 14, 2020
Date of Patent: October 26, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Amit Kumar Gupta, David Kenji See, Jeroen Vendrig
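The predict-score-select flow the abstract describes can be sketched as follows. This is an illustrative reading only, not the patented method: the linear-motion predictor, the object/box representations, and the coverage-count scoring rule are all assumptions chosen for demonstration.

```python
# Hypothetical sketch: predict where objects will be, score candidate frame
# subsets (crops) by how many predicted positions they cover, pick the best.

def predict_location(obj, horizon):
    """Linearly extrapolate an object's (x, y) position `horizon` frames ahead."""
    x, y = obj["pos"]
    vx, vy = obj["vel"]
    return (x + vx * horizon, y + vy * horizon)

def subset_score(subset, predictions):
    """Score a candidate crop (x0, y0, x1, y1) by predicted positions inside it."""
    x0, y0, x1, y1 = subset
    return sum(1 for (px, py) in predictions
               if x0 <= px <= x1 and y0 <= py <= y1)

def select_subset(objects, subsets, horizon=5):
    """Pick the frame subset that best covers the predicted event locations."""
    predictions = [predict_location(o, horizon) for o in objects]
    return max(subsets, key=lambda s: subset_score(s, predictions))
```

In a real system the selected subset would then drive the camera orientation setting used to capture the image; here it simply returns the winning crop.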
-
Publication number: 20200267321
Abstract: A method of capturing an image of a scene. A current location of a plurality of objects in a frame of a video capturing the scene having one or more events of interest is determined. For at least one of the events of interest, a time and a location for each of the plurality of objects associated with the event of interest is predicted based on the current location of the plurality of objects. A frame subset score is determined for each of a plurality of frame subsets in the frame, each of the plurality of frame subsets including one or more of the plurality of objects based on the predicted time and the predicted location for the event of interest. One of the determined plurality of frame subsets is selected based on the determined frame subset score. An image of the event of interest is captured using a camera, based on a camera orientation setting for the selected frame subset, where the captured image comprises the selected frame subset.
Type: Application
Filed: February 14, 2020
Publication date: August 20, 2020
Inventors: Amit Kumar Gupta, David Kenji See, Jeroen Vendrig
-
Patent number: 10096117
Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterized using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
Type: Grant
Filed: December 18, 2015
Date of Patent: October 9, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
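A minimal sketch of the two steps the abstract names, under stated assumptions: the boundary characteristic is taken here as the fraction of blob pixels on the blob's bounding-box border, and the scene-model update is a simple exponential blend. Both choices are illustrative, not the patented definitions.

```python
# Hypothetical boundary-characteristic measure and classify-then-update step.

def blob_boundary_characteristic(blob_pixels):
    """Fraction of the blob's pixels lying on its bounding-box border."""
    xs = [p[0] for p in blob_pixels]
    ys = [p[1] for p in blob_pixels]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    border = sum(1 for (x, y) in blob_pixels
                 if x in (x0, x1) or y in (y0, y1))
    return border / len(blob_pixels)

def classify_and_update(mode, blob_char, fg_threshold=0.5, rate=0.1):
    """Classify as foreground when the matched mode's stored boundary
    characteristic exceeds a threshold, then blend in the new blob
    measurement so the scene model adapts over time."""
    label = "foreground" if mode["boundary"] > fg_threshold else "background"
    mode["boundary"] = (1 - rate) * mode["boundary"] + rate * blob_char
    return label
```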
-
Patent number: 9922425
Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
Type: Grant
Filed: December 1, 2015
Date of Patent: March 20, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
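The two-classifier scheme above can be sketched with simple intensity-difference thresholds against a background model: a strict first pass, an 8-neighbourhood expansion of its foreground, then a looser second pass restricted to the expanded area. The thresholds and the per-pixel grid are illustrative assumptions, not the patented classifiers.

```python
# Hypothetical two-pass foreground classification with spatial expansion.

def classify_region(frame, background, strict_t=40, loose_t=15):
    """Return a per-element foreground mask for a 2-D grid of luminances."""
    h, w = len(frame), len(frame[0])
    diff = [[abs(frame[y][x] - background[y][x]) for x in range(w)]
            for y in range(h)]
    # First classifier: conservative threshold.
    fg = [[diff[y][x] > strict_t for x in range(w)] for y in range(h)]
    # Spatially expand each foreground element to its 8-neighbourhood.
    expanded = set()
    for y in range(h):
        for x in range(w):
            if fg[y][x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            expanded.add((ny, nx))
    # Second classifier: more foreground-sensitive, but only inside the
    # expanded area, so lone noisy elements elsewhere stay background.
    for (y, x) in expanded:
        if diff[y][x] > loose_t:
            fg[y][x] = True
    return fg
```

Note how an element that passes only the loose threshold becomes foreground solely when it neighbours a strict detection; elsewhere it is left as background.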
-
Patent number: 9609233
Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
Type: Grant
Filed: October 19, 2012
Date of Patent: March 28, 2017
Assignee: Canon Kabushiki Kaisha
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
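The scenario-selection idea reads as: try several candidate luminance compensations, count how many elements still look too bright or too dark relative to the scene model under each, and keep the compensation with the best metric. The sketch below assumes three fixed offsets for S0, S1, S2, a simple tolerance-based count, and "total disagreement" as the metric; none of these specifics come from the patent.

```python
# Hypothetical luminance-compensation scenario selection.

def select_compensation(frame_lum, model_lum, offsets=(0, -8, 8), tol=5):
    """Try each offset (scenarios S0, S1, S2); pick the one minimising the
    number of elements still brighter or darker than the scene model."""
    best_offset, best_metric, best_adjusted = None, None, None
    for offset in offsets:
        adjusted = [v + offset for v in frame_lum]
        bright = sum(1 for a, m in zip(adjusted, model_lum) if a - m > tol)
        dark = sum(1 for a, m in zip(adjusted, model_lum) if m - a > tol)
        metric = bright + dark  # disagreement with the scene model
        if best_metric is None or metric < best_metric:
            best_offset, best_metric, best_adjusted = offset, metric, adjusted
    return best_offset, best_adjusted
```

For a frame globally 8 levels brighter than the model (e.g. an auto-exposure shift), the -8 scenario wins and the adjusted values feed object detection instead of the raw frame.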
-
Publication number: 20160189388
Abstract: A method for classifying a visual element in a video frame as either foreground or background, the video frame being characterized using a scene model comprising a plurality of modes, the method comprising the steps of: determining a blob boundary characteristic for a blob comprising at least the visual element; identifying a mode matched to the visual element; classifying the visual element and the matched mode as foreground dependent upon a matched mode boundary characteristic of the matched mode; and updating the scene model dependent upon the blob boundary characteristic and the matched mode boundary characteristic.
Type: Application
Filed: December 18, 2015
Publication date: June 30, 2016
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See
-
Publication number: 20160155024
Abstract: Disclosed is a method of classifying visual elements in a region of a video as either foreground or background. The method classifies each visual element in the region as either foreground or background using a first classifier, and expands spatially at least one of the visual elements classified as foreground to form a spatially expanded area. The method then classifies the visual elements in the spatially expanded area as either foreground or background using a second classifier that is more sensitive to foreground than the first classifier.
Type: Application
Filed: December 1, 2015
Publication date: June 2, 2016
Inventors: Ashley John Partis, Amit Kumar Gupta, David Kenji See
-
Publication number: 20140301604
Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the set of adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
Type: Application
Filed: October 19, 2012
Publication date: October 9, 2014
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
-
Patent number: 8837781
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Grant
Filed: October 29, 2013
Date of Patent: September 16, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
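With axis-aligned boxes as the spatial representations, the extend-compare-decide step can be sketched as below. The union-style extension, the intersection-over-union similarity, and the acceptance threshold are assumptions for illustration; the patent does not commit to these particular choices.

```python
# Hypothetical box extension and similarity test; boxes are (x0, y0, x1, y1).

def extend(box, expected):
    """Extend the detected box so it also covers the expected box."""
    return (min(box[0], expected[0]), min(box[1], expected[1]),
            max(box[2], expected[2]), max(box[3], expected[3]))

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def current_representation(detection, expected, accept=0.5):
    """Adopt the extended box when it agrees well with the expectation
    from previous frames; otherwise keep the raw detection."""
    ext = extend(detection, expected)
    return ext if iou(ext, expected) >= accept else detection
```

Intuitively, a detection truncated by occlusion gets grown toward where tracking expects the object, while a detection far from the expectation is left alone.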
-
Publication number: 20140056477
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Application
Filed: October 29, 2013
Publication date: February 27, 2014
Applicant: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
-
Patent number: 8611590
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Grant
Filed: December 23, 2009
Date of Patent: December 17, 2013
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
-
Publication number: 20100157089
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Application
Filed: December 23, 2009
Publication date: June 24, 2010
Applicant: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
-
Patent number: 7430204
Abstract: A method and apparatus for processing IP packets is disclosed. The method comprises defining sets of packet fields referred to as templates (in method steps 1101-1102), storing the templates in a memory, determining (in step 1104) whether a current IP packet is intended to be processed, identifying (also in step 1104) the process to be applied to the current IP packet, selecting, depending on an attribute of the identified process, at least one of the stored templates, and operating (in step 1107) upon the current IP packet, using the templates, to form a processed IP packet.
Type: Grant
Filed: March 24, 2005
Date of Patent: September 30, 2008
Assignee: Canon Kabushiki Kaisha
Inventor: David Kenji See
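One way to read the template idea: a template names the header fields a given process is allowed to rewrite, and the template selected for a packet's process gates every modification. The field names, the port-to-process table, and the rewrite rule below are hypothetical, chosen only to make the flow concrete.

```python
# Hypothetical template-gated packet processing.

TEMPLATES = {
    "nat": {"src_addr", "src_port"},   # fields a NAT process may rewrite
    "dscp_remark": {"tos"},            # fields a QoS-remarking process may touch
}

PROCESS_FOR_PORT = {80: "dscp_remark", 5060: "nat"}

def process_packet(packet, overrides):
    """Apply overrides to the packet, but only for fields permitted by the
    template of the process selected for this packet."""
    process = PROCESS_FOR_PORT.get(packet.get("dst_port"))
    if process is None:
        return packet                  # packet not intended to be processed
    template = TEMPLATES[process]
    out = dict(packet)
    for field, value in overrides.items():
        if field in template:          # the template gates each rewrite
            out[field] = value
    return out
```

An HTTP packet can thus have its TOS byte remarked while its source address is left untouched, because the remarking template does not list address fields.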