Patents by Inventor Peter Jan Pakulski
Peter Jan Pakulski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10198794
Abstract: A system and computer-implemented method of altering perceptibility of depth in an image. The method comprises receiving a desired change in the perceptibility of depth in the image; receiving a depth-map corresponding to the image; and determining at least one characteristic of the image. The method further comprises applying an image process to the image, the image process varying in strength according to the depth map, and in accordance with a non-linear predetermined mapping relating a strength of the applied image process to a change in the perceptibility of depth, the mapping being determined with respect to the identified characteristic.
Type: Grant
Filed: December 16, 2016
Date of Patent: February 5, 2019
Assignee: Canon Kabushiki Kaisha
Inventors: Nicolas Pierre Marie Frederic Bonnier, Anna Wong, Clement Fredembach, Peter Jan Pakulski, Steven Richard Irrgang
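The abstract leaves the image process, the image characteristic, and the non-linear mapping unspecified. The sketch below is a minimal illustration under assumed choices: Gaussian blur as the depth-varying process, global RMS contrast as the image characteristic, and a quadratic mapping from the requested perceptibility change to blur strength. The function name and the four-band depth quantisation are illustrative, not from the patent.

```python
# Illustrative sketch only; the patent does not specify these choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def alter_depth_perceptibility(image, depth_map, desired_change):
    """Increase apparent depth by blurring far content more than near content.

    image:          2-D float array, grayscale values in [0, 1]
    depth_map:      2-D float array, same shape; 0.0 = near, 1.0 = far
    desired_change: requested increase in depth perceptibility, in [0, 1]
    """
    # One characteristic determined from the image itself (assumed here to
    # be global RMS contrast) modulates the mapping.
    contrast = float(image.std())

    # Assumed non-linear (quadratic) mapping from the requested change in
    # perceptibility to the strength of the applied process, expressed as
    # a maximum blur sigma in pixels.
    max_sigma = 4.0 * (desired_change ** 2) * (1.0 + contrast)

    # Apply the process with a strength that varies with the depth map:
    # split depth into four bands and blur deeper bands more heavily.
    out = image.copy()
    edges = np.linspace(0.0, 1.0, 5)
    for i in range(4):
        sigma = max_sigma * (i + 1) / 4.0
        blurred = gaussian_filter(image, sigma=sigma)
        if i < 3:
            mask = (depth_map >= edges[i]) & (depth_map < edges[i + 1])
        else:
            mask = depth_map >= edges[i]
        out[mask] = blurred[mask]
    return out
```

Blurring distant content while leaving near content sharp mimics depth-of-field, one of the cues a process like this could exploit; the patent's mapping would presumably be calibrated perceptually rather than fixed as above.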
-
Publication number: 20170178298
Abstract: A system and computer-implemented method of altering perceptibility of depth in an image. The method comprises receiving a desired change in the perceptibility of depth in the image; receiving a depth-map corresponding to the image; and determining at least one characteristic of the image. The method further comprises applying an image process to the image, the image process varying in strength according to the depth map, and in accordance with a non-linear predetermined mapping relating a strength of the applied image process to a change in the perceptibility of depth, the mapping being determined with respect to the identified characteristic.
Type: Application
Filed: December 16, 2016
Publication date: June 22, 2017
Inventors: Nicolas Pierre Marie Frederic Bonnier, Anna Wong, Clement Fredembach, Peter Jan Pakulski, Steven Richard Irrgang
-
Patent number: 9609233
Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
Type: Grant
Filed: October 19, 2012
Date of Patent: March 28, 2017
Assignee: Canon Kabushiki Kaisha
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
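As a rough illustration of the scenario-selection loop, the sketch below tries one assumed compensation offset per scenario, accumulates brightness and darkness counts against scene-model luminance values, and keeps the scenario with the smallest mismatch metric. The offsets, the tolerance, and the sum-of-counts metric are assumptions; the abstract says only that a metric is computed from the two counts.

```python
# Illustrative sketch only; offsets, tolerance and metric are assumed.
import numpy as np

def select_compensation_scenario(frame_lum, model_lum,
                                 offsets=(-8.0, 0.0, 8.0), tol=4.0):
    """Pick the best of three luminance-compensation scenarios (S0, S1, S2).

    frame_lum: 1-D array of luminance values, one per visual element
    model_lum: 1-D array of scene-model luminance values, same length
    """
    best_metric = None
    best_adjusted = None
    for offset in offsets:                            # one scenario per offset
        adjusted = frame_lum + offset                 # adjusted luminance values
        diff = adjusted - model_lum
        bright = int(np.count_nonzero(diff > tol))    # brightness count
        dark = int(np.count_nonzero(diff < -tol))     # darkness count
        metric = bright + dark                        # assumed per-scenario metric
        if best_metric is None or metric < best_metric:
            best_metric, best_adjusted = metric, adjusted
    return best_adjusted              # luminance set used for this frame
```

The point of testing several scenarios is robustness to global lighting shifts (auto-exposure, clouds): the scenario that best reconciles the frame with the scene model wins before foreground detection runs.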
-
Patent number: 9031280
Abstract: Disclosed herein are a system, method, and computer program product for updating a scene model (230) used for object detection in a video sequence by defining a relationship between a pair of mode models relating to different visual elements of said scene model (230). The method includes the steps of: determining whether the pair of mode models have a temporal correlation with each other, dependent upon a predetermined criterion (745); determining a classification of each mode model in the pair of mode models (740); modifying the relationship between the pair of mode models, dependent upon the determination of the temporal correlation and the determination of the classification (760); and updating the scene model based upon the modified relationship (770).
Type: Grant
Filed: December 15, 2011
Date of Patent: May 12, 2015
Assignee: Canon Kabushiki Kaisha
Inventor: Peter Jan Pakulski
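The abstract does not define the temporal-correlation criterion, the classification, or how the relationship is stored. A minimal sketch under assumed choices follows: creation times within a fixed frame gap as the correlation criterion, a foreground/background label as the classification, and a signed counter per mode-model pair as the relationship.

```python
# Illustrative sketch only; criterion, classes and link store are assumed.
from dataclasses import dataclass

@dataclass
class ModeModel:
    element_id: int        # which visual element this mode model belongs to
    created_frame: int     # temporal characteristic: creation time
    classification: str    # e.g. "foreground" or "background"

def update_mode_relationship(a, b, links, max_gap=5):
    """Adjust the link between mode models of two different visual elements
    and return the updated link table (the sketch's relationship store)."""
    # Assumed predetermined criterion: the two modes were created within
    # `max_gap` frames of each other.
    correlated = abs(a.created_frame - b.created_frame) <= max_gap
    same_class = a.classification == b.classification
    key = (a.element_id, b.element_id)
    if correlated and same_class:
        links[key] = links.get(key, 0) + 1   # strengthen the relationship
    else:
        links[key] = links.get(key, 0) - 1   # weaken the relationship
    return links
```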
-
Publication number: 20140301604
Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the adjusted luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
Type: Application
Filed: October 19, 2012
Publication date: October 9, 2014
Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
-
Patent number: 8837781
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Grant
Filed: October 29, 2013
Date of Patent: September 16, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
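Read with axis-aligned bounding boxes as the spatial representations, the flow might look like the sketch below: extend the detected box to cover the expected one, score the extension with intersection-over-union, and fall back to the raw detection when similarity is low. The union-style extension, the IoU measure, and the 0.5 threshold are all assumptions.

```python
# Illustrative sketch only; representations and threshold are assumed.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def current_spatial_representation(detected, expected, threshold=0.5):
    """Extend the detected box towards the expectation from previous frames,
    then keep the extension only if it resembles the expectation enough."""
    extended = (min(detected[0], expected[0]), min(detected[1], expected[1]),
                max(detected[2], expected[2]), max(detected[3], expected[3]))
    # Similarity measure between extended and expected representations.
    return extended if iou(extended, expected) >= threshold else detected
```

A scheme like this helps when a tracked object is partially occluded: the detection shrinks, but extending it towards the motion-predicted box recovers a stable representation, while the similarity test guards against merging unrelated detections.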
-
Patent number: 8837823
Abstract: Disclosed herein are a system and method for performing foreground/background separation on an input image. The method pre-classifies (1010, 1020) an input visual element in the input image as one of a first element type and a second element type, dependent upon a predetermined characteristic. The method performs a first foreground/background separation (1030) on the input visual element that has been pre-classified as the first element type, wherein the first foreground/background separation step is based on color data and brightness data of the input visual element. The method performs a second foreground/background separation (1040) on the input visual element that has been pre-classified as the second element type, wherein the second foreground/background separation step is based on color data, brightness data, and texture of the input visual element.
Type: Grant
Filed: October 25, 2011
Date of Patent: September 16, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Ashley Partis, Amit Kumar Gupta, Peter Jan Pakulski, Qianlu Lin
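Treating an image block as the visual element, one plausible reading is sketched below: local variance serves as the assumed predetermined characteristic, low-variance blocks are separated on color and brightness alone, and high-variance blocks additionally compare texture. The statistics, dictionary keys, and thresholds are illustrative.

```python
# Illustrative sketch only; characteristic, features and thresholds assumed.
import numpy as np

def classify_visual_element(block, model, variance_threshold=0.02):
    """Label one visual element (e.g. an 8x8x3 block, values in [0, 1])
    as foreground or background against its scene-model statistics.

    model: dict with keys "color" (per-channel mean), "brightness",
           "texture" (expected local std) and "threshold".
    """
    # Pre-classification by a predetermined characteristic (assumed here:
    # local variance). Flat blocks take the first separation path.
    flat = block.std() < variance_threshold

    color_diff = float(np.abs(block.mean(axis=(0, 1)) - model["color"]).mean())
    brightness_diff = abs(float(block.mean()) - model["brightness"])
    score = color_diff + brightness_diff    # first path: color + brightness

    if not flat:                            # second path also compares texture
        score += abs(float(block.std()) - model["texture"])

    return "foreground" if score > model["threshold"] else "background"
```

Splitting the paths this way avoids spending a texture comparison on featureless regions, where texture carries no signal and only adds noise to the decision.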
-
Publication number: 20140056477
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Application
Filed: October 29, 2013
Publication date: February 27, 2014
Applicant: CANON KABUSHIKI KAISHA
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
-
Patent number: 8611590
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Grant
Filed: December 23, 2009
Date of Patent: December 17, 2013
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
-
Patent number: 8553086
Abstract: Disclosed is a method (101), in relation to a current video frame (300) comprising a visual element (320) associated with a location in a scene captured in the frame (300), said visual element (320) being associated with a plurality of mode models (350), said method matching (140) one of said plurality of mode models (350) to the visual element (320). The method comprises, for each said mode model (350), the steps of: determining (420) a visual support value depending upon visual similarity between the visual element (320) and the mode model (350); determining (440) a spatial support value depending upon similarity of temporal characteristics of the mode models (350) associated with the visual element and mode models (385) of one or more other visual elements (331); and identifying (450) a matching one of said plurality of mode models (350) depending upon the visual support value and the spatial support value.
Type: Grant
Filed: February 25, 2009
Date of Patent: October 8, 2013
Assignee: Canon Kabushiki Kaisha
Inventors: Jarrad Michael Springett, Peter Jan Pakulski, Jeroen Vendrig
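A compact sketch of the two support values follows, with assumed forms: visual support as a negative exponential of appearance distance, and spatial support as the fraction of neighbouring mode models with a similar age. The additive combination and its weighting are likewise assumptions.

```python
# Illustrative sketch only; both support formulas and weights are assumed.
import numpy as np

def match_mode_model(element, modes, neighbour_modes,
                     w_spatial=0.5, max_age_gap=10):
    """Return the mode model with the highest combined support.

    element: appearance vector of the incoming visual element
    modes: list of dicts with "appearance" (vector) and "age" (frames)
    neighbour_modes: mode models currently matched at neighbouring elements
    """
    best, best_support = None, float("-inf")
    for mode in modes:
        # Visual support: similarity of appearance (assumed form).
        visual = float(np.exp(-np.linalg.norm(element - mode["appearance"])))
        # Spatial support: fraction of neighbours whose temporal
        # characteristic (age, here) is similar (assumed form).
        if neighbour_modes:
            spatial = float(np.mean([abs(mode["age"] - n["age"]) < max_age_gap
                                     for n in neighbour_modes]))
        else:
            spatial = 0.0
        support = visual + w_spatial * spatial
        if support > best_support:
            best, best_support = mode, support
    return best
```

Consulting neighbours lets an ambiguous element (say, under a flickering light) side with the interpretation its surroundings already agree on, rather than deciding on appearance alone.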
-
Publication number: 20130002865
Abstract: A method and system for updating a visual element model of a scene model associated with a scene, the visual element model including a set of mode models for a visual element for a location of the scene. The method receives an incoming visual element of a frame of the image sequence and, for each mode model, classifies the respective mode model as either a matching mode model or a distant mode model, by comparing an appearance of the incoming visual element and a set of visual characteristics of the respective mode model. The method removes a distant mode model from the visual element model, based upon a first temporal characteristic of a matching mode model exceeding a maturity threshold and a second temporal characteristic of the distant mode model being below a stability threshold.
Type: Application
Filed: June 27, 2012
Publication date: January 3, 2013
Applicant: CANON KABUSHIKI KAISHA
Inventors: Peter Jan Pakulski, Amit Kumar Gupta
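The sketch below implements the pruning rule under assumed representations: appearance distance splits modes into matching and distant, and a per-mode hit count stands in for the temporal characteristics compared against the maturity and stability thresholds. All names and threshold values are illustrative.

```python
# Illustrative sketch only; features and threshold values are assumed.
import numpy as np

def prune_mode_models(element, modes, match_tol=10.0,
                      maturity_threshold=100, stability_threshold=5):
    """Classify each mode model as matching or distant by appearance
    distance, then drop distant modes that never stabilised once some
    matching mode is mature.

    modes: list of dicts with "appearance" (vector) and "hit_count"
           (temporal characteristic: frames in which the mode matched).
    """
    matching, distant = [], []
    for mode in modes:
        dist = float(np.linalg.norm(element - mode["appearance"]))
        (matching if dist <= match_tol else distant).append(mode)

    # Removal condition from the abstract: a matching mode exceeds the
    # maturity threshold while the distant mode sits below stability.
    has_mature_match = any(m["hit_count"] >= maturity_threshold
                           for m in matching)
    kept = list(matching)
    for mode in distant:
        if has_mature_match and mode["hit_count"] < stability_threshold:
            continue                       # remove this unstable distant mode
        kept.append(mode)
    return kept
```

Gating removal on a mature match keeps the model from discarding short-lived modes prematurely; they are only pruned once a well-established alternative explains the element.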
-
Publication number: 20120163658
Abstract: Disclosed herein are a system, method, and computer program product for updating a scene model (230) used for object detection in a video sequence by defining a relationship between a pair of mode models relating to different visual elements of said scene model (230). The method includes the steps of: determining whether the pair of mode models have a temporal correlation with each other, dependent upon a predetermined criterion (745); determining a classification of each mode model in the pair of mode models (740); modifying the relationship between the pair of mode models, dependent upon the determination of the temporal correlation and the determination of the classification (760); and updating the scene model based upon the modified relationship (770).
Type: Application
Filed: December 15, 2011
Publication date: June 28, 2012
Applicant: CANON KABUSHIKI KAISHA
Inventor: Peter Jan Pakulski
-
Publication number: 20120106837
Abstract: Disclosed herein are a system and method for performing foreground/background separation on an input image. The method pre-classifies (1010, 1020) an input visual element in the input image as one of a first element type and a second element type, dependent upon a predetermined characteristic. The method performs a first foreground/background separation (1030) on the input visual element that has been pre-classified as the first element type, wherein the first foreground/background separation step is based on colour data and brightness data of the input visual element. The method performs a second foreground/background separation (1040) on the input visual element that has been pre-classified as the second element type, wherein the second foreground/background separation step is based on colour data, brightness data, and texture of the input visual element.
Type: Application
Filed: October 25, 2011
Publication date: May 3, 2012
Applicant: CANON KABUSHIKI KAISHA
Inventors: Ashley Partis, Amit Kumar Gupta, Peter Jan Pakulski, Qianlu Lin
-
Publication number: 20110043699
Abstract: Disclosed is a method (101), in relation to a current video frame (300) comprising a visual element (320) associated with a location in a scene captured in the frame (300), said visual element (320) being associated with a plurality of mode models (350), said method matching (140) one of said plurality of mode models (350) to the visual element (320). The method comprises, for each said mode model (350), the steps of: determining (420) a visual support value depending upon visual similarity between the visual element (320) and the mode model (350); determining (440) a spatial support value depending upon similarity of temporal characteristics of the mode models (350) associated with the visual element and mode models (385) of one or more other visual elements (331); and identifying (450) a matching one of said plurality of mode models (350) depending upon the visual support value and the spatial support value.
Type: Application
Filed: February 25, 2009
Publication date: February 24, 2011
Applicant: CANON KABUSHIKI KAISHA
Inventors: Jarrad Michael Springett, Peter Jan Pakulski, Jeroen Vendrig
-
Publication number: 20100157089
Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
Type: Application
Filed: December 23, 2009
Publication date: June 24, 2010
Applicant: CANON KABUSHIKI KAISHA
Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See