Patents by Inventor Peter Jan Pakulski

Peter Jan Pakulski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10198794
    Abstract: A system and computer-implemented method of altering perceptibility of depth in an image. The method comprises receiving a desired change in the perceptibility of depth in the image; receiving a depth-map corresponding to the image; and determining at least one characteristic of the image. The method further comprises applying an image process to the image, the image process varying in strength according to the depth map, and in accordance with a non-linear predetermined mapping relating a strength of the applied image process to a change in the perceptibility of depth, the mapping being determined with respect to the identified characteristic.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: February 5, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Nicolas Pierre Marie Frederic Bonnier, Anna Wong, Clement Fredembach, Peter Jan Pakulski, Steven Richard Irrgang
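
The abstract above names neither the image process nor the shape of the mapping. The sketch below is a minimal NumPy illustration, assuming Gaussian blur as the process, RMS contrast as the image characteristic, and a power-law curve as the non-linear mapping; all three concrete choices are assumptions, not the patented method.

```python
# Illustrative sketch only: blur strength follows the depth map, and the
# desired perceptibility change maps non-linearly (power law, assumed) to
# a blur sigma tuned by an image characteristic (RMS contrast, assumed).
import numpy as np

def gaussian_blur(image, sigma):
    """Separable Gaussian blur of a 2-D float image; sigma <= 0 is a no-op."""
    if sigma <= 0:
        return image.copy()
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def alter_depth_perceptibility(image, depth_map, desired_change, gamma=0.6):
    """Blend the image with a blurred copy, weighted by normalised depth,
    so the effective blur strength varies across the depth map."""
    contrast = image.std() / 255.0                           # image characteristic
    sigma = (desired_change / max(contrast, 1e-6)) ** gamma  # non-linear mapping
    near, far = float(depth_map.min()), float(depth_map.max())
    weight = (depth_map - near) / max(far - near, 1e-6)      # 0 = near, 1 = far
    return (1.0 - weight) * image + weight * gaussian_blur(image, sigma)
```
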
  • Publication number: 20170178298
    Abstract: A system and computer-implemented method of altering perceptibility of depth in an image. The method comprises receiving a desired change in the perceptibility of depth in the image; receiving a depth-map corresponding to the image; and determining at least one characteristic of the image. The method further comprises applying an image process to the image, the image process varying in strength according to the depth map, and in accordance with a non-linear predetermined mapping relating a strength of the applied image process to a change in the perceptibility of depth, the mapping being determined with respect to the identified characteristic.
    Type: Application
    Filed: December 16, 2016
    Publication date: June 22, 2017
    Inventors: Nicolas Pierre Marie Frederic Bonnier, Anna Wong, Clement Fredembach, Peter Jan Pakulski, Steven Richard Irrgang
  • Patent number: 9609233
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the adjusted set of luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: March 28, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
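
As a rough illustration of the scenario selection described above, the sketch below tries three candidate luminance offsets standing in for scenarios S0, S1 and S2, accumulates brightness and darkness counts against a background model, and keeps the offset whose counts are best balanced. The offsets, the tolerance, and the balance metric are assumptions for illustration; the patent defines the metric only abstractly.

```python
# Illustrative sketch: one compensation offset per scenario (assumed values).
import numpy as np

def adjust_luminance(frame, background, offsets=(-8.0, 0.0, 8.0), tol=4.0):
    """Pick the offset whose brightness/darkness counts are best balanced
    against the background model, then apply it to the frame."""
    best_offset, best_metric = 0.0, float("inf")
    for offset in offsets:                      # one scenario per offset
        diff = (frame + offset) - background
        bright = int((diff > tol).sum())        # brightness counts
        dark = int((diff < -tol).sum())         # darkness counts
        metric = abs(bright - dark)             # balance metric (assumed)
        if metric < best_metric:
            best_metric, best_offset = metric, offset
    return frame + best_offset
```
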
  • Patent number: 9031280
    Abstract: Disclosed herein are a system, method, and computer program product for updating a scene model (230) used for object detection in a video sequence by defining a relationship between a pair of mode models relating to different visual elements of said scene model (230). The method includes the steps of: determining whether the pair of mode models have a temporal correlation with each other, dependent upon a predetermined criterion (745); determining a classification of each mode model in the pair of mode models (740); modifying the relationship between the pair of mode models, dependent upon the determination of the temporal correlation and the determination of the classification (760); and updating the scene model based upon the modified relationship (770).
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: May 12, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventor: Peter Jan Pakulski
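
A minimal sketch of the pairwise relationship update: it assumes last-match frame numbers as the temporal characteristic, a foreground/background label as the classification, and a reinforce-or-decay rule on a scalar link weight. All three concretions are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch: link two mode models from different visual elements.
from dataclasses import dataclass

@dataclass
class ModeModel:
    last_match_frame: int = -1          # frame at which this mode last matched
    classification: str = "background"  # "background" or "foreground"

@dataclass
class Relationship:
    weight: float = 0.0                 # strength of the pairwise link

def update_relationship(a: ModeModel, b: ModeModel, rel: Relationship,
                        window: int = 2) -> Relationship:
    """Strengthen the link when both modes matched within `window` frames of
    each other (temporal correlation) and share a classification; else decay."""
    correlated = (a.last_match_frame >= 0 and b.last_match_frame >= 0
                  and abs(a.last_match_frame - b.last_match_frame) <= window)
    if correlated and a.classification == b.classification:
        rel.weight = min(1.0, rel.weight + 0.1)   # reinforce (rule assumed)
    else:
        rel.weight = max(0.0, rel.weight - 0.05)  # decay (rule assumed)
    return rel
```
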
  • Publication number: 20140301604
    Abstract: Disclosed are a method and apparatus for adjusting a set of luminance values associated with a set of visual elements in a current frame (310) of a video sequence for object detection (370). The method determines (410, 430, 450), for each of a plurality of scenarios (S0, S1, S2), a set of adjusted luminance values based on a corresponding luminance compensation value, and accumulates (460), for each scenario, a set of brightness counts and darkness counts of the current frame based on the set of adjusted luminance values. A metric (470) is calculated for each scenario based on the set of brightness counts and darkness counts, and one of the scenarios is selected based on the calculated metric. The method then selects (350) the adjusted set of luminance values associated with the selected scenario as the adjusted set of luminance values associated with the current frame of the video sequence.
    Type: Application
    Filed: October 19, 2012
    Publication date: October 9, 2014
    Inventors: Amit Kumar Gupta, Ashley John Partis, David Kenji See, Hiroshi Tojo, Peter Jan Pakulski
  • Patent number: 8837781
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: September 16, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
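
A minimal sketch of the extend-and-compare step described above, assuming axis-aligned bounding boxes as the spatial representation and intersection-over-union as the similarity measure; the patent abstract commits to neither choice.

```python
# Illustrative sketch: boxes are (x1, y1, x2, y2) tuples, assumed axis-aligned.

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union) if union else 0.0

def current_representation(detected, expected, threshold=0.5):
    """Extend the detected box to also cover the expected box, then keep the
    extension only if it still agrees well enough with the expectation."""
    extended = (min(detected[0], expected[0]), min(detected[1], expected[1]),
                max(detected[2], expected[2]), max(detected[3], expected[3]))
    return extended if iou(extended, expected) >= threshold else detected
```
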
  • Patent number: 8837823
    Abstract: Disclosed herein are a system and method for performing foreground/background separation on an input image. The method pre-classifies (1010, 1020) an input visual element in the input image as one of a first element type and a second element type, dependent upon a predetermined characteristic. The method performs a first foreground/background separation (1030) on the input visual element that has been pre-classified as the first element type, wherein the first foreground/background separation step is based on color data and brightness data of the input visual element. The method performs a second foreground/background separation (1040) on the input visual element that has been pre-classified as the second element type, wherein the second foreground/background separation step is based on color data, brightness data, and texture of the input visual element.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: September 16, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ashley Partis, Amit Kumar Gupta, Peter Jan Pakulski, Qianlu Lin
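
A minimal sketch of the two-path separation, assuming a pixel block as the visual element, block variance as the pre-classifying characteristic, and simple mean/standard-deviation distances as the two separators; each concrete choice is an assumption for illustration.

```python
# Illustrative sketch: classify one block of a frame against its background model.
import numpy as np

def is_foreground(block, model_block, var_threshold=50.0, fg_threshold=20.0):
    """Pre-classify the block by its variance, then compare it against the
    background model using the features appropriate for its type."""
    colour_diff = abs(block.mean() - model_block.mean())      # colour/brightness cue
    if block.var() < var_threshold:        # first type: flat block
        score = colour_diff
    else:                                  # second type: textured block
        texture_diff = abs(block.std() - model_block.std())   # texture cue
        score = 0.5 * colour_diff + 0.5 * texture_diff
    return score > fg_threshold
```
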
  • Publication number: 20140056477
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: October 29, 2013
    Publication date: February 27, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Patent number: 8611590
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: December 17, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Patent number: 8553086
    Abstract: Disclosed is a method (101), in relation to a current video frame (300) comprising a visual element (320) associated with a location in a scene captured in the frame (300), said visual element (320) being associated with a plurality of mode models (350), said method matching (140) one of said plurality of mode models (350) to the visual element (320), said method comprising, for each said mode model (350), the steps of determining (420) a visual support value depending upon visual similarity between the visual element (320) and the mode model (350), determining (440) a spatial support value depending upon similarity of temporal characteristics of the mode models (350) associated with the visual element and mode models (385) of one or more other visual elements (331); and identifying (450) a matching one of said plurality of mode models (350) depending upon the visual support value and the spatial support value.
    Type: Grant
    Filed: February 25, 2009
    Date of Patent: October 8, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Jarrad Michael Springett, Peter Jan Pakulski, Jeroen Vendrig
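
A minimal sketch of combining the two support values, assuming a scalar intensity per visual element, a Gaussian visual similarity, mode age as the temporal characteristic, and an equal-weight combination; each of these concretions is an assumption made for illustration.

```python
# Illustrative sketch: each mode is a dict with an appearance "value" and an "age".
import numpy as np

def match_mode(element, modes, neighbour_modes, sigma=10.0, age_scale=100.0):
    """Return the index of the mode best supported both visually and by
    agreement with the ages of neighbouring elements' matched modes."""
    neighbour_age = (np.mean([m["age"] for m in neighbour_modes])
                     if neighbour_modes else 0.0)
    best, best_support = 0, -1.0
    for i, mode in enumerate(modes):
        visual = np.exp(-(element - mode["value"]) ** 2 / (2 * sigma ** 2))  # visual support
        spatial = np.exp(-abs(mode["age"] - neighbour_age) / age_scale)      # spatial support
        support = 0.5 * visual + 0.5 * spatial
        if support > best_support:
            best, best_support = i, support
    return best
```
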
  • Publication number: 20130002865
    Abstract: A method and system for updating a visual element model of a scene model associated with a scene, the visual element model including a set of mode models for a visual element for a location of the scene. The method receives an incoming visual element of a frame of the image sequence and, for each mode model, classifies the respective mode model as either a matching mode model or a distant mode model, by comparing an appearance of the incoming visual element and a set of visual characteristics of the respective mode model. The method removes a distant mode model from the visual element model, based upon a first temporal characteristic of a matching mode model exceeding a maturity threshold and a second temporal characteristic of the distant mode model being below a stability threshold.
    Type: Application
    Filed: June 27, 2012
    Publication date: January 3, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Amit Kumar Gupta
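
A minimal sketch of the removal rule described above, assuming a scalar appearance per mode, hit counts as the two temporal characteristics, and arbitrary example thresholds; the matching test and the threshold values are assumptions for illustration.

```python
# Illustrative sketch: each mode is a dict with an appearance "value" and a "hits" count.

def prune_modes(element_value, modes, match_tol=10.0, maturity=50, stability=5):
    """Split modes into matching/distant by appearance, then drop distant
    modes that never stabilised once some matching mode is mature enough."""
    matching = [m for m in modes if abs(m["value"] - element_value) <= match_tol]
    distant = [m for m in modes if abs(m["value"] - element_value) > match_tol]
    mature = any(m["hits"] >= maturity for m in matching)
    return matching + [m for m in distant if not (mature and m["hits"] < stability)]
```
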
  • Publication number: 20120163658
    Abstract: Disclosed herein are a system, method, and computer program product for updating a scene model (230) used for object detection in a video sequence by defining a relationship between a pair of mode models relating to different visual elements of said scene model (230). The method includes the steps of: determining whether the pair of mode models have a temporal correlation with each other, dependent upon a predetermined criterion (745); determining a classification of each mode model in the pair of mode models (740); modifying the relationship between the pair of mode models, dependent upon the determination of the temporal correlation and the determination of the classification (760); and updating the scene model based upon the modified relationship (770).
    Type: Application
    Filed: December 15, 2011
    Publication date: June 28, 2012
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Peter Jan PAKULSKI
  • Publication number: 20120106837
    Abstract: Disclosed herein are a system and method for performing foreground/background separation on an input image. The method pre-classifies (1010, 1020) an input visual element in the input image as one of a first element type and a second element type, dependent upon a predetermined characteristic. The method performs a first foreground/background separation (1030) on the input visual element that has been pre-classified as the first element type, wherein the first foreground/background separation step is based on colour data and brightness data of the input visual element. The method performs a second foreground/background separation (1040) on the input visual element that has been pre-classified as the second element type, wherein the second foreground/background separation step is based on colour data, brightness data, and texture of the input visual element.
    Type: Application
    Filed: October 25, 2011
    Publication date: May 3, 2012
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Ashley Partis, Amit Kumar Gupta, Peter Jan Pakulski, Qianlu Lin
  • Publication number: 20110043699
    Abstract: Disclosed is a method (101), in relation to a current video frame (300) comprising a visual element (320) associated with a location in a scene captured in the frame (300), said visual element (320) being associated with a plurality of mode models (350), said method matching (140) one of said plurality of mode models (350) to the visual element (320), said method comprising, for each said mode model (350), the steps of determining (420) a visual support value depending upon visual similarity between the visual element (320) and the mode model (350), determining (440) a spatial support value depending upon similarity of temporal characteristics of the mode models (350) associated with the visual element and mode models (385) of one or more other visual elements (331); and identifying (450) a matching one of said plurality of mode models (350) depending upon the visual support value and the spatial support value.
    Type: Application
    Filed: February 25, 2009
    Publication date: February 24, 2011
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Jarrad Michael Springett, Peter Jan Pakulski, Jeroen Vendrig
  • Publication number: 20100157089
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: December 23, 2009
    Publication date: June 24, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See