Patents by Inventor Daniel John Wedge

Daniel John Wedge has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the three tracking techniques described in the abstracts appear after the listing.

  • Patent number: 9524448
    Abstract: Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object (713) tracked in a first field of view and a second object (753) tracked in a second field of view. The method determines a first area (711) in the first field of view, based on the location and size of the first object (713). The method utilizes a predetermined area relationship between the first area (711) in the first field of view and at least one area (751) in the second field of view to determine a second area (751) in the second field of view. In one embodiment, the method determines the second area (751) in the second field of view by comparing predetermined area relationships between the first area (711) and any areas (751) in the second field of view to determine a best match.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 20, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Daniel John Wedge
  • Patent number: 8837781
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: September 16, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Publication number: 20140072174
    Abstract: Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object (713) tracked in a first field of view and a second object (753) tracked in a second field of view. The method determines a first area (711) in the first field of view, based on the location and size of the first object (713). The method utilises a predetermined area relationship between the first area (711) in the first field of view and at least one area (751) in the second field of view to determine a second area (751) in the second field of view. In one embodiment, the method determines the second area (751) in the second field of view by comparing predetermined area relationships between the first area (711) and any areas (751) in the second field of view to determine a best match.
    Type: Application
    Filed: November 12, 2013
    Publication date: March 13, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Daniel John Wedge
  • Publication number: 20140056477
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: October 29, 2013
    Publication date: February 27, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Patent number: 8649556
    Abstract: Disclosed herein are a method and system for appearance-invariant tracking of an object in an image sequence. A track is associated with the image sequence, wherein the track has an associated track signature comprising at least one mode. The method detects the object in a frame of the image sequence (1020). A representative signature is associated with the detected object. The method determines a spatial difference measure between the track and the detected object, and determines, for each mode of the track signature, a visual difference (1410) between the mode of the track signature and the representative signature to obtain a lowest determined visual difference (1420). The method then utilises the spatial difference measure and the lowest determined visual difference to perform at least one of the following steps of: (i) associating the detected object with the track (1440), and (ii) adding a new mode to the track signature (1460), based on the representative signature.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: February 11, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Daniel John Wedge
  • Patent number: 8615106
    Abstract: Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object (713) tracked in a first field of view and a second object (753) tracked in a second field of view. The method determines a first area (711) in the first field of view, based on the location and size of the first object (713). The method utilizes a predetermined area relationship between the first area (711) in the first field of view and at least one area (751) in the second field of view to determine a second area (751) in the second field of view. In one embodiment, the method determines the second area (751) in the second field of view by comparing predetermined area relationships between the first area (711) and any areas (751) in the second field of view to determine a best match.
    Type: Grant
    Filed: December 2, 2010
    Date of Patent: December 24, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: Daniel John Wedge
  • Patent number: 8611590
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: December 17, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
  • Publication number: 20110135154
    Abstract: Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object (713) tracked in a first field of view and a second object (753) tracked in a second field of view. The method determines a first area (711) in the first field of view, based on the location and size of the first object (713). The method utilises a predetermined area relationship between the first area (711) in the first field of view and at least one area (751) in the second field of view to determine a second area (751) in the second field of view. In one embodiment, the method determines the second area (751) in the second field of view by comparing predetermined area relationships between the first area (711) and any areas (751) in the second field of view to determine a best match.
    Type: Application
    Filed: December 2, 2010
    Publication date: June 9, 2011
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Daniel John Wedge
  • Publication number: 20100166262
    Abstract: Disclosed herein are a method and system for appearance-invariant tracking of an object in an image sequence. A track is associated with the image sequence, wherein the track has an associated track signature comprising at least one mode. The method detects the object in a frame of the image sequence (1020). A representative signature is associated with the detected object. The method determines a spatial difference measure between the track and the detected object, and determines, for each mode of the track signature, a visual difference (1410) between the mode of the track signature and the representative signature to obtain a lowest determined visual difference (1420). The method then utilises the spatial difference measure and the lowest determined visual difference to perform at least one of the following steps of: (i) associating the detected object with the track (1440), and (ii) adding a new mode to the track signature (1460), based on the representative signature.
    Type: Application
    Filed: December 23, 2009
    Publication date: July 1, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Daniel John Wedge
  • Publication number: 20100157089
    Abstract: Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation (820) for the detection based on at least one previous frame, generates a spatial representation (810) of the detection, and extends the spatial representation (810) to obtain an extended spatial representation (830), based on the expected spatial representation (820). The method determines a similarity measure between the extended spatial representation (830) and the expected spatial representation (820), and then determines the current spatial representation for the detection based on the similarity measure.
    Type: Application
    Filed: December 23, 2009
    Publication date: June 24, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Peter Jan Pakulski, Daniel John Wedge, Ashley Partis, David Kenji See
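
Illustrative sketches of the patented techniques

The abstracts above describe three related video-surveillance tracking techniques. The sketches that follow are minimal Python illustrations of how each could look in code; every data structure, threshold, and helper name in them is an assumption made for illustration, not the implementation claimed in the patents.

Patents 9524448 and 8615106 (and the corresponding applications 20140072174 and 20110135154) cover establishing a correspondence between objects tracked in two fields of view by mapping each object to an "area" and consulting a predetermined relationship between areas. This sketch assumes a coarse grid partition of each frame and a simple strength table standing in for the predetermined area relationship; using only the bounding-box centre to choose the area is a further simplification.

```python
"""Sketch: cross-camera correspondence via predetermined area relationships.

Loosely follows the abstracts of US 9524448 / US 8615106. The grid partition,
the relationship table, and all names here are illustrative assumptions.
"""
from dataclasses import dataclass


@dataclass
class Detection:
    x: float        # bounding-box centre, pixels
    y: float
    width: float
    height: float


def area_index(det: Detection, frame_w: int, frame_h: int, grid: int = 4) -> int:
    """Map a detection to a coarse grid cell (an "area") from its centre location."""
    col = min(int(det.x / frame_w * grid), grid - 1)
    row = min(int(det.y / frame_h * grid), grid - 1)
    return row * grid + col


def best_matching_area(first_area: int,
                       relationship: dict[int, dict[int, float]]) -> int | None:
    """Pick the second-view area with the strongest predetermined relationship
    to the first-view area (the "best match" of the abstract)."""
    candidates = relationship.get(first_area, {})
    return max(candidates, key=candidates.get) if candidates else None


def objects_correspond(obj_a: Detection, obj_b: Detection,
                       relationship: dict[int, dict[int, float]],
                       size_a: tuple[int, int], size_b: tuple[int, int]) -> bool:
    """Declare a correspondence when the second object lies in the area
    predicted from the first object's area via the relationship table."""
    second_area = best_matching_area(area_index(obj_a, *size_a), relationship)
    return second_area is not None and area_index(obj_b, *size_b) == second_area
```

In practice such a relationship table would be built beforehand, for example by counting how often tracks in one camera's areas coincide in time with tracks in the other's; that learning step is outside this sketch.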
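
Patents 8837781 and 8611590 (applications 20140056477 and 20100157089) cover deriving an expected spatial representation for a detection from previous frames, extending the detected representation toward it, and choosing the current representation from a similarity measure. In the sketch below, axis-aligned bounding boxes, the union-style extension, intersection-over-union as the similarity measure, and the 0.5 threshold are all assumptions.

```python
"""Sketch: choosing a current spatial representation for a detection.

Loosely follows the abstracts of US 8837781 / US 8611590. The boxes, the
union-style extension, and the IoU threshold are illustrative assumptions.
"""
Box = tuple[float, float, float, float]    # (x1, y1, x2, y2)


def extend_towards(detected: Box, expected: Box) -> Box:
    """Grow the detected box until it also covers the expected box,
    one simple way to "extend" a spatial representation."""
    return (min(detected[0], expected[0]), min(detected[1], expected[1]),
            max(detected[2], expected[2]), max(detected[3], expected[3]))


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union, used here as the similarity measure."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def current_representation(detected: Box, expected: Box,
                           threshold: float = 0.5) -> Box:
    """Keep the extended box when it agrees well with the expectation
    (e.g. a partially occluded detection); otherwise trust the raw detection."""
    extended = extend_towards(detected, expected)
    return extended if iou(extended, expected) >= threshold else detected
```

The intended situation is a detection that has shrunk, for instance under partial occlusion: the extension restores the expected extent, while the similarity check guards against extending toward an unrelated expectation.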
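
Patent 8649556 (application 20100166262) covers appearance-invariant tracking in which a track carries a signature with one or more appearance modes: a detection is compared spatially to the track and visually to every mode, the lowest visual difference is kept, and both cues drive association with the track and the addition of new modes. The histogram signatures, L1 visual difference, Euclidean spatial difference, and the three thresholds below are assumptions made for this sketch.

```python
"""Sketch: appearance-invariant tracking with a multi-mode track signature.

Loosely follows the abstract of US 8649556 / US 20100166262. Histogram
signatures, the distance measures, and the thresholds are illustrative
assumptions rather than the patented method.
"""
from dataclasses import dataclass, field


@dataclass
class Track:
    x: float                                                 # predicted position
    y: float
    modes: list[list[float]] = field(default_factory=list)   # appearance modes


def visual_difference(mode: list[float], signature: list[float]) -> float:
    """L1 distance between two equal-length, normalised appearance histograms."""
    return sum(abs(m - s) for m, s in zip(mode, signature))


def update_track(track: Track, det_xy: tuple[float, float],
                 signature: list[float], max_spatial: float = 50.0,
                 max_visual: float = 0.6, new_mode_visual: float = 0.3) -> bool:
    """Associate a detection with the track using both cues, adding a new
    appearance mode when the appearance has drifted. Returns True if associated."""
    spatial = ((track.x - det_xy[0]) ** 2 + (track.y - det_xy[1]) ** 2) ** 0.5
    lowest = min((visual_difference(m, signature) for m in track.modes),
                 default=float("inf"))
    associated = spatial <= max_spatial and (lowest <= max_visual or not track.modes)
    if associated:
        track.x, track.y = det_xy
        # Appearance changed noticeably but the match is still trusted:
        # remember the new appearance as an additional mode.
        if lowest > new_mode_visual:
            track.modes.append(list(signature))
    return associated
```

Keeping several modes per track is what makes the tracking tolerant of appearance change: when the tracked object turns or is re-lit, the changed appearance is stored as an extra mode rather than breaking the track.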