Patents by Inventor Andrew James Dorrell

Andrew James Dorrell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11323745
    Abstract: A system and method of decoding a set of greatest coded line index values for a precinct of video data from a video bitstream, the precinct of video data including one or more subbands. The method comprises decoding a greatest coded line index prediction mode for each subband from the video bitstream; decoding a plurality of greatest coded line index delta values for each subband from the video bitstream using the greatest coded line index prediction mode for the subband; and producing the greatest coded line index values for each subband using the plurality of greatest coded line index delta values and the greatest coded line index prediction mode for the subband.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: May 3, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Christopher James Rosewarne, Andrew James Dorrell
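A minimal sketch of the kind of reconstruction the entry above describes: greatest coded line index (GCLI) values for one subband are rebuilt from decoded delta values under a per-subband prediction mode. The mode names (RAW, HORIZONTAL, VERTICAL), the function name, and the data layout are illustrative assumptions, not the claimed bitstream syntax.

```python
def reconstruct_gcli(deltas, mode, previous=None):
    """Rebuild greatest coded line index values for one subband.

    deltas   -- decoded GCLI delta values for the subband (list of int)
    mode     -- hypothetical prediction mode: "RAW", "HORIZONTAL" or "VERTICAL"
    previous -- GCLI values of the co-located subband in the previous precinct,
                needed only for "VERTICAL" prediction
    """
    if mode == "RAW":
        # Deltas are the values themselves.
        return list(deltas)
    if mode == "HORIZONTAL":
        # Each value is predicted from its left neighbour within the subband.
        values, pred = [], 0
        for d in deltas:
            pred += d
            values.append(pred)
        return values
    if mode == "VERTICAL":
        # Each value is predicted from the co-located value in the previous precinct.
        return [p + d for p, d in zip(previous, deltas)]
    raise ValueError(f"unknown prediction mode: {mode}")


# Example: a horizontally predicted subband.
print(reconstruct_gcli([3, 1, -2, 0], "HORIZONTAL"))  # -> [3, 4, 2, 2]
```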
  • Publication number: 20220116600
    Abstract: A system and method of decoding a coding unit in an image frame from a bitstream. The method comprises determining a size of the coding unit from the bitstream; and dividing the image frame into a plurality of equally sized processing regions, each of the equally sized processing regions being a block processed during a single stage of a pipeline decoding the bitstream. If the coding unit overlaps a boundary between the determined processing regions, the method comprises selecting a transform size for the coding unit from a plurality of transform sizes, the transform size being selected to fit within the coding unit and being different in size to the processing regions; and decoding the coding unit by applying an inverse transform to residual coefficients of each transform unit in the coding unit, each of the transform units being of the selected transform size.
    Type: Application
    Filed: June 25, 2019
    Publication date: April 14, 2022
    Inventors: Christopher James ROSEWARNE, Andrew James DORRELL
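As a rough illustration of the size-selection idea in the entry above: when a coding unit straddles a processing-region boundary, pick the largest candidate transform that still fits the coding unit and differs in size from the processing regions. The candidate list and the square-block simplification are assumptions made for this sketch.

```python
def select_transform_size(cu_size, region_size, overlaps_boundary,
                          candidates=(64, 32, 16, 8, 4)):
    """Return a transform size (square, in luma samples) for a coding unit.

    cu_size           -- width/height of the (square) coding unit
    region_size       -- width/height of one pipeline processing region
    overlaps_boundary -- True if the coding unit crosses a region boundary
    """
    if not overlaps_boundary:
        # No constraint from the pipeline: use the largest transform that fits.
        return max(c for c in candidates if c <= cu_size)
    # Constrained case: the transform must fit within the coding unit and must
    # differ in size from the processing regions, so each transform unit can be
    # processed within a single pipeline stage.
    return max(c for c in candidates if c <= cu_size and c != region_size)


print(select_transform_size(128, 64, overlaps_boundary=True))  # -> 32
```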
  • Patent number: 11172231
    Abstract: A method of encoding video data into a video bitstream having a plurality of precincts. The method comprises generating a plurality of coding cost estimates for a current precinct by testing a corresponding candidate coefficient truncation level for the current precinct, each of the coding cost estimates being an overestimate of an encoded data size for coding the current precinct at the candidate truncation level and being determined using a most significant bit plane index, wherein each of the coding cost estimates is independent of a value of coefficient bits in the current precinct. The method includes selecting one of the candidate truncation levels according to the corresponding coding cost estimate and a budgeted coding cost for the current precinct, the budgeted coding cost representing an allowable size of encoding the precinct; and encoding the current precinct of video data into the video bitstream to generate the video bitstream.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: November 9, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Christopher James Rosewarne, Andrew James Dorrell
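The sketch below illustrates the rate-control idea in the entry above: for each candidate truncation level, an upper-bound cost is formed from the most significant bit plane (MSB) index of each coefficient group, independent of the coefficient bit values, and the least aggressive truncation whose estimate fits the precinct budget is chosen. The per-group cost model, the constants, and the names are simplifying assumptions.

```python
def estimate_cost(msb_indices, truncation, group_size=4, overhead_bits=8):
    """Upper-bound bit cost of one precinct at a given truncation level.

    msb_indices -- most significant bit plane index per coefficient group
    truncation  -- number of least significant bit planes that will be dropped
    Only the MSB indices are used, so the estimate never depends on the actual
    coefficient bit values (it can only overestimate the real size).
    """
    cost = 0
    for msb in msb_indices:
        kept_planes = max(0, msb + 1 - truncation)
        cost += overhead_bits + kept_planes * group_size
    return cost


def select_truncation(msb_indices, budget_bits, candidates=range(0, 16)):
    """Pick the smallest truncation level whose estimated cost fits the budget."""
    for truncation in sorted(candidates):
        if estimate_cost(msb_indices, truncation) <= budget_bits:
            return truncation
    return max(candidates)  # fall back to the coarsest truncation level


print(select_truncation([7, 5, 9, 3], budget_bits=100))  # -> 3
```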
  • Publication number: 20210306679
    Abstract: A method of decoding a coding unit in an image frame from a bitstream by determining a size of the coding unit from the bitstream; and dividing the image frame into a plurality of equally sized processing regions, each of the equally sized processing regions being smaller than a largest available coding unit size. The method also comprises selecting a motion vector corresponding to the coding unit from a list of candidate motion vectors, selecting the motion vector comprising (i) decoding a merge index if the coding unit is greater than or equal to the size of one of the determined processing regions, or (ii) decoding a skip flag and then the merge index if the coding unit is not greater than or equal to the size of one of the determined processing regions; and decoding the coding unit according to the selected motion vector for the coding unit.
    Type: Application
    Filed: June 25, 2019
    Publication date: September 30, 2021
    Inventors: Christopher James Rosewarne, Andrew James Dorrell
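A bare-bones sketch of the decoding branch described in the entry above. The bitstream reader and its read_skip_flag() / read_merge_index() methods are hypothetical stand-ins; the real syntax and candidate-list construction are considerably more involved.

```python
def decode_motion_vector(reader, cu_size, region_size, candidates):
    """Choose a motion vector for a coding unit from a candidate list.

    reader      -- hypothetical bitstream reader object
    cu_size     -- size of the coding unit
    region_size -- size of one pipeline processing region
    candidates  -- list of candidate motion vectors, e.g. (dx, dy) tuples
    """
    if cu_size >= region_size:
        # Large coding unit: the merge index is decoded directly.
        merge_index = reader.read_merge_index(len(candidates))
    else:
        # Small coding unit: a skip flag is decoded first, then the merge index.
        reader.read_skip_flag()
        merge_index = reader.read_merge_index(len(candidates))
    return candidates[merge_index]
```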
  • Publication number: 20200128274
    Abstract: A method of encoding video data into a video bitstream having a plurality of precincts. The method comprises generating a plurality of coding cost estimates (1106) for a current precinct of the plurality of precincts by testing a corresponding candidate coefficient truncation level for the current precinct, each of the coding cost estimates being an overestimate of an encoded data size for coding the current precinct at the candidate truncation level and being determined using a most significant bit plane index, wherein each of the coding cost estimates is independent of a value of coefficient bits in the current precinct.
    Type: Application
    Filed: July 3, 2018
    Publication date: April 23, 2020
    Inventors: Christopher James Rosewarne, Andrew James Dorrell
  • Publication number: 20200014956
    Abstract: A system and method of decoding a set of greatest coded line index values for a precinct of video data from a video bitstream, the precinct of video data including one or more subbands. The method comprises decoding a greatest coded line index prediction mode for each subband from the video bitstream; decoding a plurality of greatest coded line index delta values for each subband from the video bitstream using the greatest coded line index prediction mode for the subband; and producing the greatest coded line index values for each subband using the plurality of greatest coded line index delta values and the greatest coded line index prediction mode for the subband.
    Type: Application
    Filed: February 9, 2018
    Publication date: January 9, 2020
    Inventors: Christopher James Rosewarne, Andrew James Dorrell
  • Patent number: 10405011
    Abstract: At least one method, apparatus, system and readable medium for generating two video streams from a received video stream are provided herein. A scene captured by the received video stream is divided into a first region and a second region by placing a virtual plane in the scene. The regions of the scene represented in the received video stream are assigned to at least one of two output video streams, the first region being assigned to at least a first output video stream and the second region to at least a second output video stream according to the virtual plane. Pixel data assigned to each output video stream is encoded to generate the two video streams.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: September 3, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Andrew James Dorrell
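The sketch below illustrates the partitioning step in the entry above for a single frame, assuming a per-pixel depth map is available so the virtual plane can be expressed as a depth threshold. Encoding each output frame into its own stream is left to any standard encoder and is out of scope here; the function and parameter names are assumptions.

```python
def split_frame_by_plane(frame, depth, plane_depth, fill=0):
    """Split one frame into near/far frames at a virtual plane.

    frame       -- 2-D list of pixel values (grayscale for simplicity)
    depth       -- 2-D list of per-pixel depth values, same shape as frame
    plane_depth -- depth of the virtual plane dividing the scene
    fill        -- value used for pixels assigned to the other stream
    """
    near = [[p if d < plane_depth else fill for p, d in zip(prow, drow)]
            for prow, drow in zip(frame, depth)]
    far = [[p if d >= plane_depth else fill for p, d in zip(prow, drow)]
           for prow, drow in zip(frame, depth)]
    return near, far  # each frame is then encoded into its own output stream


frame = [[10, 20], [30, 40]]
depth = [[1.0, 5.0], [2.0, 6.0]]
print(split_frame_by_plane(frame, depth, plane_depth=3.0))
# -> ([[10, 0], [30, 0]], [[0, 20], [0, 40]])
```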
  • Patent number: 10389935
    Abstract: A computer-implemented method of configuring a virtual camera. A first and second object in a scene are detected, each object having at least one motion attribute. An interaction point in the scene is determined based on the motion attributes of the first and second objects. A shape envelope of the first and second objects is determined, the shape envelope including an area corresponding to the first and second objects at the determined interaction point. The virtual camera is configured based on the determined shape envelope to capture, in a field of view of the virtual camera, the first and second objects.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: August 20, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Belinda Margaret Yee, Andrew James Dorrell
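As a rough illustration of the entry above, the sketch below extrapolates two objects along their velocities to a predicted interaction point, takes the bounding box of both objects at that point as a stand-in for the shape envelope, and places a virtual camera far enough back that the envelope fits its field of view. The constant-velocity model, the 2-D simplification, and all names are assumptions for illustration.

```python
import math

def configure_virtual_camera(p1, v1, p2, v2, radius1, radius2, fov_deg=60.0):
    """Frame two moving objects at their predicted interaction point.

    p1, p2           -- current 2-D positions (x, y)
    v1, v2           -- 2-D velocities per frame
    radius1, radius2 -- rough object extents, used to build the envelope
    Returns (camera_target, camera_distance).
    """
    # Time of closest approach under constant velocity, clamped to the future.
    dp = (p1[0] - p2[0], p1[1] - p2[1])
    dv = (v1[0] - v2[0], v1[1] - v2[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)

    # Predicted object positions at the interaction point.
    q1 = (p1[0] + v1[0] * t, p1[1] + v1[1] * t)
    q2 = (p2[0] + v2[0] * t, p2[1] + v2[1] * t)

    # Shape envelope: bounding box covering both objects at the interaction point.
    half_width = (abs(q1[0] - q2[0]) + radius1 + radius2) / 2.0
    half_height = (abs(q1[1] - q2[1]) + radius1 + radius2) / 2.0
    target = ((q1[0] + q2[0]) / 2.0, (q1[1] + q2[1]) / 2.0)

    # Pull the camera back far enough that the envelope fits the field of view.
    distance = max(half_width, half_height) / math.tan(math.radians(fov_deg / 2.0))
    return target, distance


print(configure_virtual_camera((0, 0), (1, 0), (10, 0), (-1, 0), 1.0, 1.0))
# -> roughly ((5.0, 0.0), 1.73): the objects meet near x = 5
```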
  • Publication number: 20190020904
    Abstract: At least one method, apparatus, system and readable medium for generating two video streams from a received video stream are provided herein. A scene captured by the received video stream is divided into a first region and a second region by placing a virtual plane in the scene. The regions of the scene represented in the received video stream are assigned to at least one of two output video streams, the first region being assigned to at least a first output video stream and the second region to at least a second output video stream according to the virtual plane. Pixel data assigned to each output video stream is encoded to generate the two video streams.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventor: Andrew James Dorrell
  • Publication number: 20180167553
    Abstract: A computer-implemented method of configuring a virtual camera. A first and second object in a scene are detected, each object having at least one motion attribute. An interaction point in the scene is determined based on the motion attributes of the first and second objects. A shape envelope of the first and second objects is determined, the shape envelope including an area corresponding to the first and second objects at the determined interaction point. The virtual camera is configured based on the determined shape envelope to capture, in a field of view of the virtual camera, the first and second objects.
    Type: Application
    Filed: December 13, 2016
    Publication date: June 14, 2018
    Inventors: Belinda Margaret Yee, Andrew James Dorrell
  • Publication number: 20170069354
    Abstract: A method of generating a position marker in video images. The video images are displayed on an interactive display device. An interaction with the interactive display device on the displayed video images is determined during the display of the video images. The position marker is generated, where the position marker is associated with at least one time value determined from the interaction relative to at least one of the video images and is labelled with a graphical representation of the interaction. The graphical representation indicates relative spatial position of the determined interaction on the video image.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 9, 2017
    Inventors: IJ Eric Wang, Andrew James Dorrell
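A minimal sketch of the marker generation in the entry above: an interaction during playback is turned into a marker that stores the playback time, the normalised position of the interaction on the frame, and a simple graphical label. The data layout and names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PositionMarker:
    time_s: float   # playback time at which the interaction occurred
    x: float        # horizontal position, normalised to [0, 1]
    y: float        # vertical position, normalised to [0, 1]
    label: str      # graphical representation of the interaction


def make_marker(playback_time_s, touch_px, frame_size_px, label="tap"):
    """Create a position marker from an interaction on the displayed video."""
    width, height = frame_size_px
    return PositionMarker(time_s=playback_time_s,
                          x=touch_px[0] / width,
                          y=touch_px[1] / height,
                          label=label)


print(make_marker(12.4, touch_px=(960, 270), frame_size_px=(1920, 1080)))
# -> PositionMarker(time_s=12.4, x=0.5, y=0.25, label='tap')
```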
  • Publication number: 20130271615
    Abstract: A method of removing an artefact from an image captured with a motion invariant camera is disclosed. The captured image is de-blurred using a spatially invariant blur kernel. An edge filter with a fixed offset is applied to the de-blurred image to identify the location of at least one artefact. A parameter is estimated based on a region either side of the identified location. The at least one artefact is removed from the de-blurred image using the parameter.
    Type: Application
    Filed: May 31, 2013
    Publication date: October 17, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Steven David Webster, Andrew James Dorrell, Axel Lakus-Becker
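A 1-D sketch of the pipeline in the entry above, with a naive stand-in for each stage: deconvolution is assumed to have been done already, a simple edge filter flags strong edges, a candidate artefact is located at a fixed offset from each edge, a correction parameter is estimated from samples either side of that location, and the artefact is replaced. The threshold, offset, and averaging rule are illustrative assumptions, not the patented method.

```python
def remove_artefacts(row, offset=3, edge_threshold=30, window=2):
    """Suppress ghost artefacts at a fixed offset from strong edges (1-D sketch).

    row            -- list of sample values (one de-blurred image row)
    offset         -- fixed displacement of the artefact from the edge
    edge_threshold -- minimum step size treated as a strong edge
    window         -- number of samples used on each side of the artefact
    """
    out = list(row)
    for i in range(1, len(row)):
        # Simple edge filter: a large step between neighbouring samples.
        if abs(row[i] - row[i - 1]) < edge_threshold:
            continue
        # The artefact is expected at a fixed offset from the detected edge.
        a = i + offset
        if a - window < 0 or a + window >= len(row):
            continue
        # Parameter estimated from the regions either side of the location:
        # here, the mean of the surrounding samples excluding the artefact itself.
        left = row[a - window:a]
        right = row[a + 1:a + 1 + window]
        out[a] = sum(left + right) / (len(left) + len(right))
    return out


row = [10, 10, 10, 80, 80, 80, 95, 80, 80, 80]
print(remove_artefacts(row))
# -> [10, 10, 10, 80, 80, 80, 80.0, 80, 80, 80]  (the ghost sample 95 is removed)
```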
  • Patent number: 8520083
    Abstract: A method of removing an artefact from an image captured with a motion invariant camera is disclosed. The captured image is de-blurred using a spatially invariant blur kernel. An edge filter with a fixed offset is applied to the de-blurred image to identify the location of at least one artefact. A parameter is estimated based on a region either side of the identified location. The at least one artefact is removed from the de-blurred image using the parameter.
    Type: Grant
    Filed: March 26, 2010
    Date of Patent: August 27, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Steven David Webster, Andrew James Dorrell, Axel Lakus-Becker
  • Patent number: 8260089
    Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining the orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: September 4, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
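The sketch below is a heavily simplified reading of the entry above: the gradient is estimated by central differences, the isophote direction is taken perpendicular to it, the period of intensity variation is found by a brute-force search along the isophote, and the output value is formed from samples one period apart. The sampling scheme, the period search, and all names are assumptions made to keep the sketch short.

```python
def isophote_value(img, x, y, max_period=8):
    """Estimate an image value at (x, y) from isophote orientation and period.

    img -- 2-D list of grayscale values, indexed img[y][x]
    """
    h, w = len(img), len(img[0])

    def sample(px, py):
        # Nearest-neighbour lookup, clamped to the image borders.
        xi = min(max(int(round(px)), 0), w - 1)
        yi = min(max(int(round(py)), 0), h - 1)
        return img[yi][xi]

    # Gradient by central differences; the isophote runs perpendicular to it.
    gx = (sample(x + 1, y) - sample(x - 1, y)) / 2.0
    gy = (sample(x, y + 1) - sample(x, y - 1)) / 2.0
    norm = (gx * gx + gy * gy) ** 0.5 or 1.0
    ix, iy = -gy / norm, gx / norm  # unit direction along the isophote

    # Period of intensity variation: the lag at which the intensity best repeats.
    def mismatch(lag):
        return abs(sample(x + ix * lag, y + iy * lag) - sample(x, y))

    period = min(range(1, max_period + 1), key=mismatch)

    # Image value at the sample position from samples one period apart.
    return (sample(x + ix * period, y + iy * period) +
            sample(x - ix * period, y - iy * period)) / 2.0
```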
  • Patent number: 7873222
    Abstract: Methods of capturing image pixel data representing a scene may be practiced using a camera and may be implemented as software, such as an application program executing within the camera. The methods are particularly advantageous where user modification to the camera control parameters leads to a difference between the pre-capture control parameters set for the camera and the pre-capture control parameters that the camera would determine in fully automatic mode. The measured difference may be compared to one or more predetermined threshold values.
    Type: Grant
    Filed: July 1, 2009
    Date of Patent: January 18, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventors: Woei Chan, Andrew James Dorrell
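A minimal sketch of the comparison described in the entry above: the camera's fully automatic settings are compared with the user-modified settings, and the result is significant only if some difference exceeds a per-parameter threshold. The parameter names and threshold values are illustrative assumptions.

```python
def user_override_significant(auto_params, user_params, thresholds):
    """Return True if user changes to pre-capture parameters exceed thresholds.

    auto_params -- parameters the camera would choose in fully automatic mode,
                   e.g. {"exposure_ev": 0.0, "iso": 200, "focus_m": 2.5}
    user_params -- parameters after user modification
    thresholds  -- maximum allowed difference per parameter
    """
    for name, auto_value in auto_params.items():
        difference = abs(user_params.get(name, auto_value) - auto_value)
        if difference > thresholds.get(name, 0):
            return True
    return False


auto = {"exposure_ev": 0.0, "iso": 200, "focus_m": 2.5}
user = {"exposure_ev": 1.5, "iso": 200, "focus_m": 2.5}
print(user_override_significant(auto, user,
                                {"exposure_ev": 1.0, "iso": 400, "focus_m": 0.5}))
# -> True (the exposure change exceeds its threshold)
```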
  • Publication number: 20100245602
    Abstract: A method of removing an artefact from an image captured with a motion invariant camera is disclosed. The captured image is de-blurred using a spatially invariant blur kernel. An edge filter with a fixed offset is applied to the de-blurred image to identify the location of at least one artefact. A parameter is estimated based on a region either side of the identified location. The at least one artefact is removed from the de-blurred image using the parameter.
    Type: Application
    Filed: March 26, 2010
    Publication date: September 30, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Steven David Webster, Andrew James Dorrell, Axel Lakus-Becker
  • Publication number: 20090322899
    Abstract: Methods of capturing image pixel data representing a scene may be practiced using a camera and may be implemented as software, such as an application program executing within the camera. The methods are particularly advantageous where user modification to the camera control parameters leads to a difference between the pre-capture control parameters set for the camera and the pre-capture control parameters that the camera would determine in fully automatic mode. The measured difference may be compared to one or more predetermined threshold values.
    Type: Application
    Filed: July 1, 2009
    Publication date: December 31, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Woei Chan, Andrew James Dorrell
  • Patent number: 7590290
    Abstract: Methods of capturing image pixel data representing a scene may be practiced using a camera and may be implemented as software, such as an application program executing within the camera. The methods are particularly advantageous where user modification to the camera control parameters leads to a difference between the pre-capture control parameters set for the camera and the pre-capture control parameters that the camera would determine in fully automatic mode. The measured difference may be compared to one or more predetermined threshold values.
    Type: Grant
    Filed: July 20, 2005
    Date of Patent: September 15, 2009
    Assignee: Canon Kabushiki Kaisha
    Inventors: Woei Chan, Andrew James Dorrell
  • Publication number: 20090161990
    Abstract: A method (700) of determining an image value at a sample position of an output image is disclosed. The method (700) comprises the steps of determining the orientation of an isophote (e.g., 1010) passing through the output sample position and determining a period of intensity variation along the isophote (1010). The method (700) determines the image value at the sample position of the output image based on the period of intensity variation and outputs the determined image value at the sample position of the output image.
    Type: Application
    Filed: November 18, 2008
    Publication date: June 25, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Alan Valev Tonisson, Nagita Mehrseresht, Andrew James Dorrell
  • Patent number: 7551797
    Abstract: A method (100) of generating a digital image of a scene is disclosed. The method (100) is particularly advantageous in situations where a light source illuminating the scene is unknown. The method (100) allows post-capture control over flash illuminant and ambient illuminant used in generating the image. The method (100) may also be used to provide a synthetic fill flash effect. The method (100) is particularly advantageous in situations where an ambient light source illuminating the scene differs in spectral character from that of a flash illuminant used to capture an image of the scene.
    Type: Grant
    Filed: July 28, 2005
    Date of Patent: June 23, 2009
    Assignee: Canon Kabushiki Kaisha
    Inventors: Andrew James Dorrell, Stuart William Perry, Woei Chan
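The sketch below illustrates the post-capture mixing idea in the entry above under a common simplifying assumption: given one exposure without flash and one with flash, the flash-only contribution is approximated as their difference, after which the ambient and flash components can be re-weighted independently to produce a synthetic fill-flash result. The linear model, the gain values, and the names are assumptions, not the patented algorithm.

```python
def synthetic_fill_flash(ambient, with_flash, ambient_gain=1.0, flash_gain=0.6):
    """Recombine ambient and flash contributions of a scene after capture.

    ambient    -- image captured without flash (2-D list of linear values)
    with_flash -- image of the same scene captured with flash
    The flash-only component is approximated as (with_flash - ambient), so the
    two illuminants can be weighted independently after capture.
    """
    out = []
    for arow, frow in zip(ambient, with_flash):
        row = []
        for a, f in zip(arow, frow):
            flash_only = max(0.0, f - a)  # light contributed by the flash alone
            row.append(ambient_gain * a + flash_gain * flash_only)
        out.append(row)
    return out


ambient = [[0.10, 0.20], [0.05, 0.40]]
with_flash = [[0.30, 0.50], [0.20, 0.45]]
print(synthetic_fill_flash(ambient, with_flash))
# -> [[0.22, 0.38], [0.14, 0.43]]
```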