Patents by Inventor MATTHEW RAPHAEL ARNISON

MATTHEW RAPHAEL ARNISON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10916033
    Abstract: A system and method of determining a camera pose. The method comprises receiving a first image and a second image, the first and second images being associated with a camera pose and a height map for pixels in each corresponding image, and determining a mapping between the first image and the second image using the corresponding height maps, the camera pose and a mapping of the second image to an orthographic view. The method further comprises determining alignment data between the first image transformed using the determined mapping and the second image, and determining a refined camera pose based on the determined alignment data and alignment data associated with at least one other camera pose.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: February 9, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Alleine Fletcher, David Peter Morgan-Mar, Matthew Raphael Arnison, Timothy Stephen Mason
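    Illustrative sketch: the refinement step above can be pictured as warping one view by the current pose estimate and correcting the pose from the residual alignment. The Python sketch below assumes a flat scene, a pure 2-D translation pose and a hypothetical phase_correlate helper; it is a minimal illustration, not the patented method.
      import numpy as np

      def phase_correlate(a, b):
          # Integer (dy, dx) shift that best aligns image b to image a.
          F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          dy -= a.shape[0] * (dy > a.shape[0] // 2)
          dx -= a.shape[1] * (dx > a.shape[1] // 2)
          return np.array([dy, dx], dtype=float)

      def refine_pose(img1, img2, pose1, pose2):
          # Warp img1 by the current relative pose estimate (a crude stand-in for the
          # orthographic mapping), then correct the pose with the residual alignment data.
          rel = tuple(np.round(pose2 - pose1).astype(int))
          warped = np.roll(img1, shift=rel, axis=(0, 1))
          residual = phase_correlate(img2, warped)   # alignment data
          return pose2 + residual                    # refined camera pose

      rng = np.random.default_rng(0)
      scene = rng.random((128, 128))
      img1, img2 = scene, np.roll(scene, shift=(5, -3), axis=(0, 1))
      print(refine_pose(img1, img2, pose1=np.zeros(2), pose2=np.array([4.0, -1.0])))
      # recovers the true offset (5, -3) from the rough initial estimate (4, -1)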
  • Patent number: 10853990
    Abstract: A method for processing a three-dimensional graphic object. The method comprises receiving a query point and an associated query region, the query point being positioned within a reference fragment of a texture image of the three-dimensional graphic object; determining reference points on a boundary of the reference fragment using the query region, the reference points associated with target points on a boundary of a target fragment of the texture image, the reference points and the query point forming a reference angle; and determining a portion of the target fragment covered by the query region using an anchor point located outside the target fragment. The anchor point is determined using the target points and the reference angle. Angles between the anchor point and the target points correspond to angles between the query and reference points. The three-dimensional graphic object is processed using the determined portion of the target fragment.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: December 1, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Karlov, Colin Eric Druitt, Matthew Raphael Arnison
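    Illustrative sketch: the key geometric step is placing an anchor point outside the target fragment so that the angles it forms with the target points match the reference angle formed at the query point. The sketch below constructs such a point for a single pair of boundary points using an isosceles-triangle construction; the helper names and construction are illustrative assumptions, not the patented procedure.
      import numpy as np

      def angle_at(apex, p1, p2):
          # Angle (radians) subtended at `apex` by the points p1 and p2.
          v1, v2 = p1 - apex, p2 - apex
          c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
          return np.arccos(np.clip(c, -1.0, 1.0))

      def anchor_point(t1, t2, reference_angle, outward):
          # Apex of the isosceles triangle over segment t1-t2 whose apex angle equals
          # `reference_angle`, placed on the side indicated by `outward` (outside the fragment).
          mid = 0.5 * (t1 + t2)
          half = 0.5 * np.linalg.norm(t2 - t1)
          normal = np.array([-(t2 - t1)[1], (t2 - t1)[0]], dtype=float)
          normal /= np.linalg.norm(normal)
          if np.dot(normal, outward) < 0:
              normal = -normal
          return mid + (half / np.tan(reference_angle / 2.0)) * normal

      query = np.array([1.0, 1.0])                            # query point inside the reference fragment
      r1, r2 = np.array([0.0, 3.0]), np.array([3.0, 0.0])     # reference points on its boundary
      t1, t2 = np.array([10.0, 0.0]), np.array([14.0, 0.0])   # target points on the target fragment
      theta = angle_at(query, r1, r2)                         # reference angle
      anchor = anchor_point(t1, t2, theta, outward=np.array([0.0, -1.0]))
      print(theta, angle_at(anchor, t1, t2))                  # the two angles agree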
  • Patent number: 10547786
    Abstract: One or more embodiments of an apparatus, system and method of compensating image data for phase fluctuations caused by a wave deforming medium, and storage or recording media for use therewith, are provided herein. At least one embodiment of the method comprises capturing, by a sensor of an imaging system, first image data and second image data for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium that causes a defocus disparity between the first image data and the second image data; and determining the defocus disparity between the first image data and the second image data, the defocus disparity corresponding to a defocus wavefront deviation of the wave deforming medium. The method may further comprise compensating the image data captured by the sensor for phase fluctuations caused by the wave deforming medium using the determined defocus disparity.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: January 28, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ruimin Pan, Matthew Raphael Arnison, David Robert James Monaghan
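    Illustrative sketch: assuming the defocus disparity can be modelled as a single Gaussian blur between two registered images, the sketch below estimates its width from the low-frequency spectral ratio and compensates by Wiener deconvolution. The model and helper names are assumptions chosen for illustration, not the claimed method.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def defocus_disparity(img_a, img_b):
          # Sigma of the Gaussian blur relating img_a to the more defocused img_b,
          # fitted to the low-frequency spectral ratio |Fb| / |Fa|.
          fa, fb = np.abs(np.fft.fft2(img_a)), np.abs(np.fft.fft2(img_b))
          fy = np.fft.fftfreq(img_a.shape[0])[:, None]
          fx = np.fft.fftfreq(img_a.shape[1])[None, :]
          f2 = fy**2 + fx**2
          m = (f2 > 0) & (f2 < 0.05) & (fa > 1e-8)
          # log(|Fb|/|Fa|) = -2 * pi^2 * sigma^2 * f^2 for a Gaussian defocus kernel
          slope = np.sum(np.log(fb[m] / fa[m]) * f2[m]) / np.sum(f2[m]**2)
          return np.sqrt(max(-slope, 0.0) / (2 * np.pi**2))

      def compensate(img_blurred, sigma, nsr=1e-3):
          # Wiener deconvolution of a Gaussian defocus of width sigma.
          fy = np.fft.fftfreq(img_blurred.shape[0])[:, None]
          fx = np.fft.fftfreq(img_blurred.shape[1])[None, :]
          otf = np.exp(-2 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
          return np.fft.ifft2(np.fft.fft2(img_blurred) * otf / (otf**2 + nsr)).real

      rng = np.random.default_rng(1)
      sharp = gaussian_filter(rng.random((256, 256)), 1.0)    # band-limited test scene
      blurred = gaussian_filter(sharp, 2.0)                   # simulated defocus disparity
      sigma = defocus_disparity(sharp, blurred)
      restored = compensate(blurred, sigma)
      print(sigma)                                            # close to the simulated 2.0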
  • Patent number: 10540810
    Abstract: A method of rendering a graphical object comprises accessing a mapping relating a mesoscale structure and a light scattering parameter of a material to a perceptual appearance characteristic; determining a perceptual appearance characteristic of the graphical object, the graphical object reproduced on an interface to represent an object formed from the material, the perceptual appearance characteristic determined in accordance with the mapping using an initial mesoscale structure and a light scattering parameter of the material; receiving a signal indicating a modification in structure relating to the initial mesoscale structure; and determining, using the mapping, an adjustment of the light scattering parameter preserving the determined perceptual appearance characteristic, based on the modification of the initial mesoscale structure.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: January 21, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Steven Richard Irrgang, Thai Quan Huynh-Thu, Juno Kim, Vanessa Jeanie Honson, Matthew Raphael Arnison
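    Illustrative sketch: the core idea is to hold the perceptual appearance characteristic fixed by re-solving the mapping for a new scattering parameter whenever the mesoscale structure changes. The mapping below is a made-up placeholder, not Canon's perceptual model; only the inversion step is illustrated.
      import numpy as np
      from scipy.optimize import brentq

      def appearance(mesoscale_roughness, scattering):
          # Placeholder mapping: a scalar appearance value that drops with roughness and scattering.
          return np.exp(-3.0 * mesoscale_roughness) * (1.0 - scattering) ** 2

      initial_roughness, initial_scattering = 0.2, 0.3
      target = appearance(initial_roughness, initial_scattering)   # characteristic to preserve

      new_roughness = 0.1                                          # user's structure modification
      adjusted = brentq(lambda s: appearance(new_roughness, s) - target, 0.0, 0.999)
      print(adjusted, appearance(new_roughness, adjusted))         # second value matches `target`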
  • Publication number: 20190347854
    Abstract: A method for processing a three-dimensional graphic object. The method comprises receiving a query point and an associated query region, the query point being positioned within a reference fragment of a texture image of the three-dimensional graphic object; determining reference points on a boundary of the reference fragment using the query region, the reference points associated with target points on a boundary of a target fragment of the texture image, the reference points and the query point forming a reference angle; and determining a portion of the target fragment covered by the query region using an anchor point located outside the target fragment. The anchor point is determined using the target points and the reference angle. Angles between the anchor point and the target points correspond to angles between the query and reference points. The three-dimensional graphic object is processed using the determined portion of the target fragment.
    Type: Application
    Filed: May 3, 2019
    Publication date: November 14, 2019
    Inventors: David Karlov, Colin Eric Druitt, Matthew Raphael Arnison
  • Publication number: 20190266788
    Abstract: A system and method of rendering an image of a surface. The method includes receiving a user input modifying a material appearance parameter of the surface related to perceived gloss; determining a weighting coefficient for each of a plurality of pixel values of the surface using a corresponding normal, viewing angle and a position of a light source, wherein the pixel values are determined using the modified material appearance parameter; and determining perceived coverage of the surface by specular highlights based on the pixel values weighted using the corresponding weighting coefficients. The method also includes rendering the image using colour properties adjusted based on the determined coverage, to maintain perceived colour properties and update perceived gloss based on the modification.
    Type: Application
    Filed: February 25, 2019
    Publication date: August 29, 2019
    Inventors: Thai Quan Huynh-Thu, Matthew Raphael Arnison, Zoey Isherwood, Juno Kim, Vanessa Jeanie Honson
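    Illustrative sketch: the sketch below weights each pixel's Blinn-Phong specular value by a simple geometry term, estimates the weighted fraction of the surface covered by highlights after a gloss change, and derives a crude colour compensation. The weighting rule, threshold and compensation formula are simplifications for illustration, not the published method.
      import numpy as np

      def specular_coverage(normals, view_dir, light_pos, positions, shininess, threshold=0.5):
          light_dirs = light_pos - positions
          light_dirs /= np.linalg.norm(light_dirs, axis=-1, keepdims=True)
          half = light_dirs + view_dir
          half /= np.linalg.norm(half, axis=-1, keepdims=True)
          n_dot_h = np.clip(np.sum(normals * half, axis=-1), 0.0, 1.0)
          specular = n_dot_h ** shininess          # pixel values under the modified gloss
          weights = n_dot_h                        # per-pixel weighting coefficient
          highlights = (specular > threshold).astype(float)
          return np.sum(weights * highlights) / np.sum(weights)

      # A small bumpy test surface, viewed from above and lit from above its centre.
      rng = np.random.default_rng(2)
      h, w = 64, 64
      normals = np.dstack([0.1 * rng.standard_normal((h, w)),
                           0.1 * rng.standard_normal((h, w)),
                           np.ones((h, w))])
      normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
      ys, xs = np.meshgrid(np.arange(h, dtype=float), np.arange(w, dtype=float), indexing="ij")
      positions = np.dstack([xs, ys, np.zeros((h, w))])
      view_dir = np.array([0.0, 0.0, 1.0])
      light_pos = np.array([32.0, 32.0, 100.0])

      for shininess in (10, 100):                  # user lowers / raises perceived gloss
          cov = specular_coverage(normals, view_dir, light_pos, positions, shininess)
          diffuse_scale = 1.0 / (1.0 - 0.5 * cov)  # crude colour compensation from coverage
          print(shininess, round(float(cov), 3), round(float(diffuse_scale), 3))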
  • Publication number: 20190188871
    Abstract: A method of combining object data captured from an object, the method comprising: receiving first object data and second object data, the first and second object data comprising intensity image data and three-dimensional geometry data of the object; synthesising a first fused image of the object and a second fused image of the object by fusing the respective intensity image data and the respective three-dimensional geometry data of the object illuminated by a directional lighting arrangement produced by a directional light source, the directional lighting arrangement produced by the directional light source being different to a lighting arrangement used to capture at least one of the first object data and the second object data; aligning the first fused image and the second fused image; and combining the first object data and the second object data.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 20, 2019
    Inventors: Peter Alleine Fletcher, Matthew Raphael Arnison, Timothy Stephen Mason
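    Illustrative sketch: the sketch below fuses intensity and height-map geometry by Lambertian shading under one shared directional light, aligns the two fused images by phase correlation, and combines the captures. Lambertian shading, pure-translation alignment and the helper names are simplifying assumptions, not the published method.
      import numpy as np

      def normals_from_height(height):
          gy, gx = np.gradient(height)
          n = np.dstack([-gx, -gy, np.ones_like(height)])
          return n / np.linalg.norm(n, axis=-1, keepdims=True)

      def fuse(intensity, height, light_dir):
          # Synthesised "fused" image: intensity shaded by geometry under a common directional light.
          shading = np.clip(normals_from_height(height) @ light_dir, 0.0, None)
          return intensity * shading

      def translation(a, b):
          # Integer shift aligning image b to image a, via phase correlation.
          F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          return (int(dy - a.shape[0] * (dy > a.shape[0] // 2)),
                  int(dx - a.shape[1] * (dx > a.shape[1] // 2)))

      rng = np.random.default_rng(3)
      intensity1, height1 = rng.random((128, 128)), rng.random((128, 128))
      intensity2 = np.roll(intensity1, (7, -4), axis=(0, 1))     # second capture, offset
      height2 = np.roll(height1, (7, -4), axis=(0, 1))           # from the first capture
      light = np.array([0.3, 0.2, 0.93])
      light /= np.linalg.norm(light)

      fused1, fused2 = fuse(intensity1, height1, light), fuse(intensity2, height2, light)
      shift = translation(fused1, fused2)                        # align the fused images
      combined = 0.5 * (intensity1 + np.roll(intensity2, shift, axis=(0, 1)))
      print(shift)                                               # aligning shift, here (-7, 4)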
  • Publication number: 20190073792
    Abstract: A system and method of determining a camera pose. The method comprises receiving a first image and a second image, the first and second images being associated with a camera pose and a height map for pixels in each corresponding image, and determining a mapping between the first image and the second image using the corresponding height maps, the camera pose and a mapping of the second image to an orthographic view. The method further comprises determining alignment data between the first image transformed using the determined mapping and the second image, and determining a refined camera pose based on the determined alignment data and alignment data associated with at least one other camera pose.
    Type: Application
    Filed: August 29, 2018
    Publication date: March 7, 2019
    Inventors: Peter Alleine Fletcher, David Peter Morgan-Mar, Matthew Raphael Arnison, Timothy Stephen Mason
  • Publication number: 20190005710
    Abstract: A method of rendering a graphical object comprises accessing a mapping relating a mesoscale structure and a light scattering parameter of a material to a perceptual appearance characteristic; determining a perceptual appearance characteristic of the graphical object, the graphical object reproduced on an interface to represent an object formed from the material, the perceptual appearance characteristic determined in accordance with the mapping using an initial mesoscale structure and a light scattering parameter of the material; receiving a signal indicating a modification in structure relating to the initial mesoscale structure; and determining, using the mapping, an adjustment of the light scattering parameter preserving the determined perceptual appearance characteristic, based on the modification of the initial mesoscale structure.
    Type: Application
    Filed: June 14, 2018
    Publication date: January 3, 2019
    Inventors: Steven Richard Irrgang, Thai Quan Huynh-Thu, Juno Kim, Vanessa Jeanie Honson, Matthew Raphael Arnison
  • Publication number: 20180324359
    Abstract: One or more embodiments of an apparatus, system and method of compensating image data for phase fluctuations caused by a wave deforming medium, and storage or recording media for use therewith, are provided herein. At least one embodiment of the method comprises capturing, by a sensor of an imaging system, first image data and second image data for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium that causes a defocus disparity between the first image data and the second image data; and determining the defocus disparity between the first image data and the second image data, the defocus disparity corresponding to a defocus wavefront deviation of the wave deforming medium. The method may further comprise compensating the image data captured by the sensor for phase fluctuations caused by the wave deforming medium using the determined defocus disparity.
    Type: Application
    Filed: April 18, 2018
    Publication date: November 8, 2018
    Inventors: Ruimin Pan, Matthew Raphael Arnison, David Robert James Monaghan
  • Patent number: 10026183
    Abstract: A method of determining at least two motion values of an object moving axially in a scene. First and second images of the object in the scene are captured with an image capture device. The object is axially displaced in the scene between the captured images with respect to a sensor plane of the image capture device. A variation in blur between the first and second captured images is determined. A scale change of the object between the first and second captured images is determined. Using the determined scale change and variation in blur, at least two motion values of the object in the scene are determined. The motion values identify the depths of the object in the first and second captured images and the axial motion of the object in the scene.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: July 17, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
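    Illustrative sketch: under a simplified thin-lens-style model (magnification proportional to 1/depth, blur linear in the focus offset, a single calibration constant k, all assumptions made here for illustration), the measured scale change and blur variation give two equations that solve directly for the two depths and the axial motion.
      def depths_from_blur_and_scale(scale_change, blur_variation, k):
          # scale_change   s  = (object size in image 2) / (object size in image 1) = z1 / z2
          # blur_variation dc = c2 - c1, with blur c(z) = k * (1/z_f - 1/z) beyond the focal plane
          # k                 = calibration constant of the camera (aperture and focus setting)
          z1 = k * (1.0 - scale_change) / blur_variation
          z2 = z1 / scale_change
          return z1, z2, z2 - z1            # depths in the two images and the axial motion

      # Synthetic check: the object moves from 3.0 m to 2.0 m, focal plane at 1.0 m, k = 5.
      k, z_f = 5.0, 1.0
      blur = lambda z: k * (1.0 / z_f - 1.0 / z)
      print(depths_from_blur_and_scale(scale_change=3.0 / 2.0,
                                       blur_variation=blur(2.0) - blur(3.0), k=k))
      # -> approximately (3.0, 2.0, -1.0)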
  • Patent number: 10019810
    Abstract: A method of determining a depth value of a fine structure pixel in a first image of a scene using a second image of the scene is disclosed. A gradient orientation for each of a plurality of fine structure pixels in the first image is determined. Difference images are generated from the second image and a series of blurred images formed from the first image, each difference image corresponding to one of a plurality of depth values. Each of the difference images is smoothed, in accordance with the determined gradient orientations, to generate smoothed difference images having increased coherency of fine structure. For each of a plurality of fine structure pixels in the first image, one of the smoothed difference images is selected. The depth value of the fine structure pixel corresponding to the selected smoothed difference image is determined.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: July 10, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Matthew Raphael Arnison, Ernest Yiu Cheong Wan
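    Illustrative sketch: the blur-matching core of the method, with the orientation-adaptive (gradient-aligned) smoothing replaced by an isotropic Gaussian purely to keep the example short. A set of blurs is hypothesised for the first image, each is compared against the second image, the difference images are smoothed, and the best hypothesis is picked per pixel.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def depth_labels(img1, img2, sigmas, smooth=2.0):
          # Per pixel, the index of the blur hypothesis (depth label) that best explains
          # img2 as a blurred version of img1.
          diffs = []
          for s in sigmas:
              diff = np.abs(gaussian_filter(img1, s) - img2)   # difference image
              diffs.append(gaussian_filter(diff, smooth))      # smoothed difference image
          return np.argmin(np.stack(diffs), axis=0)

      # Synthetic test: the right half of img2 is more defocused than the left half.
      rng = np.random.default_rng(4)
      img1 = rng.random((128, 128))
      img2 = np.hstack([gaussian_filter(img1, 1.0)[:, :64],
                        gaussian_filter(img1, 3.0)[:, 64:]])
      labels = depth_labels(img1, img2, sigmas=[0.5, 1.0, 2.0, 3.0, 4.0])
      print(np.median(labels[:, :60]), np.median(labels[:, 68:]))   # about 1 and 3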
  • Patent number: 9639948
    Abstract: Methods, apparatuses, and computer readable storage media are provided for determining a depth measurement of a scene using an optical blur difference between two images of the scene. Each image is captured using an image capture device with different image capture device parameters. A corresponding image patch is identified from each of the captured images, motion blur being present in each of the image patches. A kernel of the motion blur in each of the image patches is determined. The kernel of the motion blur in at least one image patch is used to generate a difference convolution kernel. A selected first image patch is convolved with the generated difference convolution kernel to generate a modified image patch. A depth measurement of the scene is determined from an optical blur difference between the modified image patch and the remaining image patch.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: May 2, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
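    Illustrative sketch: assuming a known 1-D motion kernel and a Gaussian defocus model, convolving the cleaner patch with the motion kernel first equalises the motion blur, after which an ordinary blur-difference estimate recovers the defocus difference. The spectral-ratio fit below stands in for the depth step and is not the patented procedure.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.signal import fftconvolve

      def blur_difference(a, b):
          # Gaussian-sigma fit to the low-frequency spectral ratio |Fb| / |Fa|.
          fa, fb = np.abs(np.fft.fft2(a)), np.abs(np.fft.fft2(b))
          fy = np.fft.fftfreq(a.shape[0])[:, None]
          fx = np.fft.fftfreq(a.shape[1])[None, :]
          f2 = fy**2 + fx**2
          m = (f2 > 0) & (f2 < 0.02) & (fa > 1e-8)
          slope = np.sum(np.log(fb[m] / fa[m]) * f2[m]) / np.sum(f2[m]**2)
          return np.sqrt(max(-slope, 0.0) / (2 * np.pi**2))

      rng = np.random.default_rng(5)
      patch = gaussian_filter(rng.random((256, 256)), 1.0)
      motion = np.full((1, 5), 1.0 / 5.0)                        # horizontal motion blur kernel
      patch1 = patch                                             # capture 1
      patch2 = fftconvolve(gaussian_filter(patch, 2.0), motion, mode="same")  # capture 2

      naive = blur_difference(patch1, patch2)                    # biased by the motion blur
      equalised = fftconvolve(patch1, motion, mode="same")       # apply the difference kernel
      corrected = blur_difference(equalised, patch2)             # close to the defocus difference of 2
      print(naive, corrected)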
  • Patent number: 9552641
    Abstract: Disclosed is a method of measuring the displacement between a reference region of a reference image and a sample region of a sample image. The method spatially varies the reference region using a one-dimensional filter having complex kernel values, wherein the length (radius) and direction (angle or tangent segment) of the filter are a function of position in the reference region. The method then measures a displacement between the reference region and the sample region by comparing the spatially varied reference region and the sample region.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: January 24, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Peter Alleine Fletcher, Eric Wai-Shing Chong, Matthew Raphael Arnison, Yoichi Kazama, Allen Peter Courtney
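    Illustrative sketch, loosely in the spirit of the abstract: a 1-D complex-exponential kernel applied along the local tangent direction (via a polar resampling, so the kernel's direction and arc length vary with position in the region) turns a rotation of the region into a phase shift. The polar transform, the single harmonic and the test setup are choices made for illustration, not the claimed method.
      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates, rotate

      def to_polar(img, n_r=48, n_theta=256):
          # Resample img onto (radius, angle) so the angular axis follows the local tangent.
          h, w = img.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          radii = np.linspace(5.0, min(h, w) / 2.0 - 2.0, n_r)
          thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
          rr, tt = np.meshgrid(radii, thetas, indexing="ij")
          return map_coordinates(img, [cy + rr * np.sin(tt), cx + rr * np.cos(tt)], order=1)

      rng = np.random.default_rng(6)
      reference = gaussian_filter(rng.standard_normal((128, 128)), 2.0)
      sample = rotate(reference, angle=3.0, reshape=False, order=1)    # rotated by 3 degrees

      ref_p, smp_p = to_polar(reference), to_polar(sample)
      k = 1                                                            # harmonic of the complex kernel
      kernel = np.exp(-2j * np.pi * k * np.arange(ref_p.shape[1]) / ref_p.shape[1])
      ref_k = np.sum(ref_p * kernel, axis=1)       # one complex response per radius
      smp_k = np.sum(smp_p * kernel, axis=1)
      phase = np.angle(np.sum(np.conj(ref_k) * smp_k))
      print(abs(np.degrees(phase / k)))            # close to the 3 degree rotation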
  • Publication number: 20160321819
    Abstract: A method of determining at least two motion values of an object moving axially in a scene. First and second images of the object in the scene are captured with an image capture device. The object is axially displaced in the scene between the captured images with respect to a sensor plane of the image capture device. A variation in blur between the first and second captured images is determined. A scale change of the object between the first and second captured images is determined. Using the determined scale change and variation in blur, at least two motion values of the object in the scene are determined. The motion values identify the depths of the object in the first and second captured images and the axial motion of the object in the scene.
    Type: Application
    Filed: April 29, 2016
    Publication date: November 3, 2016
    Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
  • Patent number: 9117277
    Abstract: Methods for determining a depth measurement of a scene involve capturing at least two images of the scene with different camera parameters and selecting corresponding image patches in each image. A first approach calculates a plurality of complex responses for each image patch using a plurality of different quadrature filters, each complex response having a magnitude and a phase, assigns, for each quadrature filter, a weighting to the complex responses in the corresponding image patches, the weighting being determined by a relationship of the phases of the complex responses, and determines the depth measurement of the scene from a combination of the weighted complex responses.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: August 25, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison, Peter Alleine Fletcher, Tuan Quang Pham
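    Illustrative sketch: a bank of Gabor quadrature filters is applied to both patches, each filter's contribution is weighted by how well the phases of its two complex responses agree, and the weighted log-magnitude ratios are combined into a single blur-difference score that serves as a depth proxy. The filter bank, cosine weighting and score are simplifications for illustration, not the patented combination.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.signal import fftconvolve

      def gabor(freq, angle, sigma=6.0, size=33):
          # Complex quadrature (Gabor) kernel at the given frequency and orientation.
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          u = x * np.cos(angle) + y * np.sin(angle)
          return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * u)

      def depth_score(patch1, patch2,
                      freqs=(0.08, 0.12, 0.16),
                      angles=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          num, den = 0.0, 0.0
          for f in freqs:
              for a in angles:
                  r1 = fftconvolve(patch1, gabor(f, a), mode="same")   # complex responses
                  r2 = fftconvolve(patch2, gabor(f, a), mode="same")
                  w = np.clip(np.cos(np.angle(r2) - np.angle(r1)), 0.0, None)  # phase agreement
                  ratio = np.log(np.abs(r2) + 1e-9) - np.log(np.abs(r1) + 1e-9)
                  num += np.sum(w * ratio)
                  den += np.sum(w)
          return num / den            # more negative means patch2 is more blurred

      rng = np.random.default_rng(7)
      scene = gaussian_filter(rng.random((128, 128)), 1.0)
      slight = gaussian_filter(scene, 0.5)         # small extra defocus in the second capture
      strong = gaussian_filter(scene, 2.5)         # large extra defocus in the second capture
      print(depth_score(scene, slight), depth_score(scene, strong))   # the second is more negative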
  • Publication number: 20150178935
    Abstract: Methods, apparatuses, and computer readable storage media are provided for determining a depth measurement of a scene using an optical blur difference between two images of the scene. Each image is captured using an image capture device with different image capture device parameters. A corresponding image patch is identified from each of the captured images, motion blur being present in each of the image patches. A kernel of the motion blur in each of the image patches is determined. The kernel of the motion blur in at least one image patch is used to generate a difference convolution kernel. A selected first image patch is convolved with the generated difference convolution kernel to generate a modified image patch. A depth measurement of the scene is determined from an optical blur difference between the modified image patch and the remaining image patch.
    Type: Application
    Filed: December 22, 2014
    Publication date: June 25, 2015
    Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
  • Publication number: 20150146994
    Abstract: A method of determining a depth value of a fine structure pixel in a first image of a scene using a second image of the scene is disclosed. A gradient orientation for each of a plurality of fine structure pixels in the first image is determined. Difference images are generated from the second image and a series of blurred images formed from the first image, each difference image corresponding to one of a plurality of depth values. Each of the difference images is smoothed, in accordance with the determined gradient orientations, to generate smoothed difference images having increased coherency of fine structure. For each of a plurality of fine structure pixels in the first image, one of the smoothed difference images is selected. The depth value of the fine structure pixel corresponding to the selected smoothed difference image is determined.
    Type: Application
    Filed: November 24, 2014
    Publication date: May 28, 2015
    Inventors: Matthew Raphael Arnison, Ernest Yiu Cheong Wan
  • Patent number: 8989517
    Abstract: A method of modifying the blur in at least a part of an image of a scene captures at least two images of the scene with different camera parameters to produce a different amount of blur in each image. A corresponding patch, having an initial amount of blur, is selected in each of the captured images and used to calculate a set of frequency domain pixel values from a function of transforms of the patches. Each of the pixel values in the set is raised to a predetermined power, forming an amplified set of frequency domain pixel values. The amplified set of frequency domain pixel values is combined with the pixels of the patch in one of the captured images to produce an output image patch with blur modified relative to the initial amount of blur in the image patch.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: March 24, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison
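    Illustrative sketch: assuming registered patches and a roughly Gaussian relative blur, the snippet estimates the relative blur transfer function from the two transforms, raises it to a chosen power and applies it back to the sharper patch, so the output blur can sit anywhere between or beyond the two captures. The regularisation and clipping are crude choices for illustration, not the patented algorithm.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def modify_blur(patch_sharp, patch_blurred, power, eps=1e-3):
          # Relative blur transfer function from the two transforms, raised to `power`
          # and combined with the sharper patch's pixels.
          Fs, Fb = np.fft.fft2(patch_sharp), np.fft.fft2(patch_blurred)
          ratio = (Fb * np.conj(Fs)) / (np.abs(Fs)**2 + eps)      # frequency domain pixel values
          amplified = np.clip(ratio.real, 0.0, None) ** power     # amplified set of values
          return np.fft.ifft2(Fs * amplified).real

      rng = np.random.default_rng(8)
      sharp = gaussian_filter(rng.random((128, 128)), 1.0)
      blurred = gaussian_filter(sharp, 2.0)
      between = modify_blur(sharp, blurred, power=0.25)    # blur between the two captures
      beyond = modify_blur(sharp, blurred, power=2.0)      # more blur than either capture
      print(np.std(sharp), np.std(between), np.std(blurred), np.std(beyond))
      # the standard deviation falls as the synthesised blur increases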
  • Publication number: 20140152886
    Abstract: A method of modifying the blur in at least a part of an image of a scene captures at least two images of the scene with different camera parameters to produce a different amount of blur in each image. A corresponding patch, having an initial amount of blur, is selected in each of the captured images and used to calculate a set of frequency domain pixel values from a function of transforms of the patches. Each of the pixel values in the set is raised to a predetermined power, forming an amplified set of frequency domain pixel values. The amplified set of frequency domain pixel values is combined with the pixels of the patch in one of the captured images to produce an output image patch with blur modified relative to the initial amount of blur in the image patch.
    Type: Application
    Filed: November 13, 2013
    Publication date: June 5, 2014
    Applicant: Canon Kabushiki Kaisha
    Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison