Patents by Inventor MATTHEW RAPHAEL ARNISON
MATTHEW RAPHAEL ARNISON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10916033
Abstract: A system and method for determining a camera pose. The method comprises receiving a first image and a second image, the first and second images being associated with a camera pose and a height map for pixels in each corresponding image, and determining a mapping between the first image and the second image using the corresponding height maps, the camera pose and a mapping of the second image to an orthographic view. The method further comprises determining alignment data between the first image transformed using the determined mapping and the second image and determining a refined camera pose based on the determined alignment data and alignment data associated with at least one other camera pose.
Type: Grant
Filed: August 29, 2018
Date of Patent: February 9, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Alleine Fletcher, David Peter Morgan-Mar, Matthew Raphael Arnison, Timothy Stephen Mason
-
Patent number: 10853990
Abstract: A method for processing a three-dimensional graphic object. The method comprises receiving a query point and an associated query region, the query point being positioned within a reference fragment of a texture image of the three-dimensional graphic object; determining reference points on a boundary of the reference fragment using the query region, the reference points associated with target points on a boundary of a target fragment of the texture image, the reference points and the query point forming a reference angle; and determining a portion of the target fragment covered by the query region using an anchor point located outside the target fragment. The anchor point is determined using the target points and the reference angle. Angles between the anchor point and the target points correspond to angles between the query and reference points. The three-dimensional graphic object is processed using the determined portion of the target fragment.
Type: Grant
Filed: May 3, 2019
Date of Patent: December 1, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: David Karlov, Colin Eric Druitt, Matthew Raphael Arnison
-
Patent number: 10547786
Abstract: One or more embodiments of an apparatus, system and method of compensating image data for phase fluctuations caused by a wave deforming medium, and storage or recording mediums for use therewith, are provided herein. At least one embodiment of the method comprises capturing, by a sensor of an imaging system, first image data and second image data for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium causing a defocus disparity between the first image data and second image data; and determining the defocus disparity between the first image data and the second image data, the defocus disparity corresponding to a defocus wavefront deviation of the wave deforming medium. The method may further comprise compensating the image data captured by the sensor for phase fluctuations caused by the wave deforming medium using the determined defocus disparity.
Type: Grant
Filed: April 18, 2018
Date of Patent: January 28, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Ruimin Pan, Matthew Raphael Arnison, David Robert James Monaghan
-
Patent number: 10540810
Abstract: A method of rendering a graphical object comprises accessing a mapping relating a mesoscale structure and a light scattering parameter of a material to a perceptual appearance characteristic; determining a perceptual appearance characteristic of the graphical object, the graphical object reproduced on an interface to represent an object formed from the material, the perceptual appearance characteristic determined in accordance with the mapping using an initial mesoscale structure and a light scattering parameter of the material; receiving a signal indicating a modification in structure relating to the initial mesoscale structure; and determining, using the mapping, an adjustment of the light scattering parameter preserving the determined perceptual appearance characteristic, based on the modification of the initial mesoscale structure.
Type: Grant
Filed: June 14, 2018
Date of Patent: January 21, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Steven Richard Irrgang, Thai Quan Huynh-Thu, Juno Kim, Vanessa Jeanie Honson, Matthew Raphael Arnison
-
Publication number: 20190347854
Abstract: A method for processing a three-dimensional graphic object. The method comprises receiving a query point and an associated query region, the query point being positioned within a reference fragment of a texture image of the three-dimensional graphic object; determining reference points on a boundary of the reference fragment using the query region, the reference points associated with target points on a boundary of a target fragment of the texture image, the reference points and the query point forming a reference angle; and determining a portion of the target fragment covered by the query region using an anchor point located outside the target fragment. The anchor point is determined using the target points and the reference angle. Angles between the anchor point and the target points correspond to angles between the query and reference points. The three-dimensional graphic object is processed using the determined portion of the target fragment.
Type: Application
Filed: May 3, 2019
Publication date: November 14, 2019
Inventors: David Karlov, Colin Eric Druitt, Matthew Raphael Arnison
-
Publication number: 20190266788
Abstract: A system and method of rendering an image of a surface. The method includes receiving a user input modifying a material appearance parameter of the surface related to perceived gloss; determining a weighting coefficient for each of a plurality of pixel values of the surface using a corresponding normal, viewing angle and a position of a light source, wherein the pixel values are determined using the modified material appearance parameter; and determining perceived coverage of the surface by specular highlights based on the pixel values weighted using the corresponding weighting coefficients. The method also includes rendering the image using colour properties adjusted based on the determined coverage, to maintain perceived colour properties and update perceived gloss based on the modification.
Type: Application
Filed: February 25, 2019
Publication date: August 29, 2019
Inventors: Thai Quan Huynh-Thu, Matthew Raphael Arnison, Zoey Isherwood, Juno Kim, Vanessa Jeanie Honson
-
Publication number: 20190188871
Abstract: A method of combining object data captured from an object, the method comprising: receiving first object data and second object data, the first and second object data comprising intensity image data and three-dimensional geometry data of the object; synthesising a first fused image of the object and a second fused image of the object by fusing the respective intensity image data and the respective three-dimensional geometry data of the object illuminated by a directional lighting arrangement produced by a directional light source, the directional lighting arrangement produced by the directional light source being different to a lighting arrangement used to capture at least one of the first object data and the second object data; aligning the first fused image and the second fused image; and combining the first object data and the second object data.
Type: Application
Filed: December 6, 2018
Publication date: June 20, 2019
Inventors: Peter Alleine Fletcher, Matthew Raphael Arnison, Timothy Stephen Mason
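As a simplified illustration of the fusion step, the sketch below shades a height map under a directional light and modulates the result by the intensity image. This is a minimal Lambertian stand-in for "fusing intensity image data and three-dimensional geometry data under a directional lighting arrangement"; the function name and shading model are assumptions for illustration, not the claimed method.

```python
import numpy as np

def fuse_intensity_and_geometry(intensity, height, light_dir):
    """Fuse an intensity image with 3D geometry (a height map) by shading
    surface normals under a directional light (Lambertian model)."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Surface normals from height-map gradients: n ∝ (-dz/dx, -dz/dy, 1).
    gy, gx = np.gradient(height.astype(float))
    n = np.dstack([-gx, -gy, np.ones_like(height, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    shading = np.clip(n @ l, 0.0, None)   # cosine of incidence angle, >= 0
    return intensity * shading            # shaded ("fused") image

rng = np.random.default_rng(0)
intensity = rng.uniform(0.2, 1.0, (32, 32))
height = rng.uniform(0.0, 1.0, (32, 32))
fused = fuse_intensity_and_geometry(intensity, height, light_dir=(0.3, 0.3, 1.0))
```

Because the same synthetic light source is applied to both captures, the two fused images share shadowing and shading cues, which makes them easier to align than the raw intensity data.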
-
Publication number: 20190073792
Abstract: A system and method for determining a camera pose. The method comprises receiving a first image and a second image, the first and second images being associated with a camera pose and a height map for pixels in each corresponding image, and determining a mapping between the first image and the second image using the corresponding height maps, the camera pose and a mapping of the second image to an orthographic view. The method further comprises determining alignment data between the first image transformed using the determined mapping and the second image and determining a refined camera pose based on the determined alignment data and alignment data associated with at least one other camera pose.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Inventors: Peter Alleine Fletcher, David Peter Morgan-Mar, Matthew Raphael Arnison, Timothy Stephen Mason
-
Publication number: 20190005710
Abstract: A method of rendering a graphical object comprises accessing a mapping relating a mesoscale structure and a light scattering parameter of a material to a perceptual appearance characteristic; determining a perceptual appearance characteristic of the graphical object, the graphical object reproduced on an interface to represent an object formed from the material, the perceptual appearance characteristic determined in accordance with the mapping using an initial mesoscale structure and a light scattering parameter of the material; receiving a signal indicating a modification in structure relating to the initial mesoscale structure; and determining, using the mapping, an adjustment of the light scattering parameter preserving the determined perceptual appearance characteristic, based on the modification of the initial mesoscale structure.
Type: Application
Filed: June 14, 2018
Publication date: January 3, 2019
Inventors: Steven Richard Irrgang, Thai Quan Huynh-Thu, Juno Kim, Vanessa Jeanie Honson, Matthew Raphael Arnison
-
Publication number: 20180324359
Abstract: One or more embodiments of an apparatus, system and method of compensating image data for phase fluctuations caused by a wave deforming medium, and storage or recording mediums for use therewith, are provided herein. At least one embodiment of the method comprises capturing, by a sensor of an imaging system, first image data and second image data for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium causing a defocus disparity between the first image data and second image data; and determining the defocus disparity between the first image data and the second image data, the defocus disparity corresponding to a defocus wavefront deviation of the wave deforming medium. The method may further comprise compensating the image data captured by the sensor for phase fluctuations caused by the wave deforming medium using the determined defocus disparity.
Type: Application
Filed: April 18, 2018
Publication date: November 8, 2018
Inventors: Ruimin Pan, Matthew Raphael Arnison, David Robert James Monaghan
-
Patent number: 10026183
Abstract: A method of determining at least two motion values of an object moving axially in a scene. First and second images of the object in the scene are captured with an image capture device. The object is axially displaced in the scene between the captured images with respect to a sensor plane of the image capture device. A variation in blur between the first and second captured images is determined. A scale change of the object between the first and second captured images is determined. Using the determined scale change and variation in blur, at least two motion values of the object in the scene are determined. The motion values identify the depths of the object in the first and second captured images and axial motion of the object in the scene.
Type: Grant
Filed: April 29, 2016
Date of Patent: July 17, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
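To see why a scale change plus a blur variation pin down two depths, consider a toy model (an assumption for illustration, not the patented formulation): a pinhole projection gives scale change s = z1 / z2, and blur radius grows linearly with depth beyond the focal plane, b(z) = k (z - z_f), so the blur variation is Δb = k (z2 - z1). Two equations, two unknowns, solved in closed form below; the function name and constant k are hypothetical.

```python
def motion_values_from_scale_and_blur(scale_change, blur_variation, k):
    """Recover two object depths from the inter-frame scale change and
    blur variation, under a toy model: pinhole scale s = z1 / z2 and a
    blur radius growing linearly with depth, b(z) = k * (z - z_f),
    so that blur_variation = k * (z2 - z1)."""
    s = scale_change
    # Substituting z1 = s * z2 into Δb = k * (z2 - z1) gives
    # Δb = k * z2 * (1 - s), hence:
    z2 = blur_variation / (k * (1.0 - s))
    z1 = s * z2
    return z1, z2  # depths in the first and second captured images

# Toy check: object receding from z1 = 2.0 to z2 = 2.5, with k = 1.
z1, z2 = motion_values_from_scale_and_blur(
    scale_change=0.8, blur_variation=0.5, k=1.0)
```

The axial motion is then simply z2 - z1, the third "motion value" mentioned in the abstract.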
-
Patent number: 10019810
Abstract: A method of determining a depth value of a fine structure pixel in a first image of a scene using a second image of the scene is disclosed. A gradient orientation for each of a plurality of fine structure pixels in the first image is determined. Difference images are generated from the second image and a series of blurred images formed from the first image, each difference image corresponding to one of a plurality of depth values. Each of the difference images is smoothed, in accordance with the determined gradient orientations, to generate smoothed difference images having increased coherency of fine structure. For each of a plurality of fine structure pixels in the first image, one of the smoothed difference images is selected. The depth value of the fine structure pixel corresponding to the selected smoothed difference image is determined.
Type: Grant
Filed: November 24, 2014
Date of Patent: July 10, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: Matthew Raphael Arnison, Ernest Yiu Cheong Wan
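The core select-the-best-blur-level loop can be sketched as follows. Note one deliberate simplification: isotropic Gaussian smoothing of the difference images stands in for the patent's orientation-guided smoothing along gradient directions, and the function name and candidate sigmas are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_indices_from_blur_series(img1, img2, sigmas, smooth_sigma=1.0):
    """For each pixel, pick the candidate blur level at which a blurred
    version of img1 best matches img2. Isotropic smoothing of the
    difference images is a simplification of the patent's
    orientation-guided smoothing."""
    diffs = []
    for s in sigmas:
        d = np.abs(gaussian_filter(img1, s) - img2)     # difference image
        diffs.append(gaussian_filter(d, smooth_sigma))  # smoothed
    return np.argmin(np.stack(diffs), axis=0)  # per-pixel index into sigmas

rng = np.random.default_rng(1)
img1 = rng.uniform(size=(64, 64))
img2 = gaussian_filter(img1, 2.0)      # second image with known defocus
sigmas = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
idx = depth_indices_from_blur_series(img1, img2, sigmas)
```

Each selected index maps to a depth value through the lens calibration; the smoothing step is what keeps the selection coherent across thin, fine-structure detail rather than flickering pixel to pixel.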
-
Patent number: 9639948
Abstract: Methods, apparatuses, and computer readable storage media are provided for determining a depth measurement of a scene using an optical blur difference between two images of the scene. Each image is captured using an image capture device with different image capture device parameters. A corresponding image patch is identified from each of the captured images, motion blur being present in each of the image patches. A kernel of the motion blur in each of the image patches is determined. The kernel of the motion blur in at least one image patch is used to generate a difference convolution kernel. A selected first image patch is convolved with the generated difference convolution kernel to generate a modified image patch. A depth measurement of the scene is determined from an optical blur difference between the modified image patch and the remaining image patch.
Type: Grant
Filed: December 22, 2014
Date of Patent: May 2, 2017
Assignee: Canon Kabushiki Kaisha
Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
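The "difference convolution kernel" idea is easiest to see if the motion-blur kernels are approximated as Gaussians (an assumption for illustration, which is a simplification of the patented approach): Gaussian blurs compose by adding variances, so the kernel that maps the less-blurred patch onto the more-blurred one is itself a Gaussian with sigma equal to the square root of the variance difference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_sigma(sigma1, sigma2):
    """Sigma of the Gaussian 'difference kernel' that maps a patch blurred
    with sigma1 onto one blurred with sigma2. Valid because Gaussian
    convolutions compose by adding variances: sigma2^2 = sigma1^2 + sd^2."""
    return np.sqrt(sigma2**2 - sigma1**2)

rng = np.random.default_rng(2)
img = rng.uniform(size=(64, 64))
p1 = gaussian_filter(img, 1.5, mode="wrap")       # less-blurred patch
p2 = gaussian_filter(img, 2.5, mode="wrap")       # more-blurred patch
sd = difference_sigma(1.5, 2.5)                   # difference-kernel sigma
equalised = gaussian_filter(p1, sd, mode="wrap")  # p1 * difference kernel
```

After this equalisation the motion-blur contribution is matched between the patches, so any residual blur difference can be attributed to defocus and read off as depth.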
-
Patent number: 9552641
Abstract: Disclosed is a method of measuring the displacement between a reference region of a reference image and a sample region of a sample image. The method spatially varies the reference region using a one-dimensional filter having complex kernel values, wherein the length (radius) and direction (angle or tangent segment) of the filter are a function of position in the reference region. The method then measures a displacement between the reference region and the sample region by comparing the spatially varied reference region and the sample region.
Type: Grant
Filed: November 30, 2012
Date of Patent: January 24, 2017
Assignee: Canon Kabushiki Kaisha
Inventors: Peter Alleine Fletcher, Eric Wai-Shing Chong, Matthew Raphael Arnison, Yoichi Kazama, Allen Peter Courtney
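For context, the standard baseline that such displacement-measurement methods refine is phase correlation, sketched below. This is a well-known technique, not the patented spatially varying complex filter; it recovers a global integer shift between two regions.

```python
import numpy as np

def phase_correlation_shift(ref, sample):
    """Estimate the integer displacement between a reference region and a
    sample region by phase correlation: whiten the cross-power spectrum
    and locate the peak of its inverse transform."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(sample)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # whitened cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the (circular) peak position to a signed shift.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
ref = rng.uniform(size=(64, 64))
sample = np.roll(ref, shift=(5, -3), axis=(0, 1))  # known displacement
shift = phase_correlation_shift(ref, sample)
```

Plain phase correlation assumes a single uniform translation; the patented filter, whose length and direction vary with position, is aimed at regions where the displacement field is not uniform.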
-
Publication number: 20160321819
Abstract: A method of determining at least two motion values of an object moving axially in a scene. First and second images of the object in the scene are captured with an image capture device. The object is axially displaced in the scene between the captured images with respect to a sensor plane of the image capture device. A variation in blur between the first and second captured images is determined. A scale change of the object between the first and second captured images is determined. Using the determined scale change and variation in blur, at least two motion values of the object in the scene are determined. The motion values identify the depths of the object in the first and second captured images and axial motion of the object in the scene.
Type: Application
Filed: April 29, 2016
Publication date: November 3, 2016
Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
-
Patent number: 9117277
Abstract: Methods for determining a depth measurement of a scene which involve capturing at least two images of the scene with different camera parameters, and selecting corresponding image patches in each scene. A first approach calculates a plurality of complex responses for each image patch using a plurality of different quadrature filters, each complex response having a magnitude and a phase, assigns, for each quadrature filter, a weighting to the complex responses in the corresponding image patches, the weighting being determined by a relationship of the phases of the complex responses, and determines the depth measurement of the scene from a combination of the weighted complex responses.
Type: Grant
Filed: April 2, 2013
Date of Patent: August 25, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison, Peter Alleine Fletcher, Tuan Quang Pham
-
Publication number: 20150178935
Abstract: Methods, apparatuses, and computer readable storage media are provided for determining a depth measurement of a scene using an optical blur difference between two images of the scene. Each image is captured using an image capture device with different image capture device parameters. A corresponding image patch is identified from each of the captured images, motion blur being present in each of the image patches. A kernel of the motion blur in each of the image patches is determined. The kernel of the motion blur in at least one image patch is used to generate a difference convolution kernel. A selected first image patch is convolved with the generated difference convolution kernel to generate a modified image patch. A depth measurement of the scene is determined from an optical blur difference between the modified image patch and the remaining image patch.
Type: Application
Filed: December 22, 2014
Publication date: June 25, 2015
Inventors: David Peter Morgan-Mar, Matthew Raphael Arnison
-
Publication number: 20150146994
Abstract: A method of determining a depth value of a fine structure pixel in a first image of a scene using a second image of the scene is disclosed. A gradient orientation for each of a plurality of fine structure pixels in the first image is determined. Difference images are generated from the second image and a series of blurred images formed from the first image, each difference image corresponding to one of a plurality of depth values. Each of the difference images is smoothed, in accordance with the determined gradient orientations, to generate smoothed difference images having increased coherency of fine structure. For each of a plurality of fine structure pixels in the first image, one of the smoothed difference images is selected. The depth value of the fine structure pixel corresponding to the selected smoothed difference image is determined.
Type: Application
Filed: November 24, 2014
Publication date: May 28, 2015
Inventors: Matthew Raphael Arnison, Ernest Yiu Cheong Wan
-
Patent number: 8989517
Abstract: A method of modifying the blur in at least a part of an image of a scene captures at least two images of the scene with different camera parameters to produce a different amount of blur in each image. A corresponding patch, having an initial amount of blur, is selected in each of the captured images and used to calculate a set of frequency domain pixel values from a function of transforms of the patches. Each of the pixel values in the set is raised to a predetermined power, forming an amplified set of frequency domain pixel values. The amplified set of frequency domain pixel values is combined with the pixels of the patch in one of the captured images to produce an output image patch with blur modified relative to the initial amount of blur in the image patch.
Type: Grant
Filed: November 13, 2013
Date of Patent: March 24, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison
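A minimal sketch of the frequency-domain step: take the regularized ratio of the two patches' Fourier transforms (capturing the relative blur between them), raise it to a power, and recombine with the first patch. A power of 1 approximately reproduces the second patch; powers above 1 exaggerate the blur difference. The regularization scheme, epsilon choice, and function name here are assumptions for illustration, not the patented formula.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modify_blur(patch1, patch2, power):
    """Raise the (regularized) ratio of the patches' Fourier transforms
    to a power and recombine it with patch1. power = 1 approximately
    reproduces patch2; power > 1 exaggerates the blur difference."""
    F1, F2 = np.fft.fft2(patch1), np.fft.fft2(patch2)
    eps = 1e-6 * np.mean(np.abs(F1) ** 2)              # ill-posed guard
    ratio = F2 * np.conj(F1) / (np.abs(F1) ** 2 + eps)  # relative blur
    return np.fft.ifft2((ratio ** power) * F1).real

rng = np.random.default_rng(4)
patch1 = rng.uniform(size=(64, 64))
patch2 = gaussian_filter(patch1, 2.0, mode="wrap")  # circularly blurred
recon = modify_blur(patch1, patch2, power=1)        # sanity check: ≈ patch2
```

Working with a ratio of transforms rather than a deconvolution of either patch alone is what lets the blur be extrapolated without ever estimating the absolute point spread function.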
-
Publication number: 20140152886
Abstract: A method of modifying the blur in at least a part of an image of a scene captures at least two images of the scene with different camera parameters to produce a different amount of blur in each image. A corresponding patch, having an initial amount of blur, is selected in each of the captured images and used to calculate a set of frequency domain pixel values from a function of transforms of the patches. Each of the pixel values in the set is raised to a predetermined power, forming an amplified set of frequency domain pixel values. The amplified set of frequency domain pixel values is combined with the pixels of the patch in one of the captured images to produce an output image patch with blur modified relative to the initial amount of blur in the image patch.
Type: Application
Filed: November 13, 2013
Publication date: June 5, 2014
Applicant: Canon Kabushiki Kaisha
Inventors: David Peter Morgan-Mar, Kieran Gerard Larkin, Matthew Raphael Arnison