Patents by Inventor James Austin Besley

James Austin Besley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10887519
    Abstract: A method of stabilising frames of a captured video sequence. First reference patch alignment data is received for each of a plurality of reference patch locations. A first stable frame and a subsequent stable frame are determined from a first plurality of frames based on the first plurality of reference patch locations and reference patch alignment data. A second plurality of reference patch locations is determined using image data from the first stable frame, the second plurality of reference patch locations being determined concurrently with determining the subsequent stable frame from the first plurality of frames. Image data for the determined second plurality of reference patch locations is extracted from the subsequent stable frame. A second plurality of stable frames of the captured video sequence is determined with respect to the reference frame using the second plurality of reference patch locations and the extracted image data.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: January 5, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: James Austin Besley, Iain Bruce Templeton
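
The abstract above outlines a patch-based stabilisation pipeline. As a rough illustration of the general technique (not the claimed method), the sketch below estimates a per-patch translation with phase correlation and undoes the median shift; the function names, patch size, and integer-shift correction are all assumptions.

```python
# Minimal sketch of patch-based frame stabilisation (illustrative only; not the
# claimed method). Per-patch translations are estimated by phase correlation and
# combined into a single global shift per frame.
import numpy as np

def phase_correlate(ref_patch, cur_patch):
    """Estimate (dy, dx) translation of cur_patch relative to ref_patch."""
    F = np.fft.fft2(ref_patch) * np.conj(np.fft.fft2(cur_patch))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stabilise(frames, patch_locations, patch_size=32):
    """Align each frame to the first frame using the median patch shift."""
    ref = frames[0]
    half = patch_size // 2
    ref_patches = {
        (y, x): ref[y - half:y + half, x - half:x + half]
        for (y, x) in patch_locations
    }
    stabilised = [ref]
    for frame in frames[1:]:
        shifts = []
        for (y, x), ref_patch in ref_patches.items():
            cur_patch = frame[y - half:y + half, x - half:x + half]
            shifts.append(phase_correlate(ref_patch, cur_patch))
        dy, dx = np.median(np.array(shifts), axis=0).astype(int)
        # Undo the estimated camera motion with an integer pixel shift.
        stabilised.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return stabilised

# Example: two synthetic frames with a known (3, -2) pixel drift.
rng = np.random.default_rng(0)
base = rng.random((240, 320))
frames = [base, np.roll(base, shift=(3, -2), axis=(0, 1))]
out = stabilise(frames, patch_locations=[(60, 80), (60, 240), (180, 160)])
```
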
  • Patent number: 10878577
    Abstract: A method of segmenting an image of a scene captured using one of a plurality of cameras in a network. A mask of an image of a scene captured by a first one of said cameras is received. A set of pixels in the mask likely to be in a foreground of an image captured by a second one of said cameras is determined based on the received mask, calibration information, and a geometry of the scene. A set of background pixels for the second camera is generated based on the determined set of pixels. The generated set of background pixels is transmitted to the second camera. The image of the scene captured by the second camera is segmented using the transmitted background pixels.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: December 29, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Paul William Morrison, James Austin Besley
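
As a loose illustration of sharing segmentation hints between calibrated cameras (not the patented pipeline), the sketch below projects a foreground mask from one view into another through an assumed ground-plane homography and uses it to gate a simple background-difference segmentation; the homography, threshold, and helper names are placeholders.

```python
# Illustrative sketch (not the patented method): project camera 1's foreground
# mask into camera 2's view with a ground-plane homography, then restrict
# camera 2's foreground search to the projected region.
import numpy as np

def warp_mask(mask, H, out_shape):
    """Project a binary mask into a second view using homography H (3x3)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys, np.ones_like(xs)])          # homogeneous pixel coords
    proj = H @ pts
    proj = proj[:2] / proj[2]                            # back to inhomogeneous
    out = np.zeros(out_shape, dtype=bool)
    u = np.round(proj[0]).astype(int)
    v = np.round(proj[1]).astype(int)
    keep = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
    out[v[keep], u[keep]] = True
    return out

def segment(image, background, fg_hint, threshold=0.1):
    """Foreground = large difference from the background model, gated so that
    pixels outside the projected hint are forced to background."""
    diff = np.abs(image - background) > threshold
    return diff & fg_hint

# Toy example: identity homography and random images stand in for real
# calibration data and captured frames.
H = np.eye(3)
mask_cam1 = np.zeros((120, 160), dtype=bool)
mask_cam1[40:80, 60:100] = True                          # foreground seen by camera 1
fg_hint_cam2 = warp_mask(mask_cam1, H, (120, 160))
rng = np.random.default_rng(1)
background_cam2 = rng.random((120, 160))
image_cam2 = background_cam2.copy()
image_cam2[50:70, 70:90] += 0.5                          # an object in camera 2's view
mask_cam2 = segment(image_cam2, background_cam2, fg_hint_cam2)
```
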
  • Patent number: 10859809
    Abstract: A system for forming an image (110) of a substantially translucent specimen (102) has an illuminator (108) configured to variably illuminate the specimen from a plurality of angles of illumination such that (a) when each angle (495) at a given point on the specimen is mapped to a point (445) on a plane (420) perpendicular to an optical axis (490), the points on the plane have an increasing density (e.g. FIGS. 4, 11C, 11E, 12C, 12E, 13A, 14A, 14C, 14E, 15A, 15C, 15E) towards an axial position on the plane; or (b) the illumination angles are arranged with a substantially regular pattern in a polar coordinate system (FIG. 13A,13B) defined by a radial coordinate that depends on the magnitude of the distance from an optical axis and an angular coordinate corresponding to the orientation of the angle relative to the optical axis.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: December 8, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: James Austin Besley
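
The entry above concerns an illumination arrangement rather than software, but the pattern in variant (a) can be illustrated numerically. The sketch below generates illumination angles on a polar grid whose radial spacing is compressed near the optical axis, so the mapped points grow denser towards the axial position; the ring counts, maximum angle, and power-law spacing are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch only: illumination directions on a regular polar grid whose
# radial spacing is compressed near the optical axis, giving a higher density of
# angles near normal incidence (loosely in the spirit of the arrangements the
# abstract describes, not a reproduction of them).
import numpy as np

def illumination_angles(n_rings=6, points_per_ring=12, max_angle_deg=40.0, gamma=2.0):
    """Return an (N, 2) array of (altitude-from-axis, azimuth) angles in degrees.

    gamma > 1 pushes the rings towards the axis, so the mapped points become
    denser near the axial position.
    """
    angles = [(0.0, 0.0)]                                # on-axis illumination
    for i in range(1, n_rings + 1):
        radial = max_angle_deg * (i / n_rings) ** gamma  # non-uniform radial spacing
        for j in range(points_per_ring):
            azimuth = 360.0 * j / points_per_ring
            angles.append((radial, azimuth))
    return np.array(angles)

angles = illumination_angles()
print(angles.shape)   # (73, 2): one axial point plus 6 rings of 12 angles
```
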
  • Patent number: 10839594
    Abstract: A system and method of generating a virtual view of a scene captured by a network of cameras. The method comprises simultaneously capturing images of the scene using a plurality of cameras of the network; determining, using the captured images, a model of atmospheric conditions in the scene; and defining a virtual camera relative to the scene. The method further comprises rendering the scene from a viewpoint of the virtual camera by adjusting pixels of the captured images corresponding to the viewpoint, the adjusting based on a three-dimensional model of the scene, locations of the plurality of cameras relative to the scene, the viewpoint of the virtual camera, and the geometric model of atmospheric conditions.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: November 17, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: James Austin Besley, Paul William Morrison
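
To make the pixel-adjustment idea concrete, the sketch below uses a standard exponential haze model (I = J*t + A*(1 - t), with t = exp(-beta*d)) to remove atmospheric scattering along the capturing camera's ray and re-apply it along the virtual camera's ray. The haze model, beta, airlight colour, and distances are assumptions standing in for the patent's "model of atmospheric conditions".

```python
# Minimal sketch of the pixel adjustment idea, using a simple exponential haze
# model I = J*t + A*(1 - t) with t = exp(-beta * distance). beta (scattering
# coefficient) and A (airlight colour) stand in for the atmospheric model; the
# geometry values are placeholders, not the patented pipeline.
import numpy as np

def reproject_haze(pixel, dist_source_cam, dist_virtual_cam, beta, airlight):
    """Remove haze along the source-camera ray, then re-apply it along the
    virtual-camera ray."""
    t_src = np.exp(-beta * dist_source_cam)
    t_virt = np.exp(-beta * dist_virtual_cam)
    scene_radiance = (pixel - airlight * (1.0 - t_src)) / max(t_src, 1e-6)
    return scene_radiance * t_virt + airlight * (1.0 - t_virt)

# A pixel seen through 300 m of haze, rendered as if seen from 80 m away.
observed = np.array([0.60, 0.63, 0.66])        # hazy RGB sample
airlight = np.array([0.80, 0.85, 0.90])        # estimated atmospheric colour
adjusted = reproject_haze(observed, dist_source_cam=300.0,
                          dist_virtual_cam=80.0, beta=0.004, airlight=airlight)
```
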
  • Publication number: 20200236283
    Abstract: A method of stabilising frames of a captured video sequence. First reference patch alignment data is received for each of a plurality of reference patch locations. A first stable frame and a subsequent stable frame are determined from a first plurality of frames based on the first plurality of reference patch locations and reference patch alignment data. A second plurality of reference patch locations is determined using image data from the first stable frame, the second plurality of reference patch locations being determined concurrently with determining the subsequent stable frame from the first plurality of frames. Image data for the determined second plurality of reference patch locations is extracted from the subsequent stable frame. A second plurality of stable frames of the captured video sequence is determined with respect to the reference frame using the second plurality of reference patch locations and the extracted image data.
    Type: Application
    Filed: April 2, 2020
    Publication date: July 23, 2020
    Inventors: James Austin Besley, Iain Bruce Templeton
  • Publication number: 20200193611
    Abstract: A method of segmenting an image of a scene captured using one of a plurality of cameras in a network. A mask of an image of a scene captured by a first one of said cameras is received. A set of pixels in the mask likely to be in a foreground of an image captured by a second one of said cameras is determined based on the received mask, calibration information, and a geometry of the scene. A set of background pixels for the second camera is generated based on the determined set of pixels. The generated set of background pixels is transmitted to the second camera. The image of the scene captured by the second camera is segmented using the transmitted background pixels.
    Type: Application
    Filed: December 14, 2018
    Publication date: June 18, 2020
    Inventors: Paul William Morrison, James Austin Besley
  • Publication number: 20200184710
    Abstract: A system and method of generating a virtual view of a scene captured by a network of cameras. The method comprises simultaneously capturing images of the scene using a plurality of cameras of the network; determining, using the captured images, a model of atmospheric conditions in the scene; and defining a virtual camera relative to the scene. The method further comprises rendering the scene from a viewpoint of the virtual camera by adjusting pixels of the captured images corresponding to the viewpoint, the adjusting based on a three-dimensional model of the scene, locations of the plurality of cameras relative to the scene, the viewpoint of the virtual camera, and the geometric model of atmospheric conditions.
    Type: Application
    Filed: December 11, 2018
    Publication date: June 11, 2020
    Inventors: James Austin Besley, Paul William Morrison
  • Publication number: 20200160560
    Abstract: A method of stabilising a captured video. The method includes receiving a set of reference patch locations in a reference frame and reference patch alignment data for each location and receiving a frame of captured video. A first offset and a first subset of patch locations are selected from the set of reference patch locations. A second offset more than a pre-determined distance from the first offset and a second subset of patch locations from the reference patch locations are selected. The method further includes analysing image data for the frame of captured video at each patch location of the first and second subsets according to the corresponding reference patch alignment data to generate corresponding shift estimates; analysing the shift estimates to generate corresponding alignment transforms and confidence metrics; and selecting one of the alignment transforms according to the confidence metrics for stabilising the video.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 21, 2020
    Inventors: James Austin Besley, Iain Bruce Templeton
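
A minimal sketch of the selection step described above, assuming the per-patch shift estimates already exist: one candidate translation is fitted per patch subset, a confidence is derived from the residual spread, and the higher-confidence transform wins. The fitting rule and confidence formula are illustrative, not the claimed ones.

```python
# Sketch of the "two candidate patch subsets, pick by confidence" idea only; the
# shift estimation is assumed done elsewhere, and the alignment transform is
# restricted to a pure translation. All names here are illustrative.
import numpy as np

def fit_translation(shifts):
    """Robust translation estimate plus a confidence based on residual spread."""
    shifts = np.asarray(shifts, dtype=float)
    translation = np.median(shifts, axis=0)
    residuals = np.linalg.norm(shifts - translation, axis=1)
    confidence = 1.0 / (1.0 + residuals.mean())
    return translation, confidence

def select_alignment(shifts_subset_a, shifts_subset_b):
    """Build one alignment transform per patch subset and keep the better one."""
    t_a, c_a = fit_translation(shifts_subset_a)
    t_b, c_b = fit_translation(shifts_subset_b)
    return (t_a, c_a) if c_a >= c_b else (t_b, c_b)

# Subset A tracked moving foreground (inconsistent shifts); subset B tracked
# static background (consistent shifts), so B wins on confidence.
subset_a = [(4.0, 1.0), (-3.0, 6.0), (9.0, -2.0)]
subset_b = [(2.1, -1.0), (2.0, -0.9), (1.9, -1.1)]
transform, confidence = select_alignment(subset_a, subset_b)
```
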
  • Patent number: 10645290
    Abstract: A method of stabilising frames of a captured video sequence. First reference patch alignment data is received for each of a plurality of reference patch locations. A first stable frame and a subsequent stable frame are determined from a first plurality of frames based on the first plurality of reference patch locations and reference patch alignment data. A second plurality of reference patch locations is determined using image data from the first stable frame, the second plurality of reference patch locations being determined concurrently with determining the subsequent stable frame from the first plurality of frames. Image data for the determined second plurality of reference patch locations is extracted from the subsequent stable frame. A second plurality of stable frames of the captured video sequence is determined with respect to the reference frame using the second plurality of reference patch locations and the extracted image data.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: May 5, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: James Austin Besley, Iain Bruce Templeton
  • Patent number: 10502946
    Abstract: A system for forming an image (110) of a substantially translucent specimen (102) has an illuminator (108) configured to variably illuminate the specimen from a plurality of angles of illumination such that (a) when each angle (495) at a given point on the specimen is mapped to a point (445) on a plane (420) perpendicular to an optical axis (490), the points on the plane have an increasing density (e.g. FIGS. 4, 11C, 11E, 12C, 12E, 13A, 14A, 14C, 14E, 15A, 15C, 15E) towards an axial position on the plane; or (b) the illumination angles are arranged with a substantially regular pattern in a polar coordinate system (FIG. 13A,13B) defined by a radial coordinate that depends on the magnitude of the distance from an optical axis and an angular coordinate corresponding to the orientation of the angle relative to the optical axis.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: December 10, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: James Austin Besley
  • Publication number: 20190174122
    Abstract: A method for synthesising a viewpoint, comprising: capturing a scene using a network of cameras, the cameras defining a system volume of the scene, wherein a sensor of one of the cameras has an output frame rate for the system volume below a predetermined frame rate; selecting a portion of the system volume as an operational volume based on the sensor output frame rate, the predetermined frame rate and a region of interest, the operational volume being a portion of the system volume from which image data for the viewpoint can be synthesised at the predetermined frame rate, wherein a frame rate for synthesising a viewpoint outside the operational volume is limited by the output frame rate; reading, from the sensors at the predetermined frame rate, image data corresponding to the operational volume; and synthesising the viewpoint at the predetermined frame rate using the image data.
    Type: Application
    Filed: December 4, 2017
    Publication date: June 6, 2019
    Inventor: James Austin Besley
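
The trade-off in the abstract can be shown with a simple read-out model: if sensor read-out time scales with the number of rows read, restricting capture to a sub-region raises the achievable frame rate. The linear scaling and the numbers below are assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the trade-off the abstract describes: reading
# only the rows covering an "operational volume" raises the achievable frame
# rate. The linear read-out model and the numbers are assumptions.

def achievable_fps(full_frame_fps, full_rows, roi_rows):
    """Frame rate when only roi_rows of full_rows are read out each frame."""
    return full_frame_fps * full_rows / roi_rows

def max_roi_rows(full_frame_fps, full_rows, target_fps):
    """Largest row count (hence largest operational volume) meeting target_fps."""
    return int(full_rows * full_frame_fps / target_fps)

full_rows = 3000          # sensor rows covering the whole system volume
full_frame_fps = 30.0     # output frame rate when reading the full sensor
target_fps = 60.0         # frame rate wanted for virtual-view synthesis

rows = max_roi_rows(full_frame_fps, full_rows, target_fps)    # 1500 rows
print(rows, achievable_fps(full_frame_fps, full_rows, rows))  # 1500 60.0
```
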
  • Publication number: 20190158746
    Abstract: A method of stabilising frames of a captured video sequence. First reference patch alignment data is received for each of a plurality of reference patch locations. A first stable frame and a subsequent stable frame are determined from a first plurality of frames based on the first plurality of reference patch locations and reference patch alignment data. A second plurality of reference patch locations is determined using image data from the first stable frame, the second plurality of reference patch locations being determined concurrently with determining the subsequent stable frame from the first plurality of frames. Image data for the determined second plurality of reference patch locations is extracted from the subsequent stable frame. A second plurality of stable frames of the captured video sequence is determined with respect to the reference frame using the second plurality of reference patch locations and the extracted image data.
    Type: Application
    Filed: October 16, 2018
    Publication date: May 23, 2019
    Inventors: James Austin Besley, Iain Bruce Templeton
  • Patent number: 10176567
    Abstract: A method for processing microscopy images captures, for each of a first microscopy slide and a second microscopy slide, a plurality of partial spectrum images under multiple optical configurations where each image captures a different portion of the spectrum of the slide. First and second partial spectrum images associated with different optical configurations are selected and used to reconstruct respectively a combined spectrum image of part of the first microscopy slide and a combined spectrum image of at least part of the second microscopy slide, thereby forming a first pair of partial spectrum images. The method determines a distortion map by aligning images derived from the first pair of the partial spectrum images.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: January 8, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: James Austin Besley, Peter Alleine Fletcher, Steven David Webster
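
As a generic illustration of the two ingredients named above, combining partial Fourier spectra and aligning the results, the sketch below merges a low-pass and a high-pass partial spectrum back into one image and then builds a coarse distortion map from per-tile phase-correlation shifts; the tile size, the synthetic "slides", and the split into two spectra are assumptions, not the patented procedure.

```python
# Illustrative sketch only: (1) merge two partial Fourier spectra into a combined
# spectrum image, and (2) estimate a coarse distortion map between two such
# images by per-tile phase correlation. A generic reconstruction/alignment
# sketch, not the specific method claimed.
import numpy as np

def combine_partial_spectra(image, low_pass_radius):
    """Split an image's spectrum into low/high partial spectra and recombine."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    low = (yy ** 2 + xx ** 2) <= low_pass_radius ** 2
    partial_low, partial_high = F * low, F * (~low)      # two "optical configurations"
    combined = partial_low + partial_high                # combined spectrum
    return np.fft.ifft2(np.fft.ifftshift(combined)).real

def tile_shift(a, b):
    """(dy, dx) of tile b relative to tile a via phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def distortion_map(img_a, img_b, tile=64):
    """Per-tile shift estimates form a coarse distortion map between the slides."""
    h, w = img_a.shape
    return np.array([[tile_shift(img_a[y:y + tile, x:x + tile],
                                 img_b[y:y + tile, x:x + tile])
                      for x in range(0, w - tile + 1, tile)]
                     for y in range(0, h - tile + 1, tile)])

rng = np.random.default_rng(2)
slide_a = combine_partial_spectra(rng.random((256, 256)), low_pass_radius=40)
slide_b = np.roll(slide_a, shift=(2, -1), axis=(0, 1))   # stand-in for the second slide
dmap = distortion_map(slide_a, slide_b)                  # shape (4, 4, 2)
```
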
  • Publication number: 20170371141
    Abstract: A system for forming an image (110) of a substantially translucent specimen (102) has an illuminator (108) configured to variably illuminate the specimen from a plurality of angles of illumination such that (a) when each angle (495) at a given point on the specimen is mapped to a point (445) on a plane (420) perpendicular to an optical axis (490), the points on the plane have an increasing density (e.g. FIGS. 4, 11C, 11E, 12C, 12E, 13A, 14A, 14C, 14E, 15A, 15C, 15E) towards an axial position on the plane; or (b) the illumination angles are arranged with a substantially regular pattern in a polar coordinate system (FIG. 13A,13B) defined by a radial coordinate that depends on the magnitude of the distance from an optical axis and an angular coordinate corresponding to the orientation of the angle relative to the optical axis.
    Type: Application
    Filed: December 11, 2015
    Publication date: December 28, 2017
    Inventor: James Austin Besley
  • Publication number: 20170363853
    Abstract: A method of generating an image of a substantially translucent specimen includes illuminating and imaging the specimen based on light filtered by an optical element. A plurality of variably-illuminated relatively low resolution intensity images of the specimen are acquired for which content of the images corresponds to partially overlapping regions in frequency space. A relatively higher resolution image of the specimen is then reconstructed by iteratively updating overlapping regions of the relatively higher resolution image in Fourier space with the plurality of variably-illuminated, relatively lower resolution intensity images. The iterative updating processes the plurality of relatively lower resolution intensity images in a first sequence which progresses from a centre region of the relatively higher resolution image in increasing spatial frequency followed by a second sequence which progresses towards the centre region in decreasing spatial frequency.
    Type: Application
    Filed: December 9, 2015
    Publication date: December 21, 2017
    Applicant: Canon Kabushiki Kaisha
    Inventor: James Austin Besley
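
A small sketch of just the update ordering described above: captures are sorted by their offset from the centre of Fourier space, processed outward in increasing spatial frequency, then inward again. The LED grid and the ptychographic reconstruction itself are omitted; only the sequencing is shown, under an assumed 5x5 illumination grid.

```python
# Sketch of the update ordering only: process the low-resolution captures outward
# from the centre of Fourier space (increasing spatial frequency), then back
# inward (decreasing spatial frequency). The reconstruction itself is omitted.
import numpy as np

def update_order(offsets):
    """offsets: (N, 2) array of Fourier-space centre offsets for each capture.
    Returns capture indices ordered centre-out, then back centre-in."""
    radii = np.linalg.norm(np.asarray(offsets, dtype=float), axis=1)
    outward = np.argsort(radii)                  # increasing spatial frequency
    inward = outward[::-1]                       # decreasing spatial frequency
    return np.concatenate([outward, inward])

# A 5x5 grid of illumination angles mapped to Fourier-space offsets.
grid = np.array([(i, j) for i in range(-2, 3) for j in range(-2, 3)])
order = update_order(grid)
# The centre capture (offset (0, 0)) is processed first and last.
assert tuple(grid[order[0]]) == (0, 0) and tuple(grid[order[-1]]) == (0, 0)
```
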
  • Publication number: 20170178317
    Abstract: A method for processing microscopy images captures, for each of a first microscopy slide and a second microscopy slide, a plurality of partial spectrum images under multiple optical configurations where each image captures a different portion of the spectrum of the slide. First and second partial spectrum images associated with different optical configurations are selected and used to reconstruct respectively a combined spectrum image of part of the first microscopy slide and a combined spectrum image of at least part of the second microscopy slide, thereby forming a first pair of partial spectrum images. The method determines a distortion map by aligning images derived from the first pair of the partial spectrum images.
    Type: Application
    Filed: December 21, 2015
    Publication date: June 22, 2017
    Inventors: James Austin Besley, Peter Alleine Fletcher, Steven David Webster
  • Patent number: 9607384
    Abstract: A method of determining a coordinate transform between a first image and a second image, said method comprising the steps of: determining a rate of change of pixel values for locations on the first image to identify candidate alignment patches in the first image; specifying subsets of patches from the set of candidate alignment patches based on an error metric; selecting a subset of candidate alignment patches from said plurality of subsets of candidate alignment patches based upon a predetermined criterion; estimating, for each patch in the selected subset, a shift between the patch and a corresponding patch in the second image, the location of the corresponding patch in the second image being determined from the location of the patch in the first image; and determining the coordinate transform between the first image and second image based on the estimated shifts.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: March 28, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Andrew Docherty, James Austin Besley
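
A generic sketch in the same spirit as the abstract, not the claimed method: candidate alignment patches are ranked by local gradient magnitude, and a global translation is fitted to per-patch shift estimates by a simple average. The shift values in the example are stand-ins for the output of a patch matcher such as the phase-correlation sketch earlier in this list.

```python
# Generic sketch (not the claimed method): pick alignment patches where the image
# gradient is strong, then fit a global translation to the per-patch shifts by
# least squares.
import numpy as np

def candidate_patches(image, patch=32, top_k=8):
    """Return centres of the top_k patches ranked by mean gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    energy = np.hypot(gy, gx)
    h, w = image.shape
    scores = []
    for y in range(patch // 2, h - patch // 2, patch):
        for x in range(patch // 2, w - patch // 2, patch):
            window = energy[y - patch // 2:y + patch // 2,
                            x - patch // 2:x + patch // 2]
            scores.append((window.mean(), (y, x)))
    scores.sort(reverse=True)
    return [centre for _, centre in scores[:top_k]]

def fit_transform(shifts):
    """Least-squares translation from per-patch shift estimates."""
    return np.mean(np.asarray(shifts, dtype=float), axis=0)

rng = np.random.default_rng(3)
image = rng.random((256, 256))
patches = candidate_patches(image)               # high-gradient patch centres
# Per-patch shifts would come from matching these patches in the second image;
# here they are stand-in values with a little noise around a true shift (5, -3).
shifts = [(5.1, -3.0), (4.9, -2.8), (5.0, -3.2)]
transform = fit_transform(shifts)                # roughly array([5., -3.])
```
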
  • Publication number: 20160282598
    Abstract: A method calibrates a microscope using a test pattern by capturing a plurality of images of the test pattern with the microscope. The test pattern has a plurality of uniquely identifiable positions across a plurality of repeating and overlapping 2D sub-patterns. The method determines an image contrast metric from the captured image in a selected patch of the test pattern and a reference contrast metric in the corresponding region, and determines a normalised contrast metric using the reference contrast metric and the image contrast metric. The method estimates depths of at least two captured images at the plurality of positions using the normalised contrast metrics and a set of predetermined calibration data for a stack of images captured using the test pattern at a range of depths, and calibrates the microscope using a comparison of the determined depth estimates for the at least two images.
    Type: Application
    Filed: November 6, 2014
    Publication date: September 29, 2016
    Inventors: James Austin Besley, Andrew Docherty
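
A sketch of the depth-from-contrast lookup only, under assumed data: a monotone calibration curve of normalised contrast versus defocus, a standard-deviation contrast metric, and linear interpolation for the inverse lookup. None of the numbers come from the patent.

```python
# Sketch of the depth-from-contrast lookup step under assumed calibration data.
import numpy as np

def normalised_contrast(patch, reference_contrast):
    """Patch contrast (standard deviation) divided by the reference contrast."""
    return float(np.std(patch)) / reference_contrast

def depth_from_contrast(contrast, calib_depths_um, calib_contrasts):
    """Invert a calibration curve (contrast falls as defocus grows).
    calib_contrasts must be sorted ascending for np.interp."""
    return float(np.interp(contrast, calib_contrasts, calib_depths_um))

# Assumed calibration data: contrast drops from 1.0 in focus to 0.2 at 20 um.
calib_depths_um = np.array([20.0, 15.0, 10.0, 5.0, 0.0])
calib_contrasts = np.array([0.2, 0.35, 0.55, 0.8, 1.0])   # ascending

rng = np.random.default_rng(4)
patch = 0.5 + 0.12 * rng.standard_normal((64, 64))        # a slightly blurred patch
c = normalised_contrast(patch, reference_contrast=0.2)    # roughly 0.6
print(depth_from_contrast(c, calib_depths_um, calib_contrasts))  # roughly 9 um
```
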
  • Patent number: 9066036
    Abstract: Disclosed is a method of generating an electronic document having an enclosed region with a fill color. The method receives a digital representation of a source document, the source document containing an enclosed region and at least one corresponding background region, the enclosed region overlapping at least a portion of the background region. A fill color is determined for the enclosed region from the digital representation as is a reference background color for the enclosed region from the corresponding background region of the digital representation. The method assigns a transparent fill color to the enclosed region based on a comparison of the determined fill color for the enclosed region with the reference background color and stores the enclosed region with the transparent fill color to generate an electronic document.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: June 23, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yu-Ling Chen, James Austin Besley, Steven Richard Irrgang
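
A minimal sketch of the colour comparison the abstract describes: if an enclosed region's sampled fill colour is close enough to the reference background colour, a transparent fill is emitted instead of a solid one. The colour space, distance metric, and tolerance are assumptions.

```python
# Minimal sketch of the fill-versus-background comparison; threshold and colour
# space are assumptions, not values from the patent.
import numpy as np

def choose_fill(region_fill_rgb, background_rgb, tolerance=12.0):
    """Return 'transparent' when the fill is indistinguishable from the
    background, otherwise keep the sampled fill colour."""
    fill = np.asarray(region_fill_rgb, dtype=float)
    background = np.asarray(background_rgb, dtype=float)
    if np.linalg.norm(fill - background) <= tolerance:
        return "transparent"
    return tuple(int(v) for v in fill)

print(choose_fill((252, 250, 251), (255, 255, 255)))   # 'transparent'
print(choose_fill((200, 30, 40), (255, 255, 255)))     # (200, 30, 40)
```
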
  • Publication number: 20140177941
    Abstract: A method of determining a coordinate transform between a first image and a second image, said method comprising the steps of: determining a rate of change of pixel values for locations on the first image to identify candidate alignment patches in the first image; specifying subsets of patches from the set of candidate alignment patches based on an error metric; selecting a subset of candidate alignment patches from said plurality of subsets of candidate alignment patches based upon a predetermined criterion; estimating, for each patch in the selected subset, a shift between the patch and a corresponding patch in the second image, the location of the corresponding patch in the second image being determined from the location of the patch in the first image; and determining the coordinate transform between the first image and second image based on the estimated shifts.
    Type: Application
    Filed: December 18, 2013
    Publication date: June 26, 2014
    Applicant: Canon Kabushiki Kaisha
    Inventors: Andrew Docherty, James Austin Besley