Patents by Inventor Christopher Zach

Christopher Zach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11710309
    Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene, with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: July 25, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
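
The abstract above describes mapping image elements to points in a 3D coordinate frame and then inferring pose. As an illustrative sketch only (not the patented inference engine), one standard way to recover a rigid pose from such 3D–3D correspondences is the Kabsch algorithm; all function names and inputs here are hypothetical:

```python
import numpy as np

def pose_from_correspondences(cam_pts, scene_pts):
    """Estimate the rigid pose (R, t) with scene = R @ cam + t from 3-D point
    correspondences, via the Kabsch algorithm (SVD of the cross-covariance)."""
    cc, sc = cam_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (scene_pts - sc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ cc
    return R, t
```

In practice such a solver would typically be wrapped in a robust loop (e.g. RANSAC), since the abstract notes the image elements may be noisy and incomplete.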
  • Patent number: 10769744
    Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower-resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantically segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution; the number of layers in the first neural network being smaller than the number of layers in the second neural network.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 8, 2020
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad Poudel Karmatha, Ujwal Bonde, Stephan Liwicki, Christopher Zach
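
The abstract above combines a shallow full-resolution branch with a deeper low-resolution branch, both built from separable convolution modules. As a minimal sketch of those two mechanics only (the actual layer counts, nonlinearities, and learned weights are not specified here; all names are hypothetical):

```python
import numpy as np

def depthwise_separable_conv(x, dw, pw):
    """Depthwise 3x3 convolution followed by a pointwise (1x1) convolution.

    x:  (C, H, W) feature map      dw: (C, 3, 3) one kernel per channel
    pw: (Cout, C) 1x1 channel-mixing weights
    """
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):                        # depthwise: channels independent
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * dw[c])
    return np.einsum('oc,chw->ohw', pw, out)  # pointwise: mix channels

def fuse_branches(shallow_hi, deep_lo):
    """Upsample the low-resolution feature map from the deeper branch
    (nearest-neighbour here, for simplicity) and add it to the
    full-resolution shallow branch."""
    f = shallow_hi.shape[1] // deep_lo.shape[1]
    up = deep_lo.repeat(f, axis=1).repeat(f, axis=2)
    return shallow_hi + up
```

Separable convolution is used because it factors a dense convolution into a cheap per-channel filter plus a 1x1 mix, cutting the multiply count substantially.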
  • Publication number: 20200134772
    Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower-resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantically segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution; the number of layers in the first neural network being smaller than the number of layers in the second neural network.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad POUDEL KARMATHA, Ujwal BONDE, Stephan LIWICKI, Christopher ZACH
  • Patent number: 10460471
    Abstract: A camera pose estimation method determines the translation and rotation between a first camera pose and a second camera pose. Features are extracted from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image. Features are matched between the first image and the second image. The depth ratio of matched features is determined from the scale information. n matched features are selected, where at least one of the matched features is selected with both the depth ratio and location. The translation and rotation are calculated between the first camera pose and the second camera pose using the selected matched features with depth ratio derived from the scale information.
    Type: Grant
    Filed: July 18, 2017
    Date of Patent: October 29, 2019
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Stephan Liwicki, Christopher Zach
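
The key quantity in the abstract above is a depth ratio recovered from feature detection scales. Under the standard assumption that a scale-covariant detector's scale is inversely proportional to depth (an assumption of this sketch, not a quote from the patent), the ratio follows directly:

```python
import numpy as np

def depth_ratios(scales1, scales2):
    """Depth ratio d1/d2 of each matched feature from its detection scales.

    If detected scale is inversely proportional to depth, then d1/d2 = s2/s1:
    a feature that moves twice as far away appears at half the scale.
    """
    return np.asarray(scales2, dtype=float) / np.asarray(scales1, dtype=float)
```

For example, a feature detected at scale 8 in the first view and scale 4 in the second is twice as far from the second camera as from the first.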
  • Patent number: 10397479
    Abstract: A method of compensating for camera motion during capture of the image in a rolling shutter camera, the method comprising: receiving an image of a scene captured by a camera with a rolling shutter; extracting line segments in said image; estimating the movement of the camera during the capturing of the image from the received image; and producing an image compensated for the movement during capture of the image, wherein the scene is approximated by a horizontal plane and two vertical planes that intersect at a line at infinity and estimating the movement of the camera during the capture of the image comprises assuming that the extracted line segments are vertical and lie on the vertical planes.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: August 27, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Pulak Purkait, Christopher Zach
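
The patent above estimates camera motion from vertical line segments on scene planes; the compensation step itself can be illustrated under a much simpler model. This sketch assumes only a constant yaw rate and small angles (both simplifying assumptions of the sketch, not the patent's scene model), where a pixel on row y is captured at time y * t_row and drifts horizontally:

```python
import numpy as np

def compensate_yaw(points, omega, t_row, f=500.0):
    """Undo the horizontal rolling-shutter shift caused by a constant yaw rate.

    points: (N, 2) pixel coordinates (x, y). Rows are read top to bottom, so a
    pixel on row y is captured at time y * t_row; under a small-angle model a
    yaw rate omega shifts it horizontally by f * omega * y * t_row.
    """
    pts = np.array(points, dtype=float)
    pts[:, 0] -= f * omega * pts[:, 1] * t_row
    return pts
```

The focal length f, yaw rate omega, and row readout time t_row are all hypothetical parameters here; in the patented method the motion is estimated from the image itself rather than given.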
  • Publication number: 20190068884
    Abstract: A method of compensating for camera motion during capture of the image in a rolling shutter camera, the method comprising: receiving an image of a scene captured by a camera with a rolling shutter; extracting line segments in said image; estimating the movement of the camera during the capturing of the image from the received image; and producing an image compensated for the movement during capture of the image, wherein the scene is approximated by a horizontal plane and two vertical planes that intersect at a line at infinity and estimating the movement of the camera during the capture of the image comprises assuming that the extracted line segments are vertical and lie on the vertical planes.
    Type: Application
    Filed: February 13, 2018
    Publication date: February 28, 2019
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Pulak Purkait, Christopher Zach
  • Publication number: 20190026916
    Abstract: A camera pose estimation method for determining the translation and rotation between a first camera pose and a second camera pose, the method comprising: extracting features from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image; matching features between the first image and the second image to produce matched features; determining the depth ratio of matched features from the scale information, wherein the depth ratio is the ratio of the depth of a matched feature from the first position to the depth of the matched feature from the second position; selecting n matched features, where at least one of the matched features is selected with both the depth ratio and location; and calculating the translation and rotation between the first camera pose and the second camera pose using the selected matched features with depth ratio derived from the scale information.
    Type: Application
    Filed: July 18, 2017
    Publication date: January 24, 2019
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Stephan Liwicki, Christopher Zach
  • Publication number: 20180285697
    Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene, with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
    Type: Application
    Filed: February 13, 2018
    Publication date: October 4, 2018
    Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
  • Patent number: 10078886
    Abstract: A method for adjusting an image using message passing comprises associating each pixel of an image with a node of a graph and one or more cliques of nodes, determining for a node of the graph a respective set of possible pixel labels for which a unary potential is known, computing for that node a unary potential of a possible pixel label for which the unary potential is unknown, adjusting a clique potential associated with each clique to which that node belongs based on the unary potentials, and adjusting, based on the adjusted clique potential associated with each clique to which that node belongs, at least one of the messages between that node and the other nodes of each clique. Once a convergence criterion is met, an adjusted image is produced having pixel labels determined from the adjusted messages.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: September 18, 2018
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Christopher Zach
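
The abstract above concerns message passing with unary and clique potentials on a graph of pixels. The patented method handles general cliques and partially unknown unary potentials; as a much smaller illustration of the underlying machinery only, here is min-sum message passing on a chain MRF (a dynamic-programming special case, with hypothetical cost tables):

```python
import numpy as np

def chain_min_sum(unary, pairwise):
    """Min-sum message passing on a chain MRF; returns the optimal labeling.

    unary:    (n, L) per-node label costs
    pairwise: (L, L) cost of assigning adjacent nodes a pair of labels
    """
    n, L = unary.shape
    msg = np.zeros((n, L))              # msg[i]: message passed into node i
    back = np.zeros((n, L), dtype=int)  # backpointers for decoding
    for i in range(1, n):
        cost = (unary[i - 1] + msg[i - 1])[:, None] + pairwise
        msg[i] = cost.min(axis=0)
        back[i] = cost.argmin(axis=0)
    labels = np.zeros(n, dtype=int)
    labels[-1] = int((unary[-1] + msg[-1]).argmin())
    for i in range(n - 1, 0, -1):       # trace back the optimal labeling
        labels[i - 1] = back[i, labels[i]]
    return labels
```

On a chain this converges in one forward/backward pass; on the loopy pixel graphs of the patent, messages are instead iterated until a convergence criterion is met.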
  • Patent number: 9940553
    Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene, with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: April 10, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
  • Patent number: 9818195
    Abstract: A method for use in estimating a pose of an imaged object comprises identifying candidate elements of an atlas that correspond to pixels in an image of the object, forming pairs of candidate elements, and comparing the distance between the members of each pair with the distance between the corresponding pixels.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: November 14, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Christopher Zach
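
The pairwise-distance comparison described above can be sketched as an outlier test: a correct rigid correspondence preserves pairwise distances, so a wrong candidate disagrees with many of its pairs. This is an illustrative sketch only, with 2-D atlas elements and a hypothetical tolerance:

```python
import numpy as np

def inconsistency_counts(pixels, atlas, tol=1.0):
    """For each candidate pixel<->atlas correspondence, count how many pairs it
    forms whose pixel distance disagrees with the atlas distance.

    pixels: (N, 2) pixel locations; atlas: (N, 2) candidate atlas elements.
    High counts flag likely-wrong correspondences.
    """
    dp = np.linalg.norm(pixels[:, None] - pixels[None], axis=-1)
    da = np.linalg.norm(atlas[:, None] - atlas[None], axis=-1)
    return (np.abs(dp - da) > tol).sum(axis=1)
```

A pose estimator can then keep only the candidates with low counts before solving for the pose.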
  • Patent number: 9779508
    Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
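
The fusion idea above, that per-frame depth errors average out when multiple depth maps are accumulated into one structure, can be shown with a minimal running weighted average. The patent maintains a volumetric structure; this sketch flattens that to a plain depth grid purely for illustration:

```python
import numpy as np

class DepthFusion:
    """Fuse per-frame depth maps into one running weighted average so that
    independent per-frame errors average out over several frames."""

    def __init__(self, shape):
        self.acc = np.zeros(shape)   # weighted sum of observed depths
        self.wgt = np.zeros(shape)   # accumulated weights per cell

    def integrate(self, depth, weight=1.0):
        valid = np.isfinite(depth)   # NaN marks pixels with no depth estimate
        self.acc[valid] += weight * depth[valid]
        self.wgt[valid] += weight

    def fused(self):
        out = np.full(self.acc.shape, np.nan)
        seen = self.wgt > 0
        out[seen] = self.acc[seen] / self.wgt[seen]
        return out
```

Because only the accumulators persist, memory stays constant no matter how many frames are integrated, in the spirit of the single global structure described above.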
  • Publication number: 20170278223
    Abstract: A method for adjusting an image using message passing comprises associating each pixel of an image with a node of a graph and one or more cliques of nodes, determining for a node of the graph a respective set of possible pixel labels for which a unary potential is known, computing for that node a unary potential of a possible pixel label for which the unary potential is unknown, adjusting a clique potential associated with each clique to which that node belongs based on the unary potentials, and adjusting, based on the adjusted clique potential associated with each clique to which that node belongs, at least one of the messages between that node and the other nodes of each clique. Once a convergence criterion is met, an adjusted image is produced having pixel labels determined from the adjusted messages.
    Type: Application
    Filed: March 10, 2017
    Publication date: September 28, 2017
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Christopher ZACH
  • Publication number: 20160275686
    Abstract: A method for use in estimating a pose of an imaged object comprises identifying candidate elements of an atlas that correspond to pixels in an image of the object, forming pairs of candidate elements, and comparing the distance between the members of each pair with the distance between the corresponding pixels.
    Type: Application
    Filed: March 18, 2016
    Publication date: September 22, 2016
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Christopher ZACH
  • Publication number: 20160125258
    Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
    Type: Application
    Filed: January 11, 2016
    Publication date: May 5, 2016
    Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
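
The abstract above matches contour points along scanlines using metrics such as normal compatibility and distance to the contour centroid. A minimal sketch of one such combined metric (the weighting and exact cost form here are assumptions, not the patented formulation):

```python
import numpy as np

def match_scanline(left, right, alpha=0.1):
    """Match contour points on one scanline by a combined metric: normal
    compatibility plus a weighted difference of distance-to-centroid.

    left, right: lists of (x, normal, dist_to_centroid) per contour point,
    where normal is a unit 2-D vector. Returns, for each left point, the
    index of the cheapest right point.
    """
    matches = []
    for _, nl, dl in left:
        costs = [(1.0 - float(np.dot(nl, nr))) + alpha * abs(dl - dr)
                 for _, nr, dr in right]
        matches.append(int(np.argmin(costs)))
    return matches
```

Opposing normals (a point on the left edge of a silhouette versus the right edge) incur maximal cost, which is what keeps the two sides of a contour from being matched to each other.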
  • Patent number: 9269018
    Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: February 23, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
  • Publication number: 20150279083
    Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
    Type: Application
    Filed: March 26, 2014
    Publication date: October 1, 2015
    Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
  • Publication number: 20150199588
    Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
    Type: Application
    Filed: January 14, 2014
    Publication date: July 16, 2015
    Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
  • Publication number: 20140241617
    Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene, with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
    Type: Application
    Filed: February 22, 2013
    Publication date: August 28, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
  • Publication number: 20140241612
    Abstract: Real-time stereo matching is described, for example, to find depths of objects in an environment from an image capture device capturing a stream of stereo images of the objects. For example, the depths may be used to control augmented reality, robotics, natural user interface technology, gaming and other applications. Streams of stereo images, or single stereo images, obtained with or without patterns of illumination projected onto the environment are processed using a parallel-processing unit to obtain depth maps. In various embodiments a parallel-processing unit propagates values related to depth in rows or columns of a disparity map in parallel. In examples, the values may be propagated according to a measure of similarity between two images of a stereo pair; propagation may be temporal between disparity maps of frames of a stream of stereo images and may be spatial within a left or right disparity map.
    Type: Application
    Filed: February 23, 2013
    Publication date: August 28, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Christoph Rhemann, Carsten Curt Eckard Rother, Christopher Zach, Shahram Izadi, Adam Garnet Kirk, Oliver Whyte, Michael Bleyer
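
The propagation described above, spreading good disparity hypotheses along rows or columns, can be sketched serially for a single scanline (the patent runs this in parallel on a GPU; the cost table and sweep direction here are illustrative assumptions):

```python
import numpy as np

def propagate_left_to_right(cost, disp):
    """One left-to-right propagation sweep over a scanline: each pixel adopts
    its left neighbour's disparity whenever that hypothesis matches better.

    cost: (W, D) matching cost per pixel and disparity hypothesis
    disp: (W,) current integer disparity hypotheses
    """
    disp = disp.copy()
    for x in range(1, len(disp)):
        if cost[x, disp[x - 1]] < cost[x, disp[x]]:
            disp[x] = disp[x - 1]
    return disp
```

A single good hypothesis can sweep across a whole row this way, which is why a few alternating left/right (and up/down) passes are typically enough in PatchMatch-style stereo.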