Patents by Inventor Christopher Zach
Christopher Zach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11710309
Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
Type: Grant
Filed: February 13, 2018
Date of Patent: July 25, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
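The relocalization idea above, a learned mapping from image elements to scene coordinates followed by pose inference, can be sketched minimally: given points in the camera frame and their predicted positions in the world frame, a rigid pose follows in closed form from the Kabsch algorithm. This is an illustrative sketch under a noise-free assumption, not the patented inference engine; in practice the predicted correspondences are noisy, and a robust hypothesize-and-verify loop would wrap a solver like this.

```python
import numpy as np

def estimate_pose(cam_pts, scene_pts):
    """Closed-form rigid pose (R, t) with scene_pts ~ R @ cam_pts + t,
    via the Kabsch algorithm on centred point sets."""
    mu_c = cam_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    H = (cam_pts - mu_c).T @ (scene_pts - mu_s)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_c
    return R, t
```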
-
Patent number: 10769744
Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantically segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution; the number of layers in the first neural network being smaller than the number of layers in the second neural network.
Type: Grant
Filed: October 31, 2018
Date of Patent: September 8, 2020
Assignee: Kabushiki Kaisha Toshiba
Inventors: Rudra Prasad Poudel Karmatha, Ujwal Bonde, Stephan Liwicki, Christopher Zach
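The separable convolution modules above are what keep both branches cheap: a depthwise separable convolution replaces one k×k convolution over all channel pairs with a per-channel k×k depthwise pass followed by a 1×1 pointwise pass. A back-of-the-envelope parameter count (bias terms ignored; general arithmetic, not code from the patent) shows the saving:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution over all channel pairs."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weights in a depthwise k x k pass plus a 1 x 1 pointwise pass."""
    return k * k * c_in + c_in * c_out
```

For a 3×3 layer with 64 input and 128 output channels this is 73,728 weights versus 8,768, roughly an 8× reduction.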
-
Publication number: 20200134772
Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantically segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution; the number of layers in the first neural network being smaller than the number of layers in the second neural network.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Applicant: Kabushiki Kaisha Toshiba
Inventors: Rudra Prasad Poudel Karmatha, Ujwal Bonde, Stephan Liwicki, Christopher Zach
-
Patent number: 10460471
Abstract: A camera pose estimation method determines the translation and rotation between a first camera pose and a second camera pose. Features are extracted from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image. Features are matched between the first image and the second image. The depth ratio of matched features is determined from the scale information. n matched features are selected, where at least one of the matched features is selected with both the depth ratio and location. The translation and rotation are calculated between the first camera pose and the second camera pose using the selected matched features with depth ratio derived from the scale information.
Type: Grant
Filed: July 18, 2017
Date of Patent: October 29, 2019
Assignee: Kabushiki Kaisha Toshiba
Inventors: Stephan Liwicki, Christopher Zach
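The depth ratio step above can be made concrete with a pinhole-camera sketch: a scale-covariant detector (SIFT-like) fires at an image scale inversely proportional to the feature's depth, so the ratio of depths falls out of the two detected scales. The function name and the idealised model are assumptions of this illustration, not the patent's exact formulation.

```python
def depth_ratio_from_scales(scale1, scale2):
    """Ratio d1/d2 of a matched feature's depth in view 1 vs view 2.

    Under a pinhole model a feature of physical size S at depth d is
    detected at image scale s = f * S / d, so d1 / d2 = s2 / s1.
    """
    return scale2 / scale1
```

For example, with focal length 500 and a feature of physical size 0.2 seen at depths 4 and 2, the detected scales are 25 and 50, giving a depth ratio of 2.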
-
Patent number: 10397479
Abstract: A method of compensating for camera motion during capture of an image in a rolling shutter camera, the method comprising: receiving an image of a scene captured by a camera with a rolling shutter; extracting line segments in said image; estimating the movement of the camera during the capturing of the image from the received image; and producing an image compensated for the movement during capture of the image, wherein the scene is approximated by a horizontal plane and two vertical planes that intersect at a line at infinity and estimating the movement of the camera during the capture of the image comprises assuming that the extracted line segments are vertical and lie on the vertical planes.
Type: Grant
Filed: February 13, 2018
Date of Patent: August 27, 2019
Assignee: Kabushiki Kaisha Toshiba
Inventors: Pulak Purkait, Christopher Zach
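A rolling shutter exposes each image row at a slightly different time, so camera motion during readout distorts the frame row by row; compensation warps each row according to the motion at its capture time. Below is a minimal sketch assuming a constant in-plane rotation rate. The patent's actual model (vertical scene planes and vertical line segments) is richer, and all names here are illustrative assumptions.

```python
import math

def row_capture_time(row, n_rows, readout_time):
    """Time offset at which a rolling shutter exposes a given row."""
    return readout_time * row / n_rows

def compensate_point(x, y, n_rows, readout_time, omega_z, cx, cy):
    """Undo an in-plane rotation of omega_z rad/s about the image
    centre (cx, cy), evaluated at the capture time of the point's row."""
    a = -omega_z * row_capture_time(y, n_rows, readout_time)
    c, s = math.cos(a), math.sin(a)
    dx, dy = x - cx, y - cy
    return cx + c * dx - s * dy, cy + s * dx + c * dy
```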
-
Publication number: 20190068884
Abstract: A method of compensating for camera motion during capture of an image in a rolling shutter camera, the method comprising: receiving an image of a scene captured by a camera with a rolling shutter; extracting line segments in said image; estimating the movement of the camera during the capturing of the image from the received image; and producing an image compensated for the movement during capture of the image, wherein the scene is approximated by a horizontal plane and two vertical planes that intersect at a line at infinity and estimating the movement of the camera during the capture of the image comprises assuming that the extracted line segments are vertical and lie on the vertical planes.
Type: Application
Filed: February 13, 2018
Publication date: February 28, 2019
Applicant: Kabushiki Kaisha Toshiba
Inventors: Pulak Purkait, Christopher Zach
-
Publication number: 20190026916
Abstract: A camera pose estimation method for determining the translation and rotation between a first camera pose and a second camera pose, the method comprising: extracting features from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image; matching features between the first image and the second image to produce matched features; determining the depth ratio of matched features from the scale information, wherein the depth ratio is the ratio of the depth of a matched feature from the first position to the depth of the matched feature from the second position; selecting n matched features, where at least one of the matched features is selected with both the depth ratio and location; and calculating the translation and rotation between the first camera pose and the second camera pose.
Type: Application
Filed: July 18, 2017
Publication date: January 24, 2019
Applicant: Kabushiki Kaisha Toshiba
Inventors: Stephan Liwicki, Christopher Zach
-
Publication number: 20180285697
Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
Type: Application
Filed: February 13, 2018
Publication date: October 4, 2018
Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
-
Patent number: 10078886
Abstract: A method for adjusting an image using message passing comprises associating each pixel of an image with a node of a graph and one or more cliques of nodes, determining for a node of the graph a respective set of possible pixel labels for which a unary potential is known, computing for that node a unary potential of a possible pixel label for which the unary potential is unknown, adjusting a clique potential associated with each clique to which that node belongs based on the unary potentials, and adjusting, based on the adjusted clique potential associated with each clique to which that node belongs, at least one of the messages between that node and the other nodes of each clique. Once a convergence criterion is met, an adjusted image is produced having pixel labels determined from the adjusted messages.
Type: Grant
Filed: March 10, 2017
Date of Patent: September 18, 2018
Assignee: Kabushiki Kaisha Toshiba
Inventor: Christopher Zach
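A minimal instance of the message-passing scheme above: min-sum messages on a 1-D chain of pixels with known unary costs and a Potts smoothness term. The patent additionally handles general cliques and unary potentials that must first be computed; the chain restriction and all names here are assumptions of this sketch.

```python
import numpy as np

def chain_map_labels(unary, w):
    """Min-sum message passing (Viterbi) on a 1-D chain of pixels with
    a Potts cost w for neighbouring pixels whose labels differ.
    unary[i, l] is the cost of giving pixel i the label l."""
    n, k = unary.shape
    msg = np.zeros((n, k))  # msg[i, l]: best cost of pixels 0..i-1 given label l at i
    for i in range(1, n):
        cost = unary[i - 1] + msg[i - 1]
        msg[i] = np.minimum(cost, cost.min() + w)
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(unary[-1] + msg[-1]))
    for i in range(n - 2, -1, -1):  # backtrack the minimising labels
        cost = unary[i] + msg[i] + w
        cost[labels[i + 1]] -= w    # no penalty for agreeing with the right neighbour
        labels[i] = int(np.argmin(cost))
    return labels
```

With weak smoothing the unary evidence wins; with strong smoothing neighbouring pixels are forced to agree.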
-
Patent number: 9940553
Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
Type: Grant
Filed: February 22, 2013
Date of Patent: April 10, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
-
Patent number: 9818195
Abstract: A method for use in estimating a pose of an imaged object comprises identifying candidate elements of an atlas that correspond to pixels in an image of the object, forming pairs of candidate elements, and comparing the distance between the members of each pair with the distance between the corresponding pixels.
Type: Grant
Filed: March 18, 2016
Date of Patent: November 14, 2017
Assignee: Kabushiki Kaisha Toshiba
Inventor: Christopher Zach
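The pairwise comparison above amounts to a geometric consistency test: if two pixel-to-atlas correspondences are both correct under a rigid pose, the distance between the two pixels should roughly agree with the distance between their atlas elements. A sketch, with the simplifying assumption that both point sets live in the same 2-D frame:

```python
import math

def pair_consistent(px1, px2, el1, el2, tol):
    """Two candidate pixel-to-atlas correspondences (px1 -> el1,
    px2 -> el2) are mutually consistent when the inter-pixel distance
    roughly agrees with the inter-element distance."""
    return abs(math.dist(px1, px2) - math.dist(el1, el2)) <= tol
```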
-
Patent number: 9779508
Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
Type: Grant
Filed: March 26, 2014
Date of Patent: October 3, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
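The fusion step that lets errors "average out over several frames" can be sketched as a per-voxel weighted running average, the update commonly used in volumetric (TSDF-style) fusion. The signature is an assumption of this sketch, not the patented pipeline:

```python
def fuse(value, weight, new_value, new_weight=1.0):
    """Weighted running-average update for one voxel: fold a new
    per-frame observation into the accumulated value and weight."""
    w = weight + new_weight
    return (value * weight + new_value * new_weight) / w, w
```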
-
Publication number: 20170278223
Abstract: A method for adjusting an image using message passing comprises associating each pixel of an image with a node of a graph and one or more cliques of nodes, determining for a node of the graph a respective set of possible pixel labels for which a unary potential is known, computing for that node a unary potential of a possible pixel label for which the unary potential is unknown, adjusting a clique potential associated with each clique to which that node belongs based on the unary potentials, and adjusting, based on the adjusted clique potential associated with each clique to which that node belongs, at least one of the messages between that node and the other nodes of each clique. Once a convergence criterion is met, an adjusted image is produced having pixel labels determined from the adjusted messages.
Type: Application
Filed: March 10, 2017
Publication date: September 28, 2017
Applicant: Kabushiki Kaisha Toshiba
Inventor: Christopher Zach
-
Publication number: 20160275686
Abstract: A method for use in estimating a pose of an imaged object comprises identifying candidate elements of an atlas that correspond to pixels in an image of the object, forming pairs of candidate elements, and comparing the distance between the members of each pair with the distance between the corresponding pixels.
Type: Application
Filed: March 18, 2016
Publication date: September 22, 2016
Applicant: Kabushiki Kaisha Toshiba
Inventor: Christopher Zach
-
Publication number: 20160125258
Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
Type: Application
Filed: January 11, 2016
Publication date: May 5, 2016
Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
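The contour point comparison metrics named above can be combined into a single matching cost: one term for normal compatibility plus one for the difference in each point's distance to its contour's centroid. The weighting scheme and names are assumptions of this sketch:

```python
import numpy as np

def contour_match_cost(normal_l, normal_r, dist_l, dist_r, alpha=1.0):
    """Cost of matching two contour points on the same scanline:
    penalise incompatible unit normals plus the difference in each
    point's distance to its contour's centroid."""
    normal_term = 1.0 - float(np.dot(normal_l, normal_r))  # 0 when aligned
    return normal_term + alpha * abs(dist_l - dist_r)
```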
-
Patent number: 9269018
Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
Type: Grant
Filed: January 14, 2014
Date of Patent: February 23, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
-
Publication number: 20150279083
Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
Type: Application
Filed: March 26, 2014
Publication date: October 1, 2015
Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
-
Publication number: 20150199588
Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images captured at substantially the same time by a stereo camera of at least part of an object in a scene. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normal of the contours and/or a distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
Type: Application
Filed: January 14, 2014
Publication date: July 16, 2015
Inventors: David Kim, Shahram Izadi, Christoph Rhemann, Christopher Zach
-
Publication number: 20140241617
Abstract: Camera or object pose calculation is described, for example, to relocalize a mobile camera (such as on a smart phone) in a known environment or to compute the pose of an object moving relative to a fixed camera. The pose information is useful for robotics, augmented reality, navigation and other applications. In various embodiments where camera pose is calculated, a trained machine learning system associates image elements from an image of a scene with points in the scene's 3D world coordinate frame. In examples where the camera is fixed and the pose of an object is to be calculated, the trained machine learning system associates image elements from an image of the object with points in an object coordinate frame. In examples, the image elements may be noisy and incomplete and a pose inference engine calculates an accurate estimate of the pose.
Type: Application
Filed: February 22, 2013
Publication date: August 28, 2014
Applicant: Microsoft Corporation
Inventors: Jamie Daniel Joseph Shotton, Benjamin Michael Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, Andrew William Fitzgibbon
-
Publication number: 20140241612
Abstract: Real-time stereo matching is described, for example, to find depths of objects in an environment from an image capture device capturing a stream of stereo images of the objects. For example, the depths may be used to control augmented reality, robotics, natural user interface technology, gaming and other applications. Streams of stereo images, or single stereo images, obtained with or without patterns of illumination projected onto the environment are processed using a parallel-processing unit to obtain depth maps. In various embodiments a parallel-processing unit propagates values related to depth in rows or columns of a disparity map in parallel. In examples, the values may be propagated according to a measure of similarity between two images of a stereo pair; propagation may be temporal between disparity maps of frames of a stream of stereo images and may be spatial within a left or right disparity map.
Type: Application
Filed: February 23, 2013
Publication date: August 28, 2014
Applicant: Microsoft Corporation
Inventors: Christoph Rhemann, Carsten Curt Eckard Rother, Christopher Zach, Shahram Izadi, Adam Garnet Kirk, Oliver Whyte, Michael Bleyer
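The spatial propagation idea above, restricted to one scanline for clarity: each pixel adopts its left neighbour's disparity whenever that lowers its matching cost, and a parallel-processing unit would run all rows (or columns) of the disparity map at once. A sequential single-row sketch with assumed names:

```python
import numpy as np

def propagate_row(disparity, cost):
    """Left-to-right propagation along one scanline: each pixel adopts
    its left neighbour's disparity when that lowers its matching cost.
    cost[x, d] is the dissimilarity of pixel x at disparity d."""
    d = disparity.copy()
    for x in range(1, len(d)):
        if cost[x, d[x - 1]] < cost[x, d[x]]:
            d[x] = d[x - 1]
    return d
```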