Patents by Inventor Sudipta Narayan Sinha
Sudipta Narayan Sinha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200311396
Abstract: Examples are disclosed that relate to representing recorded hand motion. One example provides a computing device comprising instructions executable by a logic subsystem to receive video data capturing hand motion relative to an object, determine a first pose of the object, and associate a first coordinate system with the object based on the first pose. The instructions are further executable to determine a representation of the hand motion in the first coordinate system, the representation having a time-varying pose relative to the first pose of the object, and configure the representation for display relative to a second instance of the object having a second pose in a second coordinate system, with a time-varying pose relative to the second pose that is spatially consistent with the time-varying pose relative to the first pose.
Type: Application
Filed: March 25, 2019
Publication date: October 1, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Marc Andre Leon POLLEFEYS, Sudipta Narayan SINHA, Harpreet Singh SAWHNEY, Bugra TEKIN, Federica BOGO
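The core idea of this application, re-anchoring a recorded pose from one object instance to another, can be sketched with rigid 4x4 transforms. This is not the patented implementation, only a minimal illustration of the coordinate-system bookkeeping the abstract describes; all function names and the world-frame convention are assumptions.

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def reanchor_hand_pose(T_hand_world, T_obj1_world, T_obj2_world):
    """Express a recorded hand pose in the first object's coordinate system,
    then replay it relative to a second object instance so the motion stays
    spatially consistent with that object."""
    # Hand pose relative to the first object instance.
    T_hand_in_obj = np.linalg.inv(T_obj1_world) @ T_hand_world
    # Same relative pose, re-anchored to the second instance.
    return T_obj2_world @ T_hand_in_obj
```

For example, if the second object instance is a translated copy of the first, the replayed hand pose is translated by the same amount, so the hand motion appears in the same position relative to the object.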
-
Publication number: 20200126256
Abstract: A method for estimating a camera pose includes recognizing a three-dimensional (3D) map representing a physical environment, the 3D map including 3D map features defined as 3D points. An obfuscated image representation is received, the representation derived from an original unobfuscated image of the physical environment captured by a camera. The representation includes a plurality of obfuscated features, each including (i) a two-dimensional (2D) line that passes through a 2D point in the original unobfuscated image at which an image feature was detected, and (ii) a feature descriptor that describes the image feature associated with the 2D point that the 2D line of the obfuscated feature passes through. Correspondences are determined between the obfuscated features and the 3D map features of the 3D map of the physical environment. Based on the determined correspondences, a six degree of freedom pose of the camera in the physical environment is estimated.
Type: Application
Filed: October 23, 2018
Publication date: April 23, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan SINHA, Marc Andre Leon POLLEFEYS, Sing Bing KANG, Pablo Alejandro SPECIALE
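The obfuscation step described in this abstract — replacing each detected 2D keypoint with a line through it while keeping the descriptor — can be sketched as follows. This is a minimal illustration under assumptions (random line directions, implicit line form); the patent itself does not prescribe this code.

```python
import numpy as np

def obfuscate_features(keypoints, descriptors, rng=None):
    """Replace each 2D keypoint (x, y) with a random 2D line through it.
    Each line is returned in implicit form (a, b, c) with a*x + b*y + c = 0,
    so the original point location is hidden along the line, while the paired
    descriptor is kept for matching against 3D map features."""
    rng = np.random.default_rng() if rng is None else rng
    obfuscated = []
    for (x, y), desc in zip(keypoints, descriptors):
        theta = rng.uniform(0.0, np.pi)          # random line direction
        a, b = np.sin(theta), -np.cos(theta)     # unit normal to that direction
        c = -(a * x + b * y)                     # line passes through (x, y)
        obfuscated.append(((a, b, c), desc))
    return obfuscated
```

Pose estimation then matches descriptors as usual, but each 2D-point-to-3D-point constraint becomes a 2D-line-to-3D-point constraint.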
-
Patent number: 10602056
Abstract: Examples of the present disclosure relate to generating optimal scanning trajectories for 3D scenes. In an example, a moveable camera may gather information about a scene. During an initial pass, an initial trajectory may be used to gather an initial dataset. In order to generate an optimal trajectory, a reconstruction of the scene may be generated based on the initial dataset. Surface points and a camera position graph may be generated based on the reconstruction. A subgradient may be determined, wherein the subgradient provides an additive approximation for the marginal reward associated with each camera position node in the camera position graph. The subgradient may be used to generate an optimal trajectory based on the marginal reward of each camera position node. The optimal trajectory may then be used to gather additional data, which may be iteratively analyzed and used to further refine and optimize subsequent trajectories.
Type: Grant
Filed: May 12, 2017
Date of Patent: March 24, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Mike Roberts, Debadeepta Dey, Sudipta Narayan Sinha, Shital Shah, Ashish Kapoor, Neel Suresh Joshi
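The marginal-reward selection this abstract describes is, at its core, a greedy coverage step: repeatedly pick the camera node whose newly observed surface points add the most. A minimal sketch of that step, assuming `coverage` maps each node to the set of surface points it can see (the patent's actual subgradient machinery and trajectory constraints are omitted):

```python
def greedy_trajectory(camera_nodes, coverage, budget):
    """Greedily pick camera positions by marginal reward: at each step,
    choose the node whose newly covered surface points add the most.
    `coverage` maps a node id to the set of surface points it observes."""
    covered, chosen = set(), []
    for _ in range(budget):
        best, best_gain = None, 0
        for node in camera_nodes:
            if node in chosen:
                continue
            gain = len(coverage[node] - covered)  # marginal reward of node
            if gain > best_gain:
                best, best_gain = node, gain
        if best is None:  # no remaining node adds any coverage
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

Because coverage is submodular (each point only needs to be seen once), this greedy rule enjoys the usual constant-factor approximation guarantee, which is what motivates additive marginal-reward approximations in the first place.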
-
Patent number: 10535156
Abstract: Examples of the present disclosure describe systems and methods for scene reconstruction from bursts of image data. In an example, an image capture device may gather information from multiple positions within the scene. At each position, a burst of image data may be captured, such that other images within the burst may be used to identify common image features, anchor points, and geometry, in order to generate a scene reconstruction as observed from the position. Thus, as a result of capturing bursts from multiple positions in a scene, multiple burst reconstructions may be generated. Each burst may be oriented within the scene by identifying a key frame for each burst and using common image features and anchor points among the key frames to determine a camera position for each key frame. The burst reconstructions may then be combined into a unified reconstruction, thereby generating a high-quality reconstruction of the scene.
Type: Grant
Filed: April 4, 2017
Date of Patent: January 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Sudipta Narayan Sinha, Minh Phuoc Vo
-
Publication number: 20200005486
Abstract: Computing devices and methods for estimating a pose of a user computing device are provided. In one example a 3D map comprising a plurality of 3D points representing a physical environment is obtained. Each 3D point is transformed into a 3D line that passes through the point to generate a 3D line cloud. A query image of the environment captured by a user computing device is received, the query image comprising query features that correspond to the environment. Using the 3D line cloud and the query features, a pose of the user computing device with respect to the environment is estimated.
Type: Application
Filed: July 2, 2018
Publication date: January 2, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan SINHA, Pablo Alejandro SPECIALE, Sing Bing KANG, Marc Andre Leon POLLEFEYS
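The map-side lifting step — turning each 3D point into a 3D line through it — can be sketched like this. The random direction and the random offset along the line are assumptions for illustration; the abstract only specifies that each point becomes a line passing through it.

```python
import numpy as np

def lift_to_line_cloud(points, rng=None):
    """Lift each 3D map point to a 3D line through it with a random direction.
    A line is stored as (point_on_line, unit_direction); the stored point is
    offset along the line so the original 3D point is not directly recoverable
    from the line cloud."""
    rng = np.random.default_rng() if rng is None else rng
    lines = []
    for p in points:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                        # random unit direction
        offset = rng.uniform(-1.0, 1.0)
        q = np.asarray(p, dtype=float) + offset * d   # slide along the line
        lines.append((q, d))
    return lines
```

Pose estimation against such a map then constrains each matched query feature to lie on a line rather than at a point, which still pins down a six degree of freedom pose given enough correspondences.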
-
Publication number: 20190362514
Abstract: Stereo image reconstruction can be achieved by fusing a plurality of proposal cost volumes computed from a pair of stereo images, using a predictive model operating on pixelwise feature vectors that include disparity and cost values sparsely sampled from the proposal cost volumes to compute disparity estimates for the pixels within the image.
Type: Application
Filed: May 25, 2018
Publication date: November 28, 2019
Inventors: Sudipta Narayan Sinha, Marc André Léon Pollefeys, Johannes Lutz Schönberger
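One plausible reading of the sparse sampling step is: for each pixel, take each proposal's disparity and collect (disparity, cost) pairs from a small window around it along the disparity axis of that proposal's cost volume. The window radius and layout below are assumptions for illustration, not the patented design.

```python
import numpy as np

def pixel_feature_vector(cost_volumes, proposals, y, x, radius=2):
    """Build a per-pixel feature vector by sparsely sampling each proposal
    cost volume: for every proposal disparity, collect (disparity, cost)
    pairs in a small window around it along the disparity axis.
    `cost_volumes[k]` has shape (H, W, D); `proposals[k]` has shape (H, W)."""
    features = []
    for volume, disp in zip(cost_volumes, proposals):
        d0 = int(round(disp[y, x]))
        max_d = volume.shape[2] - 1
        for d in range(d0 - radius, d0 + radius + 1):
            dc = min(max(d, 0), max_d)                 # clamp to valid range
            features.append(float(dc))                 # sampled disparity
            features.append(float(volume[y, x, dc]))   # its matching cost
    return np.array(features)
```

A predictive model (e.g. a small per-pixel regressor) would then map this fixed-length vector to a fused disparity estimate.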
-
Publication number: 20180367728
Abstract: Examples of the present disclosure relate to generating optimal scanning trajectories for 3D scenes. In an example, a moveable camera may gather information about a scene. During an initial pass, an initial trajectory may be used to gather an initial dataset. In order to generate an optimal trajectory, a reconstruction of the scene may be generated based on the initial dataset. Surface points and a camera position graph may be generated based on the reconstruction. A subgradient may be determined, wherein the subgradient provides an additive approximation for the marginal reward associated with each camera position node in the camera position graph. The subgradient may be used to generate an optimal trajectory based on the marginal reward of each camera position node. The optimal trajectory may then be used to gather additional data, which may be iteratively analyzed and used to further refine and optimize subsequent trajectories.
Type: Application
Filed: May 12, 2017
Publication date: December 20, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mike Roberts, Debadeepta Dey, Sudipta Narayan Sinha, Shital Shah, Ashish Kapoor, Neel Suresh Joshi
-
Publication number: 20180225836
Abstract: Examples of the present disclosure describe systems and methods for scene reconstruction from bursts of image data. In an example, an image capture device may gather information from multiple positions within the scene. At each position, a burst of image data may be captured, such that other images within the burst may be used to identify common image features, anchor points, and geometry, in order to generate a scene reconstruction as observed from the position. Thus, as a result of capturing bursts from multiple positions in a scene, multiple burst reconstructions may be generated. Each burst may be oriented within the scene by identifying a key frame for each burst and using common image features and anchor points among the key frames to determine a camera position for each key frame. The burst reconstructions may then be combined into a unified reconstruction, thereby generating a high-quality reconstruction of the scene.
Type: Application
Filed: April 4, 2017
Publication date: August 9, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Sudipta Narayan Sinha, Minh Phuoc Vo
-
Publication number: 20180220072
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
Type: Application
Filed: March 28, 2018
Publication date: August 2, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Blaise Aguera y ARCAS, Markus UNGER, Donald A. BARNETT, David Maxwell GEDYE, Sudipta Narayan SINHA, Eric Joel STOLLNITZ, Johannes KOPF
-
Patent number: 9973697
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
Type: Grant
Filed: May 24, 2017
Date of Patent: May 15, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Blaise Aguera y Arcas, Markus Unger, Donald A. Barnett, David Maxwell Gedye, Sudipta Narayan Sinha, Eric Joel Stollnitz, Johannes Kopf
-
Publication number: 20170257565
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
Type: Application
Filed: May 24, 2017
Publication date: September 7, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Blaise Aguera y ARCAS, Markus UNGER, Donald A. BARNETT, David Maxwell GEDYE, Sudipta Narayan SINHA, Eric Joel STOLLNITZ, Johannes KOPF
-
Patent number: 9712746
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
Type: Grant
Filed: March 14, 2013
Date of Patent: July 18, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Blaise Aguera y Arcas, Markus Unger, Donald A. Barnett, David Maxwell Gedye, Sudipta Narayan Sinha, Eric Joel Stollnitz, Johannes Kopf
-
Patent number: 9305371
Abstract: Among other things, one or more techniques and/or systems are provided for defining transition zones for navigating a visualization. The visualization may be constructed from geometry of a scene and one or more texture images depicting the scene from various viewpoints. A transition zone may correspond to portions of the visualization that do not have a one-to-one correspondence with a single texture image, but are generated from textured geometry (e.g., a projection of texture imagery onto the geometry). Because a translated view may have visual error (e.g., a portion of the translated view is not correctly represented by the textured geometry), one or more transition zones, specifying translated view experiences (e.g., unrestricted view navigation, restricted view navigation, etc.), may be defined. For example, a snapback force may be applied when a current view corresponds to a transition zone having a relatively higher error.
Type: Grant
Filed: March 14, 2013
Date of Patent: April 5, 2016
Assignee: Uber Technologies, Inc.
Inventors: Blaise Aguera y Arcas, Markus Unger, Donald A. Barnett, Sudipta Narayan Sinha, Eric Joel Stollnitz, Johannes Peter Kopf, Timo Pekka Pylvaenaeinen, Christopher Stephen Messer
-
Patent number: 9111349
Abstract: The claimed subject matter provides for systems and/or methods for identification of instances of an object of interest in 2D images by creating a database of 3D curve models of each desired instance and comparing an image of an object of interest against such 3D curve models of instances. The present application describes identifying and verifying the make and model of a car from possibly a single image, after the models have been populated with training data of test images of many makes and models of cars. In one embodiment, an identification system may be constructed by generating a 3D curve model by back-projecting edge points onto a visual hull reconstruction from silhouettes of an instance. The systems and methods employ chamfer distance and orientation distance, which provide reasonable verification performance, as well as an appearance model for the taillights of the car to increase the robustness of the system.
Type: Grant
Filed: December 16, 2011
Date of Patent: August 18, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Richard Stephan Szeliski, Edward Hsiao, Sudipta Narayan Sinha, Krishnan Ramnath, Charles Lawrence Zitnick, III, Simon John Baker
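The chamfer distance mentioned above is a standard edge-matching measure: for each edge pixel in one map, take the distance to the nearest edge pixel in the other, and average. A minimal brute-force sketch (the patented system also uses orientation distance and an appearance model, which are omitted here):

```python
import numpy as np

def chamfer_distance(edges_query, edges_model):
    """Symmetrised chamfer distance between two binary edge maps:
    for each edge pixel in one map, take the Euclidean distance to the
    nearest edge pixel in the other, then average over both directions."""
    pts_q = np.argwhere(edges_query)
    pts_m = np.argwhere(edges_model)
    if len(pts_q) == 0 or len(pts_m) == 0:
        return np.inf

    def directed(a, b):
        # Mean nearest-neighbour distance from each point in a to the set b.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()

    return 0.5 * (directed(pts_q, pts_m) + directed(pts_m, pts_q))
```

In practice a distance transform of one edge map replaces the brute-force nearest-neighbour search, reducing the cost to a single lookup per edge pixel.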
-
Patent number: 8933925
Abstract: Methods, systems, and computer-readable media for reconstructing a three-dimensional scene from a collection of two-dimensional images are provided. A computerized reconstruction system executes computer vision algorithms on the collection of two-dimensional images to identify candidate planes that are used to model visual characteristics of the environment depicted in the two-dimensional images. The computer vision algorithms may minimize an energy function that represents the relationships and similarities among features of the two-dimensional images to assign pixels of the two-dimensional images to planes in the three-dimensional scene. The three-dimensional scene is navigable and depicts viewpoint transitions between multiple two-dimensional images.
Type: Grant
Filed: June 15, 2009
Date of Patent: January 13, 2015
Assignee: Microsoft Corporation
Inventors: Sudipta Narayan Sinha, Drew Edward Steedly, Richard Stephen Szeliski
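The pixel-to-plane assignment can be seen as an energy minimisation over plane labels. A minimal sketch of just the unary (data) term follows — assigning each pixel to its cheapest candidate plane; the patented method minimises a fuller energy that also encodes relationships between neighbouring pixels (typically solved with graph cuts), which this winner-take-all step does not capture.

```python
import numpy as np

def assign_pixels_to_planes(data_costs):
    """Assign each pixel to the candidate plane with the lowest data cost.
    `data_costs` has shape (H, W, num_planes), where entry (y, x, k) is the
    photo-consistency cost of explaining pixel (y, x) with plane k. Returns
    the per-pixel plane labels and the total data energy of the labelling."""
    labels = np.argmin(data_costs, axis=2)        # best plane per pixel
    energy = np.take_along_axis(
        data_costs, labels[..., None], axis=2).sum()
    return labels, energy
```

Adding a pairwise smoothness penalty on differing neighbour labels turns this into the kind of energy the abstract refers to.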
-
Publication number: 20140267343
Abstract: Among other things, one or more techniques and/or systems are provided for defining transition zones for navigating a visualization. The visualization may be constructed from geometry of a scene and one or more texture images depicting the scene from various viewpoints. A transition zone may correspond to portions of the visualization that do not have a one-to-one correspondence with a single texture image, but are generated from textured geometry (e.g., a projection of texture imagery onto the geometry). Because a translated view may have visual error (e.g., a portion of the translated view is not correctly represented by the textured geometry), one or more transition zones, specifying translated view experiences (e.g., unrestricted view navigation, restricted view navigation, etc.), may be defined. For example, a snapback force may be applied when a current view corresponds to a transition zone having a relatively higher error.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: Microsoft Corporation
Inventors: Blaise Aguera y Arcas, Markus Unger, Donald A. Barnett, Sudipta Narayan Sinha, Eric Joel Stollnitz, Johannes Peter Kopf, Timo Pekka Pylvaenaeinen, Christopher Stephen Messer
-
Publication number: 20140267587
Abstract: One or more techniques and/or systems are provided for generating a panorama packet and/or for utilizing a panorama packet. That is, a panorama packet may be generated and/or consumed to provide an interactive panorama view experience of a scene depicted by one or more input images within the panorama packet (e.g., a user may explore the scene through multi-dimensional navigation of a panorama generated from the panorama packet). The panorama packet may comprise a set of input images that depict the scene from various viewpoints. The panorama packet may comprise a camera pose manifold that may define one or more perspectives of the scene that may be used to generate a current view of the scene. The panorama packet may comprise a coarse geometry corresponding to a multi-dimensional representation of a surface of the scene. An interactive panorama of the scene may be generated based upon the panorama packet.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Inventors: Blaise Aguera y Arcas, Markus Unger, Sudipta Narayan Sinha, Eric Joel Stollnitz, Matthew T. Uyttendaele, David Maxwell Gedye, Richard Stephen Szeliski, Johannes Peter Kopf, Donald A. Barnett
-
Publication number: 20140267588
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: Microsoft Corporation
Inventors: Blaise Aguera y Arcas, Markus Unger, Donald A. Barnett, David Maxwell Gedye, Sudipta Narayan Sinha, Eric Joel Stollnitz, Johannes Kopf
-
Publication number: 20140267600
Abstract: One or more techniques and/or systems are provided for generating a synth packet and/or for providing an interactive view experience of a scene utilizing the synth packet. In particular, the synth packet comprises a set of input images depicting a scene from various viewpoints, a local graph comprising navigational relationships between input images, a coarse geometry comprising a multi-dimensional representation of a surface of the scene, and/or a camera pose manifold specifying view perspectives of the scene. An interactive view experience of the scene may be provided using the synth packet, such that a user may seamlessly navigate the scene in multi-dimensional space based upon navigational relationship information specified within the local graph.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: Microsoft Corporation
Inventors: Blaise Aguera y Arcas, Markus Unger, Sudipta Narayan Sinha, Matthew T. Uyttendaele, Richard Stephen Szeliski
-
Patent number: 8837811
Abstract: Described is a linear structure from motion technique that is scalable, parallelizable, treats images equally, and is robust to outliers, without requiring intermediate bundle adjustment. Camera rotations for images are estimated using feature point correspondence and vanishing points matched across the images. The camera rotation data is fed into a linear system for structure and translation estimation that removes outliers and provides output data corresponding to structure from motion parameters. The data may be used in further optimization, e.g., with a final non-linear optimization stage referred to as bundle adjustment, to provide final refined structure from motion parameters.
Type: Grant
Filed: June 17, 2010
Date of Patent: September 16, 2014
Assignee: Microsoft Corporation
Inventors: Sudipta Narayan Sinha, Drew Edward Steedly, Richard S. Szeliski
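The key property exploited here is that once rotations are known, translation estimation becomes linear: each pairwise translation direction d between cameras i and j gives the constraint that c_j − c_i is parallel to d, i.e. [d]× (c_j − c_i) = 0, which is linear in the camera centres. A small least-squares sketch of that idea, with gauge and scale fixed by convention (this illustrates only the translation solve, not the patent's outlier removal or structure estimation):

```python
import numpy as np

def cross_matrix(v):
    """Skew-symmetric matrix so that cross_matrix(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def solve_translations(n_cams, pair_dirs):
    """Recover camera centres from pairwise translation directions.
    Each (i, j, d) in pair_dirs asserts c_j - c_i is parallel to unit vector
    d, giving the linear constraint [d]x (c_j - c_i) = 0. The gauge is fixed
    by pinning c_0 = 0 and the global scale by d . (c_j0 - c_i0) = 1 for the
    first pair."""
    rows, rhs = [], []
    for i, j, d in pair_dirs:
        C = cross_matrix(d)
        for r in range(3):
            row = np.zeros(3 * n_cams)
            row[3 * j:3 * j + 3] = C[r]
            row[3 * i:3 * i + 3] = -C[r]
            rows.append(row)
            rhs.append(0.0)
    for k in range(3):                      # gauge: c_0 = 0
        row = np.zeros(3 * n_cams)
        row[k] = 1.0
        rows.append(row)
        rhs.append(0.0)
    i0, j0, d0 = pair_dirs[0]               # scale: first baseline length 1
    row = np.zeros(3 * n_cams)
    row[3 * j0:3 * j0 + 3] = d0
    row[3 * i0:3 * i0 + 3] = -d0
    rows.append(row)
    rhs.append(1.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n_cams, 3)
```

Because every constraint is linear, the system scales to many cameras, can be solved in parallel, and treats all images symmetrically — the properties the abstract emphasises.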