Patents by Inventor Rafael Spring
Rafael Spring has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11915356
Abstract: A method is disclosed for controlling optional constraints on the processing of multi-dimensional scene data via a user interface (UI) in an image management device. The method first receives a first data set of a scene containing location information about a first location in an image, where the first data set has a first performance metric. It then activates a Constraint Manager holding a plurality of constraint processes, selects a first constraint process from that plurality, and receives a new data set for the first constraint process to apply to the first data set. Finally, it activates the first constraint process to incorporate the new data set and estimate a new location data set for the first location, where the new location data set has an improved performance metric compared to the first.
Type: Grant
Filed: January 28, 2022
Date of Patent: February 27, 2024
Inventor: Rafael Spring
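The constraint-manager flow in the abstract above can be sketched minimally as follows. All class, method, and variable names here are hypothetical illustrations, and the "constraint process" is reduced to a toy refinement step; this is not code from the patent.

```python
# Toy sketch of the constraint-manager flow: a data set with a performance
# metric, a manager holding constraint processes, and one process applied
# to produce a refined estimate with an improved metric.
from dataclasses import dataclass


@dataclass
class LocationDataSet:
    positions: list   # estimated 3D coordinates for scene locations
    error: float      # performance metric: lower is better


class ConstraintManager:
    """Holds named constraint processes and applies a selected one."""

    def __init__(self):
        self._processes = {}

    def register(self, name, process):
        self._processes[name] = process

    def apply(self, name, data_set, **new_data):
        # Run the selected constraint process on the first data set,
        # incorporating the newly received data.
        refined = self._processes[name](data_set, **new_data)
        if refined.error > data_set.error:
            raise ValueError("constraint should not worsen the metric")
        return refined


def distance_constraint(data_set, known_distance):
    # Stand-in refinement: a real process would adjust positions so the
    # known distance is honored; here we only model the improved metric.
    return LocationDataSet(data_set.positions, data_set.error * 0.5)


manager = ConstraintManager()
manager.register("distance", distance_constraint)
first = LocationDataSet([(0.0, 0.0, 1.2)], error=0.10)
refined = manager.apply("distance", first, known_distance=1.0)
print(refined.error < first.error)  # True
```

The manager acts only as a registry plus a guard that the refined estimate really does improve on the first performance metric, mirroring the improved-metric condition stated in the abstract.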
-
Publication number: 20220245882
Abstract: A method is disclosed for controlling optional constraints on the processing of multi-dimensional scene data via a user interface (UI) in an image management device. The method first receives a first data set of a scene containing location information about a first location in an image, where the first data set has a first performance metric. It then activates a Constraint Manager holding a plurality of constraint processes, selects a first constraint process from that plurality, and receives a new data set for the first constraint process to apply to the first data set. Finally, it activates the first constraint process to incorporate the new data set and estimate a new location data set for the first location, where the new location data set has an improved performance metric compared to the first.
Type: Application
Filed: January 28, 2022
Publication date: August 4, 2022
Inventor: Rafael Spring
-
Patent number: 10964108
Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The at least one piece of contextual information is embedded into the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is then displayed.
Type: Grant
Filed: June 12, 2020
Date of Patent: March 30, 2021
Assignee: DotProduct LLC
Inventors: Rafael Spring, Thomas Greaves
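The embedding step described above, placing a piece of contextual information (location plus pose) at the correct spot in a 3D model, can be sketched as a small data structure. The names and fields are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of embedding contextual notes into a 3D model so an
# interactive viewer can later display them in context.
from dataclasses import dataclass, field


@dataclass
class ContextNote:
    location: tuple   # capture location in model coordinates
    pose: tuple       # capture orientation (e.g. a quaternion)
    text: str         # the contextual payload, e.g. a field annotation


@dataclass
class Model3D:
    images: list                          # source 3D images (depth + color)
    notes: list = field(default_factory=list)  # embedded contextual info

    def embed(self, note):
        # Anchor the note at its capture location inside the model.
        self.notes.append(note)


model = Model3D(images=["frame_000", "frame_001"])
model.embed(ContextNote(location=(1.0, 0.5, 2.0),
                        pose=(0.0, 0.0, 0.0, 1.0),
                        text="corroded pipe joint"))
print(len(model.notes))  # 1
```

A real pipeline would resolve the note's device-reported location into model coordinates during rendering; the sketch assumes that resolution has already happened.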
-
Publication number: 20200312026
Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The at least one piece of contextual information is embedded into the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is then displayed.
Type: Application
Filed: June 12, 2020
Publication date: October 1, 2020
Applicant: DotProduct LLC
Inventors: Rafael Spring, Thomas Greaves
-
Publication number: 20200267371
Abstract: A system and method are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Application
Filed: May 5, 2020
Publication date: August 20, 2020
Applicant: DotProduct LLC
Inventors: Mark Klusza, Rafael Spring
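The reconstruction step named in the abstract, turning measured distances at projected dot locations into a 3D view, is commonly done by back-projecting each depth pixel through a pinhole camera model. The sketch below assumes that standard model; the intrinsic values (fx, fy, cx, cy) are illustrative, not from the patent.

```python
# Back-project depth samples (one per projected dot) into 3D points in
# camera coordinates using pinhole intrinsics.
def backproject(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Lift one pixel (u, v) with a measured distance to a 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


# Each dot yields a (pixel, measured distance) sample.
samples = [((320, 240), 1.5), ((400, 200), 1.8)]
cloud = [backproject(u, v, d) for (u, v), d in samples]
print(len(cloud))  # one 3D point per measured dot
```

Stitching multiple such clouds into one view, the abstract's final step, then amounts to transforming each cloud by its camera pose before merging.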
-
Patent number: 10699481
Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The at least one piece of contextual information is embedded into the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is then displayed.
Type: Grant
Filed: May 16, 2018
Date of Patent: June 30, 2020
Assignee: DotProduct LLC
Inventors: Rafael Spring, Thomas Greaves
-
Patent number: 10674135
Abstract: A system and method for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device. The method includes extracting gray values from the RGB-D image data; creating image pyramids from the gray values and the depth data; computing a scene fitness value using the image pyramids; predicting a camera pose and aligning the image with a first subset of selected keyframes; computing a new camera pose estimate and creating a keyframe from it; after the keyframe is created, selecting a second subset of keyframes different from the first and repeating the alignment step with each keyframe of the second subset; deciding whether new links to the keyframe are required in a keyframe pose graph; and linking the keyframes.
Type: Grant
Filed: April 16, 2014
Date of Patent: June 2, 2020
Assignee: DotProduct LLC
Inventors: Mark Klusza, Rafael Spring
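The keyframe and pose-graph bookkeeping in the abstract above can be sketched as a toy. Alignment is reduced to a stub, since the real method uses image pyramids and dense alignment; every name here is a hypothetical stand-in for a stage the abstract names, not the patented implementation.

```python
# Toy pose-graph sketch: predict a pose, align against a first subset of
# keyframes, create a new keyframe from the refined estimate, and link
# it into the keyframe pose graph.
class Keyframe:
    def __init__(self, kf_id, pose):
        self.id = kf_id
        self.pose = pose          # camera pose estimate for this frame


class PoseGraph:
    def __init__(self):
        self.keyframes = []
        self.links = set()        # pairs of linked keyframe ids

    def add(self, kf):
        self.keyframes.append(kf)

    def link(self, a, b):
        self.links.add((min(a, b), max(a, b)))


def align(frame_pose, keyframe):
    # Stub for pyramid-based dense alignment: nudge the predicted pose
    # toward the keyframe's pose.
    return tuple((f + k) / 2 for f, k in zip(frame_pose, keyframe.pose))


graph = PoseGraph()
graph.add(Keyframe(0, (0.0, 0.0, 0.0)))
graph.add(Keyframe(1, (1.0, 0.0, 0.0)))

predicted = (0.9, 0.1, 0.0)          # predicted camera pose
for kf in graph.keyframes[:1]:       # first subset of selected keyframes
    predicted = align(predicted, kf)
new_kf = Keyframe(2, predicted)      # keyframe from the new pose estimate
graph.add(new_kf)
graph.link(new_kf.id, 0)             # decide and record a graph link
print(sorted(graph.links))  # [(0, 2)]
```

The abstract's second pass, realigning against a different subset of keyframes after the new keyframe exists, would simply repeat the loop over `graph.keyframes[1:]` before deciding on further links.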
-
Patent number: 10448000
Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Grant
Filed: March 24, 2016
Date of Patent: October 15, 2019
Assignee: DotProduct LLC
Inventors: Mark Klusza, Rafael Spring
-
Publication number: 20180336724
Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The at least one piece of contextual information is embedded into the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is then displayed.
Type: Application
Filed: May 16, 2018
Publication date: November 22, 2018
Applicant: DotProduct LLC
Inventors: Rafael Spring, Thomas Greaves
-
Publication number: 20160210753
Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Application
Filed: March 24, 2016
Publication date: July 21, 2016
Inventors: Mark Klusza, Rafael Spring
-
Patent number: 9332243
Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Grant
Filed: March 15, 2013
Date of Patent: May 3, 2016
Assignee: DotProduct LLC
Inventors: Mark Klusza, Rafael Spring
-
Publication number: 20150170418
Abstract: The present application discloses devices and methods for providing entry into and enabling interaction with a visual representation of an environment. In some implementations, a method is disclosed that includes obtaining an estimated global pose of a device in an environment. The method further includes providing on the device a user interface including a visual representation of the environment that corresponds to the estimated global pose. The method still further includes receiving first data indicating an object in the visual representation, receiving second data indicating an action relating to the object, and applying the action in the visual representation. In other implementations, a head-mounted device is disclosed that includes a processor and data storage including logic executable by the processor to carry out the method described above.
Type: Application
Filed: January 18, 2012
Publication date: June 18, 2015
Applicant: Google Inc.
Inventors: John Flynn, Rafael Spring, Dragomir Anguelov, Hartmut Neven
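The interaction flow in the abstract above (select an object in the visual representation, indicate an action, apply it) can be sketched as a small state update. The class, field, and object names are hypothetical, chosen only to mirror the abstract's steps.

```python
# Sketch of applying a user-indicated action to a user-indicated object
# inside a visual representation chosen for the device's estimated pose.
class VisualRepresentation:
    def __init__(self, pose):
        self.pose = pose        # estimated global pose this view matches
        self.objects = {}       # object id -> mutable state

    def apply(self, object_id, action):
        # First data names the object; second data names the action.
        state = self.objects.setdefault(object_id, {})
        state.update(action)
        return state


view = VisualRepresentation(pose=(40.7, -74.0, 90.0))  # lat, lon, heading
view.objects["door_12"] = {"highlighted": False}
state = view.apply("door_12", {"highlighted": True})
print(state)  # {'highlighted': True}
```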
-
Patent number: 8933993
Abstract: The present application discloses systems and methods for estimating a global pose of a device. In some implementations, a method is disclosed that includes causing a detector on a device to record an image of a view from the device, sending to a server a query based on the image, and receiving from the server an estimated global pose of the device. The method further includes determining an updated estimated global pose of the device by causing the detector to record an updated image of an updated view from the device, causing at least one sensor on the device to determine at least one sensor reading corresponding to movement of the device, determining a relative pose of the device based on the updated image and the at least one sensor reading, and, based on the relative pose and the estimated global pose, determining the updated estimated global pose.
Type: Grant
Filed: January 16, 2012
Date of Patent: January 13, 2015
Assignee: Google Inc.
Inventors: John Flynn, Rafael Spring, Dragomir Anguelov, David Petrou
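The final step above, combining the server's estimated global pose with a locally determined relative pose, is standard rigid-body pose chaining. The sketch below uses 2D poses (x, y, heading) for brevity; the composition math is the generic formula, not code from the patent.

```python
# Compose a global pose with a relative pose measured in the device's
# local frame: rotate the relative translation into the global frame,
# add it, and accumulate the headings.
import math


def compose(global_pose, relative_pose):
    gx, gy, gt = global_pose
    rx, ry, rt = relative_pose
    x = gx + rx * math.cos(gt) - ry * math.sin(gt)
    y = gy + rx * math.sin(gt) + ry * math.cos(gt)
    return (x, y, gt + rt)


server_estimate = (10.0, 5.0, 0.0)     # from the image-based server query
relative = (1.0, 0.0, math.pi / 2)     # from the updated image + sensors
updated = compose(server_estimate, relative)
print(updated)  # (11.0, 5.0, 1.5707963267948966)
```

In the abstract's terms, `server_estimate` is the pose returned for the first query, `relative` comes from the updated image and sensor readings, and `updated` is the updated estimated global pose.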
-
Publication number: 20140225985
Abstract: A system and method are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Application
Filed: April 16, 2014
Publication date: August 14, 2014
Inventors: Mark Klusza, Rafael Spring
-
Publication number: 20140104387
Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results obtained and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to re-position the orientation of the view of the scene.
Type: Application
Filed: March 15, 2013
Publication date: April 17, 2014
Applicant: DotProduct LLC
Inventors: Mark Klusza, Rafael Spring
-
Patent number: 8661053
Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data and virtual tag data to be associated with a real-world object in the image data, where the image data is captured by a first mobile device and the virtual tag data includes metadata received from a user of that device. The method may also include generating a first digital signature from the image data that describes the real-world object and, in response, inserting the signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between it and the digital signature inserted into the index.
Type: Grant
Filed: November 12, 2012
Date of Patent: February 25, 2014
Assignee: Google Inc.
Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
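The indexing flow above (derive a signature from image data, insert it into a searchable index, and store the tag association) can be sketched as follows. A byte hash stands in for the patent's visual signature; a real system would use an image descriptor that matches similar views, not an exact hash. All names are illustrative.

```python
# Sketch of the virtual-tag flow: signature -> searchable index entry,
# plus a tag-database record associating the tag data with the signature.
import hashlib

image_index = {}    # signature -> image id (searchable-index stand-in)
tag_database = {}   # signature -> virtual tag data


def add_virtual_tag(image_bytes, image_id, tag_data):
    # Generate a signature from the image data (hash as a stand-in),
    # insert it into the index, and store the tag association.
    signature = hashlib.sha256(image_bytes).hexdigest()
    image_index[signature] = image_id
    tag_database[signature] = tag_data
    return signature


sig = add_virtual_tag(b"jpeg-bytes-of-storefront", "img-001",
                      {"text": "great coffee", "user": "mobile-user-1"})
print(tag_database[sig]["text"])  # great coffee
```

Lookup then runs in the opposite direction: a later image produces a signature, the index resolves it, and the tag database returns the associated metadata.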
-
Patent number: 8332424
Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data and virtual tag data to be associated with a real-world object in the image data, where the image data is captured by a first mobile device and the virtual tag data includes metadata received from a user of that device. The method may also include generating a first digital signature from the image data that describes the real-world object and, in response, inserting the signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between it and the digital signature inserted into the index.
Type: Grant
Filed: May 13, 2011
Date of Patent: December 11, 2012
Assignee: Google Inc.
Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
-
Publication number: 20120290591
Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data and virtual tag data to be associated with a real-world object in the image data, where the image data is captured by a first mobile device and the virtual tag data includes metadata received from a user of that device. The method may also include generating a first digital signature from the image data that describes the real-world object and, in response, inserting the signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between it and the digital signature inserted into the index.
Type: Application
Filed: May 13, 2011
Publication date: November 15, 2012
Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai