Patents by Inventor Rafael Spring

Rafael Spring has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11915356
    Abstract: A method is disclosed for controlling optional constraints on the processing of multi-dimensional scene data via a user interface (UI) in an image management device. The first step is receiving a first data set of a scene containing location information about a first location in an image, where the first data set has a first performance metric. Next, a Constraint Manager having a plurality of constraint processes is activated, and a first constraint process is selected from that plurality. A new data set for the first constraint process to apply to the first data set is then received, and finally the first constraint process is activated to incorporate the new data set and estimate a new location data set for the first location, where the new location data set has an improved performance metric compared to the first performance metric. (A minimal sketch of this constraint workflow follows this entry.)
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: February 27, 2024
    Inventor: Rafael Spring
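    The abstract above describes a constraint manager that applies user-selected constraint processes to refine a location estimate and improve its performance metric. The sketch below is a hypothetical rendering of that workflow; every class, function, and constraint name is an illustrative assumption, not the patent's actual implementation.
    ```python
    # Hypothetical sketch of the constraint-manager workflow: register constraint
    # processes, apply one to a data set, and keep the result only if its
    # performance metric improves. All names are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class LocationDataSet:
        position: tuple            # estimated (x, y, z) of the first location
        performance_metric: float  # e.g. residual error; lower is better

    # A "constraint process" takes the current estimate plus new constraint data
    # and returns a refined estimate.
    ConstraintProcess = Callable[[LocationDataSet, dict], LocationDataSet]

    class ConstraintManager:
        def __init__(self) -> None:
            self._processes: dict[str, ConstraintProcess] = {}

        def register(self, name: str, process: ConstraintProcess) -> None:
            self._processes[name] = process

        def apply(self, name: str, current: LocationDataSet,
                  new_data: dict) -> LocationDataSet:
            refined = self._processes[name](current, new_data)
            # Keep the refinement only if the metric actually improved.
            if refined.performance_metric < current.performance_metric:
                return refined
            return current

    def known_point_constraint(current: LocationDataSet, new_data: dict) -> LocationDataSet:
        # Toy constraint: blend the estimate with a user-supplied reference point
        # and report a (simulated) smaller residual.
        ref = new_data["reference_point"]
        blended = tuple((c + r) / 2 for c, r in zip(current.position, ref))
        return LocationDataSet(blended, current.performance_metric * 0.8)

    manager = ConstraintManager()
    manager.register("known_point", known_point_constraint)
    first = LocationDataSet(position=(1.0, 2.0, 0.5), performance_metric=0.12)
    improved = manager.apply("known_point", first, {"reference_point": (1.1, 2.0, 0.4)})
    print(improved.performance_metric)  # 0.096 < 0.12
    ```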
  • Publication number: 20220245882
    Abstract: A method is disclosed for controlling optional constraints on the processing of multi-dimensional scene data via a user interface (UI) in an image management device. The first step is receiving a first data set of a scene containing location information about a first location in an image, where the first data set has a first performance metric. Next, a Constraint Manager having a plurality of constraint processes is activated, and a first constraint process is selected from that plurality. A new data set for the first constraint process to apply to the first data set is then received, and finally the first constraint process is activated to incorporate the new data set and estimate a new location data set for the first location, where the new location data set has an improved performance metric compared to the first performance metric.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 4, 2022
    Inventor: Rafael Spring
  • Patent number: 10964108
    Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The contextual information is then embedded at the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is displayed. (A minimal sketch of the embedding step follows this entry.)
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: March 30, 2021
    Assignee: DotProduct LLC
    Inventors: Rafael Spring, Thomas Greaves
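    The embedding step in the abstract above, placing a piece of contextual information at the capture location in the reconstructed 3D model, can be illustrated with a small sketch. The classes and field names below are assumptions for illustration only, not DotProduct's actual data structures.
    ```python
    # Hypothetical sketch of embedding contextual information (a photo, a note)
    # into a reconstructed 3D model at the pose from which it was captured.
    import numpy as np

    class Annotation:
        def __init__(self, payload: str, position: np.ndarray):
            self.payload = payload    # e.g. a note or a path to a 2D photo
            self.position = position  # 3D anchor point in model coordinates

    class SceneModel:
        def __init__(self, points: np.ndarray):
            self.points = points      # N x 3 point cloud built from the 3D images
            self.annotations: list[Annotation] = []

        def embed(self, payload: str, capture_pose: np.ndarray) -> Annotation:
            """Place an annotation at the capture location implied by a 4x4 pose."""
            anchor = capture_pose[:3, 3]   # translation column = camera position
            ann = Annotation(payload, anchor)
            self.annotations.append(ann)
            return ann

    # Usage: a pose captured by the mobile device positions the note in the model.
    model = SceneModel(points=np.random.rand(1000, 3))
    pose = np.eye(4)
    pose[:3, 3] = [2.0, 0.5, 1.2]
    model.embed("Valve needs inspection", pose)
    print(len(model.annotations))  # 1
    ```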
  • Publication number: 20200312026
    Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The contextual information is then embedded at the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is displayed.
    Type: Application
    Filed: June 12, 2020
    Publication date: October 1, 2020
    Applicant: DotProduct LLC
    Inventors: Rafael Spring, Thomas Greaves
  • Publication number: 20200267371
    Abstract: A system and method are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene. (A minimal sketch of depth-to-point-cloud reconstruction follows this entry.)
    Type: Application
    Filed: May 5, 2020
    Publication date: August 20, 2020
    Applicant: DotProduct LLC
    Inventors: Mark Klusza, Rafael Spring
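    One step named in the abstract above, digitally reconstructing a 3D view from measured distances, can be sketched as standard pinhole back-projection of a depth image into a point cloud. The function and the intrinsic parameters below are illustrative assumptions, not the patented method.
    ```python
    # Hypothetical sketch: convert a per-pixel depth map (distances measured via
    # the projected dot pattern) into a 3D point cloud with the pinhole model.
    import numpy as np

    def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                             cx: float, cy: float) -> np.ndarray:
        """Back-project an H x W depth image (meters) to an N x 3 point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]   # drop pixels with no depth reading

    # Usage with a synthetic 480x640 depth image and illustrative intrinsics.
    depth = np.full((480, 640), 1.5)
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)  # (307200, 3)
    ```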
  • Patent number: 10699481
    Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The contextual information is then embedded at the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is displayed.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: June 30, 2020
    Assignee: DotProduct LLC
    Inventors: Rafael Spring, Thomas Greaves
  • Patent number: 10674135
    Abstract: A system and method for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device. The method includes extracting gray values from the RGB-D image data; creating image pyramids from the gray values and the depth data; computing a scene fitness value using the image pyramids; predicting a camera pose and aligning the image with a first subset of selected keyframes; computing a new camera pose estimate and creating a keyframe from it; after the keyframe is created, selecting a second subset of keyframes different from the first and repeating the alignment step with each keyframe in that subset; and deciding whether new links to the keyframe are required in a keyframe pose graph and linking the keyframes. (A minimal sketch of the keyframe pose graph follows this entry.)
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: June 2, 2020
    Assignee: DotProduct LLC
    Inventors: Mark Klusza, Rafael Spring
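    The keyframe pose graph mentioned in the abstract above can be sketched as keyframes holding camera poses plus links added when a new keyframe is close enough to existing ones. The overlap test and all names below are illustrative assumptions, not the patented algorithm.
    ```python
    # Hypothetical sketch of keyframe pose-graph bookkeeping: each keyframe stores
    # a camera pose, and links are added between keyframes whose views likely
    # overlap (approximated here by camera-center distance).
    import numpy as np

    class Keyframe:
        def __init__(self, kf_id: int, pose: np.ndarray):
            self.id = kf_id
            self.pose = pose                 # 4x4 camera-to-world transform
            self.links: set[int] = set()     # ids of linked keyframes

    class KeyframePoseGraph:
        def __init__(self, link_radius: float = 1.0):
            self.keyframes: dict[int, Keyframe] = {}
            self.link_radius = link_radius   # assumed proxy for view overlap

        def add_keyframe(self, pose: np.ndarray) -> Keyframe:
            kf = Keyframe(len(self.keyframes), pose)
            # Decide whether new links are required: link to every existing
            # keyframe whose camera center is within link_radius meters.
            for other in self.keyframes.values():
                dist = np.linalg.norm(pose[:3, 3] - other.pose[:3, 3])
                if dist < self.link_radius:
                    kf.links.add(other.id)
                    other.links.add(kf.id)
            self.keyframes[kf.id] = kf
            return kf

    graph = KeyframePoseGraph(link_radius=1.0)
    for t in np.linspace(0.0, 2.0, 5):       # camera moving along the x-axis
        pose = np.eye(4)
        pose[0, 3] = t
        graph.add_keyframe(pose)
    print([sorted(kf.links) for kf in graph.keyframes.values()])
    ```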
  • Patent number: 10448000
    Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: October 15, 2019
    Assignee: DotProduct LLC
    Inventors: Mark Klusza, Rafael Spring
  • Publication number: 20180336724
    Abstract: Augmentation of captured 3D scenes with contextual information is disclosed. A 3D capture device captures a plurality of 3D images at a first resolution. A component on a mobile computing device captures at least one piece of contextual information that includes capture location data and pose data. The mobile computing device receives the plurality of 3D images from the 3D capture device and renders them into a 3D model. The contextual information is then embedded at the correct location in the 3D model, and a user-interactive version of the 3D model, including the embedded contextual information, is displayed.
    Type: Application
    Filed: May 16, 2018
    Publication date: November 22, 2018
    Applicant: DotProduct LLC
    Inventors: Rafael Spring, Thomas Greaves
  • Publication number: 20160210753
    Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene.
    Type: Application
    Filed: March 24, 2016
    Publication date: July 21, 2016
    Inventors: Mark Klusza, Rafael Spring
  • Patent number: 9332243
    Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 3, 2016
    Assignee: DotProduct LLC
    Inventors: Mark Klusza, Rafael Spring
  • Publication number: 20150170418
    Abstract: The present application discloses devices and methods for providing entry into, and enabling interaction with, a visual representation of an environment. In some implementations, a method is disclosed that includes obtaining an estimated global pose of a device in an environment. The method further includes providing on the device a user interface including a visual representation of the environment that corresponds to the estimated global pose. The method still further includes receiving first data indicating an object in the visual representation, receiving second data indicating an action relating to the object, and applying the action in the visual representation. In other implementations, a head-mounted device is disclosed that includes a processor and data storage containing logic executable by the processor to carry out the method described above. (A minimal sketch of this interaction flow follows this entry.)
    Type: Application
    Filed: January 18, 2012
    Publication date: June 18, 2015
    Applicant: Google Inc.
    Inventors: John Flynn, Rafael Spring, Dragomir Anguelov, Hartmut Neven
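    The interaction flow in the abstract above (obtain a global pose, present the matching visual representation, then apply a user-chosen action to a user-chosen object) can be sketched as follows. All class and function names are illustrative assumptions, not Google's implementation.
    ```python
    # Hypothetical sketch of the pose-driven interaction flow described above.
    from dataclasses import dataclass, field

    @dataclass
    class SceneObject:
        name: str
        state: dict = field(default_factory=dict)

    @dataclass
    class VisualRepresentation:
        pose: tuple                      # estimated global pose (lat, lon, heading)
        objects: list = field(default_factory=list)

        def apply_action(self, object_name: str, action: str) -> None:
            for obj in self.objects:
                if obj.name == object_name:
                    obj.state[action] = True   # e.g. "highlight", "annotate"

    def build_view_for_pose(pose: tuple) -> VisualRepresentation:
        # A real system would fetch imagery/geometry matching the pose;
        # here we return a canned scene for illustration.
        return VisualRepresentation(pose, [SceneObject("door"), SceneObject("sign")])

    view = build_view_for_pose((37.4220, -122.0841, 90.0))
    view.apply_action("door", "highlight")   # first data: object; second data: action
    print(view.objects[0].state)             # {'highlight': True}
    ```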
  • Patent number: 8933993
    Abstract: The present application discloses systems and methods for estimating a global pose of a device. In some implementations, a method is disclosed that includes causing a detector on a device to record an image of a view from the device, sending a query based on the image to a server, and receiving from the server an estimated global pose of the device. The method further includes determining an updated estimated global pose of the device by causing the detector to record an updated image of an updated view from the device, causing at least one sensor on the device to determine at least one sensor reading corresponding to movement of the device, determining a relative pose of the device based on the updated image and the at least one sensor reading, and, based on the relative pose and the estimated global pose, determining the updated estimated global pose. (A minimal sketch of this pose composition follows this entry.)
    Type: Grant
    Filed: January 16, 2012
    Date of Patent: January 13, 2015
    Assignee: Google Inc.
    Inventors: John Flynn, Rafael Spring, Dragomir Anguelov, David Petrou
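    The pose update in the abstract above amounts to composing the server-provided global pose with a locally estimated relative pose. A minimal sketch using 4x4 rigid transforms is below; the helper names and example numbers are assumptions for illustration.
    ```python
    # Hypothetical sketch: a server-provided global pose is composed with a
    # relative pose (estimated from a new image plus sensor readings) to obtain
    # the updated global pose, via standard rigid-transform multiplication.
    import numpy as np

    def make_pose(yaw_deg: float, translation) -> np.ndarray:
        """Build a 4x4 rigid transform from a yaw angle and a translation."""
        theta = np.radians(yaw_deg)
        pose = np.eye(4)
        pose[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]]
        pose[:3, 3] = translation
        return pose

    def update_global_pose(global_pose: np.ndarray,
                           relative_pose: np.ndarray) -> np.ndarray:
        # updated_global = global_pose @ relative_pose (device frame at query time)
        return global_pose @ relative_pose

    server_global_pose = make_pose(30.0, [100.0, 50.0, 0.0])  # from image query
    relative_pose = make_pose(5.0, [1.0, 0.0, 0.0])           # from sensors + new image
    updated = update_global_pose(server_global_pose, relative_pose)
    print(updated[:3, 3])   # new estimated position
    ```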
  • Publication number: 20140225985
    Abstract: A system and method are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene.
    Type: Application
    Filed: April 16, 2014
    Publication date: August 14, 2014
    Inventors: Mark Klusza, Rafael Spring
  • Publication number: 20140104387
    Abstract: A system and methods are disclosed for real-time or near-real-time processing and post-processing of RGB-D image data using a handheld portable device, and for using the results in a variety of applications. The disclosure is based on combining off-the-shelf equipment (e.g., an RGB-D camera and a smartphone/tablet computer) into a self-contained unit capable of performing complex spatial-reasoning tasks using highly optimized computer vision algorithms. New applications are disclosed that use the instantaneous results and the wireless connectivity of the host device for remote collaboration. One method includes projecting a dot pattern from a light source onto a plurality of points in a scene, measuring distances to the points, and digitally reconstructing an image or images of the scene, such as a 3D view. A plurality of images may also be stitched together to reposition the orientation of the view of the scene.
    Type: Application
    Filed: March 15, 2013
    Publication date: April 17, 2014
    Applicant: DotProduct LLC
    Inventors: Mark Klusza, Rafael Spring
  • Patent number: 8661053
    Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data, captured by a first mobile device, and virtual tag data to be associated with a real-world object in that image data, where the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object and, in response to the generation, inserting the first digital signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images. (A minimal sketch of this tagging and indexing flow follows this entry.)
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: February 25, 2014
    Assignee: Google Inc.
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
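    The tagging flow in the abstract above (derive a signature from image data, insert it into a searchable index, and store the tag with an association to that signature) can be sketched as follows. A plain content hash stands in for the real visual signature, and all names are illustrative assumptions.
    ```python
    # Hypothetical sketch of the virtual-tag flow: signature -> searchable index,
    # tag metadata -> tag database keyed by that signature.
    import hashlib

    class ImageIndex:
        """Searchable index of image signatures."""
        def __init__(self):
            self._signatures: set[str] = set()

        def insert(self, signature: str) -> None:
            self._signatures.add(signature)

        def lookup(self, signature: str) -> bool:
            return signature in self._signatures

    class TagDatabase:
        """Stores tag metadata keyed by the associated image signature."""
        def __init__(self):
            self._tags: dict[str, list[dict]] = {}

        def store(self, signature: str, tag: dict) -> None:
            self._tags.setdefault(signature, []).append(tag)

        def tags_for(self, signature: str) -> list[dict]:
            return self._tags.get(signature, [])

    def compute_signature(image_bytes: bytes) -> str:
        # Stand-in for a visual descriptor of the real-world object.
        return hashlib.sha256(image_bytes).hexdigest()

    # Usage: a mobile device uploads an image plus a user-authored tag.
    index, tags = ImageIndex(), TagDatabase()
    image_bytes = b"fake image payload"
    sig = compute_signature(image_bytes)
    index.insert(sig)                                  # into the searchable index
    tags.store(sig, {"text": "Best coffee in town"})   # tag + association stored

    # Later, matching imagery retrieves the tag via the same signature.
    if index.lookup(compute_signature(image_bytes)):
        print(tags.tags_for(sig))   # [{'text': 'Best coffee in town'}]
    ```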
  • Patent number: 8332424
    Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data, captured by a first mobile device, and virtual tag data to be associated with a real-world object in that image data, where the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object and, in response to the generation, inserting the first digital signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: December 11, 2012
    Assignee: Google Inc.
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Publication number: 20120290591
    Abstract: A method and apparatus for enabling virtual tags are described. The method may include receiving first digital image data, captured by a first mobile device, and virtual tag data to be associated with a real-world object in that image data, where the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object and, in response to the generation, inserting the first digital signature in substantially real time into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Application
    Filed: May 13, 2011
    Publication date: November 15, 2012
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai