Patents by Inventor Scott Paul Robertson

Scott Paul Robertson has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the recurring technical approaches appear after the listing.

  • Patent number: 10943370
    Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: March 9, 2021
    Assignee: A9.com, Inc.
    Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
  • Patent number: 10839605
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: November 17, 2020
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
  • Publication number: 20200334906
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Application
    Filed: December 12, 2018
    Publication date: October 22, 2020
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
  • Publication number: 20200202575
    Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
    Type: Application
    Filed: March 3, 2020
    Publication date: June 25, 2020
    Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
  • Patent number: 10593066
    Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: March 17, 2020
    Assignee: A9.com, Inc.
    Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
  • Patent number: 10579134
    Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: March 3, 2020
    Assignee: A9.com, Inc.
    Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
  • Publication number: 20190333478
    Abstract: Approaches enable images submitted by users, owners, and/or authorized persons of a point of interest (e.g., a place, a scene, an object, etc.) to be used as fiducials to assist recognition and tracking of the point of interest in an augmented reality environment. Multiple images (e.g., crowd-sourced images) of a point of interest taken from different points of view can be dynamically used. For example, as a user with a user device moves through a point of interest, a different image can be chosen from a set of stored candidate images of the point of interest based at least upon GPS locations, IMU orientations, or compass data of the user device. In this way, instead of relying on artificial fiducial images for various detection and tracking approaches, approaches enable images submitted by users and/or an owner or other authorized person of a point of interest to be used as fiducials to assist recognition and tracking of the point of interest.
    Type: Application
    Filed: December 3, 2018
    Publication date: October 31, 2019
    Inventors: David Creighton Mott, Scott Paul Robertson, Arnab Sanat Kumar Dhua, William Brendel, Nityananda Jayadevaprakash
  • Publication number: 20190114839
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Application
    Filed: December 12, 2018
    Publication date: April 18, 2019
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
  • Patent number: 10163267
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: December 25, 2018
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
  • Patent number: 10147399
    Abstract: Approaches enable images submitted by users, owners, and/or authorized persons of a point of interest (e.g., a place, a scene, an object, etc.) to be used as fiducials to assist recognition and tracking of the point of interest in an augmented reality environment. Multiple images (e.g., crowd-sourced images) of a point of interest taken from different points of view can be dynamically used. For example, as a user with a user device moves through a point of interest, a different image can be chosen from a set of stored candidate images of the point of interest based at least upon GPS locations, IMU orientations, or compass data of the user device. In this way, instead of relying on artificial fiducial images for various detection and tracking approaches, approaches enable images submitted by users and/or an owner or other authorized person of a point of interest to be used as fiducials to assist recognition and tracking of the point of interest.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: December 4, 2018
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Scott Paul Robertson, Arnab Sanat Kumar Dhua, William Brendel, Nityananda Jayadevaprakash
  • Patent number: 9881084
    Abstract: Various embodiments may obtain an image representation of an object for use in image matching and content retrieval. For example, an image matching system processes video content items to determine one or more scenes for one or more video content items. The image matching system can extract, from at least one video frame for a scene, feature descriptors relating to one or more objects represented in the at least one video frame. The image matching system indexes the feature descriptors into a feature index storing information for each of the feature descriptors and respective corresponding video frame. The image matching system correlates the feature descriptors of the feature index to determine one or more groups having similar feature descriptors. The image matching system indexes the one or more groups into a correlation index storing information for each of the one or more groups and respective corresponding feature descriptors.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: January 30, 2018
    Assignee: A9.com, Inc.
    Inventors: Scott Paul Robertson, Sunil Ramesh
  • Publication number: 20170168559
    Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
    Type: Application
    Filed: January 20, 2017
    Publication date: June 15, 2017
    Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
  • Publication number: 20170053451
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Application
    Filed: August 26, 2016
    Publication date: February 23, 2017
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
  • Patent number: 9552674
    Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: January 24, 2017
    Assignee: A9.com, Inc.
    Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
  • Patent number: 9432421
    Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: August 30, 2016
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
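
Illustrative code sketches

The sketches below are informal Python illustrations of the recurring technical approaches described in the abstracts above. They are not taken from the patents themselves, and every function, class, and parameter name in them is invented here for illustration only.

Patent 10943370 (also granted as 10593066 and published as 20200202575) describes partitioning object images and segmentation masks into patches, assigning a keyframe in each patch, and retrieving a requested frame via the keyframe in its patch. A minimal sketch of that retrieval idea, assuming fixed-size patches whose first frame serves as the keyframe:

    from dataclasses import dataclass

    @dataclass
    class Patch:
        """A contiguous run of frame indices with one designated keyframe."""
        frame_ids: list
        keyframe_id: int

    def assign_patches(num_frames: int, patch_size: int) -> list:
        """Partition frame indices 0..num_frames-1 into fixed-size patches.
        The first frame of each patch is treated as its keyframe."""
        patches = []
        for start in range(0, num_frames, patch_size):
            ids = list(range(start, min(start + patch_size, num_frames)))
            patches.append(Patch(frame_ids=ids, keyframe_id=ids[0]))
        return patches

    def lookup(patches: list, frame_id: int):
        """Return (keyframe_id, frame_id): seek to the keyframe of the patch
        containing frame_id, then decode forward to the requested frame."""
        for patch in patches:
            if frame_id in patch.frame_ids:
                return patch.keyframe_id, frame_id
        raise KeyError(f"frame {frame_id} not found")

    patches = assign_patches(num_frames=72, patch_size=8)
    print(lookup(patches, 21))  # -> (16, 21): seek keyframe 16, decode forward to frame 21

In an encoded video file this keyframe lookup is what lets a client seek directly to a decodable frame and step forward a few frames, rather than decoding the whole stream.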
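
Patent 10839605 (also patents 10163267 and 9432421, and publications 20200334906, 20190114839, and 20170053451) describes tying content and links to specific objects at a place using GPS locations, IMU orientations, compass data, or visual matching, and presenting them when a device points at the tagged place. A minimal sketch of the GPS-plus-compass part of that idea, assuming a flat list of tags and a fixed horizontal field of view:

    import math
    from dataclasses import dataclass

    @dataclass
    class PlaceTag:
        """Content or a link attached to a specific object at a physical location."""
        label: str
        lat: float
        lon: float
        url: str

    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate ground distance in metres (equirectangular; fine at short range)."""
        dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        dy = math.radians(lat2 - lat1)
        return 6371000.0 * math.hypot(dx, dy)

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Compass bearing from the device location to the tag, in degrees."""
        d_lon = math.radians(lon2 - lon1)
        p1, p2 = math.radians(lat1), math.radians(lat2)
        x = math.sin(d_lon) * math.cos(p2)
        y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(d_lon)
        return math.degrees(math.atan2(x, y)) % 360

    def tags_in_view(tags, device_lat, device_lon, compass_deg,
                     max_range_m=100.0, fov_deg=60.0):
        """Return tags that are near the device and roughly where it is pointing."""
        hits = []
        for tag in tags:
            if distance_m(device_lat, device_lon, tag.lat, tag.lon) > max_range_m:
                continue
            offset = (bearing_deg(device_lat, device_lon, tag.lat, tag.lon)
                      - compass_deg + 180) % 360 - 180
            if abs(offset) <= fov_deg / 2:
                hits.append(tag)
        return hits

As the abstract notes, a full system would also use the device's IMU orientation and visual matching against stored imagery of the tagged objects; this sketch covers only the coarse position-and-heading filter.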
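
Patent 10579134 (also patent 9552674 and publication 20170168559) describes capturing an image of a real environment containing a marker and using the marker's distance and orientation to properly size and place a virtual object in the image. A minimal sketch of the sizing step under a pinhole-camera assumption, with made-up example numbers:

    def marker_distance_m(focal_length_px: float, marker_size_m: float,
                          marker_size_px: float) -> float:
        """Pinhole-camera estimate of the marker's distance from the camera:
        distance = focal_length * real_size / apparent_size."""
        return focal_length_px * marker_size_m / marker_size_px

    def virtual_object_scale_px(object_size_m: float, marker_size_m: float,
                                marker_size_px: float) -> float:
        """Pixels the virtual object should span so it appears the right size
        relative to the marker at the same depth."""
        pixels_per_metre = marker_size_px / marker_size_m
        return object_size_m * pixels_per_metre

    # Hypothetical numbers: 1000 px focal length, 10 cm marker seen 50 px wide.
    print(marker_distance_m(1000.0, 0.10, 50.0))       # ~2.0 m from the camera
    print(virtual_object_scale_px(0.45, 0.10, 50.0))   # a 45 cm object -> ~225 px

The marker's orientation would likewise drive the rotation applied to the rendered object, and the virtual lighting mentioned in the abstract would be layered on top of this placement.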
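
Patent 10147399 (also publication 20190333478) describes choosing, from a set of stored candidate images of a point of interest, the image whose capture conditions best match the user device's GPS location, IMU orientation, or compass data, and using it as a fiducial for recognition and tracking. A minimal sketch of that selection, assuming each candidate image is stored with the pose it was captured from and that position and heading errors can be folded into a single cost:

    import math
    from dataclasses import dataclass

    @dataclass
    class CandidateImage:
        """A stored image of the point of interest plus the pose it was captured from."""
        image_id: str
        lat: float
        lon: float
        heading_deg: float  # compass heading of the capturing camera

    def pose_cost(cand, lat, lon, heading_deg, metres_per_heading_degree=0.5):
        """Combine position error (metres) and heading error (degrees) into one score."""
        dx = (cand.lon - lon) * 111320.0 * math.cos(math.radians(lat))
        dy = (cand.lat - lat) * 110540.0
        position_err = math.hypot(dx, dy)
        heading_err = abs((cand.heading_deg - heading_deg + 180) % 360 - 180)
        return position_err + metres_per_heading_degree * heading_err

    def select_fiducial(candidates, lat, lon, heading_deg):
        """Pick the stored image whose capture pose best matches the device pose."""
        return min(candidates, key=lambda c: pose_cost(c, lat, lon, heading_deg))

The weighting between position and heading error here is an arbitrary choice for the sketch; the patent's point is simply that the fiducial can change dynamically as the device moves through the point of interest.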
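
Patent 9881084 describes extracting feature descriptors from video frames of a scene, indexing them into a feature index, correlating similar descriptors into groups, and indexing those groups into a correlation index. A minimal sketch of the two indexing steps, assuming descriptors have already been extracted per frame (by whatever method) and using a simple greedy cosine-similarity grouping as a stand-in for the patent's correlation step:

    import numpy as np

    def build_feature_index(frames):
        """Flatten per-frame descriptors into one index.
        `frames` maps frame_id -> (N, D) array of descriptors.
        Returns (descriptors, frame_ids): row i of `descriptors` came from frame_ids[i]."""
        descriptors, frame_ids = [], []
        for frame_id, descs in frames.items():
            for d in descs:
                descriptors.append(d)
                frame_ids.append(frame_id)
        return np.asarray(descriptors, dtype=np.float64), frame_ids

    def build_correlation_index(descriptors, similarity=0.9):
        """Greedily group similar descriptors (cosine similarity) and record,
        for each group, which descriptor rows belong to it."""
        unit = descriptors / (np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-12)
        groups, centroids = [], []
        for i, d in enumerate(unit):
            for g, c in enumerate(centroids):
                if float(np.dot(d, c)) >= similarity:
                    groups[g].append(i)
                    break
            else:
                groups.append([i])
                centroids.append(d)
        return groups

With both indexes in hand, a query descriptor can be matched first against the group centroids and then only against descriptors in the best-matching groups, narrowing the search before looking up the corresponding video frames.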