Patents by Inventor Scott Paul Robertson
Scott Paul Robertson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10943370
Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
Type: Grant
Filed: March 3, 2020
Date of Patent: March 9, 2021
Assignee: A9.com, Inc.
Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
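The patch-and-keyframe retrieval scheme this abstract describes can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patented implementation: the helper names `build_patches` and `frames_to_decode`, the fixed patch size, and the choice of each patch's first frame as its keyframe are all assumptions made for the sketch.

```python
def build_patches(num_frames, patch_size):
    """Partition frame indices into fixed-size patches; here the first
    frame of each patch is assumed to serve as its keyframe."""
    patches = []
    for start in range(0, num_frames, patch_size):
        frames = list(range(start, min(start + patch_size, num_frames)))
        patches.append({"keyframe": frames[0], "frames": frames})
    return patches

def frames_to_decode(patches, target):
    """Frames a client must decode to reach `target`: seek to the
    keyframe of the target's patch, then decode forward to the target."""
    for patch in patches:
        if target in patch["frames"]:
            return list(range(patch["keyframe"], target + 1))
    raise ValueError(f"frame {target} out of range")
```

The point of the keyframes is visible in the second function: retrieving frame 37 from 16-frame patches touches at most 16 frames (32 through 37) rather than the whole video stream.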
-
Patent number: 10839605
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Grant
Filed: December 12, 2018
Date of Patent: November 17, 2020
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
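A minimal sketch of the "device pointing at a tagged place" check, using only the GPS-and-compass signals the abstract mentions. The distance threshold, field-of-view gate, and function names are assumptions for illustration; the patent also covers IMU orientation and visual matching, which are omitted here.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def tagged_content_in_view(device, tags, max_dist_m=100.0, fov_deg=60.0):
    """Return content for tags the device is plausibly pointing at:
    within range, and within half the field of view of the heading."""
    hits = []
    for tag in tags:
        if haversine_m(device["lat"], device["lon"], tag["lat"], tag["lon"]) > max_dist_m:
            continue
        b = bearing_deg(device["lat"], device["lon"], tag["lat"], tag["lon"])
        off = abs((b - device["heading"] + 180.0) % 360.0 - 180.0)
        if off <= fov_deg / 2.0:
            hits.append(tag["content"])
    return hits
```

With a device at the origin facing east, a tag ~55 m to the east is surfaced while an equally close tag to the north is not, since only the eastern tag falls inside the heading cone.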
-
Publication number: 20200334906
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Application
Filed: December 12, 2018
Publication date: October 22, 2020
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
-
Publication number: 20200202575
Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
Type: Application
Filed: March 3, 2020
Publication date: June 25, 2020
Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
-
Patent number: 10593066
Abstract: Objects can be rendered in three dimensions and viewed and manipulated in an augmented reality environment. A number of object images, a number of segmentation masks, and an object mesh structure are used by a client device to render the object in three dimensions. The object images and segmentation masks can be sequenced into frames. The object images and segmentation masks can be partitioned into patches and sequenced, or ordered, within each patch, and a keyframe can be assigned in each patch. Then, the object images and segmentation masks can be encoded into video files and sent to a client device. The client device can quickly retrieve a requested object image and segmentation mask based at least in part on identifying the keyframe in the same patch as the object image and segmentation mask.
Type: Grant
Filed: January 9, 2018
Date of Patent: March 17, 2020
Assignee: A9.com, Inc.
Inventors: Arnab Sanat Kumar Dhua, Neil Raj Kumar, Karl Hillesland, Radek Grzeszczuk, Scott Paul Robertson
-
Patent number: 10579134
Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
Type: Grant
Filed: January 20, 2017
Date of Patent: March 3, 2020
Assignee: A9.com, Inc.
Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
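The sizing step the abstract alludes to ("the distance and orientation of the marker may be taken into account to properly size ... the virtual object") can be illustrated with the standard marker trick: a marker of known physical size gives the image scale at its depth. The function names and the pinhole-style proportionality are assumptions for this sketch, not the patent's method.

```python
def pixels_per_meter(marker_px_width, marker_physical_m):
    """Apparent image scale at the marker's depth, derived from the
    marker's known physical width and its measured width in pixels."""
    return marker_px_width / marker_physical_m

def virtual_object_px(object_physical_m, marker_px_width, marker_physical_m):
    """Width, in pixels, at which to render a virtual object so it
    appears at the same depth as the marker."""
    return object_physical_m * pixels_per_meter(marker_px_width, marker_physical_m)
```

For example, a 0.5 m marker seen at 100 px implies 200 px/m at that depth, so a 0.8 m object would be drawn 160 px wide; marker orientation would additionally drive the object's rotation, which this sketch omits.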
-
Publication number: 20190333478
Abstract: Approaches enable images submitted by users, owners, and/or authorized persons associated with a point of interest (e.g., a place, a scene, an object, etc.) to be used as a fiducial to assist recognition and tracking of the point of interest in an augmented reality environment. Multiple images (e.g., crowd-sourced images) of a point of interest taken from different points of view can be dynamically used. For example, as a user with a user device moves through a point of interest, a different image can be chosen from a set of stored candidate images of the point of interest based at least upon GPS locations, IMU orientations, or compass data of the user device. In this way, instead of relying on artificial fiducial images for various detection and tracking approaches, these approaches enable images submitted by users and/or an owner or other authorized person of a point of interest to be used as fiducials to assist recognition and tracking of the point of interest.
Type: Application
Filed: December 3, 2018
Publication date: October 31, 2019
Inventors: David Creighton Mott, Scott Paul Robertson, Arnab Sanat Kumar Dhua, William Brendel, Nityananda Jayadevaprakash
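The candidate-selection step (choosing a stored fiducial image whose capture pose best matches the device's current GPS and compass readings) can be sketched as a pose-distance minimization. The flat-earth approximation, the 1 m ≈ 1° scoring weight, and the function name `pick_fiducial` are assumptions for the sketch; the publication does not specify a scoring rule.

```python
import math

def pick_fiducial(device, candidates):
    """Pick the stored candidate image whose recorded capture position
    and compass heading are closest to the device's current pose. Uses
    a flat local approximation (adequate over tens of meters)."""
    def score(c):
        # ~111 km per degree of latitude; scale longitude by cos(latitude)
        dy = (c["lat"] - device["lat"]) * 111_000.0
        dx = (c["lon"] - device["lon"]) * 111_000.0 * math.cos(math.radians(device["lat"]))
        dist_m = math.hypot(dx, dy)
        heading_off = abs((c["heading"] - device["heading"] + 180.0) % 360.0 - 180.0)
        return dist_m + heading_off  # assumed weight: 1 m ~ 1 degree
    return min(candidates, key=score)
```

As the user walks around the point of interest, re-running `pick_fiducial` swaps in the crowd-sourced image taken from the nearest, most similarly oriented viewpoint, which is the dynamic selection the abstract describes.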
-
Publication number: 20190114839
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Application
Filed: December 12, 2018
Publication date: April 18, 2019
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
-
Patent number: 10163267
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Grant
Filed: August 26, 2016
Date of Patent: December 25, 2018
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
-
Patent number: 10147399
Abstract: Approaches enable images submitted by users, owners, and/or authorized persons associated with a point of interest (e.g., a place, a scene, an object, etc.) to be used as a fiducial to assist recognition and tracking of the point of interest in an augmented reality environment. Multiple images (e.g., crowd-sourced images) of a point of interest taken from different points of view can be dynamically used. For example, as a user with a user device moves through a point of interest, a different image can be chosen from a set of stored candidate images of the point of interest based at least upon GPS locations, IMU orientations, or compass data of the user device. In this way, instead of relying on artificial fiducial images for various detection and tracking approaches, these approaches enable images submitted by users and/or an owner or other authorized person of a point of interest to be used as fiducials to assist recognition and tracking of the point of interest.
Type: Grant
Filed: September 2, 2014
Date of Patent: December 4, 2018
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Scott Paul Robertson, Arnab Sanat Kumar Dhua, William Brendel, Nityananda Jayadevaprakash
-
Patent number: 9881084
Abstract: Various embodiments may obtain an image representation of an object for use in image matching and content retrieval. For example, an image matching system processes video content items to determine one or more scenes for one or more video content items. The image matching system can extract, from at least one video frame for a scene, feature descriptors relating to one or more objects represented in the at least one video frame. The image matching system indexes the feature descriptors into a feature index storing information for each of the feature descriptors and respective corresponding video frame. The image matching system correlates the feature descriptors of the feature index to determine one or more groups having similar feature descriptors. The image matching system indexes the one or more groups into a correlation index storing information for each of the one or more groups and respective corresponding feature descriptors.
Type: Grant
Filed: June 24, 2014
Date of Patent: January 30, 2018
Assignee: A9.com, Inc.
Inventors: Scott Paul Robertson, Sunil Ramesh
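The two-stage structure this abstract describes (a feature index mapping descriptors to frames, then a correlation step that groups similar descriptors) can be sketched as follows. The cosine-similarity measure, the greedy grouping, the 0.95 threshold, and all names here are assumptions standing in for details the abstract leaves unspecified.

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def build_feature_index(frames):
    """Flatten per-frame descriptors into a feature index that records
    which video frame each descriptor came from."""
    index = []
    for frame_id, descriptors in frames.items():
        for d in descriptors:
            index.append({"frame": frame_id, "descriptor": d})
    return index

def correlate(index, threshold=0.95):
    """Greedily group descriptors whose similarity to a group's seed
    exceeds `threshold`; the resulting groups play the role of the
    correlation index in the abstract."""
    groups = []
    for entry in index:
        for g in groups:
            if cosine(entry["descriptor"], g["seed"]) >= threshold:
                g["members"].append(entry)
                break
        else:
            groups.append({"seed": entry["descriptor"], "members": [entry]})
    return groups
```

Because each group member retains its `frame` field, a match against any group can be resolved back to the set of video frames in which that object appears.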
-
Publication number: 20170168559
Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
Type: Application
Filed: January 20, 2017
Publication date: June 15, 2017
Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
-
Publication number: 20170053451
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Application
Filed: August 26, 2016
Publication date: February 23, 2017
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
-
Patent number: 9552674
Abstract: Systems and methods for displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real environment including a marker. One or more virtual objects which do not exist in the real environment are displayed in the image based at least on the marker. The distance and orientation of the marker may be taken into account to properly size and place the virtual object in the image. Further, virtual lighting may be added to an image to indicate to a user how the virtual object would appear with the virtual lighting.
Type: Grant
Filed: March 26, 2014
Date of Patent: January 24, 2017
Assignee: A9.com, Inc.
Inventors: Nityananda Jayadevaprakash, William Brendel, David Creighton Mott, Scott Paul Robertson
-
Patent number: 9432421
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Grant
Filed: March 28, 2014
Date of Patent: August 30, 2016
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma