Patents by Inventor Colin Jon Taylor
Colin Jon Taylor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922489
Abstract: A camera is used to capture image data of representations of a physical environment. Planes and surfaces are determined from a representation. The planes and the surfaces are analyzed using relationships therebetween to obtain shapes and depth information for available spaces within the physical environment. Locations of the camera with respect to the physical environment are determined. The shapes and the depth information are analyzed using a trained neural network to determine items fitting the available spaces. A live camera view is overlaid with a selection from the items to provide an augmented reality (AR) view of the physical environment from an individual location of the locations. The AR view is enabled so that a user can port to a different location than the individual location by an input received to the AR view while the selection from the items remains anchored to the individual location.
Type: Grant
Filed: February 11, 2019
Date of Patent: March 5, 2024
Assignee: A9.com, Inc.
Inventors: Rupa Chaturvedi, Xing Zhang, Frank Partalis, Yu Lou, Colin Jon Taylor, Simon Fox
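The abstract above combines a learned model with geometric reasoning about available spaces. A minimal sketch of just the geometric part — filtering a catalog by whether each item's bounding box fits a detected space, with a clearance margin and an optional 90-degree rotation — is shown below; all names, the `Box` representation, and the margin value are hypothetical, and the patent's neural-network ranking step is not modeled:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned dimensions in metres (hypothetical representation)."""
    width: float
    depth: float
    height: float

def fits(item: Box, space: Box, margin: float = 0.05) -> bool:
    """True if the item fits the available space with a clearance margin,
    allowing a 90-degree rotation about the vertical axis."""
    w, d, h = space.width - margin, space.depth - margin, space.height - margin
    upright = item.width <= w and item.depth <= d and item.height <= h
    rotated = item.depth <= w and item.width <= d and item.height <= h
    return upright or rotated

def candidate_items(catalog, space):
    """Filter (name, Box) pairs down to items that fit the space."""
    return [name for name, box in catalog if fits(box, space)]
```

In the patented system the surviving candidates would then be ranked by the trained network before being overlaid on the live camera view.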
-
Patent number: 11093748
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Grant
Filed: January 27, 2020
Date of Patent: August 17, 2021
Assignee: A9.com, Inc.
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
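The state-to-behavior mapping this abstract describes can be pictured as a small state machine in which each recognition state selects a marker animation. The sketch below is illustrative only; the state names and behaviors are invented, not taken from the patent:

```python
# Hypothetical mapping of image-recognition states to marker behaviors.
SCAN_STATES = {
    "searching":  "drift",     # markers float idly while features are sought
    "features":   "converge",  # markers gather on detected feature points
    "recognized": "outline",   # markers trace the recognized object
    "low_light":  "pulse",     # prompt the user to improve lighting
}

class MarkerOverlay:
    """Tracks the scan state and the marker behavior it implies."""

    def __init__(self):
        self.state = "searching"

    def transition(self, new_state: str) -> str:
        """Move to a new scan state; return the behavior to animate toward."""
        if new_state not in SCAN_STATES:
            raise ValueError(f"unknown scan state: {new_state}")
        self.state = new_state
        return SCAN_STATES[new_state]
```

A renderer would interpolate the marker positions whenever `transition` returns a new behavior, producing the seamless hand-off the abstract mentions.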
-
Patent number: 10924676
Abstract: Visual effects for elements of interest can be displayed within a live camera view in real time or substantially real time using a processing pipeline that does not immediately display an acquired image until it has been updated with the effects. In various embodiments, software-based approaches, such as fast convolution algorithms, and/or hardware-based approaches, such as using a graphics processing unit (GPU), can be used to reduce the time between acquiring an image and displaying the image with various visual effects. These visual effects can include automatically highlighting elements, augmenting the color, style, and/or size of elements, casting a shadow on elements, erasing elements, substituting elements, or shaking and jumbling elements, among other effects.
Type: Grant
Filed: February 7, 2018
Date of Patent: February 16, 2021
Assignee: A9.com, Inc.
Inventors: Adam Wiggen Kraft, Colin Jon Taylor
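One standard "fast convolution" trick of the kind the abstract alludes to is separability: a box blur over a (2r+1)×(2r+1) window can be computed as two 1-D passes instead of one 2-D pass, cutting per-pixel work from O(r²) to O(r). The pure-Python sketch below illustrates the idea on a grayscale image stored as a list of lists; it is a didactic example, not the patent's pipeline:

```python
def box_blur_separable(img, r):
    """Blur a 2-D grayscale image with a (2r+1)^2 box kernel as two 1-D
    passes: blur each row, transpose, blur rows again, transpose back."""
    def blur_rows(m):
        out = []
        for row in m:
            n = len(row)
            new_row = []
            for i in range(n):
                window = row[max(0, i - r):min(n, i + r + 1)]
                new_row.append(sum(window) / len(window))
            out.append(new_row)
        return out

    transpose = lambda m: [list(col) for col in zip(*m)]
    return transpose(blur_rows(transpose(blur_rows(img))))
```

A production pipeline would run the equivalent passes as GPU shaders and hold the frame back from display until the effect pass completes, as the abstract describes.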
-
Patent number: 10839605
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Grant
Filed: December 12, 2018
Date of Patent: November 17, 2020
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
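The GPS-plus-compass variant of the matching described above reduces to a bearing test: a tag is a candidate when its bearing from the device lies within the camera's field of view around the compass heading. A small sketch under that assumption follows; the tag schema, field-of-view value, and matching rule are hypothetical, and the patent's IMU and visual-matching signals are not modeled:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def pointed_at(device, heading_deg, tags, fov_deg=30.0):
    """Return content of tags whose bearing from the device lies within
    half the field of view of the current compass heading."""
    hits = []
    for tag in tags:
        b = bearing(device[0], device[1], tag["lat"], tag["lon"])
        diff = abs((b - heading_deg + 180) % 360 - 180)  # wrap-safe delta
        if diff <= fov_deg / 2:
            hits.append(tag["content"])
    return hits
```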
-
Publication number: 20200334906
Abstract: Various embodiments provide methods and systems for users and business owners to share content and/or links to visual elements of a place at a physical location, and, in response to a user device pointing at a tagged place, causing the content and/or links to the visual elements of the place to be presented on the user device. In some embodiments, content and links are tied to specific objects at a place based at least in part upon one of Global Positioning System (GPS) locations, Inertial Measurement Unit (IMU) orientations, compass data, or one or more visual matching algorithms. Once the content and links are attached to the specific objects of the place, they can be discovered by a user with a portable device pointing at the specific objects in the real world.
Type: Application
Filed: December 12, 2018
Publication date: October 22, 2020
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Scott Paul Robertson, William Brendel, Nityananda Jayadevaprakash, Kathy Wing Lam Ma
-
Publication number: 20200311126
Abstract: Techniques for providing recommended keywords in response to an image-based query are disclosed herein. In particular, various embodiments utilize an image matching service to identify recommended search keywords associated with image data received from a user. The search keywords can be used to perform a keyword search to identify content associated with an image input that may be relevant. For example, an image search query can be received from a user. The image search query may result in multiple different types of content that are associated with the image. The system may present keywords associated with matching images to allow a user to further refine their search and/or find other related products that may not match with the particular image. This enables users to quickly refine a search using keywords that may be difficult to identify otherwise and to find the most relevant content for the user.
Type: Application
Filed: June 12, 2020
Publication date: October 1, 2020
Inventors: Sunil Ramesh, Shruti Sheorey, Colin Jon Taylor
-
Patent number: 10755485
Abstract: Systems and methods for displaying 3D containers in a computer generated environment are described. A computing device may provide a user with a catalog of objects which may be purchased. In order to view what an object may look like prior to purchasing the object, a computing device may show a 3D container that has the same dimensions as the object. As discussed herein, the 3D container may be located and oriented based on a two-dimensional marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object.
Type: Grant
Filed: January 28, 2019
Date of Patent: August 25, 2020
Assignee: A9.com, Inc.
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Yu Lou, Chun-Kai Wang, Sudeshna Pantham, Himanshu Arora, Xi Zhang
-
Publication number: 20200258144
Abstract: A camera is used to capture image data of representations of a physical environment. Planes and surfaces are determined from a representation. The planes and the surfaces are analyzed using relationships therebetween to obtain shapes and depth information for available spaces within the physical environment. Locations of the camera with respect to the physical environment are determined. The shapes and the depth information are analyzed using a trained neural network to determine items fitting the available spaces. A live camera view is overlaid with a selection from the items to provide an augmented reality (AR) view of the physical environment from an individual location of the locations. The AR view is enabled so that a user can port to a different location than the individual location by an input received to the AR view while the selection from the items remains anchored to the individual location.
Type: Application
Filed: February 11, 2019
Publication date: August 13, 2020
Inventors: Rupa Chaturvedi, Xing Zhang, Frank Partalis, Yu Lou, Colin Jon Taylor, Simon Fox
-
Patent number: 10733801
Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned in a manner that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
Type: Grant
Filed: April 15, 2019
Date of Patent: August 4, 2020
Assignee: A9.com, Inc.
Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
-
Patent number: 10706098
Abstract: Techniques for providing recommended keywords in response to an image-based query are disclosed herein. In particular, various embodiments utilize an image matching service to identify recommended search keywords associated with image data received from a user. The search keywords can be used to perform a keyword search to identify content associated with an image input that may be relevant. For example, an image search query can be received from a user. The image search query may result in multiple different types of content that are associated with the image. The system may present keywords associated with matching images to allow a user to further refine their search and/or find other related products that may not match with the particular image. This enables users to quickly refine a search using keywords that may be difficult to identify otherwise and to find the most relevant content for the user.
Type: Grant
Filed: March 29, 2016
Date of Patent: July 7, 2020
Assignee: A9.com, Inc.
Inventors: Sunil Ramesh, Shruti Sheorey, Colin Jon Taylor
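The keyword-recommendation flow above can be approximated by aggregating the keywords attached to the matched images, weighted by each image's match score, then surfacing the top-ranked terms as refinements. The sketch below assumes a hypothetical match schema (a `score` and a `keywords` list per match) and is not the patented implementation:

```python
from collections import Counter

def recommend_keywords(matches, top_n=5):
    """Aggregate keywords across image-match results, weighting each
    keyword by the score of the matched image it came from."""
    weights = Counter()
    for m in matches:
        for kw in m["keywords"]:
            weights[kw] += m["score"]
    return [kw for kw, _ in weights.most_common(top_n)]
```

Usage: given two matches scored 0.9 and 0.6 whose keywords overlap on "sneaker", the shared keyword accumulates the most weight and is recommended first.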
-
Patent number: 10664140
Abstract: A user can select an object represented in video content in order to set a magnification level with respect to that object. A portion of the video frames containing a representation of the object is selected to maintain a presentation size of the representation corresponding to the magnification level. The selection provides for a “smart zoom” feature enabling an object of interest, such as a face of an actor, to be used in selecting an appropriate portion of each frame to magnify, such that the magnification results in a portion of the frame being selected that includes the one or more objects of interest to the user. Pre-generated tracking data can be provided for some objects, which can enable a user to select an object and then have predetermined portion selections and magnifications applied that can provide for a smoother user experience than for dynamically-determined data.
Type: Grant
Filed: March 7, 2017
Date of Patent: May 26, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Charles Benjamin Franklin Waggoner, Colin Jon Taylor, Jeffrey P. Bezos, Douglas Ryan Gray
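The frame-portion selection described above amounts to computing, per frame, a crop rectangle centered on the tracked object whose size is the frame size divided by the magnification, clamped so the crop stays inside the frame. This is a simplified sketch (uniform magnification of the whole crop rather than per-object presentation-size control, and all parameter names are hypothetical):

```python
def smart_zoom_crop(frame_w, frame_h, obj_box, magnification):
    """Center a crop window on the tracked object's bounding box
    (x, y, w, h in pixels) so the view is magnified by `magnification`,
    clamping the window to the frame edges. Returns (x, y, w, h)."""
    cx = obj_box[0] + obj_box[2] / 2
    cy = obj_box[1] + obj_box[3] / 2
    crop_w, crop_h = frame_w / magnification, frame_h / magnification
    x = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), frame_h - crop_h)
    return (x, y, crop_w, crop_h)
```

With pre-generated tracking data, `obj_box` per frame would come from the stored track rather than live detection, which is what makes playback smoother.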
-
Publication number: 20200160058
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Application
Filed: January 27, 2020
Publication date: May 21, 2020
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Patent number: 10558857
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Grant
Filed: March 5, 2018
Date of Patent: February 11, 2020
Assignee: A9.com, Inc.
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Patent number: 10528821
Abstract: A video segmentation system can be utilized to automate segmentation of digital video content. Features corresponding to visual, audio, and/or textual content of the video can be extracted from frames of the video. The extracted features of adjacent frames are compared according to a similarity measure to determine boundaries of a first set of shots or video segments distinguished by abrupt transitions. The first set of shots is analyzed according to certain heuristics to recognize a second set of shots distinguished by gradual transitions. Key frames can be extracted from the first and second set of shots, and the key frames can be used by the video segmentation system to group the first and second set of shots by scene. Additional processing can be performed to associate metadata, such as names of actors or titles of songs, with the detected scenes.
Type: Grant
Filed: August 29, 2017
Date of Patent: January 7, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Adam Carlson, Douglas Ryan Gray, Ashutosh Vishwas Kulkarni, Colin Jon Taylor
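The first stage of the pipeline above — comparing adjacent-frame features to find abrupt cuts — is commonly done with per-frame intensity histograms and a distance threshold. The sketch below shows that stage only; the histogram feature, bin count, and threshold are illustrative choices, not the patent's, and gradual-transition detection is omitted:

```python
def histogram(frame, bins=8):
    """Normalized coarse intensity histogram of a frame given as a flat
    list of 0-255 pixel values."""
    h = [0] * bins
    for v in frame:
        h[min(v * bins // 256, bins - 1)] += 1
    return [c / len(frame) for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Indices where the L1 histogram distance between adjacent frames
    exceeds the threshold, marking abrupt cuts."""
    hists = [histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        dist = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i])) / 2
        if dist > threshold:
            cuts.append(i)
    return cuts
```

The second stage described in the abstract would then re-examine low-distance runs with heuristics (e.g. sustained monotonic change) to catch fades and dissolves that never cross the cut threshold.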
-
Patent number: 10466955
Abstract: Various embodiments provide methods and systems for providing a recommended volume level in presentation of media content. In some embodiments, volume adjustment events made by a user and/or similar users while watching media content can be detected and automatically recorded. The media content may include a plurality of segments. A normalized volume level for at least one segment of the media content can be determined by aggregating the recorded volume adjustment events corresponding to the at least one segment of the media content. When the media content is played back on a user device, at least some embodiments cause the at least one segment of the media content to be played back at a recommended volume level determined based at least in part upon one of the normalized volume level of the corresponding segment, the audio system of the user device, or historical data and personal profile of the user.
Type: Grant
Filed: June 24, 2014
Date of Patent: November 5, 2019
Assignee: A9.com, Inc.
Inventors: Douglas Ryan Gray, Colin Jon Taylor, Ming Du, Wei-Hong Chuang
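The aggregation step in this abstract can be sketched as collecting the volume settings users chose while each segment played and taking a robust statistic (the median here) per segment. The event schema and default level are hypothetical, and the device- and profile-based adjustments the abstract also mentions are not modeled:

```python
from statistics import median

def normalized_levels(events, n_segments, default=1.0):
    """Per-segment recommended volume from recorded adjustment events.
    `events` is a list of (segment_index, chosen_level) pairs; segments
    with no recorded events fall back to the default level."""
    by_segment = [[] for _ in range(n_segments)]
    for seg, level in events:
        by_segment[seg].append(level)
    return [median(levels) if levels else default for levels in by_segment]
```

The median is used rather than the mean so that a single outlier adjustment (say, one viewer muting a segment) does not skew the recommendation.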
-
Patent number: 10469918
Abstract: Techniques are described for providing functionality to allow a viewer of a television show to watch a “previously on” segment of an episode of the television show and be able to watch the scenes from prior episodes referenced in the “previously on” segment.
Type: Grant
Filed: September 21, 2017
Date of Patent: November 5, 2019
Assignees: A9.com, Inc., IMDb.com, Inc.
Inventors: Adam Carlson, Jeromey Russell Goetz, Ashutosh Vishwas Kulkarni, Douglas Ryan Gray, Danny Ryan Stephens, Colin Jon Taylor, Ismet Zeki Yalniz
-
Publication number: 20190272425
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Application
Filed: March 5, 2018
Publication date: September 5, 2019
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Publication number: 20190236846
Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned in a manner that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
Type: Application
Filed: April 15, 2019
Publication date: August 1, 2019
Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
-
Patent number: 10339714
Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned in a manner that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
Type: Grant
Filed: May 9, 2017
Date of Patent: July 2, 2019
Assignee: A9.com, Inc.
Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
-
Publication number: 20190156585
Abstract: Systems and methods for displaying 3D containers in a computer generated environment are described. A computing device may provide a user with a catalog of objects which may be purchased. In order to view what an object may look like prior to purchasing the object, a computing device may show a 3D container that has the same dimensions as the object. As discussed herein, the 3D container may be located and oriented based on a two-dimensional marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object.
Type: Application
Filed: January 28, 2019
Publication date: May 23, 2019
Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Yu Lou, Chun-Kai Wang, Sudeshna Pantham, Himanshu Arora, Xi Zhang